Compare commits

577 Commits
v1.4.3 ... test

SHA1 Message Date
79ee256b18 trace writes 2025-06-08 14:44:53 +03:00
97bb809b54 Release 2.2.2
- Fix a bug introduced in 2.2.0: pg_locks weren't disabled correctly for pools without
  local_reads, which could lead to inactive pools during various operations
- Fix an old bug where OSDs could send sub-operations to incorrect peer OSDs when their
  connections were stopped and reestablished quickly; in 2.2.0 it usually led
  to "sequencing broken" messages in OSD logs
- Fix debug use_sync_send_recv mode
2025-06-07 12:56:48 +03:00
6022a61329 Decouple break_pg_locks from outbound OSD disconnections 2025-06-05 02:48:54 +03:00
a3c1996101 Do not accidentally clear incorrect osd_peer_fds entries 2025-06-05 02:22:13 +03:00
8d2a1f0297 Fix PG lock auto-enabling/auto-disabling in the default configuration 2025-06-05 02:22:01 +03:00
91cbc313c2 Change "on osd -123" logging to "on peer 123" for unknown connections 2025-06-05 02:22:01 +03:00
f0a025428e Postpone read/write handlers using timerfd in the debug use_sync_send_recv mode 2025-06-05 02:22:01 +03:00
67071158bd Cancel outbound operations only in the osd_client_t destructor
This is required to prevent disconnected peers from sometimes receiving messages
intended for other peers: stop_client was freeing the operations even though they
were still referenced by io_uring requests in progress. This led to OSDs
sometimes receiving garbage and "broken sequencing" errors in the logs, as the
memory was usually already reallocated for other operations.
2025-06-05 02:09:41 +03:00
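A minimal sketch of the lifetime pattern described above, with hypothetical names (the real osd_client_t is more involved): outbound operations are no longer freed in stop_client(), since in-flight io_uring requests may still reference them, and are released only in the destructor, which runs after the last such reference is gone.

```cpp
#include <vector>

// Hypothetical stand-in for an outbound operation whose buffers may still be
// referenced by an io_uring request in progress.
struct osd_op_sketch { /* buffers, callback, ... */ };

struct osd_client_sketch
{
    int refs = 0;  // held while io_uring requests for this client are in flight
    bool stopped = false;
    std::vector<osd_op_sketch*> outbound_ops;

    // Before the fix, stop_client() freed outbound_ops here, so a completing
    // io_uring request could write into already-reallocated memory.
    void stop() { stopped = true; }

    // After the fix, outbound operations are cancelled only here, and the
    // destructor runs once refs has dropped to zero, i.e. after every
    // in-flight request referencing these operations has completed.
    ~osd_client_sketch()
    {
        for (auto *op: outbound_ops)
            delete op;
    }
};
```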
cd028612c8 Use a separate osd_client_t::in_osd_num for inbound OSD connections 2025-06-05 02:09:41 +03:00
f390e73dae Log broken sequence numbers in "sequencing" errors 2025-06-05 02:09:41 +03:00
de2539c491 Correct Proxmox version 2025-06-03 01:56:09 +03:00
957a4fce7e Release 2.2.1
- Fix vitastor-disk purge, which was broken after adding the "OSD is still running" check
- Fix iothreads hanging after adding zero-copy send support
- Fix enabling localized reads online (without restarting OSDs) in the default PG lock mode
2025-05-25 01:04:48 +03:00
f201ecdd51 Fix missing mutex unlock with zero-copy and iothreads O_o 2025-05-24 00:56:31 +03:00
4afb617f59 Also zero-init sqe 2025-05-23 21:18:37 +03:00
d3fde0569f Add a test with enabled iothreads 2025-05-23 21:05:18 +03:00
438b64f6c3 Allow to enable PG locks online when changing local_reads in pool configuration 2025-05-23 20:54:47 +03:00
2b0a802ea1 Fix iothreads sometimes hanging after adding zerocopy support 2025-05-23 20:54:03 +03:00
0dd49c1d67 Followup to "allow to purge running OSDs again" 2025-05-22 01:10:05 +03:00
410170db96 Add notes about VNPL in English 2025-05-20 02:12:49 +03:00
7d8523e0e5 Add more notes about VNPL in Russian 2025-05-19 02:41:34 +03:00
db915184c6 Allow to purge running OSDs again, as in 2.1.0 and earlier 2025-05-11 13:59:28 +03:00
5ae6fea49c Add a note about local reads 2025-05-11 01:23:48 +03:00
95ec750b8c Release 2.2.0
New features:

- [Localized read support](https://vitastor.io/docs/config/pool.html#local_reads) for multi-datacenter setups.
- io_uring-based [zero-copy send support](https://vitastor.io/docs/config/network.html#min_zerocopy_send_size) -
  read the instructions carefully for optimal performance!
- Improve and speed up data distribution, especially in cases of very large hosts (100+ OSDs).
  Previously, PG optimization speed depended on the number of OSDs, now it only depends on
  the number of failure domains. Distribution over specific OSDs is now also more even and
  becomes strictly more even when you increase the number of PGs.
- Add a [very interesting instruction](https://vitastor.io/en/docs/usage/nfs.html#linux-nfs-write-size) to change NFS_MAX_FILE_IO_SIZE.
- Check operation sequencing and stop connections when it breaks - should help catch some
  very rare RDMA packet loss problems.
- `vitastor-cli rm-osd` now refuses to remove OSDs which are still up and suggests to use `vitastor-disk purge`.
- Allow removal of direntries referring to non-existent inodes in VitastorFS.
- Change default vitastor-etcd data dir to /var/lib/etcd/vitastor.

Bug fixes:

- Fix compatibility with ISA-L 2.31+. ⚠️ Very important: please upgrade Vitastor before upgrading ISA-L to 2.31+.
- Fix in-memory state cleanup for incomplete PGs.
- Fix monitor crash with non-existent node_placement nodes.
- Slightly speed up the `vitastor-kv dump` command by adding output buffering.
- Fix theoretically possible slowdowns in OSD sub-operation failure handling code.
- Fix very rare stack overflows in vitastor-kv.
- Fix a possible crash in VitastorFS during handling of file creation race condition.
- Fix modify-pool -s PG_SIZE which didn't work without --pg_minsize.
- Fix marking peer OSDs as alive on receiving data from them via RDMA - in theory,
  the bug could result in instability with RDMA under high load with slow disks.
- Fix a rare OSD crash due to double handle_primary_subop() call.
- Fix latency aggregation in global stats (/vitastor/stats in etcd) - do not sum it (see the sketch after this entry).
- Hide "Ran out of journal space" log messages by default.
- Wait for RDMA-CM EVENT_ESTABLISHED after rdma_accept(), handle rdma_accept() before acking the event.
- Fix VitastorFS total & free space numbers being multiplied by an extra 2.
- Fix systemd unit name in make-etcd.
- Do not allow reweight > 1 in vitastor-cli modify-osd.
- Fix docker build.
2025-05-11 00:26:08 +03:00
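To illustrate the latency aggregation fix above: a global average latency must be an operation-count-weighted average of the per-OSD averages, not their sum. A toy example, with illustrative numbers and field names rather than Vitastor's actual stats schema:

```cpp
#include <cstdio>
#include <vector>

struct osd_stats { double avg_lat_us, ops; };  // per-OSD average latency & op count

int main()
{
    std::vector<osd_stats> osds = { { 250, 1000 }, { 350, 3000 } };
    double weighted = 0, ops = 0;
    for (auto & o: osds)
    {
        weighted += o.avg_lat_us * o.ops;  // weight each OSD by its op count
        ops += o.ops;
    }
    // Prints ~325 us; naively summing the averages would report 600 us.
    printf("global avg latency: %.1f us\n", ops ? weighted / ops : 0.);
}
```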
90b1de307b Support local reads in client 2025-05-10 16:42:02 +03:00
7e6a95c678 Support primary-reads from clean replicated PGs on secondary OSDs 2025-05-10 16:42:02 +03:00
b2416afb28 Lock PGs on secondary OSDs to allow local reads and guarantee splitbrain prevention 2025-05-10 15:18:00 +03:00
66dc116f60 Cleanup PG_INCOMPLETE peering states 2025-05-10 02:51:26 +03:00
0cb8629ab6 Remove finish_stop_pg shortcut 2025-05-08 16:14:35 +03:00
b7322a405a Move gethostname_str to utils 2025-05-05 02:16:06 +03:00
5692630005 Move check_sequencing indication into config response features subkey 2025-05-05 02:07:50 +03:00
00ced7cea7 Fix monitor crash with non-existing node_placement nodes 2025-05-05 02:05:04 +03:00
ebdb75e287 Fix typo in docker docs 2025-05-04 18:09:33 +03:00
f397fe9c6a Add compatibility with ISA-L 2.31+ 2025-05-04 18:09:33 +03:00
28560b4ae5 Write K/V listings in buffered manner 2025-05-03 15:06:59 +03:00
2d07449e74 Postpone cb() to set_immediate() to prevent stack overflows in kv_db 2025-05-03 15:06:59 +03:00
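A toy sketch of the deferral pattern behind this commit and the next one, with hypothetical names: calling a completion callback inline can recurse without bound when each completion starts the next operation, while queueing it through a set_immediate()-style hook runs it from the event loop at constant stack depth (and the loop must then be woken up, which is what the following commit fixes).

```cpp
#include <cstdio>
#include <functional>
#include <queue>

std::queue<std::function<void()>> immediates;

// Defer a callback to the next event loop iteration instead of calling it
// inline; a real ring loop would also wake itself up here.
void set_immediate(std::function<void()> cb)
{
    immediates.push(std::move(cb));
}

void run_immediates()
{
    while (!immediates.empty())
    {
        auto cb = std::move(immediates.front());
        immediates.pop();
        cb();  // stack depth stays O(1) no matter how many callbacks chain
    }
}

int main()
{
    // 100000 chained callbacks would overflow the stack if each invoked the
    // next recursively; via the queue they run iteratively.
    int n = 100000;
    std::function<void()> step = [&]() { if (--n > 0) set_immediate(step); };
    set_immediate(step);
    run_immediates();
    printf("done, n = %d\n", n);
}
```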
80c4e8c20f Add missing wakeup in ringloop->set_immediate to prevent slowdowns in code using set_immediate 2025-05-03 14:40:48 +03:00
2ab0ae3bc9 Check operation sequencing and stop clients when it breaks 2025-05-02 17:01:50 +03:00
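A hedged sketch of what such a sequencing check can look like (hypothetical names, not Vitastor's actual wire protocol): each side numbers its messages, and the receiver stops the client on any gap instead of acting on potentially misdelivered data.

```cpp
#include <cstdint>
#include <cstdio>

struct conn_sketch
{
    uint64_t expected_seq = 0;
    bool stopped = false;

    void handle_message(uint64_t seq)
    {
        if (stopped)
            return;
        if (seq != expected_seq)
        {
            // Matches the spirit of the "sequencing broken" log messages:
            // drop the connection rather than process out-of-order data.
            fprintf(stderr, "sequencing broken: got %llu, expected %llu\n",
                (unsigned long long)seq, (unsigned long long)expected_seq);
            stopped = true;
            return;
        }
        expected_seq++;
    }
};

int main()
{
    conn_sketch c;
    c.handle_message(0);
    c.handle_message(1);
    c.handle_message(3);  // gap -> connection is stopped
    printf("stopped = %d\n", (int)c.stopped);
}
```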
05e59c1b4f Fix MSG_WAITALL assertion added in the zero-copy patch 2025-05-02 17:01:43 +03:00
e6e1c5b962 Check if OSDs are still up in rm-osd 2025-05-02 13:05:59 +03:00
9556eeae45 Implement io_uring zero-copy send support 2025-05-01 18:47:10 +03:00
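A minimal liburing sketch of the mechanism (an illustration under stated assumptions, not Vitastor's code; requires liburing >= 2.3 and a kernel with IORING_OP_SEND_ZC, error handling omitted). The key rule is that a zero-copy send yields an extra completion flagged IORING_CQE_F_NOTIF, and the send buffer may only be reused after that notification arrives:

```cpp
#include <liburing.h>

// Queue one zero-copy send and reap its completions. A zero-copy send
// normally produces two CQEs: the result CQE (bytes sent or -errno, with
// IORING_CQE_F_MORE set) and a later notification CQE (IORING_CQE_F_NOTIF)
// after which the buffer may be reused.
int send_zc_once(struct io_uring *ring, int sockfd, const void *buf, size_t len)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
    io_uring_prep_send_zc(sqe, sockfd, buf, len, 0, 0);
    io_uring_submit(ring);
    int res = 0;
    while (true)
    {
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(ring, &cqe);
        unsigned flags = cqe->flags;
        if (!(flags & IORING_CQE_F_NOTIF))
            res = cqe->res;  // the actual send result
        io_uring_cqe_seen(ring, cqe);
        if (flags & IORING_CQE_F_NOTIF)
            break;  // kernel released the buffer - safe to reuse it
        if (!(flags & IORING_CQE_F_MORE))
            break;  // no notification will follow (e.g. early failure)
    }
    return res;
}
```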
96b5a72630 Allow removal of bad direntries in VitastorFS (direntries referring non-existent inodes) 2025-05-01 01:14:23 +03:00
ef80f121f6 Fix "duplicate inode during create" deletion in VitastorFS 2025-04-30 20:37:49 +03:00
bbdd1f3aa7 Fix modify-pool -s PG_SIZE without --pg_minsize 2025-04-28 02:20:54 +03:00
5dd37f519a Fix node folding in case of empty rules (pool with size 1), add a test 2025-04-28 02:16:49 +03:00
a2278be84d Improve data distribution: solve LP task on failure domains instead of individual OSDs
This greatly speeds up PG placement and makes it more uniform, both because the LP task
becomes simpler and because the distribution over individual OSDs is then optimised manually.
2025-04-27 01:44:46 +03:00
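A toy illustration of why this helps (not Vitastor's actual optimiser): solving the placement LP over failure domains shrinks the variable count from the number of OSDs to the number of hosts, after which each host's share can be spread over its own OSDs in a simple second pass:

```cpp
#include <cstdio>

int main()
{
    int hosts = 100, osds_per_host = 30;   // a "very large hosts" setup
    int pgs = 8192, pg_size = 3;
    // Stage 1 (the LP, shown here as if hosts were equal-weight): PG copies
    // per failure domain.
    double per_host = (double)pgs * pg_size / hosts;
    // Stage 2: even each host's share out over its own OSDs manually.
    double per_osd = per_host / osds_per_host;
    printf("LP variables: %d hosts instead of %d OSDs\n",
        hosts, hosts * osds_per_host);
    printf("~%.1f PG copies per host, ~%.2f per OSD\n", per_host, per_osd);
}
```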
1393a2671c Change default vitastor-etcd data dir to /var/lib/etcd/vitastor 2025-04-27 01:44:46 +03:00
9fa8ae5384 Reset OSD ping state on receiving data from it via RDMA 2025-04-26 14:17:09 +03:00
169a35a067 Followup to latency aggregation fix 2025-04-26 01:32:46 +03:00
2b2a10581d Prevent double handle_primary_subop in rare cases
Some checks failed (test_change_pg_count, test_change_pg_count_ec); several of the remaining checks were cancelled.
2025-04-26 01:16:53 +03:00
10fd51862a Fix latency aggregation in global stats (/vitastor/stats in etcd)
All checks were successful
2025-04-25 00:08:10 +03:00
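The class of bug being fixed here is easy to state abstractly: a global latency figure must be an op-count-weighted average of the per-OSD averages, not a plain mean. A toy illustration (the stats shape is assumed, not the actual /vitastor/stats layout):

```python
# Assumed per-OSD stats: (op_count, avg_usec) pairs — illustrative only.
per_osd = [(1000, 120.0), (10, 4000.0), (2000, 95.0)]

# Wrong: a plain mean of averages lets a nearly idle OSD dominate.
naive = sum(avg for _, avg in per_osd) / len(per_osd)

# Right: weight each OSD's average by the number of ops it describes.
total_ops = sum(n for n, _ in per_osd)
weighted = sum(n * avg for n, avg in per_osd) / total_ops

print(f"naive={naive:.1f}us weighted={weighted:.1f}us")  # naive=1405.0, weighted=116.3
```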
15d0204f96 Hide "Ran out of journal space" log messages by default
All checks were successful
2025-04-20 01:21:03 +03:00
21d6e88a1b Add instructions to change NFS_MAX_FILE_IO_SIZE 2025-04-18 13:39:32 +03:00
df2847df2d Wait for RDMA-CM EVENT_ESTABLISHED after rdma_accept(), handle rdma_accept() before acking the event
All checks were successful
2025-04-15 15:19:36 +03:00
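The ordering this commit enforces on the server side can be modelled with stubs (the functions below are stand-ins, not real librdmacm bindings): accept the connection request before acking its event, then treat the connection as usable only once EVENT_ESTABLISHED arrives.

```python
pending = set()

def rdma_accept(conn_id):            # stand-in for rdma_accept()
    pending.add(conn_id)

def ack_cm_event(event):             # stand-in for rdma_ack_cm_event()
    print("acked", event)

def on_cm_event(kind, conn_id):
    if kind == "CONNECT_REQUEST":
        rdma_accept(conn_id)               # accept first...
        ack_cm_event((kind, conn_id))      # ...then ack the event
    elif kind == "ESTABLISHED":
        ack_cm_event((kind, conn_id))
        pending.discard(conn_id)           # only now is the connection ready for I/O
        print("connection", conn_id, "is usable")

on_cm_event("CONNECT_REQUEST", 1)
on_cm_event("ESTABLISHED", 1)
```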
327c98a4b6 Fix index_tree
All checks were successful
2025-04-13 16:13:44 +03:00
3cc0abfd81 Fix NFS total & free multiplied by extra 2
All checks were successful
2025-04-12 19:09:26 +03:00
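The "extra 2" is a plain arithmetic bug class: a size factor applied twice when computing statfs-style totals. A toy illustration only, not the NFS proxy's actual code path:

```python
block_size = 4096
free_blocks = 10**6

free_ok  = free_blocks * block_size        # correct: apply the factor once
free_bug = free_blocks * block_size * 2    # buggy: an extra factor of 2 doubles
                                           # both "total" and "free"
print(free_ok, free_bug)
```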
80e5f8ba76 Add missing WITH_RDMACM defines
All checks were successful
2025-04-12 19:06:27 +03:00
4b660f1ce8 Fix systemd unit name in make-etcd
Some checks failed (test_heal_csum_32k_dmj).
2025-04-11 02:05:08 +03:00
dfde0e60f0 Do not allow reweight > 1
All checks were successful
2025-04-05 12:21:14 +03:00
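Reweight acts as a 0..1 multiplier on an OSD's effective weight, so values above 1 are now rejected. A trivial sketch of that validation (hypothetical function, not the actual CLI code):

```python
def set_reweight(value: float) -> float:
    # A reweight can only scale an OSD's weight down, never up.
    if not 0 <= value <= 1:
        raise ValueError(f"reweight must be between 0 and 1, got {value}")
    return value

set_reweight(0.5)    # ok
# set_reweight(1.5)  # -> ValueError
```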
013f688ffe Run check_peer_config on RDMA-CM connections too
All checks were successful
2025-04-02 01:32:28 +03:00
cf9738ddbe Fix docker 2.1.0 build :) 2025-04-01 22:46:22 +03:00
891b2811c7 Release 2.1.0
All checks were successful
New features:

- Support separate OSD cluster network - [osd_cluster_network](https://vitastor.io/docs/config/network.html#osd_cluster_network)
  and, in general, multiple OSD networks, including RDMA (see the sketch after these notes)
- Add an alternative RDMA implementation via RDMA-CM - [use_rdmacm](https://vitastor.io/docs/config/network.html#use_rdmacm),
  required for iWARP and, maybe, for some IB setups (but not for RoCE)
- Change the default PG behaviour to wait for all "up" OSDs to be connected before starting the PG.
  The old behaviour can be restored by enabling the new [allow_net_split](https://vitastor.io/docs/config/osd.html#allow_net_split)
  option.
- Add a patch for QEMU 9.2

Bug fixes:

- Fix incorrect "has_xxx" PG state names in ls-pgs
- Fix possible QEMU crashes after detaching of Vitastor disks (and update all QEMU builds in Vitastor repos)
- Fix clients sometimes spamming OSDs with infinite reconnections when some PGs are offline
- Fall back to TCP on RDMA connection failures
- Add missing logging of RDMA ibv_modify_qp() errors
- Add a minimum interval for etcd_state_client to reload state
2025-04-01 20:16:27 +03:00
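The separate-network feature from these notes boils down to binding different traffic classes to different subnets; a conceptual sketch below (the option names follow the linked docs, the address-picking code is illustrative, not Vitastor's):

```python
import ipaddress

osd_network = ipaddress.ip_network("10.0.0.0/24")            # client-facing traffic
osd_cluster_network = ipaddress.ip_network("10.200.0.0/24")  # OSD-to-OSD replication

local_addrs = [ipaddress.ip_address("10.0.0.5"), ipaddress.ip_address("10.200.0.5")]

def bind_addr(for_cluster: bool):
    # Choose the local address belonging to the network this traffic class uses.
    net = osd_cluster_network if for_cluster else osd_network
    return next(a for a in local_addrs if a in net)

print(bind_addr(for_cluster=False))  # 10.0.0.5   -> clients
print(bind_addr(for_cluster=True))   # 10.200.0.5 -> replication peers
```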
01590df6da Update QEMU version in vitastor-csi Dockerfile 2025-04-01 20:16:27 +03:00
3e5f0be52c Use separate port numbers for RDMA-CM
All checks were successful
2025-04-01 16:16:03 +03:00
58af897e73 s/listen on/listen to/ :) 2025-04-01 12:07:15 +03:00
dbf9ecd171 Move osd_network to config/network docs 2025-03-31 21:12:09 +03:00
8508e78288 Add an alternative RDMA implementation via RDMA-CM
All checks were successful
Required for non-RoCE cards: iWARP and, possibly, Infiniband
2025-03-31 21:01:25 +03:00
f32dea02bf Support multiple RDMA networks 2025-03-31 21:01:25 +03:00
a103065d12 Support multiple OSD networks and separate OSD cluster network 2025-03-31 21:01:15 +03:00
5d2e28d4a9 Remove unused used_max_cqe from nfs_proxy_rdma
Some checks failed
(test_heal_csum_32k_dmj failed; the rest were successful.)
2025-03-30 02:13:24 +03:00
18e14eed11 Fix --pg_count formula in docs/usage/cli 2025-03-29 17:54:53 +03:00
ccc32b9e68 Use TCP on RDMA connection failure
All checks were successful
2025-03-23 12:04:23 +03:00
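A sketch of the fallback shape (the RDMA side is a stub here, since no real RDMA bindings are assumed): try the RDMA transport first and, if the handshake fails, silently reconnect over plain TCP.

```python
import socket

class RDMAUnavailable(Exception):
    pass

def connect_rdma(host, port):
    # Stand-in: pretend the RDMA handshake failed.
    raise RDMAUnavailable("rdma_connect failed")

def connect(host, port):
    try:
        return ("rdma", connect_rdma(host, port))
    except RDMAUnavailable:
        # Fall back to TCP so the client keeps working without RDMA.
        return ("tcp", socket.create_connection((host, port)))

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
kind, conn = connect("127.0.0.1", srv.getsockname()[1])
print(kind)  # tcp
```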
ebaf3fee79 Add an assertion to prevent sending message to TCP channel when switched to RDMA 2025-03-23 12:04:09 +03:00
196d28e987 Fix typo 2025-03-23 12:00:20 +03:00
8f243b2328 Fix qemu buster build and bullseye version 2025-03-23 02:46:52 +03:00
7a835fcd8f Add allow_net_split parameter 2025-03-23 02:12:32 +03:00
8b0389b4e8 Log RDMA ibv_modify_qp() errors
All checks were successful
2025-03-22 15:58:13 +03:00
f544c350ba %l* -> %j*
All checks were successful
2025-03-22 15:32:07 +03:00
4eafb55b5c Add a patch for QEMU 9.2, fix debian bookworm QEMU build 2025-03-22 15:30:52 +03:00
5030396f71 Clear QEMU eventfd handler on vitastor block driver destruction
All checks were successful
2025-03-21 20:47:17 +03:00
be22c363ca Do not skip client_retry_interval on reconnecting OSDs to prevent OSD spam
All checks were successful
2025-03-20 00:12:38 +03:00
0f80c87b43 Add a minimum interval for etcd_state_client to reload state
All checks were successful
(To prevent excessive load on etcd during outages)
2025-03-19 02:36:09 +03:00
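The minimum-interval idea can be sketched as a tiny rate limiter (a real client would typically defer the reload rather than drop the request; this only shows the throttle itself):

```python
import time

class StateReloader:
    """Never hit etcd more often than min_interval, however many reload
    requests a storm of watch errors generates."""
    def __init__(self, reload_fn, min_interval=1.0):
        self.reload_fn = reload_fn
        self.min_interval = min_interval
        self.last = 0.0

    def request_reload(self):
        now = time.monotonic()
        if now - self.last >= self.min_interval:
            self.last = now
            self.reload_fn()
        # else: coalesced — at most one etcd reload per interval

r = StateReloader(lambda: print("reload"), min_interval=1.0)
for _ in range(1000):
    r.request_reload()   # prints "reload" exactly once
```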
e0953fd502 Wait for all "up" OSDs to be connected before starting PG 2025-03-19 02:36:09 +03:00
6e0ae47938 Add Proxmox QEMU 9.2 patch 2025-03-19 02:36:02 +03:00
b8f19e85ad Fix pg state formatting in ls-pgs
All checks were successful
2025-03-17 01:37:58 +03:00
b7636e595f Update version in docker docs 2025-03-16 16:53:57 +03:00
48c026bfa0 Release 2.0.0
All checks were successful
No breaking changes; it's 2.0.0 just because it includes S3 and because
there are already too many 1.x releases :).

New features:

- S3 is finally available: https://vitastor.io/docs/installation/s3.html
- node.js addon is now packaged as a Debian package
- Support listing PGs by OSDs in `vitastor-cli ls-pgs`
- Implement offline TRIM support: [vitastor-disk trim](https://vitastor.io/docs/usage/disk.html#trim),
  [discard_on_start](https://vitastor.io/docs/config/osd.html#discard_on_start)
- Change used_for_fs pool option to used_for_app

Bug fixes:

- Fix several bugs in the node.js addon (a memory leak, an incorrectly triggered event loop)
- Fix a client crash (vitastor-cli rm) during deletion when writeback is enabled
- Fix PG object count statistics on deletion of non-existing objects
- Fix vitastor-nbd crash when mapping by ID instead of inode name
- Fix a client memory leak with enabled immediate_commit and write-back cache
- Add seccomp=unconfined for vitastor docker OSDs to not break io_uring
- Add udev and systemd to vitastor docker image
- Fix upgrading from pre-0.7.1 (very old) systemd units O_o
- Fix total object count calculation in rm_data
2025-03-16 14:34:31 +03:00
a73b2a26b6 Fix blockstore initialization after moving clean_dyn_size calc to calc_lengths
All checks were successful
2025-03-16 13:44:02 +03:00
f3192b610d Fix vitastor-disk in Docker installations 2025-03-16 13:44:01 +03:00
a950889976 Add missing docs for discard_on_start 2025-03-16 12:29:22 +03:00
ef5194d93c Add S3 installation docs 2025-03-16 01:17:09 +03:00
f904576ab1 Fix total calculation in rm_data
All checks were successful
2025-03-15 17:01:10 +03:00
4f9b1f2f62 Support listing PGs by OSDs
All checks were successful
2025-03-15 16:42:57 +03:00
1d94afbd51 Implement offline TRIM support
All checks were successful
2025-03-14 01:37:16 +03:00
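The core of an offline TRIM pass is finding the unused extents to discard. A sketch of that step (the bitmap layout and block size are illustrative, not vitastor-disk's on-disk format):

```python
BLOCK = 128 * 1024  # example data block size

def free_ranges(bitmap: list[bool]):
    """Yield (offset, length) byte ranges covering runs of free (False) blocks."""
    start = None
    for i, used in enumerate(bitmap + [True]):  # sentinel flushes the final run
        if not used and start is None:
            start = i
        elif used and start is not None:
            yield (start * BLOCK, (i - start) * BLOCK)
            start = None

bitmap = [True, False, False, True, False]
print(list(free_ranges(bitmap)))  # [(131072, 262144), (524288, 131072)]
```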
3634f005f1 Fix upgrading from pre-0.7.1 systemd units O_o 2025-03-14 01:37:16 +03:00
263a3b5ad6 Rename allocator to allocator_t 2025-03-13 00:53:34 +03:00
b760951aa7 Add seccomp=unconfined for vitastor docker OSDs to not break io_uring 2025-03-11 00:42:10 +03:00
c8321b8ed1 Add udev and systemd to vitastor docker image 2025-03-11 00:40:39 +03:00
21066a095b Fix a memory leak with enabled immediate_commit and write-back cache
All checks were successful
Remove dirty buffers after writing when immediate_commit is on, instead
of keeping them around to repeat the write later
2025-03-11 00:40:18 +03:00
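A simplified model of the fix (not the client's real cache structure): with immediate_commit, a completed write is already durable, so its buffer must be dropped instead of being kept for a future flush, and keeping it is exactly what leaked.

```python
class WritebackCache:
    def __init__(self, immediate_commit: bool):
        self.immediate_commit = immediate_commit
        self.dirty: dict[int, bytes] = {}    # offset -> buffer awaiting flush

    def write_done(self, offset: int, buf: bytes):
        if self.immediate_commit:
            self.dirty.pop(offset, None)     # already durable — free the buffer
        else:
            self.dirty[offset] = buf         # keep until the next explicit flush
```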
a96900b696 Explicitly destroy Nan::Persistents, otherwise it leaks memory 2025-03-09 16:45:10 +03:00
8a6e461322 Fix license (VNPL 1.1, not 2.0) 2025-03-08 17:17:23 +03:00
0b6a0463a4 Save a reference to the buffer during write 2025-03-08 16:00:26 +03:00
35d4047f46 Fix vitastor-nbd crash when mapping by ID instead of inode name
All checks were successful
2025-03-08 15:52:57 +03:00
819f1125ae Support used_for_app instead of used_for_fs
All checks were successful
2025-03-07 01:03:43 +03:00
108df7329f Fix PG object count statistics on deletion of non-existing objects
All checks were successful
2025-03-04 00:40:56 +03:00
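A sketch of the counting rule this fix establishes: only adjust the PG's object counter when a delete actually removed something.

```python
objects = {"oid1": b"data"}
object_count = len(objects)

def delete(oid: str):
    global object_count
    if objects.pop(oid, None) is not None:
        object_count -= 1   # fixed: decrement only for objects that existed
    # the buggy version decremented unconditionally, so deleting
    # non-existing objects made the statistics drift

delete("oid1")
delete("missing")           # no effect on the counter
print(object_count)         # 0
```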
d32edf6cdf Fix deletion writeback 2025-03-04 00:40:35 +03:00
dca436d7e6 Trigger event loop automatically in libvitastor_c
All checks were successful
2025-03-03 00:57:09 +03:00
8129a0b4e3 Loop once after registering eventfd to prevent skipping previous events 2025-03-03 00:57:00 +03:00
704c87d512 Trigger initial epoll when adding an FD 2025-03-03 00:56:17 +03:00
10216a5fb5 Build node.js addon as a Debian package 2025-03-02 18:04:56 +03:00
3932eb7ff6 Trigger event loop once after each vitastor_c_* call
All checks were successful
Test / test_rebalance_verify_ec_imm (push) Successful in 1m42s
Test / test_rebalance_verify_ec (push) Successful in 1m54s
Test / test_write_no_same (push) Successful in 12s
Test / test_switch_primary (push) Successful in 35s
Test / test_write (push) Successful in 34s
Test / test_write_xor (push) Successful in 38s
Test / test_heal_pg_size_2 (push) Successful in 2m20s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m20s
Test / test_heal_csum_32k_dmj (push) Successful in 2m21s
Test / test_heal_csum_32k_dj (push) Successful in 2m28s
Test / test_heal_csum_32k (push) Successful in 2m21s
Test / test_heal_csum_4k_dmj (push) Successful in 2m27s
Test / test_heal_csum_4k_dj (push) Successful in 2m25s
Test / test_resize (push) Successful in 20s
Test / test_resize_auto (push) Successful in 12s
Test / test_snapshot_pool2 (push) Successful in 19s
Test / test_osd_tags (push) Successful in 16s
Test / test_enospc (push) Successful in 15s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 15s
Test / test_enospc_imm_xor (push) Successful in 18s
Test / test_scrub (push) Successful in 20s
Test / test_scrub_zero_osd_2 (push) Successful in 21s
Test / test_scrub_xor (push) Successful in 18s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 20s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m20s
2025-03-02 01:23:41 +03:00
69cbe7bbb2 Release 1.11.0
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m47s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m35s
Test / test_write_no_same (push) Successful in 12s
Test / test_switch_primary (push) Successful in 38s
Test / test_write (push) Successful in 37s
Test / test_write_xor (push) Successful in 39s
Test / test_heal_pg_size_2 (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m21s
Test / test_heal_csum_32k_dmj (push) Successful in 2m29s
Test / test_heal_ec (push) Successful in 2m45s
Test / test_heal_csum_32k_dj (push) Successful in 2m30s
Test / test_heal_csum_32k (push) Successful in 2m29s
Test / test_heal_csum_4k_dmj (push) Successful in 2m33s
Test / test_resize (push) Successful in 19s
Test / test_heal_csum_4k_dj (push) Successful in 2m28s
Test / test_resize_auto (push) Successful in 13s
Test / test_osd_tags (push) Successful in 11s
Test / test_snapshot_pool2 (push) Successful in 18s
Test / test_enospc (push) Successful in 14s
Test / test_enospc_xor (push) Successful in 16s
Test / test_enospc_imm (push) Successful in 15s
Test / test_enospc_imm_xor (push) Successful in 17s
Test / test_scrub (push) Successful in 17s
Test / test_scrub_zero_osd_2 (push) Successful in 17s
Test / test_scrub_xor (push) Successful in 17s
Test / test_scrub_pg_size_3 (push) Successful in 19s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 20s
Test / test_scrub_ec (push) Successful in 21s
Test / test_nfs (push) Successful in 16s
Test / test_heal_csum_4k (push) Successful in 2m27s
New features:

- Support containerized Vitastor installations: http://vitastor.io/docs/installation/docker.html
- Add new functions to the node.js binding: delete(), get_immediate_commit(), on_ready(),
  get_min_io_size(), get_max_atomic_write_size()
- S3 (Zenko Cloudserver with Vitastor support) is coming shortly and will be released separately

Bug fixes:

- Use IP-derived etcd node names in make-etcd
- Set short name of the OSD process to display in `top`
- Fix snap-create without pool_id failing when there are multiple pools
- Several bugs were fixed in the write-back cache; it should now be stable:
  - Fix incorrect snapshot reads from dirty write-back cache
  - Do not try to repeat pending writebacks on OSD reconnections
  - Fix client hangs with multiple SYNCs in the writeback queue
  - Fix client hangs due to incorrect calculation of the writeback queue size
- Several improvements for NBD mapping/unmapping:
  - Add a workaround for race condition in the Linux kernel NBD driver leading
    to vitastor-nbd sometimes breaking a previously mapped device instead of
    setting up a new one
  - Check if the device is actually mapped in vitastor-nbd unmap
  - Fix device name/number validation in vitastor-nbd
- Fix OSD crashes after starting with corrupted metadata - from now on, the OSD will skip
  corrupted metadata entries and heal itself
- Fix scrubbing of misplaced objects and object state recalculation after
  vitastor-cli fix - previously, an OSD restart could be required to fix object states
- Make primary OSD distribution more stable by using murmur3 hash instead of the old pseudo-rng
- Fix monitor sometimes racing with itself - do not touch /pool/stats from stats
  aggregation if PG recheck is active
- Sort vitastor-cli ls output by name by default
- Update antietcd to 1.1.2
2025-03-01 13:39:42 +03:00
4950a1636c Allow "infinite" startup for clients if explicitly requested 2025-03-01 13:39:42 +03:00
2eb20dff28 Do not crash on io_uring initialization failure in node-vitastor 2025-03-01 13:29:48 +03:00
59f0b0427c Support containerized Vitastor installations
All checks were successful
Test / test_dd (push) Successful in 17s
Test / test_rebalance_verify_ec (push) Successful in 1m44s
Test / test_write_no_same (push) Successful in 11s
Test / test_write (push) Successful in 34s
Test / test_switch_primary (push) Successful in 37s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m19s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m21s
Test / test_heal_csum_32k_dmj (push) Successful in 2m23s
Test / test_heal_csum_32k_dj (push) Successful in 2m22s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m21s
Test / test_heal_csum_4k_dj (push) Successful in 2m22s
Test / test_resize (push) Successful in 16s
Test / test_resize_auto (push) Successful in 13s
Test / test_osd_tags (push) Successful in 12s
Test / test_snapshot_pool2 (push) Successful in 19s
Test / test_enospc (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 15s
Test / test_enospc_xor (push) Successful in 17s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 17s
Test / test_scrub_zero_osd_2 (push) Successful in 18s
Test / test_scrub_xor (push) Successful in 19s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 21s
Test / test_scrub_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 2m21s
2025-02-27 20:06:15 +03:00
124162ad38 Use IP-derived etcd node names in make-etcd 2025-02-26 11:54:37 +03:00
391c92af1a Set OSD process name 2025-02-26 11:54:37 +03:00
c3d8fdd855 Fix snap-create without pool_id ID generation with multiple pools
All checks were successful
Test / test_dd (push) Successful in 17s
Test / test_rebalance_verify_ec (push) Successful in 1m40s
Test / test_write_no_same (push) Successful in 11s
Test / test_write (push) Successful in 34s
Test / test_switch_primary (push) Successful in 37s
Test / test_write_xor (push) Successful in 39s
Test / test_heal_pg_size_2 (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m21s
Test / test_heal_ec (push) Successful in 2m30s
Test / test_heal_csum_32k_dmj (push) Successful in 2m23s
Test / test_heal_csum_32k_dj (push) Successful in 2m29s
Test / test_heal_csum_32k (push) Successful in 2m21s
Test / test_heal_csum_4k_dmj (push) Successful in 2m21s
Test / test_heal_csum_4k_dj (push) Successful in 2m22s
Test / test_resize (push) Successful in 15s
Test / test_resize_auto (push) Successful in 12s
Test / test_osd_tags (push) Successful in 11s
Test / test_snapshot_pool2 (push) Successful in 20s
Test / test_enospc (push) Successful in 14s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 15s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 19s
Test / test_scrub_xor (push) Successful in 19s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 20s
Test / test_scrub_ec (push) Successful in 20s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 2m20s
2025-02-26 11:54:28 +03:00
9ccf3af97b Add qemu-block-extra and qemu-utils 2025-02-23 15:08:16 +03:00
568a209f0d Update docker image to debian bookworm 2025-02-23 13:27:32 +03:00
b151013201 Fix snapshot reads from a dirty write-back cache
All checks were successful
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify_ec (push) Successful in 1m46s
Test / test_write_no_same (push) Successful in 13s
Test / test_write (push) Successful in 34s
Test / test_switch_primary (push) Successful in 37s
Test / test_write_xor (push) Successful in 38s
Test / test_heal_pg_size_2 (push) Successful in 2m18s
Test / test_heal_ec (push) Successful in 2m20s
Test / test_heal_antietcd (push) Successful in 2m21s
Test / test_heal_csum_32k_dmj (push) Successful in 2m21s
Test / test_heal_csum_32k_dj (push) Successful in 2m22s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize_auto (push) Successful in 11s
Test / test_resize (push) Successful in 17s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_osd_tags (push) Successful in 11s
Test / test_enospc (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 16s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 18s
Test / test_scrub_xor (push) Successful in 17s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 19s
Test / test_scrub_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 16s
Test / test_heal_csum_4k (push) Successful in 2m19s
2025-02-23 02:31:19 +03:00
4a763725fe Add free() to bindiff.c 2025-02-22 16:52:19 +03:00
b8d83cd7f4 No, it's not a good idea to destroy client in the child nbd process
Some checks failed
Test / test_rebalance_verify_ec_imm (push) Failing after 59s
Test / test_write_no_same (push) Successful in 13s
Test / test_switch_primary (push) Successful in 35s
Test / test_write (push) Successful in 35s
Test / test_write_xor (push) Successful in 38s
Test / test_heal_pg_size_2 (push) Successful in 2m19s
Test / test_heal_ec (push) Successful in 2m21s
Test / test_heal_antietcd (push) Successful in 2m20s
Test / test_heal_csum_32k_dmj (push) Successful in 2m21s
Test / test_heal_csum_32k_dj (push) Successful in 2m22s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m20s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_resize_auto (push) Successful in 12s
Test / test_resize (push) Successful in 18s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 19s
Test / test_enospc (push) Successful in 15s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 15s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 18s
Test / test_scrub_xor (push) Successful in 21s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 20s
Test / test_scrub_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 2m21s
Test / test_rebalance_verify (push) Failing after 52s
Should probably have been an obvious side effect :-)

The child process gets open file descriptors to the parent's epoll/timerfd,
and it's totally OK to just close() all of them, but it's absolutely NOT
OK to run destructors - they modify the kernel state of the epoll/timerfd
objects before destroying them. So, basically, when we destroy the client
in the child process, we break it in the parent too. This also means that
cluster_client_t doesn't support fork(). :-)
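
To illustrate: a minimal standalone sketch (hypothetical demo code, not taken
from Vitastor) of why running destructors in the child breaks the parent. The
epoll instance is a single kernel object shared by both processes after fork(),
so an EPOLL_CTL_DEL issued in the child also removes the registration the
parent still depends on:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/epoll.h>
    #include <sys/eventfd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int epfd = epoll_create1(0);
        int efd = eventfd(0, 0);
        struct epoll_event ev = { .events = EPOLLIN, .data = { .fd = efd } };
        epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev);
        pid_t pid = fork();
        if (pid == 0)
        {
            // What a destructor effectively does: it mutates the shared
            // kernel-side epoll state, not just the child's FD table.
            epoll_ctl(epfd, EPOLL_CTL_DEL, efd, NULL);
            // Simply close()-ing the inherited FDs would be harmless.
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        eventfd_write(efd, 1);
        struct epoll_event out;
        // The registration is gone, so this times out and prints 0
        // even though the eventfd is readable.
        printf("epoll_wait returned %d\n", epoll_wait(epfd, &out, 1, 1000));
        return 0;
    }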
2025-02-22 15:10:27 +03:00
2e9ee2fe20 Do not try to repeat pending writebacks
Some checks reported warnings
Test / test_snapshot (push) Has been cancelled
Test / test_snapshot_ec (push) Has been cancelled
Test / test_minsize_1 (push) Has been cancelled
Test / test_move_reappear (push) Has been cancelled
Test / test_rm (push) Has been cancelled
Test / test_rm_degraded (push) Has been cancelled
Test / test_snapshot_chain (push) Has been cancelled
Test / test_snapshot_chain_ec (push) Has been cancelled
Test / test_snapshot_down (push) Has been cancelled
Test / test_snapshot_down_ec (push) Has been cancelled
Test / test_splitbrain (push) Has been cancelled
Test / test_rebalance_verify (push) Has been cancelled
Test / test_rebalance_verify_imm (push) Has been cancelled
Test / test_rebalance_verify_ec (push) Has been cancelled
Test / test_rebalance_verify_ec_imm (push) Has been cancelled
Test / test_dd (push) Has been cancelled
Test / test_root_node (push) Has been cancelled
Test / test_switch_primary (push) Has been cancelled
Test / test_write (push) Has been cancelled
Test / test_write_xor (push) Has been cancelled
Test / test_write_no_same (push) Has been cancelled
Test / test_heal_pg_size_2 (push) Has been cancelled
Test / test_heal_ec (push) Has been cancelled
Test / test_heal_antietcd (push) Has been cancelled
Test / test_heal_csum_32k_dmj (push) Has been cancelled
Test / test_heal_csum_32k_dj (push) Has been cancelled
Test / test_heal_csum_32k (push) Has been cancelled
Test / test_heal_csum_4k_dmj (push) Has been cancelled
Test / test_heal_csum_4k_dj (push) Has been cancelled
Test / test_heal_csum_4k (push) Has been cancelled
2025-02-22 14:16:44 +03:00
508ae852e4 Fix trap in test_rebalance_verify
Some checks failed
Test / test_dd (push) Successful in 16s
Test / test_switch_primary (push) Successful in 35s
Test / test_write (push) Successful in 33s
Test / test_write_no_same (push) Successful in 11s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m20s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m20s
Test / test_rebalance_verify_ec_imm (push) Failing after 6m26s
Test / test_heal_csum_32k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k (push) Successful in 2m22s
Test / test_heal_csum_32k_dj (push) Successful in 2m25s
Test / test_heal_csum_4k_dmj (push) Successful in 2m21s
Test / test_resize_auto (push) Successful in 12s
Test / test_resize (push) Successful in 16s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 15s
Test / test_enospc_xor (push) Successful in 16s
Test / test_enospc_imm (push) Successful in 15s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 18s
Test / test_scrub_pg_size_3 (push) Successful in 19s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 19s
Test / test_scrub_ec (push) Successful in 20s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k_dj (push) Successful in 2m19s
Test / test_heal_csum_4k (push) Successful in 2m21s
2025-02-22 02:18:41 +03:00
97ee400505 Add a workaround for race condition in the Linux kernel NBD driver
Some checks failed
Test / test_switch_primary (push) Successful in 37s
Test / test_write_no_same (push) Successful in 11s
Test / test_write_xor (push) Successful in 36s
Test / test_rebalance_verify_ec_imm (push) Failing after 6m22s
Test / test_rebalance_verify_ec (push) Failing after 6m32s
Test / test_heal_pg_size_2 (push) Successful in 2m19s
Test / test_heal_ec (push) Successful in 2m20s
Test / test_heal_antietcd (push) Successful in 2m22s
Test / test_heal_csum_32k_dmj (push) Successful in 2m22s
Test / test_heal_csum_32k_dj (push) Successful in 2m24s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_resize (push) Successful in 15s
Test / test_resize_auto (push) Successful in 12s
Test / test_snapshot_pool2 (push) Successful in 17s
Test / test_osd_tags (push) Successful in 12s
Test / test_enospc (push) Successful in 15s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 15s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_heal_csum_4k_dmj (push) Successful in 2m26s
Test / test_heal_csum_4k (push) Successful in 2m23s
Test / test_enospc_imm_xor (push) Successful in 17s
Test / test_scrub (push) Successful in 17s
Test / test_scrub_zero_osd_2 (push) Successful in 17s
Test / test_scrub_xor (push) Successful in 16s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_nfs (push) Successful in 14s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 20s
Test / test_scrub_ec (push) Successful in 19s
Test / test_rebalance_verify (push) Failing after 18s
Do all NBD configuration in the child process, after the last fork.
Why? It's needed because there is a race condition in the Linux kernel nbd driver
in nbd_add_socket() - it saves the `current` task pointer as `nbd->task_setup` and
then rechecks if the new `current` is the same. The problem is that if that process
is already dead, `current` may be freed and then replaced by another process
with the same pointer value. The check then passes and NBD allows a different process
to set up a device which is already set up. A proper fix would have to be made in the
kernel code, but the obvious workaround is to perform NBD setup from the process
which will then actually call NBD_DO_IT. That process stays alive for the whole
lifetime of the NBD device, so the (nbd->task_setup != current) check always
works correctly and we don't accidentally break previously mapped NBD devices while
setting up a new one. Forking for every candidate device is of course rather slow,
so we also do an additional check by calling list_mapped() before searching for a
free NBD device.
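
For illustration, a condensed sketch of that flow (hypothetical, not the actual
vitastor-nbd code) using only the standard <linux/nbd.h> ioctls; error handling
is omitted. Every configuration ioctl runs in the same process that then blocks
inside NBD_DO_IT, so the task recorded by nbd_add_socket() stays alive for the
whole lifetime of the mapping:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/nbd.h>

    // Map <dev> to an already-connected socket end <sock>.
    // All ioctls happen after the last fork(), in the process that
    // will stay alive inside NBD_DO_IT until the device is unmapped.
    static void map_nbd(const char *dev, int sock,
        unsigned long size_bytes, unsigned long blksize)
    {
        if (fork() != 0)
            return; // parent continues; the child owns the device
        int nbd = open(dev, O_RDWR);
        ioctl(nbd, NBD_SET_BLKSIZE, blksize);
        ioctl(nbd, NBD_SET_SIZE_BLOCKS, size_bytes / blksize);
        ioctl(nbd, NBD_SET_SOCK, sock);
        // Blocks until NBD_DISCONNECT/NBD_CLEAR_SOCK, so the `current`
        // saved by the kernel's nbd_add_socket() never dangles.
        ioctl(nbd, NBD_DO_IT);
        ioctl(nbd, NBD_CLEAR_QUE);
        ioctl(nbd, NBD_CLEAR_SOCK);
        _exit(0);
    }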
2025-02-21 13:17:37 +03:00
5ee4894fab Check if mapped in vitastor-nbd unmap
Some checks failed
Test / test_rebalance_verify_imm (push) Successful in 1m41s
Test / test_write (push) Successful in 33s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m46s
Test / test_write_no_same (push) Successful in 11s
Test / test_write_xor (push) Successful in 37s
Test / test_rebalance_verify_ec (push) Failing after 3m31s
Test / test_heal_pg_size_2 (push) Successful in 2m18s
Test / test_heal_ec (push) Successful in 2m22s
Test / test_heal_antietcd (push) Successful in 2m28s
Test / test_heal_csum_32k_dmj (push) Successful in 2m21s
Test / test_heal_csum_32k_dj (push) Successful in 2m24s
Test / test_heal_csum_32k (push) Successful in 2m28s
Test / test_heal_csum_4k_dmj (push) Successful in 2m20s
Test / test_resize (push) Successful in 16s
Test / test_resize_auto (push) Successful in 12s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 18s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 16s
Test / test_enospc_imm (push) Successful in 15s
Test / test_enospc_imm_xor (push) Successful in 18s
Test / test_scrub (push) Successful in 16s
Test / test_heal_csum_4k_dj (push) Successful in 2m18s
Test / test_scrub_zero_osd_2 (push) Successful in 17s
Test / test_scrub_xor (push) Successful in 18s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 19s
Test / test_nfs (push) Successful in 14s
Test / test_scrub_ec (push) Successful in 18s
Test / test_heal_csum_4k (push) Successful in 2m20s
2025-02-21 01:28:06 +03:00
125dcafb11 Prevent OSD crashes when metadata is corrupted
All checks were successful
Test / test_write_no_same (push) Successful in 13s
Test / test_rebalance_verify_imm (push) Successful in 1m37s
Test / test_write_xor (push) Successful in 40s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m44s
Test / test_heal_pg_size_2 (push) Successful in 2m19s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m20s
Test / test_heal_csum_32k_dmj (push) Successful in 2m22s
Test / test_heal_csum_32k_dj (push) Successful in 2m27s
Test / test_heal_csum_32k (push) Successful in 2m26s
Test / test_heal_csum_4k_dmj (push) Successful in 2m30s
Test / test_resize (push) Successful in 22s
Test / test_heal_csum_4k_dj (push) Successful in 2m30s
Test / test_resize_auto (push) Successful in 13s
Test / test_osd_tags (push) Successful in 11s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 15s
Test / test_enospc_xor (push) Successful in 16s
Test / test_enospc_imm (push) Successful in 16s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 19s
Test / test_scrub_zero_osd_2 (push) Successful in 17s
Test / test_scrub_xor (push) Successful in 17s
Test / test_scrub_pg_size_3 (push) Successful in 25s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 27s
Test / test_scrub_ec (push) Successful in 23s
Test / test_nfs (push) Successful in 16s
Test / test_heal_csum_4k (push) Successful in 2m29s
Test / test_rebalance_verify (push) Successful in 2m4s
Test / test_rebalance_verify_ec (push) Successful in 2m21s
2025-02-20 02:19:32 +03:00
9f44cf71df Fix device name/number validation in vitastor-nbd
All checks were successful
Test / test_rebalance_verify (push) Successful in 1m44s
Test / test_rebalance_verify_imm (push) Successful in 1m44s
Test / test_write_no_same (push) Successful in 11s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m51s
Test / test_write (push) Successful in 35s
Test / test_write_xor (push) Successful in 38s
Test / test_heal_pg_size_2 (push) Successful in 2m19s
Test / test_heal_ec (push) Successful in 2m20s
Test / test_heal_antietcd (push) Successful in 2m21s
Test / test_heal_csum_32k_dmj (push) Successful in 2m21s
Test / test_heal_csum_32k_dj (push) Successful in 2m20s
Test / test_heal_csum_32k (push) Successful in 2m21s
Test / test_heal_csum_4k_dmj (push) Successful in 2m21s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_resize_auto (push) Successful in 11s
Test / test_resize (push) Successful in 19s
Test / test_osd_tags (push) Successful in 12s
Test / test_snapshot_pool2 (push) Successful in 19s
Test / test_enospc (push) Successful in 14s
Test / test_enospc_xor (push) Successful in 16s
Test / test_enospc_imm (push) Successful in 16s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 18s
Test / test_scrub_zero_osd_2 (push) Successful in 18s
Test / test_scrub_xor (push) Successful in 19s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 20s
Test / test_scrub_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m20s
2025-02-20 01:33:11 +03:00
df3c63ca7f Sort vitastor-cli ls by name by default 2025-02-20 01:32:49 +03:00
be66edd09f Prevent infinite loops on syncs in writeback_overflow
All checks were successful
Test / test_dd (push) Successful in 17s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m54s
Test / test_write_no_same (push) Successful in 11s
Test / test_switch_primary (push) Successful in 36s
Test / test_write (push) Successful in 35s
Test / test_write_xor (push) Successful in 38s
Test / test_heal_pg_size_2 (push) Successful in 2m20s
Test / test_heal_ec (push) Successful in 2m20s
Test / test_heal_antietcd (push) Successful in 2m19s
Test / test_heal_csum_32k_dmj (push) Successful in 2m22s
Test / test_heal_csum_32k_dj (push) Successful in 2m26s
Test / test_heal_csum_32k (push) Successful in 2m23s
Test / test_heal_csum_4k_dmj (push) Successful in 2m21s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_resize_auto (push) Successful in 12s
Test / test_resize (push) Successful in 18s
Test / test_snapshot_pool2 (push) Successful in 18s
Test / test_osd_tags (push) Successful in 11s
Test / test_enospc (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 16s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 17s
Test / test_scrub_xor (push) Successful in 18s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 21s
Test / test_scrub_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 14s
Test / test_heal_csum_4k (push) Successful in 2m22s
2025-02-19 01:44:12 +03:00
ccbc0c5928 Add assert !writeback_bytes
All checks were successful
Test / test_rebalance_verify_imm (push) Successful in 2m26s
Test / test_write (push) Successful in 43s
Test / test_rebalance_verify_ec (push) Successful in 2m40s
Test / test_write_no_same (push) Successful in 18s
Test / test_rebalance_verify_ec_imm (push) Successful in 2m39s
Test / test_write_xor (push) Successful in 41s
Test / test_heal_pg_size_2 (push) Successful in 2m23s
Test / test_heal_ec (push) Successful in 2m24s
Test / test_heal_antietcd (push) Successful in 2m30s
Test / test_heal_csum_32k_dmj (push) Successful in 2m43s
Test / test_heal_csum_32k (push) Successful in 2m36s
Test / test_heal_csum_32k_dj (push) Successful in 2m48s
Test / test_heal_csum_4k_dmj (push) Successful in 2m33s
Test / test_resize_auto (push) Successful in 11s
Test / test_resize (push) Successful in 16s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_osd_tags (push) Successful in 11s
Test / test_enospc (push) Successful in 13s
Test / test_snapshot_pool2 (push) Successful in 18s
Test / test_enospc_xor (push) Successful in 17s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 17s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 17s
Test / test_scrub_xor (push) Successful in 19s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 19s
Test / test_scrub_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 14s
Test / test_heal_csum_4k (push) Successful in 2m28s
2025-02-19 01:15:46 +03:00
78ca4538bf Fix qemu docker build for ubuntu
All checks were successful
Test / test_root_node (push) Successful in 12s
Test / test_rebalance_verify_ec_imm (push) Successful in 2m37s
Test / test_write_no_same (push) Successful in 18s
Test / test_switch_primary (push) Successful in 40s
Test / test_write (push) Successful in 49s
Test / test_write_xor (push) Successful in 53s
Test / test_heal_pg_size_2 (push) Successful in 2m34s
Test / test_heal_ec (push) Successful in 2m31s
Test / test_heal_antietcd (push) Successful in 2m32s
Test / test_heal_csum_32k_dmj (push) Successful in 2m36s
Test / test_heal_csum_32k_dj (push) Successful in 2m47s
Test / test_heal_csum_32k (push) Successful in 2m39s
Test / test_heal_csum_4k_dmj (push) Successful in 2m41s
Test / test_heal_csum_4k_dj (push) Successful in 2m32s
Test / test_resize (push) Successful in 28s
Test / test_resize_auto (push) Successful in 22s
Test / test_snapshot_pool2 (push) Successful in 32s
Test / test_osd_tags (push) Successful in 17s
Test / test_enospc (push) Successful in 18s
Test / test_enospc_imm (push) Successful in 20s
Test / test_enospc_xor (push) Successful in 22s
Test / test_enospc_imm_xor (push) Successful in 29s
Test / test_scrub (push) Successful in 23s
Test / test_scrub_zero_osd_2 (push) Successful in 26s
Test / test_scrub_xor (push) Successful in 25s
Test / test_scrub_pg_size_3 (push) Successful in 26s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 24s
Test / test_nfs (push) Successful in 13s
Test / test_scrub_ec (push) Successful in 21s
Test / test_heal_csum_4k (push) Successful in 2m48s
2025-02-18 23:44:16 +03:00
86b5760ec1 Fix writeback incorrectly calculating queue size which was leading to client hangs
All checks were successful
Test / test_root_node (push) Successful in 13s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m52s
Test / test_write_no_same (push) Successful in 11s
Test / test_switch_primary (push) Successful in 36s
Test / test_write (push) Successful in 41s
Test / test_write_xor (push) Successful in 42s
Test / test_heal_pg_size_2 (push) Successful in 2m30s
Test / test_heal_ec (push) Successful in 2m22s
Test / test_heal_antietcd (push) Successful in 2m21s
Test / test_heal_csum_32k_dmj (push) Successful in 2m24s
Test / test_heal_csum_32k_dj (push) Successful in 2m27s
Test / test_heal_csum_32k (push) Successful in 2m27s
Test / test_heal_csum_4k_dmj (push) Successful in 2m43s
Test / test_resize (push) Successful in 24s
Test / test_heal_csum_4k_dj (push) Successful in 2m45s
Test / test_resize_auto (push) Successful in 12s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 14s
Test / test_enospc_xor (push) Successful in 16s
Test / test_enospc_imm (push) Successful in 17s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 17s
Test / test_scrub_xor (push) Successful in 20s
Test / test_scrub_pg_size_3 (push) Successful in 25s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 26s
Test / test_scrub_ec (push) Successful in 22s
Test / test_nfs (push) Successful in 18s
Test / test_heal_csum_4k (push) Successful in 2m40s
2025-02-18 23:42:55 +03:00
27f3803d2f Add vitastor_c_delete() and delete() to the node.js binding
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m56s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m55s
Test / test_write_no_same (push) Successful in 16s
Test / test_switch_primary (push) Successful in 35s
Test / test_write (push) Successful in 36s
Test / test_write_xor (push) Successful in 42s
Test / test_heal_pg_size_2 (push) Successful in 2m18s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m19s
Test / test_heal_csum_32k_dmj (push) Successful in 2m22s
Test / test_heal_csum_32k_dj (push) Successful in 2m20s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m19s
Test / test_heal_csum_4k_dj (push) Successful in 2m16s
Test / test_resize (push) Successful in 17s
Test / test_resize_auto (push) Successful in 12s
Test / test_osd_tags (push) Successful in 11s
Test / test_snapshot_pool2 (push) Successful in 17s
Test / test_enospc (push) Successful in 15s
Test / test_enospc_xor (push) Successful in 17s
Test / test_enospc_imm (push) Successful in 15s
Test / test_enospc_imm_xor (push) Successful in 17s
Test / test_scrub (push) Successful in 19s
Test / test_scrub_zero_osd_2 (push) Successful in 17s
Test / test_scrub_xor (push) Successful in 17s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 21s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 2m17s
2025-02-15 18:27:17 +03:00
2ead06e126 Add ubuntu jammy to docs 2025-02-12 15:32:35 +03:00
a5d5559f8e Add get_immediate_commit() to the node.js binding 2025-02-06 01:35:48 +03:00
e8e7ba8fde Add FIXME for CAS in non-immediate_commit mode 2025-02-06 01:35:48 +03:00
6fd831a299 Add on_ready(), get_min_io_size(), get_max_atomic_write_size() to the node.js binding 2025-02-06 01:35:48 +03:00
069808dfce Fix --config_path option in docs
All checks were successful
Test / test_dd (push) Successful in 16s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m43s
Test / test_write_no_same (push) Successful in 13s
Test / test_switch_primary (push) Successful in 36s
Test / test_write (push) Successful in 34s
Test / test_write_xor (push) Successful in 39s
Test / test_heal_pg_size_2 (push) Successful in 2m18s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m20s
Test / test_heal_csum_32k_dmj (push) Successful in 2m22s
Test / test_heal_csum_32k_dj (push) Successful in 2m21s
Test / test_heal_csum_4k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k (push) Successful in 2m25s
Test / test_heal_csum_4k_dj (push) Successful in 2m16s
Test / test_resize_auto (push) Successful in 11s
Test / test_resize (push) Successful in 16s
Test / test_snapshot_pool2 (push) Successful in 17s
Test / test_osd_tags (push) Successful in 11s
Test / test_enospc (push) Successful in 14s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 15s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 17s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 19s
Test / test_scrub_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 14s
Test / test_heal_csum_4k (push) Successful in 2m16s
2025-01-24 17:21:11 +03:00
bcefa42bc0 Scrub all chunks, not just 1 chunk per position
Some checks failed
Test / test_rebalance_verify_ec (push) Successful in 1m41s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m43s
Test / test_write_no_same (push) Successful in 11s
Test / test_write (push) Successful in 34s
Test / test_switch_primary (push) Successful in 36s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m18s
Test / test_heal_ec (push) Successful in 2m18s
Test / test_heal_antietcd (push) Successful in 2m20s
Test / test_heal_csum_32k_dmj (push) Successful in 2m21s
Test / test_heal_csum_32k_dj (push) Failing after 2m30s
Test / test_heal_csum_32k (push) Successful in 2m23s
Test / test_heal_csum_4k_dmj (push) Successful in 2m21s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_resize_auto (push) Successful in 11s
Test / test_resize (push) Successful in 15s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 19s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 16s
Test / test_enospc_imm_xor (push) Successful in 18s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 17s
Test / test_scrub_xor (push) Successful in 17s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 21s
Test / test_scrub_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 16s
Test / test_heal_csum_4k (push) Successful in 2m19s
2025-01-23 02:02:55 +03:00
4636e02d43 Remove scheme, pg_size, pg_data_size from op_data 2025-01-23 01:20:31 +03:00
e4c7d1c147 s/3/4/ 2025-01-23 01:20:31 +03:00
a4677f3e69 Mention P5530 2025-01-23 01:20:31 +03:00
7cbf207d65 Use murmur3 to select primary OSD instead of old pseudo-rng
Some checks failed
Test / test_rebalance_verify_ec (push) Successful in 1m39s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m39s
Test / test_write_no_same (push) Successful in 11s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 33s
Test / test_write_xor (push) Successful in 38s
Test / test_heal_pg_size_2 (push) Successful in 2m20s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m20s
Test / test_heal_csum_32k_dmj (push) Successful in 2m21s
Test / test_heal_csum_32k_dj (push) Successful in 2m20s
Test / test_heal_csum_32k (push) Successful in 2m19s
Test / test_heal_csum_4k_dmj (push) Successful in 2m20s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_resize_auto (push) Successful in 11s
Test / test_resize (push) Successful in 15s
Test / test_osd_tags (push) Successful in 9s
Test / test_snapshot_pool2 (push) Successful in 18s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 14s
Test / test_enospc_imm_xor (push) Successful in 17s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_xor (push) Failing after 17s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 2m20s
Test / test_scrub_zero_osd_2 (push) Failing after 19s
2025-01-18 12:28:54 +03:00
7c9711af20 Do not touch /pool/stats from stats aggregation if PG recheck is active
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m41s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m43s
Test / test_write_no_same (push) Successful in 12s
Test / test_switch_primary (push) Successful in 35s
Test / test_write (push) Successful in 34s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m20s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m19s
Test / test_heal_csum_32k_dmj (push) Successful in 2m23s
Test / test_heal_csum_32k_dj (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k (push) Successful in 2m24s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_resize_auto (push) Successful in 11s
Test / test_resize (push) Successful in 17s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 18s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 18s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 19s
Test / test_scrub_ec (push) Successful in 17s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m14s
2025-01-16 20:41:16 +03:00
33ef701464 Update antietcd to 1.1.2
All checks were successful
Test / test_root_node (push) Successful in 12s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m44s
Test / test_write_no_same (push) Successful in 12s
Test / test_write (push) Successful in 33s
Test / test_switch_primary (push) Successful in 35s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m20s
Test / test_heal_csum_32k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k_dj (push) Successful in 2m18s
Test / test_heal_csum_32k (push) Successful in 2m14s
Test / test_heal_csum_4k_dmj (push) Successful in 2m20s
Test / test_heal_csum_4k_dj (push) Successful in 2m22s
Test / test_resize (push) Successful in 16s
Test / test_resize_auto (push) Successful in 10s
Test / test_osd_tags (push) Successful in 11s
Test / test_snapshot_pool2 (push) Successful in 17s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 14s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 16s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 16s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m10s
2025-01-04 02:13:36 +03:00
61ededa230 Release 1.10.1
All checks were successful
Test / test_rebalance_verify_ec_imm (push) Successful in 1m47s
Test / test_write_no_same (push) Successful in 10s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 33s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m18s
Test / test_heal_ec (push) Successful in 2m20s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dmj (push) Successful in 2m23s
Test / test_heal_csum_32k_dj (push) Successful in 2m22s
Test / test_heal_csum_32k (push) Successful in 2m19s
Test / test_heal_csum_4k_dmj (push) Successful in 2m24s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_resize (push) Successful in 17s
Test / test_resize_auto (push) Successful in 11s
Test / test_osd_tags (push) Successful in 11s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 15s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 18s
Test / test_scrub_pg_size_3 (push) Successful in 19s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 17s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m13s
Test / test_etcd_fail_antietcd (push) Successful in 41s
New features:

- Add "deleted" image flag which is set when vitastor-cli rm starts to delete an image,
  but can't delete it fully due to inactive PGs or stopped OSDs
- Support JSON output in vitastor-disk prepare and purge
- Show backfillfull pools in vitastor-cli status
- Make object listings consistent (used in vitastor-cli rm/rm-data/merge/etc).
  This means that there is now a guarantee that if a data block is present when you invoke rm,
  rm will attempt to delete it, even if rm is invoked while a PG is switching state. Previously,
  in such cases rm could skip some objects and leave them behind as garbage, and merge could
  probably move data between snapshots incorrectly.
- Make deletions (rm/rm-data) consistent. This means that rm/rm-data will either complete
  successfully and delete all requested image data, or complete with an error if some objects
  could not be deleted or if some data may have been left on stopped OSDs.
  Previously, when some PGs or OSDs were inactive at the moment of deletion, rm-data
  behaved incorrectly: it wasn't retrying deletions that failed due to dropped OSD connections,
  it could hang waiting for PGs to activate, and it could return a success code while some
  garbage was possibly still left on some OSDs. Deletions are not yet fully atomic
  cluster-wide, which means that you still have to repeat the deletion request after you
  bring stopped OSDs back, but now you always know for sure whether you have to repeat it.

Bug fixes:

- Fix vitastor-cli rm --exact / --matching command not working
- Finally fix "Unexpected status" in the Proxmox plugin
- Fix vitastor-cli create-snap incorrectly linking multiple snapshots in a different pool
- Fix incomplete image parent_id loop check in OSD
- Fix reads from snapshots in a different pool not working if there are more than 2 snapshots
- Fix append of VITASTOR_CONF to cmdline in the opennebula prebackup script
- Fix OSDs crashing again when the cluster is full with EC (was meant to work since 1.6.0 but didn't)
- Improve logging of subop failures
2025-01-03 16:22:09 +03:00
d9d90d3183 Fix build for debian buster 2025-01-03 16:21:56 +03:00
9dbcdbcec9 Return left_on_dead OSD list in DELETE replies and use it in rm-data
All checks were successful
Test / test_rebalance_verify_ec_imm (push) Successful in 1m48s
Test / test_write_no_same (push) Successful in 10s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 37s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m26s
Test / test_heal_ec (push) Successful in 2m21s
Test / test_heal_antietcd (push) Successful in 2m21s
Test / test_heal_csum_32k_dmj (push) Successful in 2m32s
Test / test_heal_csum_32k_dj (push) Successful in 2m35s
Test / test_heal_csum_32k (push) Successful in 2m24s
Test / test_heal_csum_4k_dmj (push) Successful in 2m27s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize (push) Successful in 16s
Test / test_resize_auto (push) Successful in 11s
Test / test_osd_tags (push) Successful in 11s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_xor (push) Successful in 17s
Test / test_enospc_imm (push) Successful in 14s
Test / test_enospc_imm_xor (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub (push) Successful in 18s
Test / test_scrub_xor (push) Successful in 24s
Test / test_scrub_pg_size_3 (push) Successful in 23s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 26s
Test / test_scrub_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 14s
Test / test_heal_csum_4k (push) Successful in 2m29s
Test / test_etcd_fail_antietcd (push) Successful in 41s
2025-01-03 15:57:09 +03:00
a147f7e7dc Copy & repeat deletions too
All checks were successful
Test / test_root_node (push) Successful in 11s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m38s
Test / test_write_no_same (push) Successful in 10s
Test / test_switch_primary (push) Successful in 35s
Test / test_write (push) Successful in 34s
Test / test_write_xor (push) Successful in 38s
Test / test_heal_pg_size_2 (push) Successful in 2m18s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m19s
Test / test_heal_csum_32k_dmj (push) Successful in 2m23s
Test / test_heal_csum_32k_dj (push) Successful in 2m22s
Test / test_heal_csum_4k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k (push) Successful in 2m23s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 17s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 14s
Test / test_enospc_imm_xor (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_scrub (push) Successful in 18s
Test / test_scrub_xor (push) Successful in 16s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 2m18s
2025-01-03 00:21:52 +03:00
0e6bf66734 Add bindiff for tests
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m45s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m46s
Test / test_write_no_same (push) Successful in 10s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 33s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m18s
Test / test_heal_ec (push) Successful in 2m20s
Test / test_heal_antietcd (push) Successful in 2m20s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m21s
Test / test_heal_csum_4k_dmj (push) Successful in 2m14s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dj (push) Successful in 2m19s
Test / test_resize_auto (push) Successful in 11s
Test / test_resize (push) Successful in 17s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 17s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 12s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 16s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 16s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m15s
2025-01-02 19:59:04 +03:00
ab822d3050 Support consistent listings in client (rm-data, merge, etc.)
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m50s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m49s
Test / test_write_no_same (push) Successful in 12s
Test / test_write (push) Successful in 32s
Test / test_switch_primary (push) Successful in 35s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_antietcd (push) Successful in 2m19s
Test / test_heal_csum_32k_dmj (push) Successful in 2m21s
Test / test_heal_ec (push) Successful in 2m43s
Test / test_heal_csum_32k (push) Successful in 2m17s
Test / test_heal_csum_32k_dj (push) Successful in 2m32s
Test / test_heal_csum_4k_dmj (push) Successful in 2m20s
Test / test_resize (push) Successful in 16s
Test / test_resize_auto (push) Successful in 10s
Test / test_heal_csum_4k_dj (push) Successful in 2m19s
Test / test_osd_tags (push) Successful in 11s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 14s
Test / test_enospc_imm_xor (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_scrub (push) Successful in 18s
Test / test_scrub_xor (push) Successful in 16s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 16s
Test / test_nfs (push) Successful in 14s
Test / test_heal_csum_4k (push) Successful in 2m18s
2025-01-02 18:07:12 +03:00
d5366a0767 Support listings from primary OSDs (for consistent deletions) 2025-01-02 11:07:24 +03:00
40b8a8b0da Add wait_up_timeout support to cluster_client and use it in vitastor-cli rm-data & merge
All checks were successful
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m48s
Test / test_write_no_same (push) Successful in 11s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 33s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m19s
Test / test_heal_ec (push) Successful in 2m20s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k_dj (push) Successful in 2m21s
Test / test_heal_csum_32k (push) Successful in 2m23s
Test / test_heal_csum_4k_dmj (push) Successful in 2m21s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 15s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_osd_tags (push) Successful in 10s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 16s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 2m18s
2025-01-01 17:57:58 +03:00
5c5119aba4 Pass min_offset/max_offset to list_inode()
All checks were successful
Test / test_dd (push) Successful in 16s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m44s
Test / test_write_no_same (push) Successful in 10s
Test / test_write (push) Successful in 33s
Test / test_switch_primary (push) Successful in 36s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m19s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m19s
Test / test_heal_csum_32k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k_dj (push) Successful in 2m18s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m22s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 16s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 18s
Test / test_enospc (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 16s
Test / test_enospc_imm_xor (push) Successful in 15s
Test / test_scrub (push) Successful in 19s
Test / test_scrub_zero_osd_2 (push) Successful in 17s
Test / test_scrub_xor (push) Successful in 16s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m12s
2025-01-01 15:40:12 +03:00
4edda88903 Wait for OSDs to either connect or stop infinitely during listing, not for peer_connect_timeout
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m48s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m47s
Test / test_write_no_same (push) Successful in 11s
Test / test_switch_primary (push) Successful in 35s
Test / test_write (push) Successful in 35s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m20s
Test / test_heal_ec (push) Successful in 2m18s
Test / test_heal_antietcd (push) Successful in 2m20s
Test / test_heal_csum_32k_dmj (push) Successful in 2m21s
Test / test_heal_csum_32k_dj (push) Successful in 2m19s
Test / test_heal_csum_32k (push) Successful in 2m21s
Test / test_heal_csum_4k_dmj (push) Successful in 2m20s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 16s
Test / test_osd_tags (push) Successful in 11s
Test / test_snapshot_pool2 (push) Successful in 17s
Test / test_enospc (push) Successful in 14s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 15s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 16s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 2m26s
2025-01-01 15:29:42 +03:00
80dda3ca94 Remove separate list_inode_next() 2025-01-01 14:19:18 +03:00
c8decb32e8 Rename to client_wait_up_timeout 2025-01-01 11:26:57 +03:00
4995592e61 Retry listings on broken OSD connections 2025-01-01 11:14:36 +03:00
d9f9b0bca5 Start listings consistently with the current PG state, add wait_up_timeout
This still doesn't make listings 100% consistent; for fully consistent listings
we have to receive listings only from the primary OSD, not from all peer OSDs,
but that issue will be fixed separately.
2025-01-01 10:58:22 +03:00
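As an illustration of the listing changes in the commits above (wait for OSDs instead of failing on peer_connect_timeout, retry on broken connections, bound the wait with wait_up_timeout), here is a hypothetical Python sketch; connect_primary, list_pg and ListingError are made-up names for illustration, not Vitastor's actual API:

    import time

    class ListingError(Exception):
        pass

    def list_inode(pgs, connect_primary, list_pg, wait_up_timeout=None):
        # Illustrative sketch: list an inode PG by PG, waiting for down OSDs
        # (forever if wait_up_timeout is None) and retrying a PG's listing
        # from scratch if the OSD connection breaks in the middle.
        objects = []
        for pg in pgs:
            deadline = None if wait_up_timeout is None else time.monotonic() + wait_up_timeout
            while True:
                osd = connect_primary(pg)  # returns None while the PG's OSD is down
                if osd is None:
                    if deadline is not None and time.monotonic() > deadline:
                        raise ListingError("PG %s has no OSD up" % (pg,))
                    time.sleep(1)
                    continue
                try:
                    objects.extend(list_pg(osd, pg))
                    break  # this PG was listed successfully
                except ConnectionError:
                    continue  # connection broke mid-listing: retry this PG
        return objects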
d0396267d0 Clear retry_timeout when the client is destroyed 2025-01-01 10:58:22 +03:00
b46d5db115 Support JSON output in vitastor-disk prepare and purge 2024-12-29 15:19:44 +03:00
ecd92655fe Fix rm --exact / --matching not removing one uppermost image in each chain 2024-12-28 21:53:49 +03:00
383712148b Fix rm --exact / --matching not being invoked at all O_o 2024-12-28 21:47:00 +03:00
42d40153ff Do not intercept STDERR in Proxmox plugin (finally fixes "unexpected status"!) 2024-12-28 21:18:49 +03:00
561b36a4c1 Use revision from txn response header, not from put subresponse 2024-12-28 21:01:15 +03:00
685af019f5 Allow :: and 0.0.0.0 as local IPs in antietcd_adapter 2024-12-28 20:52:27 +03:00
a31592d131 Print sizes in "Auto-selecting" as "4K", not "4 K" 2024-12-28 19:15:23 +03:00
28b0a2597d Add a test for multiple snapshots in a second pool 2024-12-28 18:57:30 +03:00
de6b345473 Fix create-snap taking parent_pool from incorrect key parent_pool_id 2024-12-28 18:53:29 +03:00
8bf52d6e96 Fix inode parent_id loop check 2024-12-28 18:40:17 +03:00
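A parent_id loop check of the kind fixed in the commit above is plain cycle detection over the snapshot parent chain; a generic sketch of the idea, not the actual Vitastor code:

    def has_parent_loop(parents, start_id):
        # parents: {inode_id: parent_inode_id or 0}. Walk the chain from
        # start_id and report True if it ever revisits an inode.
        seen = set()
        cur = start_id
        while cur:
            if cur in seen:
                return True
            seen.add(cur)
            cur = parents.get(cur, 0)
        return False

    print(has_parent_loop({1: 2, 2: 3, 3: 1}, 1))  # True: 1 -> 2 -> 3 -> 1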
5623dca02c Fix vitastor client passing incorrect mod_revision for snapshotted images
This was leading to reads only working for the image itself and for its latest snapshot.
2024-12-28 16:01:35 +03:00
abdc207297 Fix append of VITASTOR_CONF to cmdline in the opennebula prebackup script 2024-12-28 13:33:24 +03:00
044e621b62 Add test_rm_degraded to CI 2024-12-27 18:31:58 +03:00
ba9aabf187 Return listing errors from list_inode_start(), abort merging and fail deletion on unsuccessful listings 2024-12-27 18:31:21 +03:00
5c890e4a12 Fix rm-data hanging when some OSDs are inactive, add a test for it
There's also another case which needs to be fixed: we shouldn't retry
deletions indefinitely if an OSD is stopped during deletion.
2024-12-27 16:29:33 +03:00
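The unbounded-retry case mentioned above is the usual fix-by-deadline pattern; a minimal generic sketch, where delete_chunk and the timing values are illustrative rather than Vitastor's API:

    import time

    def delete_with_deadline(delete_chunk, chunks, retry_interval=1.0, deadline_sec=300.0):
        # Retry failed deletions, but give up once the deadline passes
        # instead of retrying forever while an OSD stays down.
        start = time.monotonic()
        pending = list(chunks)
        while pending:
            if time.monotonic() - start > deadline_sec:
                raise TimeoutError("%d objects still undeleted after %ss"
                                   % (len(pending), deadline_sec))
            failed = []
            for chunk in pending:
                try:
                    delete_chunk(chunk)
                except ConnectionError:
                    failed.append(chunk)
            pending = failed
            if pending:
                time.sleep(retry_interval)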
0b0c2afbce Implement "deleted" flag 2024-12-27 01:18:55 +03:00
651c055bd9 Show backfillfull pools in vitastor-cli status 2024-12-26 12:17:47 +03:00
42eebfc1bd Fix OSDs still crashing when the cluster is full with EC
ENOSPC handling was introduced in 1.6.0, but it was not complete; now it is.

P.S. See also client_retry_enospc (true by default).
2024-12-26 01:56:33 +03:00
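client_retry_enospc is a real option named above (true by default); the retry-on-ENOSPC idea itself can be sketched generically like this, with every name other than the errno being illustrative:

    import errno
    import time

    def write_with_enospc_retry(do_write, retry_enospc=True, interval=1.0, max_tries=10):
        # If the cluster reports ENOSPC, optionally wait and retry: space can
        # reappear when deletions, merges or rebalance complete.
        for _ in range(max_tries):
            try:
                return do_write()
            except OSError as e:
                if e.errno != errno.ENOSPC or not retry_enospc:
                    raise
                time.sleep(interval)
        raise OSError(errno.ENOSPC, "still out of space after retries")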
cef98052f5 Improve logging of subop failures 2024-12-26 01:54:40 +03:00
7fbb04fdfa Release 1.10.0
New features:

- Implement basic VitastorFS support in [CSI](https://vitastor.io/docs/installation/kubernetes.html)
- Implement [NFS RDMA](https://vitastor.io/docs/usage/nfs.html#rdma) support
- Pause pool rebalance when monitor detects that it can lead to any OSD becoming full ([osd_backfillfull_ratio](https://vitastor.io/docs/config/monitor.html#osd_backfillfull_ratio))
- Auto-select correct [RDMA device and GID](https://vitastor.io/docs/config/network.html#rdma_device) based on osd_network and RoCEv2 priority
- Report slow ops in OSD stats in etcd and show them in vitastor-cli status

Bug fixes:

- Fix possibly incorrect linked list deserialization in NFS
- Fix possible crash in vitastor-nfs --block READDIR operation
- Map netlink after forking to show correct PID in vitastor-nbd ls
- Simplify and fix create-pool OSD count checks for the case of hosts split into sub-nodes
- Make monitor print "Waiting to become master" just once, not every 5s
- Take out_size from oimg if not specified in vitastor-cli dd
- Do not report OSDs with empty statistics as "full" in status
- Trigger double autosync when switching PG state to prevent leaving garbage in non-immediate_commit clusters
- Fix a lack of connection timeout for etcd websockets in OSD leading to slower etcd failover (~70s instead of ~10s)
- Fix a rare OSD crash during client disconnect
- Fix PGs sometimes sticking until OSD restart in the "has_unclean" state with EC pools
- Fix metadata partition zeroing in vitastor-disk prepare
- Add patches for qemu 9.1 and pve-qemu 9.0 and 9.1
- Fix libvirt 8 patch
2024-12-19 15:49:19 +03:00
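The osd_backfillfull_ratio feature in the notes above amounts to a capacity prediction before rebalance is allowed to proceed; a hypothetical sketch of such a check (the function, field names and the 0.99 default are assumptions for illustration, not Vitastor's monitor code):

    def rebalance_would_overfill(capacity, used, planned_moves, backfillfull_ratio=0.99):
        # planned_moves: {osd_id: net bytes the rebalance would add to that OSD}.
        # Returns True if any OSD is predicted to cross the ratio, in which
        # case the monitor pauses the rebalance instead of starting it.
        for osd, extra in planned_moves.items():
            if (used[osd] + extra) / capacity[osd] >= backfillfull_ratio:
                return True
        return False

    # A 100 GB OSD at 90 GB used that would receive 15 GB more -> pause
    print(rebalance_would_overfill({1: 100e9}, {1: 90e9}, {1: 15e9}))  # True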
63b85b6bfb Fix clang warnings/errors 2024-12-19 15:30:31 +03:00
2f5959e3fa Add pve-qemu 9.1 patch 2024-12-19 14:05:12 +03:00
a4a286ed95 Document NFS-RDMA 2024-12-19 14:05:12 +03:00
b8009bad5e Add librdmacm-dev to build dockerfile 2024-12-19 14:05:12 +03:00
9be3d27dc9 Document VitastorFS-based CSI 2024-12-19 13:06:47 +03:00
a19d2066c2 Document osd_backfillfull_ratio 2024-12-19 02:15:02 +03:00
2a8780b4b5 Add a note about slow ops 2024-12-19 02:02:37 +03:00
109f51a015 Implement basic VitastorFS support in CSI 2024-12-17 02:26:23 +03:00
8a86c123c3 Allow to auto-select and print the port 2024-12-14 16:55:13 +03:00
b856524e0c Workaround for Linux bug: return post_op_attr for NFS-RDMA READ3
Linux NFS RDMA transport has a stupid bug: when the reply doesn't contain
post_op_attr, the data gets offset by 84 bytes (the size of the attributes) and
the first 84 bytes are filled with what is probably random data.
2024-12-11 21:09:36 +03:00
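The 84-byte figure matches the XDR-encoded size of the NFSv3 fattr3 structure carried inside post_op_attr (RFC 1813), which is easy to verify by summing its fixed-size fields:

    # XDR sizes of the NFSv3 fattr3 fields (RFC 1813), in bytes
    fattr3_fields = {
        "type": 4, "mode": 4, "nlink": 4, "uid": 4, "gid": 4,
        "size": 8, "used": 8, "rdev": 8, "fsid": 8, "fileid": 8,
        "atime": 8, "mtime": 8, "ctime": 8,
    }
    assert sum(fattr3_fields.values()) == 84  # the offset seen when attrs are omitted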
ae3ca7451f Use per-connection RDMA device contexts 2024-12-11 21:09:36 +03:00
1dbbb0c3f8 Implement NFS RDMA support 2024-12-11 21:09:36 +03:00
64db31ec10 Fix slow op warning format 2024-12-11 21:09:36 +03:00
76470686b3 Fix possibly incorrect linked list deserialization in NFS 2024-12-08 02:54:13 +03:00
652ca631bb Fix possible crash in nfs_block readdir 2024-12-01 18:04:49 +03:00
2105f4b654 Add lost netlink daemonize 2024-11-27 17:13:30 +03:00
0d01573da3 Fix typos 2024-11-26 14:31:47 +03:00
d84b84f58d Fix new backfillfull feature, add more logs 2024-11-23 01:08:13 +03:00
8cfe705d7a Map netlink after forking to show correct PID in vitastor-nbd ls 2024-11-23 00:46:44 +03:00
66c9271cbd Radically simplify create-pool pg_size check 2024-11-22 01:44:14 +03:00
7b37ba921d Pause pool rebalance when monitor detects that it can lead to any OSD becoming full 2024-11-22 01:01:07 +03:00
262c581400 Fix create-pool for the case of hosts split into sub-nodes 2024-11-22 01:01:07 +03:00
ad3b6b7267 Add a note about GID and RDMA device auto-selection 2024-11-21 23:54:05 +03:00
1f6a061283 Move ibv_query_gid under #ifdef to only build it with libibverbs 32+ 2024-11-21 23:47:57 +03:00
fc4d97da10 Print "Waiting to become master" just once 2024-11-21 00:55:22 +03:00
c7a4ce7341 Take out_size from dd oimg if not specified
All checks were successful
2024-11-19 02:13:34 +03:00
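A brief, hedged illustration of the dd fix above: when `oimg=` is given without an explicit size, `vitastor-cli dd` now takes the output size from that image. Only `oimg` itself appears in this log; the `iimg=` parameter and the dd-style `bs=` option are assumptions made by analogy with classic dd:

```
# Hypothetical usage sketch, not authoritative: copy image "src" into
# image "dst"; with no size/count given, out_size is taken from "dst".
vitastor-cli dd iimg=src oimg=dst bs=1M
```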
ddea31d86d Auto-select first RDMA device only if RoCE is not found, add rocev2->rocev1->ib priority
All checks were successful
2024-11-19 01:54:00 +03:00
156d005412 Add serialize_overlap to test_heal
All checks were successful
2024-11-17 01:26:35 +03:00
7e076c7049 Do not report OSDs with empty statistics as full
All checks were successful
2024-11-16 23:36:16 +03:00
7de38250ad Auto-select RDMA device based on osd_network
All checks were successful
2024-11-16 18:38:57 +03:00
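For the two RDMA commits above, a minimal sketch of the relevant /etc/vitastor/vitastor.conf fragment, assuming the standard `osd_network` and `use_rdma` options; with `rdma_device` left unset, the OSD now auto-selects a device whose network matches `osd_network`, preferring RoCEv2, then RoCEv1, then plain InfiniBand:

```
{
  "osd_network": "10.200.0.0/24",
  "use_rdma": true
}
```

Explicitly setting `rdma_device` should still bypass the auto-selection.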
9c59d30e83 Report slow ops in OSD stats in etcd and show them in vitastor-cli status
All checks were successful
2024-11-16 15:11:16 +03:00
5db02cdf6e Add pve-qemu 9.0 patch 2024-11-16 11:20:47 +03:00
8202ee9d74 Trigger double autosync when switching PG state to prevent leaving garbage in non-immediate_commit clusters
All checks were successful
2024-11-15 01:26:36 +03:00
5864bd067c Add missing connection timeout for etcd websockets in OSD
All checks were successful
2024-11-12 02:28:07 +03:00
c312557ace Do not execute remaining operations if the client is stopped during read
All checks were successful
2024-11-10 16:44:13 +03:00
5ce20116d8 Postpone trigger_nearest to prevent timer callbacks called from setTimer/clearTimer
All checks were successful
2024-11-10 15:51:16 +03:00
be66791e59 Add another note about 1.8 upgrade 2024-11-09 00:57:58 +03:00
141cec2383 Add missing refcounting for flush_batch errors 2024-11-09 00:46:38 +03:00
1ce4b1b417 Fix stop condition in osd_flush
All checks were successful
Could probably lead to PGs stuck in peering states after an OSD restart in EC pools,
fixable by restarting the primary OSD
2024-11-08 00:30:40 +03:00
ebf24bac9a Fix partition zeroing during prepare
All checks were successful
Previously it zeroed the area starting at offset 0 instead of the actual metadata offset,
which led to non-zeroed metadata when the disk is very small
2024-11-08 00:14:37 +03:00
edd9051f81 Fix arch.en toc 2024-11-08 00:14:18 +03:00
662ca86dc0 Fix libvirt 8 patch 2024-11-07 12:21:32 +03:00
a1ca573168 Support QEMU 9.1
All checks were successful
2024-11-07 12:21:13 +03:00
f69f801ffb Release 1.9.3
All checks were successful
- Support custom hybrid OSD creation (`vitastor-disk prepare --hybrid --fast-devices /dev/xxx,/dev/yyy`; see the sketch after these notes)
- Auto-change partition paths to /dev/disk/by-partuuid/ in `vitastor-disk prepare`
- Allow selecting cached I/O in vitastor-disk commands
- Fix multiple bugs in vitastor-disk resize & add tests for them
- Fix vitastor-disk write-meta/write-journal in superblock-based mode writing it to an incorrect device
- Fix vitastor-disk prepare again sometimes failing to see new partitions
- Cleanup PG history and stats of deleted pools
- Fix "is already mounted" checks in CSI
2024-11-07 01:28:31 +03:00
af92cbdfcc Dynamic device size in test
All checks were successful
2024-11-06 14:16:58 +03:00
a775db10cc Also allow cached I/O in dsk.open_*() in disk_tool
Some checks failed
Test / test_resize_auto (push) Failing after 8s
2024-11-06 13:52:25 +03:00
eafce26049 Add resize and resize-auto tests
Some checks failed
Test / test_resize (push) Failing after 12s
Test / test_resize_auto (push) Failing after 9s
2024-11-06 13:30:51 +03:00
625c74294f Support direct I/O 2024-11-06 13:30:12 +03:00
ef8c21ad6f Change %lu to %ju 2024-11-06 02:58:51 +03:00
2bb8e8999e Do not check length in "data alignment mismatch" 2024-11-06 02:58:26 +03:00
c2e7c28672 Fix calc_lengths data size recalc during auto-resize
All checks were successful
2024-11-06 02:27:17 +03:00
bd22beefb5 Auto-extend new_data_len if new_data_offset is changed too
All checks were successful
2024-11-06 02:13:30 +03:00
e7038ab99c Auto-change partition paths to /dev/disk/by-partuuid/
Some checks failed
Test / test_heal_ec (push) Failing after 2m18s
2024-11-06 01:04:05 +03:00
b6f75ebcfd Add missing I/O path description in English 2024-11-06 00:43:17 +03:00
9def199981 Auto-reduce new_data_len in resize
All checks were successful
2024-11-05 02:57:11 +03:00
c72e8e649e Support test mode for vitastor-disk
All checks were successful
2024-11-05 02:43:55 +03:00
8bdb3e8786 Write meta/journal to correct device when used in superblock mode 2024-11-05 02:43:55 +03:00
a87e236c70 Fix resize --data-size, particularly when expanding the device
Some checks failed
Test / test_heal_csum_32k_dj (push) Failing after 2m24s
2024-11-04 18:55:03 +03:00
16f67cf6f1 Fix missing metadata checksums after resize
All checks were successful
2024-11-04 18:36:35 +03:00
56de4a520d Support custom hybrid OSD creation (--hybrid --fast-devices /dev/xxx,/dev/yyy)
All checks were successful
2024-11-04 17:52:29 +03:00
adca162278 Note that osd_per_disk is also incompatible
All checks were successful
2024-11-04 15:20:01 +03:00
490b314d72 Rework & fix new partition waiting code
All checks were successful
2024-11-04 15:16:30 +03:00
9f52074e1e Delete PG history and stats of deleted pools
All checks were successful
2024-11-01 02:38:31 +03:00
2b3e877546 Add notes about vitastor-disk in disable_data_fsync 2024-11-01 02:38:18 +03:00
01d55e5420 Merge pull request #64 from 0x00ace/fio_version_fix
use fio 3.35-1 for AlmaLinux 9
2024-10-31 11:55:40 +03:00
f5aa5cfdfe Fix "is already mounted" checks in CSI 2024-10-26 14:06:21 +03:00
2826bb9e7e Add more logging to CSI 2024-10-24 02:07:55 +03:00
30d1ad0f66 Add Intel D5-P4320 2024-10-22 23:22:48 +03:00
79719e44ac Release 1.9.2
All checks were successful
New features:
- Support resizing normal vitastor-disk partitions and moving journal/metadata: [vitastor-disk resize](https://vitastor.io/docs/usage/disk.html#resize) (see the sketch after these notes)
- Support simple forms of vitastor-disk {dump,write}-{meta,journal} for OSD partitions

Bug fixes:
- Fix block RWX volumes broken after introducing stage/unstage support
- Do not allow creating non-block RWX volumes in CSI
- Fix vitastor-disk prepare not seeing the newly created partition in rare cases
- Fix non-array tags not showing up in ls-osd/osd-tree
- Make OpenNebula oned.conf patching during installation smarter
- Fix iseek option in vitastor-cli dd not working
- Validate conv=, iflag=, oflag= options in vitastor-cli dd
- Fix vitastor-disk write-meta not writing header checksum to the disk
- Fix JSON format in vitastor-disk dump-meta
- Fix read_chain_bitmap not working for snapshot in another pool
- Fix a possible OSD crash during parallel read & write to an image with snapshots
- Several follow-ups to the READ_CHAIN_BITMAP fix: avoid data reads, fix a possible overflow in is_zero(), fix bitmap size
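A hedged sketch of the resize feature noted above; the linked docs are authoritative. `--data-size` is the only flag taken from this log (it appears in a later commit), and the partition path is a placeholder:

```
# Hypothetical example: change the data area of an existing OSD partition;
# other layout parameters are expected to come from the OSD superblock.
vitastor-disk resize /dev/disk/by-partuuid/PARTUUID --data-size 100G
```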
2024-10-20 01:49:13 +03:00
f5626655df Add new disk command docs 2024-10-20 01:47:46 +03:00
7e2dde2702 Fix block RWX volumes broken after introducing stage/unstage support 2024-10-19 11:56:56 +03:00
3b0ab317cf Validate non-block RWX in CSI 2024-10-18 01:55:38 +03:00
18eb99c494 Implement resizing partitions created with vitastor-disk 2024-10-18 01:55:19 +03:00
4e8a1a8895 Run partprobe in add_partition() if /dev/disk/by-partuuid symlink is not present 2024-10-12 18:07:53 +03:00
d27a8bdabc Make get_parent_device return full path 2024-10-12 13:44:52 +03:00
ebd616e42f Extract clear_osd_superblock() 2024-10-12 13:44:52 +03:00
b18d296e01 Extract check_existing_partition(), get_device_size() 2024-10-12 13:44:52 +03:00
a03508320e Move json_is_true/json_is_false to json_util.cpp
All checks were successful
2024-10-12 00:40:39 +03:00
c9ccc790ec Fix non-array tags not showing up in ls-osd/osd-tree
All checks were successful
2024-10-11 18:33:35 +03:00
db2d9c5b3d Fix tables in NFS doc 2024-10-08 00:20:10 +03:00
09f15f44c9 Fix Toshiba MG and VDUSE Debian kernel note in docs 2024-10-08 00:17:14 +03:00
c5a58c2e81 Support reading parameters automatically from the superblock in vitastor-disk {dump,write}-{meta,journal}
Some checks reported warnings
2024-10-07 02:21:58 +03:00
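A hedged illustration of the superblock-based mode above; the partition path is a placeholder. With a vitastor superblock present on the partition, offsets and block sizes no longer have to be passed on the command line:

```
# Hypothetical usage sketch: dump metadata of an OSD partition, with
# layout parameters read automatically from the vitastor superblock.
vitastor-disk dump-meta /dev/sdX2
```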
30e7c2ad1e Add custom OpenNebula oned.conf patcher (it uses a SHITTY configuration file format) 2024-10-06 13:46:05 +03:00
2e76ceabbe Fix iseek option in vitastor-cli dd 2024-10-05 18:25:38 +03:00
3df088c207 Validate conv=, iflag=, oflag= options in vitastor-cli dd 2024-10-05 18:02:36 +03:00
d882a19eab Fix vitastor-disk write-meta not writing header checksum to the disk... 2024-10-05 17:32:55 +03:00
702be3da7a Fix JSON format in vitastor-disk dump-meta 2024-10-05 16:08:34 +03:00
99533e1c2f Fix .yml links 2024-10-02 00:38:07 +03:00
a6cceb43bf Fix read_chain_bitmap not working for snapshot in another pool
All checks were successful
2024-10-02 00:24:48 +03:00
745d89459a Fix link, add title 2024-09-29 22:05:56 +03:00
48f023292d Fix extra data reads on read_chain
All checks were successful
2024-09-21 17:05:42 +03:00
b58bf3ada5 Fix possible OSD crash during parallel read & write to an image with snapshots
All checks were successful
OSDs could crash with the following "assertion failed" message (the crash didn't affect data
and was caused by the OSD thinking upper blocks were full while they weren't). Reproduction
without artificial delays is hard because you have to force the OSD to read an object with an
enqueued but not yet handled write which fills a previously non-full bitmap. O_o.

```
vitastor-osd: ./src/osd/osd_primary_chain.cpp:613: void osd_t::send_chained_read_results(pg_t&, osd_op_t*): Assertion `stripes[role].read_buf' failed.
```
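A toy model of the race behind that assertion (all names below are illustrative, not the real types from osd_primary_chain.cpp): the read-buffer allocation decision and the result sending must observe the same bitmap state, because a write handled in between can flip a previously non-full bitmap.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy stripe: a "full" flag derived from the object bitmap, plus a read
// buffer that is only allocated when data has to be read for this stripe.
struct toy_stripe_t
{
    bool full = false;
    std::vector<uint8_t> read_buf;
};

// Plan reads and send results against ONE snapshot of the bitmap state.
// If the second loop re-read the live bitmap instead, a write enqueued
// between the two loops could change "full" and trip the assertion above.
static void send_chained_read_results(std::vector<toy_stripe_t> & stripes)
{
    for (auto & s: stripes)
        if (!s.full)
            s.read_buf.resize(4096); // decision taken from the snapshot...
    for (auto & s: stripes)
        if (!s.full)
            assert(!s.read_buf.empty()); // ...must agree with the same snapshot
}
```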
2024-09-21 13:44:36 +03:00
f18a749324 READ_CHAIN fix was incomplete :-)
Some checks failed
2024-09-21 13:40:31 +03:00
6e9307c522 Fix possible overflow in is_zero() 2024-09-21 13:40:10 +03:00
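Such overflows usually come from an index type that is too narrow for the buffer length. A hedged sketch of an overflow-safe zero check (the real is_zero() may be organized differently):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Check that a buffer contains only zero bytes. The index is size_t so that
// large lengths cannot wrap a 32-bit counter.
static bool is_zero(const void *buf, size_t len)
{
    const uint8_t *b = (const uint8_t*)buf;
    size_t i = 0;
    for (; i+8 <= len; i += 8)
    {
        uint64_t v;
        memcpy(&v, b+i, 8); // memcpy avoids unaligned-access UB
        if (v != 0)
            return false;
    }
    for (; i < len; i++)
        if (b[i] != 0)
            return false;
    return true;
}
```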
99adbb9483 Release 1.9.1
All checks were successful
Hotfixes for OpenNebula and upgrade hotfix for 1.7

- Fix deploy.vitastor, save.vitastor and restore.vitastor scripts not working for nodes other than the master oned node
- Fix deploy.vitastor not working for VMs without Vitastor disks
- Disable clearing old PG configuration when upgrading from 1.7 or older versions (it was breaking old clients)
2024-09-14 19:17:30 +03:00
b489a611a9 Add 1.8 upgrade note 2024-09-14 19:17:30 +03:00
c6c0b8957a Stop updating old PG configuration when the user manually deletes it
All checks were successful
2024-09-14 19:15:40 +03:00
5d40d2a459 Fix oned.conf patch 2024-09-14 19:08:44 +03:00
f449c28c3b Always write decoded base64 deployment file (otherwise it breaks VMs without Vitastor disks) 2024-09-14 15:25:02 +03:00
a6274f58cc Same fix for save/restore: they also need to ssh to the target node 2024-09-14 02:46:48 +03:00
ac29ffea6a Add ssh to target node to deploy.vitastor - without it, it always tried to deploy VMs on the oned host 2024-09-14 02:15:24 +03:00
bc06acc153 Disable clearing old PG configuration - we cannot be sure that old clients do not need it
All checks were successful
2024-09-13 19:00:12 +03:00
fe8e611e23 Release 1.9.0
All checks were successful
- OpenNebula support! [Installation instructions](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/installation/opennebula.en.md)
- Added [vitastor-cli rm --exact|--matching](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/cli.en.md#rm) command
- Added [vitastor-cli dd](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/cli.en.md#dd) command - copy files between Vitastor images, files and pipes
- Add a startup timeout to vitastor-cli so it does not wait for etcd indefinitely
- Fix non-working OSD_OP_READ_CHAIN_BITMAP O_o
- Autodetect block_size/bitmap_granularity/immediate_commit when creating pools (see the sketch after this list)
- Do not allow creating multiple pools with the same name from vitastor-cli
- Fix skip_cache_check option not applied due to type issue (see github issue #70)
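A minimal sketch of how such autodetection could work, assuming (an assumption, not the documented algorithm) that the pool takes the largest block_size/bitmap_granularity reported by candidate OSDs and the weakest immediate_commit level any of them supports:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

enum immediate_commit_t { IC_NONE = 0, IC_SMALL = 1, IC_ALL = 2 };

struct osd_layout_t
{
    uint64_t block_size = 0, bitmap_granularity = 0;
    immediate_commit_t immediate_commit = IC_ALL;
};

// Derive pool defaults from the OSDs that would serve the pool.
static osd_layout_t autodetect_pool_params(const std::vector<osd_layout_t> & osds)
{
    osd_layout_t r;
    for (auto & osd: osds)
    {
        r.block_size = std::max(r.block_size, osd.block_size);
        r.bitmap_granularity = std::max(r.bitmap_granularity, osd.bitmap_granularity);
        // a pool can only promise what every OSD can deliver
        r.immediate_commit = std::min(r.immediate_commit, osd.immediate_commit);
    }
    return r;
}
```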
2024-09-06 01:46:16 +03:00
7636f9c726 Turn off brp-python-bytecompile in RPM specs 2024-09-06 01:44:44 +03:00
d5f7005ddd Add dd and rm --exact|--matching documentation 2024-09-05 02:22:05 +03:00
70d6fcd32a Add OpenNebula to README 2024-09-05 02:00:14 +03:00
17caaa59af vitastor-opennebula is probably more correct than opennebula-vitastor
All checks were successful
2024-09-05 01:44:16 +03:00
2dac6ee38b Fix OpenNebula reinstall
All checks were successful
2024-09-04 11:05:56 +03:00
8be67a2d5b Fix OpenNebula save/restore 2024-09-04 11:05:56 +03:00
9c2132882c Fix unaligned last block read/write in cli_dd 2024-09-04 11:05:56 +03:00
9f25bb059b Use just IMAGE_PREFIX, not IMAGE_PREFIX+"one" 2024-09-04 01:23:00 +03:00
ee3094c5e5 Add OpenNebula plugin docs 2024-09-04 01:22:39 +03:00
ba9f263b75 Add wildcard removal command
All checks were successful
2024-08-31 14:13:09 +03:00
30eaa1a8e6 Add vitastor-cli ls --exact 2024-08-31 02:36:25 +03:00
6a8daedbe2 rm --wildcard 2024-08-31 02:36:25 +03:00
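Wildcard matching here is plain glob-style matching. A self-contained sketch of a '*'-only matcher (illustrative; the exact matching rules of vitastor-cli are not reproduced from its source):

```cpp
#include <string>

// Match 's' against 'pat', where '*' matches any substring and all other
// characters match literally.
static bool wildcard_match(const std::string & pat, const std::string & s,
    size_t pi = 0, size_t si = 0)
{
    while (pi < pat.size() && pat[pi] != '*')
    {
        if (si >= s.size() || pat[pi] != s[si])
            return false;
        pi++, si++;
    }
    if (pi == pat.size())
        return si == s.size(); // pattern exhausted - string must be too
    for (size_t k = si; k <= s.size(); k++)
        if (wildcard_match(pat, s, pi+1, k)) // let '*' absorb s[si..k)
            return true;
    return false;
}

// wildcard_match("one/vm-*", "one/vm-123") == true
```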
2b96ac0b44 Implement OpenNebula driver 2024-08-30 23:46:37 +03:00
986cd11705 Implement CLI "dd" command - copy data between Vitastor images, files and pipes 2024-08-30 02:31:06 +03:00
b804051eaf Remove debug print in nbd-proxy 2024-08-30 02:31:06 +03:00
3cc326500e Fix non-working OSD_OP_READ_CHAIN_BITMAP O_o 2024-08-30 01:25:05 +03:00
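Conceptually, READ_CHAIN_BITMAP returns the allocation bitmap of an image merged over its whole snapshot chain. A simplified sketch of that merge (granularity and per-layer details omitted; not the actual OSD code):

```cpp
#include <cstddef>
#include <vector>

// A granule is visible as allocated if any layer in the chain, from the
// image itself down to the chain root, has it written.
static std::vector<bool> read_chain_bitmap(const std::vector<std::vector<bool>> & chain)
{
    std::vector<bool> merged(chain.at(0).size(), false);
    for (auto & layer: chain)
        for (size_t i = 0; i < merged.size(); i++)
            if (layer[i])
                merged[i] = true;
    return merged;
}
```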
f848c450a4 Clients should not wait forever for etcd to start if it's unavailable 2024-08-28 02:03:35 +03:00
4121c66281 Autodetect block_size/bitmap_granularity/immediate_commit when creating pools 2024-08-28 02:03:35 +03:00
b3716fbe23 Validate pool name when creating a pool 2024-08-28 02:03:35 +03:00
97f49d7d94 Fix #70 from github - skip_cache_check type issue
All checks were successful
2024-08-14 01:35:43 +03:00
131de4b790 Disable trace in header 2024-08-13 11:21:35 +03:00
ce359c5a69 Release 1.8.0
All checks were successful
Bugfix release - it would have been 1.7.2, but the etcd layout changes mandate calling it 1.8.0. :-)

- Change etcd layout: /config/pgs is now /pg/config, /pg/stats/* is now /pgstats/* (see the
  key translation sketch after this list). This is required to fix a rare PG history tracking
  issue caused by non-atomic delivery of etcd events sometimes resulting in `incomplete`
  objects in EC pools after mass OSD restarts. Upgrading can be performed freely, but
  downgrading requires an additional step: [1.8.0 to 1.7.1](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/admin.en.md#1-8-0-to-1-7-1)
- Fix a rare client hang on PG primary OSD switch
- Fix vitastor-nfs started via the mount command sometimes not stopping automatically after unmount
- Fix vitastor-nfs mounts started via the mount command sometimes hanging after daemonizing
- Fix merge/flatten into a pool with different object size (image migration between pools case)
- Do not print extra "PG disappeared after reload" verbose log messages for non-existing PGs
- Fix clustered Antietcd support and persistence filter
- Do not try to purge the same OSD multiple times if several of its devices are passed to purge
- Various node.js binding fixes
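The renames themselves are mechanical. A hedged sketch of the translation (the helper name is hypothetical, but the key names are the ones listed above) that an upgrade-compatible monitor has to apply when it meets old keys:

```cpp
#include <string>

// Map a pre-1.8.0 etcd key to its new name; other keys pass through.
static std::string translate_pg_key(const std::string & old_key)
{
    if (old_key == "/config/pgs")
        return "/pg/config";
    const std::string old_stats = "/pg/stats/";
    if (old_key.compare(0, old_stats.size(), old_stats) == 0)
        return "/pgstats/"+old_key.substr(old_stats.size());
    return old_key;
}
```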
2024-08-11 14:28:31 +03:00
521e867b10 Run check_exit also on deferred stop. Now vitastor-nfs should finally always stop on umount
All checks were successful
2024-08-11 00:05:20 +03:00
333c54ebbf Clean up clients correctly during stop(). This also affected #67, but could reproduce during normal operation too
All checks were successful
2024-08-11 00:00:13 +03:00
58d3da95c8 Fix github issue #67 by closing active NFS sockets before daemonize()
All checks were successful
2024-08-10 20:13:37 +03:00
4e90e752eb Fix merge/flatten into a pool with different object size
All checks were successful
2024-08-10 19:23:26 +03:00
09342d7189 node.js binding fixes 2024-08-05 00:10:37 +03:00
eb3e8b8c19 Do not print "PG disappeared after reload" verbose log messages when the PG was not present in the first place
All checks were successful
2024-08-04 01:42:05 +03:00
e2ca3ad99e Add a note about storage ID in proxmox storage config doc 2024-07-31 01:19:44 +03:00
dd4b0aed2b Support scattered write in node.js binding 2024-07-31 01:17:06 +03:00
42851a061c Always continue operations so that resuming is not missed after the PG primary goes away
Should fix spurious client hangs during PG primary switchover
2024-07-31 01:17:03 +03:00
8e0f242d30 Add downgrade docs 2024-07-31 01:15:37 +03:00
0daa8ea39b Support seamless upgrade to new PG config and stats etcd key names 2024-07-31 01:15:37 +03:00
b263d311ef Use separate watch revisions for different watchers 2024-07-31 01:15:37 +03:00
8720185780 Run tests in CI in memory (in tmpfs) 2024-07-31 01:15:37 +03:00
20584414d8 Report OSD version in /osd/state/ and /osd/stats/ (for the future) 2024-07-31 01:15:37 +03:00
306a3db7f3 Rename VERSION define to VITASTOR_VERSION 2024-07-31 01:15:37 +03:00
5b0aebada4 Rename /config/pgs to /pg/config and /pg/stats/* to /pgstats/* 2024-07-31 01:15:37 +03:00
d6f0b480c8 Fix broken link 2024-07-22 14:01:53 +03:00
f1f8531fd4 Make tests compatible with antietcd, add 2 antietcd tests to CI
All checks were successful
2024-07-20 02:16:38 +03:00
8d79d59964 Update antietcd to 1.1.0 2024-07-20 02:15:48 +03:00
551a209a50 Fix persistence filter initialization 2024-07-20 02:15:48 +03:00
06cafd7702 Do not merge the config an extra, unneeded time 2024-07-20 02:15:48 +03:00
3018352443 Fix clustered Antietcd support 2024-07-19 18:58:58 +03:00
f8edfb4a71 No need to check for PG intersection if a history set is smaller than EC data part count
All checks were successful
2024-07-18 19:29:05 +03:00
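The reasoning: an EC object needs at least pg_data_size distinct parts to be reconstructed, so a historical OSD set with fewer live members cannot hold a complete copy and can be skipped. A hedged illustration (names are mine; in Vitastor OSD sets, 0 conventionally marks a missing role):

```cpp
#include <cstdint>
#include <vector>

// A history OSD set may be skipped if it cannot possibly contain a complete
// object, i.e. if it has fewer live members than the EC data part count.
static bool may_contain_complete_data(const std::vector<uint64_t> & osd_set,
    uint64_t pg_data_size)
{
    uint64_t alive = 0;
    for (uint64_t osd_num: osd_set)
        if (osd_num != 0) // 0 = no OSD in this role
            alive++;
    return alive >= pg_data_size;
}
```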
8239ea2356 Do not try to purge the same OSD multiple times if several of its devices are passed to purge 2024-07-16 16:48:16 +03:00
e898335b8d Release 1.7.1
All checks were successful
Some stupid hotfixes for 1.7.0 :)

- Fix NFS mount
- Fix modify-osd
- Fix use_antietcd not being taken from /etc
2024-07-16 00:07:03 +03:00
e7869611fa Another stupid fix for NFS (no idea how it worked for me)
All checks were successful
2024-07-16 00:05:51 +03:00
e1c2500b60 Use modify-osd in the disk removal instruction 2024-07-16 00:01:42 +03:00
42cf3a11df Oops, fix reweight :)
Some checks reported warnings
2024-07-16 00:01:11 +03:00
4d9293f0e9 Fix QEMU 8.2 and 9.0 patches (add @location comments) 2024-07-15 16:30:14 +03:00
7a13f85ae2 Fix mon config merge
Some checks failed
2024-07-15 16:25:22 +03:00
fc219b8602 Add pg-list to docs 2024-07-15 13:29:22 +03:00
989d73f874 Release 1.7.0
Some checks failed
Omnidirectional release

New features:

- Support handling TCP I/O in simple separate io_uring-based [I/O threads](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/config/client.en.md#client_iothread_count) - may increase linear performance to 7-8 GB/s
- Experimental internal etcd replacement - [antietcd](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/config/monitor.en.md#use_antietcd)
- Monitor now has a [built-in Prometheus exporter](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/config/monitor.en.md#enable_prometheus)
- Added a reference [Grafana dashboard](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/mon/scripts/Vitastor-Grafana-6+.json)
- Implement vitastor-cli [osd-tree](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/cli.en.md#osd-tree) and [ls-osd](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/cli.en.md#ls-osd) commands
- Implement vitastor-cli [modify-osd](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/cli.en.md#modify-osd) command
- Implement vitastor-cli [pg-list](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/cli.en.md#pg-list) command
- Implement [VitastorFS defragmentation](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/nfs.en.md#defrag)
- Implement basic node.js binding (not published on npm yet)

Changes:

- Make immediate_commit=all the default everywhere to match default vitastor-disk behaviour
- Make pool-create error message more obvious and add details to it
- Set default etcd_ws_keepalive_interval to 5 seconds (speeds up client etcd failover)
- Support OpenStack 2023.2 in Nova and Cinder drivers/patches
- Add patches for libvirt 10.x
- Add patches for QEMU 8.2 and 9.0
- Implement internal restart / run_forever in monitor
- Some source tree refactoring - sources are now moved into subdirectories, monitor is now split into multiple files
- Add vitastor_c_inode_get_immediate_commit in vitastor_c client library
- Make vitastor_kv.h header public

Bug fixes:

- Fix total statistics usec/count/bytes not being reported when delta (bps/iops/lat) is zero
- Prevent infinite loop in NFS on files with incorrect metadata pointing to an empty volume
- Fix READDIR offsets (cookies) in VitastorFS sometimes leading to client infinite loops when reading a directory
- Fix a rare infinite loop during OSD journal flushing (OSD hanging and eating 100% CPU)
- Fix several bugs which could lead to lost writes in setups without immediate_commit (see the sketch after this list):
  - Client library treated writes as completed before actually completing them, thus missing them in a subsequent fsync
  - Client library didn't repeat writes on the new PG primary when it changed
  - OSDs didn't drop peer connections with dirty writes when stopping a PG
- Fix Block Pseudo-FS initialization leading to ENOENTs some time after start
- Fix vitastor-cli merge-based commands (merge/flatten/rm snapshot) slowing down and eventually failing when using CAS optimistic locks
- Fix pool create/modify --block_size validation
- Fix TTL comparison for determining failed lease/keepalive requests in OSD
- Add support for size suffixes in pool-create --block_size and --immediate_commit values
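A toy model of the invariant behind the three lost-write fixes above (all names are illustrative, not the real client API): a write counts towards fsync only after its ack, acks from an old primary stop counting when the primary changes, and fsync completes only when every tracked write has been (re-)acked.

```cpp
#include <cstdint>
#include <list>

struct dirty_write_t
{
    uint64_t offset = 0, len = 0;
    bool acked = false; // acknowledged by the CURRENT primary
};

struct toy_client_t
{
    std::list<dirty_write_t> unsynced; // writes not yet covered by an fsync

    // Acks from the old primary no longer guarantee durability:
    // mark everything unacked so it is repeated on the new primary.
    void on_primary_change()
    {
        for (auto & w: unsynced)
            w.acked = false;
    }

    // fsync may complete only when every unsynced write is acked,
    // and only then may the client forget them.
    bool try_complete_fsync()
    {
        for (auto & w: unsynced)
            if (!w.acked)
                return false;
        unsynced.clear();
        return true;
    }
};
```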
2024-07-15 11:48:35 +03:00
f0630722ce Make pool-create error message more obvious, add details
Some checks failed
2024-07-15 11:47:49 +03:00
93b0947720 Support size suffixes in pool-create --block_size / --bitmap_granularity 2024-07-15 11:47:05 +03:00
9c628646fa Remove bullseye-backports from build, remove buster-backports from docs 2024-07-15 11:47:05 +03:00
cf476a3b95 Add mkdir /var/lib/vitastor 2024-07-15 11:47:05 +03:00
23f9273ba3 Take use_antietcd setting from /etc/vitastor/vitastor.conf too
Some checks reported warnings
2024-07-15 02:02:56 +03:00
74b88bf8ba Use our own repo instead of buster-backports as it is EOL 2024-07-14 20:25:44 +03:00
1254d5a0de Fix delta stats when counters may be hypothetically reset
Some checks failed
2024-07-14 13:11:00 +03:00
f87bece253 Fix build with antietcd & tinyraft, remove some version hardcode 2024-07-14 13:04:25 +03:00
ba85d0ef16 Add vitastor_kv.h to RPM specs 2024-07-14 11:20:37 +03:00
17a909ea3a Stop metrics/future API HTTP server when closing Monitor instance 2024-07-14 11:16:41 +03:00
a4dfc220ab Implement basic node.js binding (not published on npm yet) 2024-07-14 10:58:38 +03:00
26426dd95e Return it back, but fix stats in another way 2024-07-13 19:14:34 +03:00
9f38b7e5c1 Fix osd_ping_time_remaining reset from 990c3ba7eb, leading to osd disconnections 2024-07-13 16:09:56 +03:00
20057defbe Revert 8ad63465cd 2024-07-13 15:34:34 +03:00
b4e9140755 Add defrag docs, fix trace message 2024-07-13 00:45:53 +03:00
413959e75a Prevent infinite loop in NFS - return EIO when an inode points to an incorrect volume position 2024-07-12 20:53:54 +03:00
8973982570 Delete keys from internal state instead of setting them to null on DELETE event in mon 2024-07-12 16:42:21 +03:00
990c3ba7eb Implement FS defragmentation 2024-07-12 16:11:35 +03:00
1771d2ef36 Fix READDIR cookie/offset bug 2024-07-12 16:11:35 +03:00
d88ab76636 Fix active mon stat 2024-07-11 01:34:59 +03:00
c010a0aa54 Fix OSD "local write" latency sum 2024-07-11 01:30:03 +03:00
0d42712d29 Fix refresh in dashboard variable 2024-07-11 01:13:02 +03:00
66b438106a Add vitastor-cli pg-list command 2024-07-10 02:27:41 +03:00
3aef6682fb Add vitastor-cli modify-osd command 2024-07-09 16:52:19 +03:00
8535bccf4c Add a note about antietcd dump/load 2024-07-09 15:58:03 +03:00
0487b3b239 Add clusterid to Grafana dashboard 2024-07-09 15:58:03 +03:00
a54ef97f5d Add Grafana dashboard link 2024-07-09 15:37:25 +03:00
10434a9b2b Add notes about antietcd to documentation 2024-07-09 15:01:41 +03:00
c6be194508 Implement experimental antietcd-based version of monitor 2024-07-09 13:54:58 +03:00
df668286fb Add Grafana dashboard 2024-07-09 02:39:36 +03:00
667c5999c9 Report all PG states 2024-07-08 19:52:56 +03:00
8ad63465cd Do not wipe previous metrics at moments when difference is 0 2024-07-08 02:20:12 +03:00
976290e6a9 Implement built-in Prometheus exporter in monitor 2024-07-08 02:20:12 +03:00
79f1d1969b Make immediate_commit=all the default 2024-07-07 11:45:18 +03:00
918e1f83b0 Add JSON output for ls-osd 2024-07-07 02:24:36 +03:00
abbba6ade4 Support handling TCP I/O in simple separate io_uring-based I/O threads
Required mainly for clients: it allows parallel client I/O over TCP to scale
from 100-150k iops to ~400k iops and from 2-3 GB/s to at least 7-8 GB/s
with 4 I/O threads, at the cost of increasing Q=1 latency by twice the
thread switching delay, which is ~10 us when CPU powersaving is disabled
and may be as high as 200 us when it's enabled.
2024-07-04 13:29:20 +03:00
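
For illustration only - a minimal C++ sketch of the pattern this commit describes,
with invented names (send_op, io_thread), not Vitastor's actual code: client
threads enqueue socket operations to a dedicated worker that executes them through
its own io_uring, so the submitting thread never blocks on TCP itself. Shutdown
handling and waking a worker blocked in io_uring_wait_cqe (e.g. via an eventfd
registered in the ring) are omitted for brevity.

    #include <liburing.h>
    #include <condition_variable>
    #include <cstddef>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <thread>

    struct send_op
    {
        int fd;
        const void *buf;
        size_t len;
        std::function<void(int)> done; // receives cqe->res: bytes sent or -errno
    };

    class io_thread
    {
        io_uring ring;
        std::mutex mu;
        std::condition_variable cv;
        std::deque<send_op*> pending;
        std::thread worker;
    public:
        io_thread()
        {
            io_uring_queue_init(256, &ring, 0); // private ring per I/O thread
            worker = std::thread([this]() { loop(); });
        }
        // Called from any client thread: hand the send off and return immediately
        void submit(send_op *op)
        {
            { std::lock_guard<std::mutex> l(mu); pending.push_back(op); }
            cv.notify_one();
        }
    private:
        void loop()
        {
            int inflight = 0;
            while (true)
            {
                {
                    std::unique_lock<std::mutex> l(mu);
                    cv.wait(l, [&]{ return !pending.empty() || inflight > 0; });
                    // Move queued operations into the submission queue
                    while (!pending.empty())
                    {
                        send_op *op = pending.front();
                        pending.pop_front();
                        // NULL sqe (full SQ) is ignored in this sketch
                        io_uring_sqe *sqe = io_uring_get_sqe(&ring);
                        io_uring_prep_send(sqe, op->fd, op->buf, op->len, 0);
                        io_uring_sqe_set_data(sqe, op);
                        inflight++;
                    }
                }
                io_uring_submit(&ring);
                // Reap one completion and report it back to the submitter
                io_uring_cqe *cqe;
                if (io_uring_wait_cqe(&ring, &cqe) == 0)
                {
                    send_op *op = (send_op*)io_uring_cqe_get_data(cqe);
                    op->done(cqe->res);
                    io_uring_cqe_seen(&ring, cqe);
                    inflight--;
                }
            }
        }
    };
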
21d1171ba4 Fix parsing after "slightly decopypasting" :) 2024-06-29 00:09:30 +03:00
8f83086889 Nova and cinder driver patches for OpenStack 2023.2 2024-06-28 00:04:57 +03:00
ceb18f25db Add libvirt 10.0 patch (same as 9.10 and 10.4 actually) 2024-06-28 00:03:46 +03:00
ed51a89f70 Add QEMU 8.2 and 9.0 patches 2024-06-27 12:33:16 +03:00
f59456f22d Add libvirt 10.4 patch (same as 9.10 actually) 2024-06-27 01:35:29 +03:00
ca63cd507d Fix possible infinite loop in flusher (surprisingly reproduced in test_write.sh with iothreads) 2024-06-27 00:38:01 +03:00
ea0d72289c Treat copied buffers as written only after completing the write in client
The SYNC operation fsyncs only completed operations, so treating writes as
"eligible for fsync" before actually completing them is incorrect.

This affected the SCHEME=ec test_heal.sh test (with immediate_commit=none):
it was flapping with lost writes - some non-fsynced writes were legitimately
lost by the OSD but were never repeated by the client.
2024-06-20 02:11:53 +03:00
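
As a grounding illustration of the rule above - a small hedged C++ sketch with
invented names (dirty_buf, client_writeback), not Vitastor's actual structures:
a copied buffer is marked fsync-eligible only in the write-completion callback,
and SYNC waits while any write is still in flight, so an unacknowledged write
can never be "covered" by an fsync.

    #include <cstdint>
    #include <map>

    enum buf_state { BUF_DIRTY, BUF_WRITE_PENDING, BUF_WRITE_DONE };

    struct dirty_buf
    {
        uint64_t offset = 0, len = 0;
        buf_state state = BUF_DIRTY;
    };

    struct client_writeback
    {
        std::map<uint64_t, dirty_buf> bufs; // dirty buffers keyed by offset

        void submit_write(dirty_buf & b)
        {
            // The bug pattern: setting BUF_WRITE_DONE here would let SYNC
            // fsync a write the OSD may still lose and never see retried.
            b.state = BUF_WRITE_PENDING;
            // ... send the write to the primary OSD ...
        }

        void on_write_completed(uint64_t offset)
        {
            // Only a completed write becomes eligible for the next SYNC
            bufs[offset].state = BUF_WRITE_DONE;
        }

        bool sync_possible() const
        {
            // SYNC must cover completed writes only, so wait out in-flight ones
            for (auto & kv : bufs)
                if (kv.second.state == BUF_WRITE_PENDING)
                    return false;
            return true;
        }
    };
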
e400a851f4 Repeat dirty buffer flushes on any PG primary change because the new primary may not know about unfinished operations of the old primary 2024-06-19 00:28:26 +03:00
0fec7a9fea Drop dirty peer connections also when stopping PG to guarantee that clients do not miss fsync 2024-06-19 00:28:26 +03:00
b9de2a92a9 Print OSD performance stats 2024-06-17 13:02:58 +03:00
5360a70853 Make OSD also report derived stats 2024-06-17 13:02:52 +03:00
4c2328eb13 Implement ls-osd command 2024-06-17 02:22:14 +03:00
313daef12d Slightly decopypaste etcd key parsing 2024-06-17 01:38:42 +03:00
ad9c12e1b9 Fix Pseudo-FS initialization leading to ENOENTs some time after start 2024-06-16 23:43:09 +03:00
4473eb5512 Fix slow & failing CAS layer merge 2024-06-14 02:15:49 +03:00
6501abc060 Set default etcd_ws_keepalive_interval to 5 2024-06-08 00:38:48 +03:00
1228403e74 Implement internal restart / run_forever in monitor 2024-06-08 00:35:18 +03:00
4eabebd245 Put all configuration to Mon.config
All checks were successful
2024-06-07 00:20:38 +03:00
cf60b6818c Extract PG generation into pg_gen.js
All checks were successful
2024-06-05 11:22:06 +03:00
1a4a7cdc37 Extract OSD Tree generation functions to osd_tree.js 2024-06-05 11:19:35 +03:00
1b48085e21 Extract remote etcd interaction to etcd_adapter.js 2024-06-05 11:19:35 +03:00
a71847244e Rename PGUtil.js to pg_utils.js 2024-06-05 10:51:20 +03:00
848c2d2722 Move LPOptimizer, DSL and tests to lp_optimizer/ 2024-06-05 10:51:20 +03:00
86832dc43f Add eslint import/no-unresolved 2024-06-05 10:51:20 +03:00
1f6da79463 Extract stats calculation into a separate file 2024-06-05 10:51:20 +03:00
9bf57c3760 Mention generic Toshiba MG instead of specific MGxx, fix russian vitastorfs link 2024-06-05 02:08:09 +03:00
a0305b5b4a Extract pool configuration validation into a separate file 2024-06-05 02:08:08 +03:00
1546f8e447 Extract etcd data "schema" into a separate file 2024-06-05 02:07:53 +03:00
8ce962b312 Move scripts 2024-06-05 02:07:53 +03:00
50e56b3b92 Add vitastor_c_inode_get_immediate_commit
All checks were successful
2024-05-19 01:57:18 +03:00
ace
b85dab8583 use fio 3.35-1 for AlmaLinux 9 2024-05-18 21:17:16 +03:00
a12d328793 Rename cli/ to cmd/, fix cmake install
All checks were successful
2024-05-15 23:04:50 +03:00
c79b38bd26 Move all sources to subdirs
All checks were successful
2024-05-15 11:06:01 +03:00
44692d148a Make vitastor_kv.h header public
All checks were successful
2024-05-15 01:49:38 +03:00
ba52359611 Fix last master commit 2024-05-15 01:49:31 +03:00
23a9aa93b5 Fix pool create/modify --block_size validation
All checks were successful
2024-05-04 16:33:22 +03:00
2412d9e239 Fix TTL comparison for lease/keepalive
All checks were successful
2024-04-30 01:53:05 +03:00
9301c857b1 Release 1.6.1
All checks were successful
A bunch of monitor fixes

- Add noout flag for OSDs (/vitastor/config/osd/xx; see the sketch after this list)
- Fix calculation of the "effective" size of degraded PGs (and thus of "used space") in the monitor
- Fix monitor not clearing PGs of deleted pools
- Fix incorrect PG generation for hosts with 0 OSDs
- Fix monitor crashing during primary OSD recheck when a pool has no PGs
- Fix monitor crashing when node_placement included non-existent OSDs
- Fix possible data movement after removing OSDs reweighted to 0
- Remove extra empty keys from pool configurations created by vitastor-cli create-pool
- Fix 32-bit build
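A minimal sketch of setting the new noout flag through the etcd key named in the first item above; the OSD number is hypothetical and the exact JSON shape is an assumption:

```
# Flag OSD 12 as "noout" (assumed, by analogy with Ceph, to keep its data
# placement intact while the OSD is down instead of rebalancing away from it)
etcdctl put /vitastor/config/osd/12 '{"noout": true}'
```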
2024-04-22 02:01:29 +03:00
3094358ec2 Fix autovivification leading to extra empty keys in pool-create
All checks were successful
2024-04-20 02:04:09 +03:00
87f666d2a2 Filter out OSDs reweighted to 0 2024-04-20 02:03:53 +03:00
bd7fe4ef8f Filter out non-existing OSDs added in node_placement 2024-04-20 02:03:36 +03:00
1b3f9a1416 Do not set non-existing OSD weight to 0, we'll remove them instead 2024-04-20 02:03:11 +03:00
a7b7354f38 Do not recheck primary distribution when pool has no PGs 2024-04-20 02:02:47 +03:00
765befa22f Remove empty nodes from tree because PG DSL expects that all leaf nodes are OSDs 2024-04-20 02:02:28 +03:00
87b3ab94fe Do not disable require-atomic-updates and no-unused-vars 2024-04-20 02:02:13 +03:00
2c0801f6e4 Configure ESLint and add it to CI
All checks were successful
2024-04-16 02:39:31 +03:00
fd83fef1d9 Fix pool deletion
Some checks reported warnings
2024-04-16 02:20:26 +03:00
8d1067971b Fix pg_effsize (and thus "used space") calculation in monitor 2024-04-16 02:20:18 +03:00
ae5af04fde Add noout flag for OSDs 2024-04-16 02:19:55 +03:00
266d038b11 Fix 32-bit build warnings and one error again :-)
All checks were successful
2024-04-11 22:49:33 +03:00
ff4414d37e Release 1.6.0
All checks were successful
New features:

- Implement "hierarchical failure domains" and other complex distribution rules, for example
  EC 4+2 over 3 DC, with 2 chunks per each DC ([documentation](docs/config/pool.en.md#level_placement))
- Make OSDs handle ENOSPC - now cluster stays online even if some OSDs fill up
  to 100 %, only writes requiring free space hang
- Implement Stage/Unstage & volume locking for CSI to prevent parallel mounting
  and/or modifications of the same volume
- Warn about full and almost full OSDs in vitastor-cli status
- Add an experimental NBD netlink map mode as an option ([documentation](docs/usage/nbd.en.md))
- Add --pg parameter to vitastor-cli describe, print objects with 0x in human-readable format too
- Add [administration docs](docs/usage/admin.en.md)
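A hypothetical create-pool invocation for the EC 4+2 over 3 DC example above; the --level_placement option and its pattern syntax are assumptions here, so treat the linked pool documentation as authoritative:

```
# EC 4+2 with 256 PGs; the pattern maps chunks 1-2 to DC "1",
# chunks 3-4 to DC "2" and chunks 5-6 to DC "3"
vitastor-cli create-pool ecpool --ec 4+2 -n 256 --level_placement 'dc=112233'
```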

Bug fixes:

- Fix client operation retry timeout - previously the timeout wasn't applied and writes were
  retried almost instantly
- Fix monitors crashing on invalid pool configurations
- Fix journaling - make each journal write wait for all previous journal writes
- Fix monitor thinking that OSD weight is 0 after deleting /osd/config/ key online
- Fix a write stall caused by flusher possibly not trimming journal on rollback
- Set 32k csum_block_size for HDD by default in vitastor-disk
2024-04-09 16:57:59 +03:00
0fa7ecc03f Add also a test for OSD tags 2024-04-09 16:57:59 +03:00
c29bfe12eb Oops - fix filter_by_root_node, add a test for it 2024-04-09 15:48:44 +03:00
57bf84ddb2 Fix filtering in mon 2024-04-09 14:51:05 +03:00
dff4879c8c Check if NBD_ATTR_BACKEND_IDENTIFIER is defined 2024-04-09 13:16:58 +03:00
af9a853db6 Move NBD netlink map&unmap to separate commands, add "netlink-revive" command
All checks were successful
2024-04-08 16:34:41 +03:00
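A sketch of how the split-out commands might be used; the subcommand names come from the commit message, while the image name, device path and option spellings are assumptions:

```
vitastor-nbd netlink-map --image testimg
# after an outage longer than the timeout, re-attach the stale device in place:
vitastor-nbd netlink-revive /dev/nbd0 --image testimg
```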
b7a3275af3 Make netlink optional 2024-04-08 01:51:28 +03:00
64c5c4ca26 Fix code style 2024-04-08 01:35:03 +03:00
idelson
442a9d838d nbd-proxy: add configuration via netlink to support various kinds of timeouts.
PR #58 - https://github.com/vitalif/vitastor/pull/58/commits

By MIND Software LLC

By submitting this pull request, I accept Vitastor CLA
2024-04-08 00:50:08 +03:00
6366972fe8 Warn about full and almost full OSDs in status
All checks were successful
2024-04-07 19:39:51 +03:00
2b863fb715 Add ENOSPC handling tests 2024-04-07 19:39:33 +03:00
3bf4dd5abd Fix client op retry timeout - do not retry immediately 2024-04-07 19:08:36 +03:00
3b84dcaedd Handle ENOSPC during write - rollback partial EC writes, remember partial replica writes
All checks were successful
2024-04-07 18:02:05 +03:00
20fbc4a745 Add --pg parameter to vitastor-cli describe, print objects with 0x in human-readable format too
All checks were successful
2024-04-07 12:39:46 +03:00
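A hypothetical invocation of the new parameter; the pool/PG numbers and the exact option spelling are assumptions:

```
# Describe only objects belonging to PG 17 of pool 2;
# object IDs are now printed with the 0x prefix in human-readable output too
vitastor-cli describe --pool 2 --pg 17
```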
02993ee1dd Implement Stage/Unstage & volume locking for CSI to prevent parallel modifications of the same volume 2024-04-07 11:48:19 +03:00
3629dbc54d Plug the new PG combinator into monitor
All checks were successful
2024-04-07 02:44:17 +03:00
29284bef40 Implement new DSL/rule-based PG generation algorithm 2024-04-07 00:36:20 +03:00
6a924d6066 Extract PG combinator into a separate module 2024-04-07 00:36:20 +03:00
9fe779a691 Do not die on invalid pool configurations
All checks were successful
2024-04-07 00:36:20 +03:00
31c2751b9b Move NBD/VDUSE map/unmap functions to a separate file 2024-04-07 00:36:09 +03:00
c5195666cd Fix journal sequencing: make each journal write wait for all previous journal writes
Some checks failed
Test / test_scrub (push) Failing after 3m19s
2024-04-06 23:53:12 +03:00
f36d7eb76c Fix monitor thinking that OSD weight is 0 after deleting /osd/config/ key
All checks were successful
2024-04-05 23:14:46 +03:00
dd7f651de1 Add --max-request-bytes=104857600 to etcd params in tests 2024-04-05 23:14:46 +03:00
a2994ecd0d Fix flusher possibly not trimming journal on rollback 2024-04-05 23:14:39 +03:00
5d3aaf016b Add administration docs 2024-03-31 01:54:52 +03:00
0b097ca3f2 Set 32k csum_block_size for HDD by default 2024-03-30 16:16:49 +03:00
989675a780 s/etcd_ws_keepalive_timeout/etcd_ws_keepalive_interval/ in docs 2024-03-26 01:56:08 +03:00
f8c403ec9e Add newer benchmark results 2024-03-23 18:28:48 +03:00
bfbb85e653 Replace -Oanything with -O3, not just -O/-O1/-O2
All checks were successful
2024-03-18 02:03:44 +03:00
9ad6822353 Release 1.5.0
All checks were successful
After half a year of hard work, VitastorFS is finally here! :-)

New features:
- VitastorFS, a full-featured clustered (read-write-many) file system (see the mount sketch
  after this list). Documentation: [VitastorFS](docs/usage/nfs.en.md)
- Embedded key-value database implementation based on Parallel Optimistic B-Tree
  algorithm and used for the metadata of VitastorFS
- Pool management commands in vitastor-cli (create-pool, list-pools, rm-pool, modify-pool).
  Thanks MIND Software (https://mindsw.io) for their contribution!
  [Documentation](docs/usage/cli.en.md#create-pool)
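A sketch of taking VitastorFS for a spin with the new vitastor-nfs tool; the filesystem and pool names and the exact mount options are assumptions here (the linked documentation has the real syntax):

```
# Start an embedded NFS server for FS "myfs" stored in pool "fspool"
# and mount it locally in one step
vitastor-nfs mount --fs myfs --pool fspool /mnt/vitastor
```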

Bug fixes:
- Fix a very rare "infinite loop" in the client library
- Fix a rare OSD hang during start when zeroing out bad metadata entries left over from the previous run
2024-03-16 15:35:10 +03:00
2043b4e374 Fix build errors for gcc 8 2024-03-16 15:35:10 +03:00
de840e6fe3 Reduce kv-cli loadjson load parallelism to 16 2024-03-16 15:35:10 +03:00
b5e04bf809 Fix build warning 2024-03-16 15:35:10 +03:00
8807a1623b Fix markdown tables 2024-03-16 15:35:10 +03:00
f12855c31b Add vitastor-kv to packages 2024-03-16 15:35:10 +03:00
e75dcc9a71 Add documentation for VitastorFS 2024-03-16 15:16:43 +03:00
88516ab4bd Remove extra log 2024-03-16 13:24:36 +03:00
6221126b4f Allow to print simple-offsets just given the device size 2024-03-16 13:24:36 +03:00
6783d4a13c Implement fool protection for FS pools 2024-03-16 13:24:36 +03:00
dcbe1afac3 Store pool ID in inode metadata 2024-03-16 13:24:36 +03:00
0bde28c24a Make nfs_do_rmw a library function 2024-03-16 13:24:36 +03:00
bb8ca6184e Support setattr guard 2024-03-16 13:24:36 +03:00
87310ef7bb Support ctime 2024-03-16 13:24:36 +03:00
4f4b2dab80 Log NFS liveness checks 2024-03-16 13:24:36 +03:00
f70da82317 Add loadjson command to vitastor-kv 2024-03-16 13:24:36 +03:00
e42148f347 Allow to specify KV commands on command line 2024-03-16 13:24:36 +03:00
c289584469 Add JSON dump format 2024-03-16 13:24:36 +03:00
018e89f867 Erase verf key left from creation from ientries on every modification 2024-03-16 13:24:36 +03:00
603dc68f11 Implement async mtime change 2024-03-16 13:24:36 +03:00
7b12342933 Allow to specify additional NFS mount options 2024-03-16 13:24:36 +03:00
44bf0f16ee Fix malloc/free in nfs_kv_read/write 2024-03-16 13:24:36 +03:00
8840c84572 Fix "bad key in etcd" in mon for FS pools 2024-03-16 13:24:36 +03:00
5b747c12ec Check if already mounted before mounting 2024-03-16 13:24:36 +03:00
05f5f46162 Fix zero used space, update mtime when moving/changing inode 2024-03-16 13:24:36 +03:00
b5604191c8 Ignore ECANCELED in nfs-proxy (happens in io_uring on fork) 2024-03-16 13:24:36 +03:00
e871de27de Support unaligned shared_offsets, align shared file data instead of header 2024-03-16 13:24:36 +03:00
f600ce98e2 Implement auto-unmount local NFS server mode for vitastor-nfs 2024-03-16 13:24:36 +03:00
57605a5c13 Return error on failed shrink 2024-03-16 13:24:36 +03:00
29bd4561bb Implement rename over an existing file/directory 2024-03-16 13:24:36 +03:00
7142460ec8 Support --logfile in nfs-proxy 2024-03-16 13:24:36 +03:00
d03f19ebe5 Fix shared file overlap, add FIXMEs 2024-03-16 13:24:36 +03:00
88f9d18be3 Create inode, then direntry, not direntry, then inode; retry ID collisions 2024-03-16 13:24:36 +03:00
6213fbd8c6 Fix NFS shared/aligned write FIXMEs 2024-03-16 13:24:36 +03:00
3aee37eadd Allow to disable per-inode stats for VitastorFS pools 2024-03-16 13:24:36 +03:00
ecfc753e93 Add basic NFS tests, fix bugs 2024-03-16 13:24:36 +03:00
a574f9ad71 Return block NFS implementation back as an option too 2024-03-16 13:24:36 +03:00
7c235c9103 Move KV FS header into a separate file 2024-03-16 13:24:36 +03:00
e5bb986164 Implement packing small files into shared inodes 2024-03-16 13:24:36 +03:00
181795d748 Split new NFS proxy implementation into multiple files 2024-03-16 13:24:36 +03:00
8cdc38805b WIP VitastorFS with metadata storage in VitastorKV 2024-03-16 13:24:36 +03:00
0cd455d17f First just recheck version without actually re-reading block in vitastor-kv 2024-03-16 13:24:36 +03:00
32ba653ba6 Fix vitastor-kv hang on reopen & unfinished closed listing 2024-03-16 13:24:36 +03:00
231d4b15fc Add loadable dump format to vitastor-kv (dump) 2024-03-16 13:24:36 +03:00
9dc4d5fd7b Fix freeing r/w buffers on errors in kv_db 2024-03-16 13:24:36 +03:00
e58538fa47 Fix eviction when random_pos selects the end 2024-03-16 13:24:36 +03:00
11ac9e7024 Implement min/max list_count to make listings during performance test reasonable 2024-03-16 13:24:36 +03:00
511bc3df1c Fix and improve parallel allocation
- Do not try to allocate more DB blocks in an inode block until it's "confirmed" and "locked" by the first write
- Do not recheck for new zero DB blocks on first write into an inode block - a CAS failure means someone else is already writing into it
- Throw new allocation blocks away regardless of whether the known_version is 0 on a CAS failure
2024-03-16 13:24:36 +03:00
a64f0d1f73 Implement key_prefix for K/V stress test 2024-03-16 13:24:36 +03:00
ec5f7c6b87 More fixes
- do not overwrite a block with older version if known version is newer
  (read may start before update and end after update)
- invalidated block versions can't be remembered and trusted
- right boundary for split blocks is right_half when diving down, not key_lt
- restart update also when block is "invalidated", not just on version mismatch
- copy callback in listings to avoid closure destruction bugs too
2024-03-16 13:24:36 +03:00
3ebed9a749 Add logging and one more assert 2024-03-16 13:24:36 +03:00
eab67a6e8f Make get_block() wait for updating when unrelated block is found along the path 2024-03-16 13:24:36 +03:00
20993d9b7a Fix a race condition where changed blocks were parsed over existing cached blocks and getting a mix of data 2024-03-16 13:24:36 +03:00
5cf9b343c0 Simplify code by removing an unneeded "optimisation" 2024-03-16 13:24:36 +03:00
79ae0aadcd Add kv_log_level, print warnings on level 1, trace ops on level 10 2024-03-16 13:24:36 +03:00
605afc3583 Fix duplicate keys in listings on parallel updates -- do not rewind key "iterator position" 2024-03-16 13:24:36 +03:00
c0681d8242 Implement key suffix to avoid collisions of multiple test workers 2024-03-16 13:24:36 +03:00
763e77b4f4 Do not complain on empty first block 2024-03-16 13:24:36 +03:00
19426aa4c5 Add JSON output for stress-tester 2024-03-16 13:24:36 +03:00
08f586bcec Print total stats 2024-03-16 13:24:36 +03:00
f1cd87473a Do not send more than op_count operations (fix segfault on finish) 2024-03-16 13:24:36 +03:00
1bd8d2da56 Add some more resiliency to serialize() 2024-03-16 13:24:36 +03:00
a7396d2baf Invalidate blocks being updated too 2024-03-16 13:24:36 +03:00
e98a38810d Change new block allocation method: make each writer choose multiple empty PG blocks and place blocks in them 2024-03-16 13:24:36 +03:00
28c4324c36 Remove blocks from cache on unsuccessful updates 2024-03-16 13:24:36 +03:00
31ec3fa8f5 Allow to track multiple updates per block (it should never happen though) 2024-03-16 13:24:36 +03:00
e4fa26f60a Do not call stop_updating after failed write_new_block and after clear_block (both delete the item) 2024-03-16 13:24:36 +03:00
59ae27f9e5 Track versions of parent blocks and recheck if changed during update 2024-03-16 13:24:36 +03:00
2c6a301d9b Fix resume_split condition (key_lt can also be "") 2024-03-16 13:24:36 +03:00
01558349f8 Experiment: transform offsets for better sharding 2024-03-16 13:24:36 +03:00
36f4717d0d More post-stress-test fixes
- Prevent _split types of new blocks
- Stop updating new blocks only after the whole update, otherwise pointers
  may become invalid
- Use recheck_none for updates initially
- Use UINT64_MAX as initial block version when postponing ops, otherwise the
  check fails when the block is initially empty. This for example leads to
  writing both leaf items & block pointers (which is incorrect) into the root
  block when starting stress-test with --parallelism 32
- Fix -EINTR comparison
2024-03-16 13:24:36 +03:00
babaf2a0ce Print operation statistics 2024-03-16 13:24:36 +03:00
5773f1a375 K/V fixes after stress-test :-)
- track block versions correctly - per inode block (128kb) instead of tree block (4kb)
- prevent multiple parallel CAS writes of the same inode block
- add logging for EILSEQ which means invalid data in the tree
- fix get_block updated flag which was true for blocks already in cache and was leading to infinite loops on "unrelated block" errors
- apply changes to blocks in cache only after successful writes (using "virtual changes")
- do not replace cached block with an older version from disk
- recheck "unrelated blocks" (read/update collisions) until data stops changing
- track tree path correctly - do not treat split block as parent of its right half
- correctly move blocks when finding new empty place on disk
- restart updates from the beginning when one of blocks is changed by a parallel update
- fix delete using SET opcode and setting key to the empty value instead
- prevent changing the same key more than 1 time in parallel
- fix listing verification
- resume continue_updates in update_find (required because it uses continue_update itself)
- add allow_old_cached parameter to get()
2024-03-16 13:24:36 +03:00
57222a9f79 Implement K/V DB stress tester 2024-03-16 13:24:36 +03:00
61ef000c6e Evict blocks based on memory limit & block usage 2024-03-16 13:24:36 +03:00
7d5e1cc393 Track blocks per level 2024-03-16 13:24:36 +03:00
5e7f27a02d Track block level 2024-03-16 13:24:36 +03:00
fd1d8a8520 Experimental B-Tree Vitastor embedded K/V database implementation! 2024-03-16 13:24:36 +03:00
c364e14c40 Stop then retry, not retry then stop 2024-03-16 13:24:36 +03:00
3ebbfa0428 Fix another rare OSD hang on zeroing out entries on start 2024-03-16 13:24:36 +03:00
aa79d1db1c Fix incorrect "changing scheme" message in modify-pool
All checks were successful
2024-03-06 00:41:35 +03:00
a1fecb7eff Move callback away when calling it in cluster_client 2024-03-06 00:41:35 +03:00
ff74b19423 Fix rare OSD hang on zeroing out bad entries on start 2024-03-06 00:41:35 +03:00
4cf6dceed7 Merge branch 'rel-1.4'
2024-02-29 09:59:01 +03:00
38b8963330 Release 1.4.8
- Do not use \r if output is not a terminal (should fix unexpected job output in Proxmox)
- Fix the rm/rm-data error return code, add a --down-ok option to bypass the error
- Add an EIO retry timeout and allow disabling these retries, rename up_wait_retry_interval to client_retry_interval
- Add an Ubuntu Jammy build
- Wait for blockstore initialisation before starting the OSD (prevents timeouts when init takes a long time)
- Fix a rare use-after-free in automatic sync after delete in blockstore
2024-02-29 09:58:34 +03:00
77167e2920 Do not use \r if output is not a terminal 2024-02-29 00:21:17 +03:00
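A sketch of the usual POSIX way to implement such a check (the actual Vitastor code may differ):

    #include <cstdio>
    #include <unistd.h>

    void print_progress(const char *msg)
    {
        if (isatty(fileno(stdout)))
            printf("\r%s", msg); // interactive terminal: overwrite the line
        else
            printf("%s\n", msg); // logs/CI/Proxmox job output: plain lines
        fflush(stdout);
    }
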
5af23672d0 Fix rm/rm-data error return code, add --down-ok option to bypass the error 2024-02-29 00:20:10 +03:00
6bf1f539a6 Add an EIO retry timeout and allow disabling these retries, rename up_wait_retry_interval to client_retry_interval 2024-02-28 13:10:02 +03:00
4eab26f968 Add documentation and a very basic test for pool management commands
2024-02-28 13:08:04 +03:00
86243b7101 Rework & fix pool-create / pool-modify / pool-ls 2024-02-28 13:08:04 +03:00
idelson
dc92851322 vitastor-cli: add commands to control pools: pool-create, pool-ls, pool-modify, pool-rm
PR #59 - https://github.com/vitalif/vitastor/pull/58/commits

By MIND Software LLC

By submitting this pull request, I accept Vitastor CLA
2024-02-28 13:08:04 +03:00
02d1f16bbd Add ubuntu jammy build
PR #62

I accept Vitastor CLA agreement: https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/CLA-en.md
2024-02-28 11:43:54 +03:00
fc413038d1 Wait for blockstore initialisation before starting OSD
2024-02-27 02:20:04 +03:00
1bc0b5aab3 Fix a rare use-after-free in automatic sync after delete in blockstore
ASan report: [0] READ of size 16 at operator() /root/vitastor/src/blockstore_write.cpp:100
...[5] blockstore_impl_t::ack_sync(blockstore_op_t*) /root/vitastor/src/blockstore_sync.cpp:232
2024-02-24 00:06:36 +03:00
5e934264cf Release 1.4.7
- Fix another old "BUG: Attempt to overwrite used offset" in a very simple
  case: bs=4k rw=write iodepth=16 from OSD start; add this case to tests
- Fix a rare crash with "unexpected state during flush: 0x51" possible with
  EC since 1.4.2 during rebalance and OSD outages
- Fix a rare write stall with EC & immediate_commit=none caused by sync
  operations reserving unneeded space in the journal
- Fix 32-bit build warnings, most in printf/scanf format strings
2024-02-22 12:45:52 +03:00
f20564b44b Fix 32-bit build warnings (99.9% in printf) 2024-02-22 12:22:16 +03:00
b3c15db331 32M journal by default in simple-offsets
2024-02-21 15:25:02 +03:00
685bcd6ef9 Do not reserve extra space for big_writes during sync - sync itself is needed to commit and clear them 2024-02-21 13:00:14 +03:00
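In sketch form (hypothetical names, not the real blockstore code), the change means a sync no longer counts the space reserved for pending big writes against itself, since the sync is exactly what commits and frees them:

    #include <cstdint>

    struct journal_info_t
    {
        uint64_t free_space = 0;
        uint64_t reserved_for_big_writes = 0; // hypothetical field
    };

    bool space_check(const journal_info_t & j, uint64_t need, bool is_sync)
    {
        // Before the fix, syncs also added reserved_for_big_writes and could
        // stall a nearly full journal with EC & immediate_commit=none.
        uint64_t reserve = is_sync ? 0 : j.reserved_for_big_writes;
        return j.free_space >= need + reserve;
    }
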
3eb389b321 Supposed fix for "unexpected state during flush: 0x51" with EC
2024-02-21 01:32:06 +03:00
3d16cde23c Fix assertions, add small sequential write test
2024-02-20 19:41:48 +03:00
c6406d67fc Fix journal space_check incorrectly checking for space at the beginning 2024-02-20 19:40:56 +03:00
f87964861d Release 1.4.6
Unwavering stabilization of 1.4.x, continued :-)

- Include the accidentally lost part of the 1.4.5 journal trimming fix
- Fix a possible OSD crash with "BUG: Attempt to overwrite used offset",
  which was probably present for a long time but became apparent after
  fixing the flapping tests in CI
- Fix the remaining flapping tests in CI. It was the first time the tests
  actually passed without retries :-)
2024-02-20 17:01:26 +03:00
62a4f45160 Raise test_scrub waiting timeout
2024-02-20 16:26:09 +03:00
7048228678 Supposed fix for "BUG: Attempt to overwrite used offset" 2024-02-20 15:56:48 +03:00
ea73857450 Add asserts to catch "BUG: Attempt to overwrite used offset" 2024-02-20 15:56:48 +03:00
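The added guard plausibly looks like this sketch (hypothetical tracking structure, not the actual journal allocator):

    #include <cassert>
    #include <cstdint>
    #include <unordered_set>

    std::unordered_set<uint64_t> used_offsets; // hypothetical "in use" set

    void place_journal_entry(uint64_t offset)
    {
        assert(used_offsets.find(offset) == used_offsets.end() &&
            "BUG: Attempt to overwrite used offset");
        used_offsets.insert(offset);
    }
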
6cfe38ec04 Followup to empty cur.oid as stop condition for forced trim fix 2024-02-20 15:56:38 +03:00
7ae5766fdb Wait to clear has_degraded in test_heal - should fix flaps of test_heal_* in CI 2024-02-20 15:56:27 +03:00
f882c7dd87 Release 1.4.5
- Fix a write stall caused by incorrect journal trimming introduced in 1.4.4 :)
- Fix PGs sometimes hanging in "starting" state on mass OSD restarts
- Fix a rare crash with "map::at" during OSD pings
- Use new defaults for non-capacitor (desktop) SSDs - improves T1Q256 random write from ~6k iops to ~45k iops
- Make journal_trim_interval configurable
2024-02-16 10:13:33 +03:00
26dd863c8d Fix sometimes possible crash on clients.at() during pings 2024-02-16 10:13:33 +03:00
2ae859fbc6 Use min/max_flusher_count=32/256, 128M journal and autosync_writes=512 for non-capacitor SSDs by default 2024-02-16 10:13:33 +03:00
f6cd9f9153 Add a note about pg_minsize 2024-02-15 23:38:52 +03:00
8389c0f33b Fix PGs sometimes hanging in "starting" state on mass OSD restarts 2024-02-15 23:38:52 +03:00
9db2196aef Make journal_trim_interval configurable 2024-02-15 23:38:51 +03:00
8d6ae662fe Use empty cur.oid as stop condition for forced trim, not journal_trim_counter 2024-02-15 23:27:17 +03:00
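A minimal sketch of the new stop condition; flush_next here is a stand-in for "flush one more journal entry and report which object it belonged to", with an empty oid meaning nothing was left:

    #include <cstdint>
    #include <functional>

    struct object_id { uint64_t inode = 0, stripe = 0; };
    static bool is_empty(const object_id & oid) { return !oid.inode && !oid.stripe; }

    void forced_trim(std::function<object_id()> flush_next)
    {
        object_id cur;
        do
            cur = flush_next();
        while (!is_empty(cur)); // previously: a journal_trim_counter countdown
    }
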
c777a0041a Release 1.4.4
A couple of fixes for EC pools

- Fix a segfault possible on partial EC overwrite in the 1234 -> 5030 rebalance scenario
- Fix two problems leading to EC pools stalling on rebalance & parallel sudden stops
  of OSDs, for example during a sudden poweroff of a host:
  - Recovery auto-tuning (a 1.4.0 feature) could apply too large delays and stall
    the EC journal - fixed by limiting delays with a new recovery_tune_sleep_cutoff_us
    parameter (10 seconds by default) and by applying recovery pauses before write
    operations, not after them, so as not to occupy journal space for a long time
  - Dynamic journal space reservation (a 1.3.0 feature) wasn't accounting for new writes
    when checking the limit, so OSDs could still fill the journal fully and stall -
    fixed by including new writes in the limit
- Print etcd dbSize instead of dbSizeInUse in status
2024-02-11 16:23:08 +03:00
2947ea93e8 Raise test_snapshot_chain_ec timeout to 6 minutes 2024-02-11 16:13:52 +03:00
978bdc128a Apply recovery pause before writes, after commits, and do not apply it to syncs to not block EC pools from functioning 2024-02-11 16:13:52 +03:00
bb2f395f1e Add cutoff threshold for recovery auto-tuning 2024-02-11 16:13:52 +03:00
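The cutoff itself is conceptually just a clamp; a sketch using the parameter name and default from the 1.4.4 notes above:

    #include <algorithm>
    #include <cstdint>

    // 10 seconds by default, per the release notes
    uint64_t clamp_recovery_sleep(uint64_t tuned_us,
        uint64_t recovery_tune_sleep_cutoff_us = 10000000)
    {
        return std::min(tuned_us, recovery_tune_sleep_cutoff_us);
    }
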
b127da40f7 Add a FIXME about incomplete PGs 2024-02-11 13:42:51 +03:00
ca34a6047a Fix dynamic journal space reservation: include the new write itself, too 2024-02-11 13:42:51 +03:00
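In sketch form, "include the new write itself" turns the limit check from "already reserved <= limit" into:

    #include <cstdint>

    bool reservation_ok(uint64_t already_reserved, uint64_t new_write_size, uint64_t limit)
    {
        // Counting the incoming write prevents many in-flight writes from each
        // passing the old check and jointly overfilling the journal.
        return already_reserved + new_write_size <= limit;
    }
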
38ba76e893 Fix flusher sometimes being unable to trim journal when the flush queue is empty 2024-02-11 13:42:51 +03:00
1e3c4edea0 Print etcd dbSize instead of dbSizeInUse in status 2024-02-11 13:42:51 +03:00
e7ac855b07 Fix that EC segfault (1234 -> 5030 partial overwrite) 2024-02-11 13:42:51 +03:00
c53357ac45 Add a test for EC segfault with partial overwrite in 1234 -> 5030 rebalance scenario 2024-02-11 13:42:51 +03:00
485 changed files with 48994 additions and 8056 deletions


@@ -22,7 +22,7 @@ RUN apt-get update
RUN apt-get -y install etcd qemu-system-x86 qemu-block-extra qemu-utils fio libasan5 \
liburing1 liburing-dev libgoogle-perftools-dev devscripts libjerasure-dev cmake libibverbs-dev libisal-dev
RUN apt-get -y build-dep fio qemu=`dpkg -s qemu-system-x86|grep ^Version:|awk '{print $2}'`
RUN apt-get -y install jq lp-solve sudo
RUN apt-get update && apt-get -y install jq lp-solve sudo nfs-common fdisk parted
RUN apt-get --download-only source fio qemu=`dpkg -s qemu-system-x86|grep ^Version:|awk '{print $2}'`
RUN set -ex; \


@@ -16,6 +16,7 @@ env:
BUILDENV_IMAGE: git.yourcmc.ru/vitalif/vitastor/buildenv
TEST_IMAGE: git.yourcmc.ru/vitalif/vitastor/test
OSD_ARGS: '--etcd_quick_timeout 2000'
USE_RAMDISK: 1
concurrency:
group: ci-${{ github.ref }}
@@ -64,6 +65,13 @@ jobs:
# leak sanitizer sometimes crashes
- run: cd /root/vitastor/build && ASAN_OPTIONS=detect_leaks=0 make -j16 test
npm_lint:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- run: cd /root/vitastor/mon && npm run lint
test_add_osd:
runs-on: ubuntu-latest
needs: build
@@ -190,6 +198,24 @@ jobs:
echo ""
done
test_etcd_fail_antietcd:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 10
run: ANTIETCD=1 /root/vitastor/tests/test_etcd_fail.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_interrupted_rebalance:
runs-on: ubuntu-latest
needs: build
@@ -262,6 +288,24 @@ jobs:
echo ""
done
test_create_halfhost:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_create_halfhost.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_failure_domain:
runs-on: ubuntu-latest
needs: build
@@ -370,6 +414,24 @@ jobs:
echo ""
done
test_rm_degraded:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_rm_degraded.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_snapshot_chain:
runs-on: ubuntu-latest
needs: build
@@ -395,7 +457,7 @@ jobs:
steps:
- name: Run test
id: test
timeout-minutes: 3
timeout-minutes: 6
run: SCHEME=ec /root/vitastor/tests/test_snapshot_chain.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
@@ -532,6 +594,42 @@ jobs:
echo ""
done
test_dd:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_dd.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_root_node:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_root_node.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_switch_primary:
runs-on: ubuntu-latest
needs: build
@@ -586,6 +684,24 @@ jobs:
echo ""
done
test_write_iothreads:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: TEST_NAME=iothreads GLOBAL_CONFIG=',"client_iothread_count":4' /root/vitastor/tests/test_write.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_write_no_same:
runs-on: ubuntu-latest
needs: build
@@ -622,6 +738,24 @@ jobs:
echo ""
done
test_heal_local_read:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 10
run: TEST_NAME=local_read POOLCFG='"local_reads":"random",' /root/vitastor/tests/test_heal.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_heal_ec:
runs-on: ubuntu-latest
needs: build
@@ -640,6 +774,24 @@ jobs:
echo ""
done
test_heal_antietcd:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 10
run: ANTIETCD=1 /root/vitastor/tests/test_heal.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_heal_csum_32k_dmj:
runs-on: ubuntu-latest
needs: build
@@ -748,6 +900,150 @@ jobs:
echo ""
done
test_resize:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_resize.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_resize_auto:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_resize_auto.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_snapshot_pool2:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_snapshot_pool2.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_osd_tags:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_osd_tags.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_enospc:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_enospc.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_enospc_xor:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: SCHEME=xor /root/vitastor/tests/test_enospc.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_enospc_imm:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: IMMEDIATE_COMMIT=1 /root/vitastor/tests/test_enospc.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_enospc_imm_xor:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: IMMEDIATE_COMMIT=1 SCHEME=xor /root/vitastor/tests/test_enospc.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_scrub:
runs-on: ubuntu-latest
needs: build
@@ -856,3 +1152,21 @@ jobs:
echo ""
done
test_nfs:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_nfs.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done


@@ -34,11 +34,19 @@ for my $line (<>)
{
$test_name .= '_imm';
}
elsif ($1 eq 'ANTIETCD')
{
$test_name .= '_antietcd';
}
else
{
$test_name .= '_'.lc($1).'_'.$2;
}
}
if ($test_name eq 'test_snapshot_chain_ec')
{
$timeout = 6;
}
$line =~ s!\./test_!/root/vitastor/tests/test_!;
# Gitea CI doesn't support artifacts yet, lol
#- name: Upload results

.gitignore

@@ -3,16 +3,3 @@
package-lock.json
fio
qemu
osd
stub_osd
stub_uring_osd
stub_bench
osd_test
osd_peering_pg_test
dump_journal
nbd_proxy
rm_inode
test_allocator
test_blockstore
test_shit
osd_rmw_test


@@ -2,6 +2,6 @@ cmake_minimum_required(VERSION 2.8.12)
project(vitastor)
set(VERSION "1.4.3")
set(VITASTOR_VERSION "2.2.2")
add_subdirectory(src)


@@ -1,4 +1,4 @@
## Vitastor
# Vitastor
[Read English version](README.md)
@@ -6,8 +6,8 @@
Вернём былую скорость кластерному блочному хранилищу!
Vitastor - распределённая блочная SDS (программная СХД), прямой аналог Ceph RBD и
внутренних СХД популярных облачных провайдеров. Однако, в отличие от них, Vitastor
Vitastor - распределённая блочная, файловая и объектная SDS (программная СХД), прямой аналог Ceph RBD, CephFS и RGW,
а также внутренних СХД популярных облачных провайдеров. Однако, в отличие от них, Vitastor
быстрый и при этом простой. Только пока маленький :-).
Vitastor архитектурно похож на Ceph, что означает атомарность и строгую консистентность,
@@ -19,10 +19,10 @@ Vitastor нацелен в первую очередь на SSD и SSD+HDD кл
TCP и RDMA и на хорошем железе может достигать задержки 4 КБ чтения и записи на уровне ~0.1 мс,
что примерно в 10 раз быстрее, чем Ceph и другие популярные программные СХД.
Vitastor поддерживает QEMU-драйвер, протоколы NBD и NFS, драйверы OpenStack, Proxmox, Kubernetes.
Vitastor поддерживает QEMU-драйвер, протоколы NBD и NFS, драйверы OpenStack, OpenNebula, Proxmox, Kubernetes.
Другие драйверы могут также быть легко реализованы.
Подробности смотрите в документации по ссылкам ниже.
Подробности смотрите в документации по ссылкам. Можете начать отсюда: [Быстрый старт](docs/intro/quickstart.ru.md).
## Презентации и записи докладов
@@ -41,16 +41,19 @@ Vitastor поддерживает QEMU-драйвер, протоколы NBD и
- [Автор и лицензия](docs/intro/author.ru.md)
- Установка
- [Пакеты](docs/installation/packages.ru.md)
- [Docker](docs/installation/docker.ru.md)
- [Proxmox](docs/installation/proxmox.ru.md)
- [OpenNebula](docs/installation/opennebula.ru.md)
- [OpenStack](docs/installation/openstack.ru.md)
- [Kubernetes CSI](docs/installation/kubernetes.ru.md)
- [S3](docs/installation/s3.ru.md)
- [Сборка из исходных кодов](docs/installation/source.ru.md)
- Конфигурация
- [Обзор](docs/config.ru.md)
- Параметры
- [Общие](docs/config/common.ru.md)
- [Сетевые](docs/config/network.ru.md)
- [Клиентский код](docs/config/client.en.md)
- [Клиентский код](docs/config/client.ru.md)
- [Глобальные дисковые параметры](docs/config/layout-cluster.ru.md)
- [Дисковые параметры OSD](docs/config/layout-osd.ru.md)
- [Прочие параметры OSD](docs/config/osd.ru.md)
@@ -63,11 +66,13 @@ Vitastor поддерживает QEMU-драйвер, протоколы NBD и
- [fio](docs/usage/fio.ru.md) для тестов производительности
- [NBD](docs/usage/nbd.ru.md) для монтирования ядром
- [QEMU и qemu-img](docs/usage/qemu.ru.md)
- [NFS](docs/usage/nfs.ru.md)-прокси для VMWare и подобных
- [NFS](docs/usage/nfs.ru.md) кластерная файловая система и псевдо-ФС прокси
- [Администрирование](docs/usage/admin.ru.md)
- Производительность
- [Понимание сути производительности](docs/performance/understanding.ru.md)
- [Теоретический максимум](docs/performance/theoretical.ru.md)
- [Пример сравнения с Ceph](docs/performance/comparison1.ru.md)
- [Более новый тест Vitastor 1.3.1](docs/performance/bench2.ru.md)
## Автор и лицензия


@@ -6,9 +6,9 @@
Make Clustered Block Storage Fast Again.
Vitastor is a distributed block SDS, direct replacement of Ceph RBD and internal SDS's
of public clouds. However, in contrast to them, Vitastor is fast and simple at the same time.
The only thing is it's slightly young :-).
Vitastor is a distributed block, file and object SDS, direct replacement of Ceph RBD, CephFS and RGW,
and also internal SDS's of public clouds. However, in contrast to them, Vitastor is fast
and simple at the same time. The only thing is it's slightly young :-).
Vitastor is architecturally similar to Ceph which means strong consistency,
primary-replication, symmetric clustering and automatic data distribution over any
@@ -19,10 +19,10 @@ supports TCP and RDMA and may achieve 4 KB read and write latency as low as ~0.1
with proper hardware which is ~10 times faster than other popular SDS's like Ceph
or internal systems of public clouds.
Vitastor supports QEMU, NBD, NFS protocols, OpenStack, Proxmox, Kubernetes drivers.
Vitastor supports QEMU, NBD, NFS protocols, OpenStack, OpenNebula, Proxmox, Kubernetes drivers.
More drivers may be created easily.
Read more details below in the documentation.
Read more details in the documentation. You can start from here: [Quick Start](docs/intro/quickstart.en.md).
## Talks and presentations
@@ -41,9 +41,12 @@ Read more details below in the documentation.
- [Author and license](docs/intro/author.en.md)
- Installation
- [Packages](docs/installation/packages.en.md)
- [Docker](docs/installation/docker.en.md)
- [Proxmox](docs/installation/proxmox.en.md)
- [OpenNebula](docs/installation/opennebula.en.md)
- [OpenStack](docs/installation/openstack.en.md)
- [Kubernetes CSI](docs/installation/kubernetes.en.md)
- [S3](docs/installation/s3.en.md)
- [Building from Source](docs/installation/source.en.md)
- Configuration
- [Overview](docs/config.en.md)
@@ -63,11 +66,13 @@ Read more details below in the documentation.
- [fio](docs/usage/fio.en.md) for benchmarks
- [NBD](docs/usage/nbd.en.md) for kernel mounts
- [QEMU and qemu-img](docs/usage/qemu.en.md)
- [NFS](docs/usage/nfs.en.md) emulator for VMWare and similar
- [NFS](docs/usage/nfs.en.md) clustered file system and pseudo-FS proxy
- [Administration](docs/usage/admin.en.md)
- Performance
- [Understanding storage performance](docs/performance/understanding.en.md)
- [Theoretical performance](docs/performance/theoretical.en.md)
- [Example comparison with Ceph](docs/performance/comparison1.en.md)
- [Newer benchmark of Vitastor 1.3.1](docs/performance/bench2.en.md)
## Author and License


@@ -1,6 +1,6 @@
#!/bin/bash
gcc -I. -E -o fio_headers.i src/fio_headers.h
gcc -I. -E -o fio_headers.i src/util/fio_headers.h
rm -rf fio-copy
for i in `grep -Po 'fio/[^"]+' fio_headers.i | sort | uniq`; do


@@ -5,7 +5,7 @@
#cd b/qemu; make qapi
gcc -I qemu/b/qemu `pkg-config glib-2.0 --cflags` \
-I qemu/include -E -o qemu_driver.i src/qemu_driver.c
-I qemu/include -E -o qemu_driver.i src/client/qemu_driver.c
rm -rf qemu-copy
for i in `grep -Po 'qemu/[^"]+' qemu_driver.i | sort | uniq`; do


@@ -22,6 +22,8 @@ RUN apt-get update && \
(echo "APT::Install-Recommends false;" > /etc/apt/apt.conf) && \
apt-get update && \
apt-get install -y e2fsprogs xfsprogs kmod iproute2 \
# NFS mount dependencies
nfs-common netbase \
# dependencies of qemu-storage-daemon
libnuma1 liburing2 libglib2.0-0 libfuse3-3 libaio1 libzstd1 libnettle8 \
libgmp10 libhogweed6 libp11-kit0 libidn2-0 libunistring2 libtasn1-6 libpcre2-8-0 libffi8 && \
@@ -35,8 +37,8 @@ RUN (echo deb http://vitastor.io/debian bookworm main > /etc/apt/sources.list.d/
wget -q -O /etc/apt/trusted.gpg.d/vitastor.gpg https://vitastor.io/debian/pubkey.gpg && \
apt-get update && \
apt-get install -y vitastor-client && \
wget https://vitastor.io/archive/qemu/qemu-bookworm-8.1.2%2Bds-1%2Bvitastor1/qemu-utils_8.1.2%2Bds-1%2Bvitastor1_amd64.deb && \
wget https://vitastor.io/archive/qemu/qemu-bookworm-8.1.2%2Bds-1%2Bvitastor1/qemu-block-extra_8.1.2%2Bds-1%2Bvitastor1_amd64.deb && \
wget https://vitastor.io/archive/qemu/qemu-bookworm-9.2.2%2Bds-1%2Bvitastor4/qemu-utils_9.2.2%2Bds-1%2Bvitastor4_amd64.deb && \
wget https://vitastor.io/archive/qemu/qemu-bookworm-9.2.2%2Bds-1%2Bvitastor4/qemu-block-extra_9.2.2%2Bds-1%2Bvitastor4_amd64.deb && \
dpkg -x qemu-utils*.deb tmp1 && \
dpkg -x qemu-block-extra*.deb tmp1 && \
cp -a tmp1/usr/bin/qemu-storage-daemon /usr/bin/ && \


@@ -1,9 +1,9 @@
VERSION ?= v1.4.3
VITASTOR_VERSION ?= v2.2.2
all: build push
build:
@docker build --rm -t vitalif/vitastor-csi:$(VERSION) .
@docker build --rm -t vitalif/vitastor-csi:$(VITASTOR_VERSION) .
push:
@docker push vitalif/vitastor-csi:$(VERSION)
@docker push vitalif/vitastor-csi:$(VITASTOR_VERSION)


@@ -49,7 +49,7 @@ spec:
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
image: vitalif/vitastor-csi:v1.4.3
image: vitalif/vitastor-csi:v2.2.2
args:
- "--node=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"


@@ -121,7 +121,7 @@ spec:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
image: vitalif/vitastor-csi:v1.4.3
image: vitalif/vitastor-csi:v2.2.2
args:
- "--node=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"


@@ -9,8 +9,16 @@ metadata:
provisioner: csi.vitastor.io
volumeBindingMode: Immediate
parameters:
etcdVolumePrefix: ""
poolId: "1"
# CSI driver can create block-based volumes and VitastorFS-based volumes
# only VitastorFS-based volumes and raw block volumes (without FS) support ReadWriteMany mode
# set this parameter to VitastorFS metadata volume name to use VitastorFS
# if unset, block-based volumes will be created
vitastorfs: ""
# for block-based storage classes, pool ID may be either a string (name) or a number (ID)
# for vitastorFS-based storage classes it must be a string - name of the default pool for FS data
poolId: "testpool"
# volume name prefix for block-based storage classes or NFS subdirectory (including /) for FS-based volumes
volumePrefix: ""
# you can choose other configuration file if you have it in the config map
# different etcd URLs and prefixes should also be put in the config
#configPath: "/etc/vitastor/vitastor.conf"


@@ -0,0 +1,25 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
namespace: vitastor-system
name: vitastor
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vitastor.io
volumeBindingMode: Immediate
parameters:
# CSI driver can create block-based volumes and VitastorFS-based volumes
# only VitastorFS-based volumes and raw block volumes (without FS) support ReadWriteMany mode
# set this parameter to VitastorFS metadata volume name to use VitastorFS
# if unset, block-based volumes will be created
vitastorfs: "testfs"
# for block-based storage classes, pool ID may be either a string (name) or a number (ID)
# for vitastorFS-based storage classes it must be a string - name of the default pool for FS data
poolId: "testpool"
# volume name prefix for block-based storage classes or NFS subdirectory (including /) for FS-based volumes
volumePrefix: "k8s/"
# you can choose other configuration file if you have it in the config map
# different etcd URLs and prefixes should also be put in the config
#configPath: "/etc/vitastor/vitastor.conf"
allowVolumeExpansion: true


@@ -3,10 +3,10 @@ module vitastor.io/csi
go 1.15
require (
github.com/container-storage-interface/spec v1.4.0
github.com/container-storage-interface/spec v1.8.0
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b
github.com/kubernetes-csi/csi-lib-utils v0.9.1
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb
golang.org/x/net v0.7.0
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/grpc v1.33.1
google.golang.org/protobuf v1.24.0


@@ -41,8 +41,8 @@ github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWR
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/container-storage-interface/spec v1.2.0/go.mod h1:6URME8mwIBbpVyZV93Ce5St17xBiQJQY67NDsuohiy4=
github.com/container-storage-interface/spec v1.4.0 h1:ozAshSKxpJnYUfmkpZCTYyF/4MYeYlhdXbAvPvfGmkg=
github.com/container-storage-interface/spec v1.4.0/go.mod h1:6URME8mwIBbpVyZV93Ce5St17xBiQJQY67NDsuohiy4=
github.com/container-storage-interface/spec v1.8.0 h1:D0vhF3PLIZwlwZEf2eNbpujGCNwspwTYf2idJRJx4xI=
github.com/container-storage-interface/spec v1.8.0/go.mod h1:ROLik+GhPslwwWRNFF1KasPzroNARibH2rfz1rkg4H0=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -182,6 +182,7 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
@@ -195,6 +196,7 @@ golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -213,6 +215,7 @@ golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCc
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -228,8 +231,10 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb h1:eBmm0M9fYhWpKZLjQUUKka/LtIxf46G4fxeEz5KJr9U=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -240,6 +245,7 @@ golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -259,13 +265,22 @@ golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f h1:+Nyd8tzPX9R7BWHguqsrbFdRx3WQ/1ib8I44HXV5yTA=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -286,8 +301,10 @@ golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgw
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=


@@ -5,7 +5,7 @@ package vitastor
const (
vitastorCSIDriverName = "csi.vitastor.io"
vitastorCSIDriverVersion = "1.4.3"
vitastorCSIDriverVersion = "2.2.2"
)
// Config struct fills the parameters of request or user input


@@ -8,11 +8,8 @@ import (
"encoding/json"
"fmt"
"strings"
"bytes"
"strconv"
"time"
"os"
"os/exec"
"io/ioutil"
"github.com/kubernetes-csi/csi-lib-utils/protosanitizer"
@@ -70,9 +67,10 @@ func GetConnectionParams(params map[string]string) (map[string]string, error)
{
configPath = "/etc/vitastor/vitastor.conf"
}
else
ctxVars["configPath"] = configPath
if (params["vitastorfs"] != "")
{
ctxVars["configPath"] = configPath
ctxVars["vitastorfs"] = params["vitastorfs"]
}
config := make(map[string]interface{})
configFD, err := os.Open(configPath)
@@ -114,22 +112,6 @@ func GetConnectionParams(params map[string]string) (map[string]string, error)
return ctxVars, nil
}
func system(program string, args ...string) ([]byte, []byte, error)
{
klog.Infof("Running "+program+" "+strings.Join(args, " "))
c := exec.Command(program, args...)
var stdout, stderr bytes.Buffer
c.Stdout, c.Stderr = &stdout, &stderr
err := c.Run()
if (err != nil)
{
stdoutStr, stderrStr := string(stdout.Bytes()), string(stderr.Bytes())
klog.Errorf(program+" "+strings.Join(args, " ")+" failed: %s, status %s\n", stdoutStr+stderrStr, err)
return nil, nil, status.Error(codes.Internal, stdoutStr+stderrStr+" (status "+err.Error()+")")
}
return stdout.Bytes(), stderr.Bytes(), nil
}
func invokeCLI(ctxVars map[string]string, args []string) ([]byte, error)
{
if (ctxVars["configPath"] != "")
@@ -158,27 +140,57 @@ func (cs *ControllerServer) CreateVolume(ctx context.Context, req *csi.CreateVol
return nil, status.Error(codes.InvalidArgument, "volume capabilities is a required field")
}
etcdVolumePrefix := req.Parameters["etcdVolumePrefix"]
poolId, _ := strconv.ParseUint(req.Parameters["poolId"], 10, 64)
if (poolId == 0)
{
return nil, status.Error(codes.InvalidArgument, "poolId is missing in storage class configuration")
}
volName := etcdVolumePrefix + req.GetName()
volSize := 1 * GB
if capRange := req.GetCapacityRange(); capRange != nil
{
volSize = ((capRange.GetRequiredBytes() + MB - 1) / MB) * MB
}
ctxVars, err := GetConnectionParams(req.Parameters)
if (err != nil)
{
return nil, err
}
args := []string{ "create", volName, "-s", fmt.Sprintf("%v", volSize), "--pool", fmt.Sprintf("%v", poolId) }
err = cs.checkCaps(volumeCapabilities, ctxVars["vitastorfs"] != "")
if (err != nil)
{
return nil, err
}
pool := req.Parameters["poolId"]
if (pool == "")
{
return nil, status.Error(codes.InvalidArgument, "poolId is missing in storage class configuration")
}
volumePrefix := req.Parameters["volumePrefix"]
if (volumePrefix == "")
{
// Old name
volumePrefix = req.Parameters["etcdVolumePrefix"]
}
volName := volumePrefix + req.GetName()
volSize := 1 * GB
if capRange := req.GetCapacityRange(); capRange != nil
{
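// Round the requested size up to a whole multiple of 1 MB (ceiling division by MB)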
volSize = ((capRange.GetRequiredBytes() + MB - 1) / MB) * MB
}
if (ctxVars["vitastorfs"] != "")
{
// Nothing to create, subdirectories are created during mounting
// FIXME: It would be cool to support quotas some day and set it here
if (req.VolumeContentSource.GetSnapshot() != nil)
{
return nil, status.Error(codes.InvalidArgument, "VitastorFS doesn't support snapshots")
}
ctxVars["name"] = volName
ctxVars["pool"] = pool
volumeIdJson, _ := json.Marshal(ctxVars)
return &csi.CreateVolumeResponse{
Volume: &csi.Volume{
// Ugly, but VolumeContext isn't passed to DeleteVolume :-(
VolumeId: string(volumeIdJson),
CapacityBytes: volSize,
},
}, nil
}
args := []string{ "create", volName, "-s", fmt.Sprintf("%v", volSize), "--pool", pool }
// Support creation from snapshot
var src *csi.VolumeContentSource
@@ -261,6 +273,12 @@ func (cs *ControllerServer) DeleteVolume(ctx context.Context, req *csi.DeleteVol
return nil, err
}
if (ctxVars["vitastorfs"] != "")
{
// FIXME: Delete FS subdirectory
return &csi.DeleteVolumeResponse{}, nil
}
_, err = invokeCLI(ctxVars, []string{ "rm", volName })
if (err != nil)
{
@@ -295,19 +313,72 @@ func (cs *ControllerServer) ValidateVolumeCapabilities(ctx context.Context, req
{
return nil, status.Error(codes.InvalidArgument, "volumeId is nil")
}
volVars := make(map[string]string)
err := json.Unmarshal([]byte(volumeID), &volVars)
if (err != nil)
{
return nil, status.Error(codes.Internal, "volume ID not in JSON format")
}
ctxVars, err := GetConnectionParams(volVars)
if (err != nil)
{
return nil, err
}
volumeCapabilities := req.GetVolumeCapabilities()
if (volumeCapabilities == nil)
{
return nil, status.Error(codes.InvalidArgument, "volumeCapabilities is nil")
}
err = cs.checkCaps(volumeCapabilities, ctxVars["vitastorfs"] != "")
if (err != nil)
{
return nil, err
}
return &csi.ValidateVolumeCapabilitiesResponse{
Confirmed: &csi.ValidateVolumeCapabilitiesResponse_Confirmed{
VolumeCapabilities: req.VolumeCapabilities,
},
}, nil
}
func (cs *ControllerServer) checkCaps(volumeCapabilities []*csi.VolumeCapability, fs bool) error
{
var volumeCapabilityAccessModes []*csi.VolumeCapability_AccessMode
for _, mode := range []csi.VolumeCapability_AccessMode_Mode{
csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER,
csi.VolumeCapability_AccessMode_SINGLE_NODE_READER_ONLY,
csi.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY,
csi.VolumeCapability_AccessMode_SINGLE_NODE_SINGLE_WRITER,
csi.VolumeCapability_AccessMode_SINGLE_NODE_MULTI_WRITER,
} {
volumeCapabilityAccessModes = append(volumeCapabilityAccessModes, &csi.VolumeCapability_AccessMode{Mode: mode})
}
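// Raw block volumes can additionally allow multi-node writers - there's no filesystem to corrupt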
for _, capability := range volumeCapabilities
{
if (capability.GetBlock() != nil)
{
if (fs)
{
return status.Errorf(codes.InvalidArgument, "%v not supported with FS-based volumes", capability)
}
for _, mode := range []csi.VolumeCapability_AccessMode_Mode{
csi.VolumeCapability_AccessMode_MULTI_NODE_SINGLE_WRITER,
csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER,
} {
volumeCapabilityAccessModes = append(volumeCapabilityAccessModes, &csi.VolumeCapability_AccessMode{Mode: mode})
}
break
}
}
if (fs)
{
// All access modes including RWX are supported with FS-based volumes
return nil
}
capabilitySupport := false
for _, capability := range volumeCapabilities
@@ -323,14 +394,10 @@ func (cs *ControllerServer) ValidateVolumeCapabilities(ctx context.Context, req
if (!capabilitySupport)
{
return nil, status.Errorf(codes.NotFound, "%v not supported", req.GetVolumeCapabilities())
return status.Errorf(codes.InvalidArgument, "%v not supported", volumeCapabilities)
}
return &csi.ValidateVolumeCapabilitiesResponse{
Confirmed: &csi.ValidateVolumeCapabilitiesResponse_Confirmed{
VolumeCapabilities: req.VolumeCapabilities,
},
}, nil
return nil
}
// ListVolumes returns a list of volumes
@@ -419,6 +486,12 @@ func (cs *ControllerServer) CreateSnapshot(ctx context.Context, req *csi.CreateS
{
return nil, status.Error(codes.Internal, "volume ID not in JSON format")
}
if (ctxVars["vitastorfs"] != "")
{
return nil, status.Error(codes.InvalidArgument, "VitastorFS doesn't support snapshots")
}
volName := ctxVars["name"]
// Create image using vitastor-cli
@@ -477,6 +550,11 @@ func (cs *ControllerServer) DeleteSnapshot(ctx context.Context, req *csi.DeleteS
return nil, err
}
if (ctxVars["vitastorfs"] != "")
{
return nil, status.Error(codes.InvalidArgument, "VitastorFS doesn't support snapshots")
}
_, err = invokeCLI(ctxVars, []string{ "rm", volName+"@"+snapName })
if (err != nil)
{
@@ -508,6 +586,11 @@ func (cs *ControllerServer) ListSnapshots(ctx context.Context, req *csi.ListSnap
return nil, err
}
if (ctxVars["vitastorfs"] != "")
{
return nil, status.Error(codes.InvalidArgument, "VitastorFS doesn't support snapshots")
}
inodeCfg, err := invokeList(ctxVars, volName+"@*", false)
if (err != nil)
{
@@ -571,6 +654,16 @@ func (cs *ControllerServer) ControllerExpandVolume(ctx context.Context, req *csi
return nil, err
}
if (ctxVars["vitastorfs"] != "")
{
// Nothing to change
// FIXME: Support quotas and change quota here
return &csi.ControllerExpandVolumeResponse{
CapacityBytes: req.CapacityRange.RequiredBytes,
NodeExpansionRequired: false,
}, nil
}
inodeCfg, err := invokeList(ctxVars, volName, true)
if (err != nil)
{

File diff suppressed because it is too large

csi/src/utils.go Normal file

@@ -0,0 +1,342 @@
// Copyright (c) Vitaliy Filippov, 2019+
// License: VNPL-1.1 or GNU GPL-2.0+ (see README.md for details)
package vitastor
import (
"bytes"
"errors"
"encoding/json"
"fmt"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"syscall"
"k8s.io/klog"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
func Contains(list []string, s string) bool
{
for i := 0; i < len(list); i++
{
if (list[i] == s)
{
return true
}
}
return false
}
func checkVduseSupport() bool
{
// Check VDUSE support (vdpa, vduse, virtio-vdpa kernel modules)
vduse := true
for _, mod := range []string{"vdpa", "vduse", "virtio-vdpa"}
{
_, err := os.Stat("/sys/module/"+mod)
if (err != nil)
{
if (!errors.Is(err, os.ErrNotExist))
{
klog.Errorf("failed to check /sys/module/%s: %v", mod, err)
}
c := exec.Command("/sbin/modprobe", mod)
c.Stdout = os.Stderr
c.Stderr = os.Stderr
err := c.Run()
if (err != nil)
{
klog.Errorf("/sbin/modprobe %s failed: %v", mod, err)
vduse = false
break
}
}
}
// Check that vdpa tool functions
if (vduse)
{
c := exec.Command("/sbin/vdpa", "-j", "dev")
c.Stderr = os.Stderr
err := c.Run()
if (err != nil)
{
klog.Errorf("/sbin/vdpa -j dev failed: %v", err)
vduse = false
}
}
if (!vduse)
{
klog.Errorf(
"Your host apparently has no VDUSE support. VDUSE support disabled, NBD will be used to map devices."+
" For VDUSE you need at least Linux 5.15 and the following kernel modules: vdpa, virtio-vdpa, vduse.",
)
}
else
{
klog.Infof("VDUSE support enabled successfully")
}
return vduse
}
func mapNbd(volName string, ctxVars map[string]string, readonly bool) (string, error)
{
// Map NBD device
// FIXME: Check if already mapped
args := []string{
"map", "--image", volName,
}
if (ctxVars["configPath"] != "")
{
args = append(args, "--config_path", ctxVars["configPath"])
}
if (readonly)
{
args = append(args, "--readonly", "1")
}
stdout, stderr, err := system("/usr/bin/vitastor-nbd", args...)
dev := strings.TrimSpace(string(stdout))
if (dev == "")
{
return "", fmt.Errorf("vitastor-nbd did not return the name of NBD device. output: %s", stderr)
}
klog.Infof("Attached volume %s via NBD as %s", volName, dev)
return dev, err
}
func unmapNbd(devicePath string)
{
// unmap NBD device
unmapOut, unmapErr := exec.Command("/usr/bin/vitastor-nbd", "unmap", devicePath).CombinedOutput()
if (unmapErr != nil)
{
klog.Errorf("failed to unmap NBD device %s: %s, error: %v", devicePath, unmapOut, unmapErr)
}
}
func findByPidFile(pidFile string) (*os.Process, error)
{
pidBuf, err := os.ReadFile(pidFile)
if (err != nil)
{
return nil, err
}
pid, err := strconv.ParseInt(strings.TrimSpace(string(pidBuf)), 0, 64)
if (err != nil)
{
return nil, err
}
proc, err := os.FindProcess(int(pid))
if (err != nil)
{
return nil, err
}
return proc, nil
}
func killByPidFile(pidFile string) error
{
klog.Infof("killing process with PID from file %s", pidFile)
proc, err := findByPidFile(pidFile)
if (err != nil)
{
return err
}
return proc.Signal(syscall.SIGTERM)
}
func startStorageDaemon(vdpaId, volName, pidFile, configPath string, readonly bool) error
{
// Start qemu-storage-daemon
blockSpec := map[string]interface{}{
"node-name": "disk1",
"driver": "vitastor",
"image": volName,
"cache": map[string]bool{
"direct": true,
"no-flush": false,
},
"discard": "unmap",
}
if (configPath != "")
{
blockSpec["config-path"] = configPath
}
blockSpecJson, _ := json.Marshal(blockSpec)
writable := "true"
if (readonly)
{
writable = "false"
}
_, _, err := system(
"/usr/bin/qemu-storage-daemon", "--daemonize", "--pidfile", pidFile, "--blockdev", string(blockSpecJson),
"--export", "vduse-blk,id="+vdpaId+",node-name=disk1,name="+vdpaId+",num-queues=16,queue-size=128,writable="+writable,
)
return err
}
func mapVduse(stateDir string, volName string, ctxVars map[string]string, readonly bool) (string, string, error)
{
// Generate state file
stateFd, err := os.CreateTemp(stateDir, "vitastor-vduse-*.json")
if (err != nil)
{
return "", "", err
}
stateFile := stateFd.Name()
stateFd.Close()
vdpaId := filepath.Base(stateFile)
vdpaId = vdpaId[0:len(vdpaId)-5] // remove ".json"
pidFile := stateDir + vdpaId + ".pid"
// Map VDUSE device via qemu-storage-daemon
err = startStorageDaemon(vdpaId, volName, pidFile, ctxVars["configPath"], readonly)
if (err == nil)
{
// Add device to VDPA bus
_, _, err = system("/sbin/vdpa", "-j", "dev", "add", "name", vdpaId, "mgmtdev", "vduse")
if (err == nil)
{
// Find block device name
var matches []string
matches, err = filepath.Glob("/sys/bus/vdpa/devices/"+vdpaId+"/virtio*/block/*")
if (err == nil && len(matches) == 0)
{
err = errors.New("/sys/bus/vdpa/devices/"+vdpaId+"/virtio*/block/* is not found")
}
if (err == nil)
{
blockdev := "/dev/"+filepath.Base(matches[0])
_, err = os.Stat(blockdev)
if (err == nil)
{
// Generate state file
stateJSON, _ := json.Marshal(&DeviceState{
ConfigPath: ctxVars["configPath"],
VdpaId: vdpaId,
Image: volName,
Blockdev: blockdev,
Readonly: readonly,
PidFile: pidFile,
})
err = os.WriteFile(stateFile, stateJSON, 0600)
if (err == nil)
{
klog.Infof("Attached volume %s via VDUSE as %s (VDPA ID %s)", volName, blockdev, vdpaId)
return blockdev, vdpaId, nil
}
}
}
}
killErr := killByPidFile(pidFile)
if (killErr != nil)
{
klog.Errorf("Failed to kill started qemu-storage-daemon: %v", killErr)
}
os.Remove(stateFile)
os.Remove(pidFile)
}
return "", "", err
}
func unmapVduse(stateDir, devicePath string)
{
if (len(devicePath) < 6 || devicePath[0:6] != "/dev/v")
{
klog.Errorf("%s does not start with /dev/v", devicePath)
return
}
vduseDev, err := os.Readlink("/sys/block/"+devicePath[5:])
if (err != nil)
{
klog.Errorf("%s is not a symbolic link to VDUSE device (../devices/virtual/vduse/xxx): %v", devicePath, err)
return
}
vdpaId := ""
p := strings.Index(vduseDev, "/vduse/")
if (p >= 0)
{
vduseDev = vduseDev[p+7:]
p = strings.Index(vduseDev, "/")
if (p >= 0)
{
vdpaId = vduseDev[0:p]
}
}
if (vdpaId == "")
{
klog.Errorf("%s is not a symbolic link to VDUSE device (../devices/virtual/vduse/xxx), but is %v", devicePath, vduseDev)
return
}
unmapVduseById(stateDir, vdpaId)
}
func unmapVduseById(stateDir, vdpaId string)
{
_, err := os.Stat("/sys/bus/vdpa/devices/"+vdpaId)
if (err != nil)
{
klog.Errorf("failed to stat /sys/bus/vdpa/devices/"+vdpaId+": %v", err)
}
else
{
_, _, _ = system("/sbin/vdpa", "-j", "dev", "del", vdpaId)
}
stateFile := stateDir + vdpaId + ".json"
os.Remove(stateFile)
pidFile := stateDir + vdpaId + ".pid"
_, err = os.Stat(pidFile)
if (os.IsNotExist(err))
{
// ok, already killed
}
else if (err != nil)
{
klog.Errorf("Failed to stat %v: %v", pidFile, err)
return
}
else
{
err = killByPidFile(pidFile)
if (err != nil)
{
klog.Errorf("Failed to kill started qemu-storage-daemon: %v", err)
}
os.Remove(pidFile)
}
}
func system(program string, args ...string) ([]byte, []byte, error)
{
klog.Infof("Running "+program+" "+strings.Join(args, " "))
c := exec.Command(program, args...)
var stdout, stderr bytes.Buffer
c.Stdout, c.Stderr = &stdout, &stderr
err := c.Run()
if (err != nil)
{
stdoutStr, stderrStr := string(stdout.Bytes()), string(stderr.Bytes())
klog.Errorf(program+" "+strings.Join(args, " ")+" failed: %s\nOutput:\n%s", err, stdoutStr+stderrStr)
return nil, nil, status.Error(codes.Internal, stdoutStr+stderrStr+" (status "+err.Error()+")")
}
return stdout.Bytes(), stderr.Bytes(), nil
}
func systemCombined(program string, args ...string) ([]byte, error)
{
klog.Infof("Running "+program+" "+strings.Join(args, " "))
c := exec.Command(program, args...)
var out bytes.Buffer
c.Stdout, c.Stderr = &out, &out
err := c.Run()
if (err != nil)
{
outStr := string(out.Bytes())
klog.Errorf(program+" "+strings.Join(args, " ")+" failed: %s, status %s\n", outStr, err)
return nil, status.Error(codes.Internal, outStr+" (status "+err.Error()+")")
}
return out.Bytes(), nil
}


@@ -3,5 +3,5 @@
cat < vitastor.Dockerfile > ../Dockerfile
cd ..
mkdir -p packages
sudo podman build --build-arg REL=bookworm -v `pwd`/packages:/root/packages -f Dockerfile .
sudo podman build --build-arg DISTRO=debian --build-arg REL=bookworm -v `pwd`/packages:/root/packages -f Dockerfile .
rm Dockerfile


@@ -3,5 +3,5 @@
cat < vitastor.Dockerfile > ../Dockerfile
cd ..
mkdir -p packages
sudo podman build --build-arg REL=bullseye -v `pwd`/packages:/root/packages -f Dockerfile .
sudo podman build --build-arg DISTRO=debian --build-arg REL=bullseye -v `pwd`/packages:/root/packages -f Dockerfile .
rm Dockerfile


@@ -3,5 +3,5 @@
cat < vitastor.Dockerfile > ../Dockerfile
cd ..
mkdir -p packages
sudo podman build --build-arg REL=buster -v `pwd`/packages:/root/packages -f Dockerfile .
sudo podman build --build-arg DISTRO=debian --build-arg REL=buster -v `pwd`/packages:/root/packages -f Dockerfile .
rm Dockerfile

debian/build-vitastor-ubuntu-jammy.sh vendored Executable file

@@ -0,0 +1,7 @@
#!/bin/bash
cat < vitastor.Dockerfile > ../Dockerfile
cd ..
mkdir -p packages
sudo podman build --build-arg DISTRO=ubuntu --build-arg REL=jammy -v `pwd`/packages:/root/packages -f Dockerfile .
rm Dockerfile

debian/changelog vendored

@@ -1,4 +1,4 @@
vitastor (1.4.3-1) unstable; urgency=medium
vitastor (2.2.2-1) unstable; urgency=medium
* Bugfixes

debian/control vendored

@@ -2,7 +2,10 @@ Source: vitastor
Section: admin
Priority: optional
Maintainer: Vitaliy Filippov <vitalif@yourcmc.ru>
Build-Depends: debhelper, liburing-dev (>= 0.6), g++ (>= 8), libstdc++6 (>= 8), linux-libc-dev, libgoogle-perftools-dev, libjerasure-dev, libgf-complete-dev, libibverbs-dev, libisal-dev, cmake, pkg-config
Build-Depends: debhelper, liburing-dev (>= 0.6), g++ (>= 8), libstdc++6 (>= 8),
linux-libc-dev, libgoogle-perftools-dev, libjerasure-dev, libgf-complete-dev,
libibverbs-dev, libisal-dev, cmake, pkg-config, libnl-3-dev, libnl-genl-3-dev,
node-bindings <!nocheck>, node-gyp, node-nan
Standards-Version: 4.5.0
Homepage: https://vitastor.io/
Rules-Requires-Root: no
@@ -53,3 +56,15 @@ Architecture: amd64
Depends: ${shlibs:Depends}, ${misc:Depends}, vitastor-client (= ${binary:Version})
Description: Vitastor Proxmox Virtual Environment storage plugin
Vitastor storage plugin for Proxmox Virtual Environment.
Package: vitastor-opennebula
Architecture: amd64
Depends: ${shlibs:Depends}, ${misc:Depends}, vitastor-client, patch, python3, jq
Description: Vitastor OpenNebula storage plugin
Vitastor storage plugin for OpenNebula.
Package: node-vitastor
Architecture: amd64
Depends: ${shlibs:Depends}, ${misc:Depends}, node-bindings
Description: Node.js bindings for Vitastor client
Node.js native bindings for the Vitastor client library (vitastor-client).


@@ -1,13 +1,14 @@
# Build patched libvirt for Debian Buster or Bullseye/Sid inside a container
# cd ..; podman build --build-arg REL=bullseye -v `pwd`/packages:/root/packages -f debian/libvirt.Dockerfile .
# cd ..; podman build --build-arg DISTRO=debian --build-arg REL=bullseye -v `pwd`/packages:/root/packages -f debian/libvirt.Dockerfile .
ARG DISTRO=
ARG REL=
FROM debian:$REL
FROM $DISTRO:$REL
ARG REL=
WORKDIR /root
RUN if [ "$REL" = "buster" -o "$REL" = "bullseye" ]; then \
RUN if ([ "${DISTRO}" = "debian" ]) && ( [ "${REL}" = "buster" -o "${REL}" = "bullseye" ] ); then \
echo "deb http://deb.debian.org/debian $REL-backports main" >> /etc/apt/sources.list; \
echo >> /etc/apt/preferences; \
echo 'Package: *' >> /etc/apt/preferences; \
@@ -23,7 +24,7 @@ RUN apt-get -y build-dep libvirt0
RUN apt-get -y install libglusterfs-dev
RUN apt-get --download-only source libvirt
ADD patches/libvirt-5.0-vitastor.diff patches/libvirt-7.0-vitastor.diff patches/libvirt-7.5-vitastor.diff patches/libvirt-7.6-vitastor.diff /root
ADD patches/libvirt-5.0-vitastor.diff patches/libvirt-7.0-vitastor.diff patches/libvirt-7.5-vitastor.diff patches/libvirt-7.6-vitastor.diff patches/libvirt-8.0-vitastor.diff /root
RUN set -e; \
mkdir -p /root/packages/libvirt-$REL; \
rm -rf /root/packages/libvirt-$REL/*; \

debian/node-vitastor.install vendored Normal file

@@ -0,0 +1 @@
usr/lib/x86_64-linux-gnu/nodejs/vitastor


@@ -1,17 +1,23 @@
# Build patched QEMU for Debian inside a container
# cd ..; podman build --build-arg REL=bullseye -v `pwd`/packages:/root/packages -f debian/patched-qemu.Dockerfile .
ARG DISTRO=debian
ARG REL=
FROM debian:$REL
FROM $DISTRO:$REL
ARG DISTRO=debian
ARG REL=
WORKDIR /root
RUN if [ "$REL" = "buster" -o "$REL" = "bullseye" -o "$REL" = "bookworm" ]; then \
echo "deb http://deb.debian.org/debian $REL-backports main" >> /etc/apt/sources.list; \
if [ "$REL" = "buster" ]; then \
echo "deb http://archive.debian.org/debian $REL-backports main" >> /etc/apt/sources.list; \
else \
echo "deb http://deb.debian.org/debian $REL-backports main" >> /etc/apt/sources.list; \
fi; \
echo >> /etc/apt/preferences; \
echo 'Package: *' >> /etc/apt/preferences; \
echo "Pin: release a=$REL-backports" >> /etc/apt/preferences; \
echo "Pin: release n=$REL-backports" >> /etc/apt/preferences; \
echo 'Pin-Priority: 500' >> /etc/apt/preferences; \
fi; \
grep '^deb ' /etc/apt/sources.list | perl -pe 's/^deb/deb-src/' >> /etc/apt/sources.list; \
@@ -20,14 +26,14 @@ RUN if [ "$REL" = "buster" -o "$REL" = "bullseye" -o "$REL" = "bookworm" ]; then
echo 'APT::Install-Suggests false;' >> /etc/apt/apt.conf
RUN apt-get update
RUN apt-get -y install fio liburing-dev libgoogle-perftools-dev devscripts
RUN apt-get -y build-dep qemu
RUN DEBIAN_FRONTEND=noninteractive TZ=Europe/Moscow apt-get -y install fio liburing-dev libgoogle-perftools-dev devscripts
RUN DEBIAN_FRONTEND=noninteractive TZ=Europe/Moscow apt-get -y build-dep qemu
# To build a custom version
#RUN cp /root/packages/qemu-orig/* /root
RUN apt-get --download-only source qemu
ADD patches /root/vitastor/patches
ADD src/qemu_driver.c /root/vitastor/src/qemu_driver.c
ADD src/client/qemu_driver.c /root/qemu_driver.c
#RUN set -e; \
# apt-get install -y wget; \
@@ -38,9 +44,9 @@ ADD src/qemu_driver.c /root/vitastor/src/qemu_driver.c
# apt-get install -y vitastor-client vitastor-client-dev quilt
RUN set -e; \
dpkg -i /root/packages/vitastor-$REL/vitastor-client_*.deb /root/packages/vitastor-$REL/vitastor-client-dev_*.deb; \
DEBIAN_FRONTEND=noninteractive TZ=Europe/Moscow apt-get -y install /root/packages/vitastor-$REL/vitastor-client_*.deb /root/packages/vitastor-$REL/vitastor-client-dev_*.deb; \
apt-get update; \
apt-get install -y quilt; \
DEBIAN_FRONTEND=noninteractive TZ=Europe/Moscow apt-get -y install quilt; \
mkdir -p /root/packages/qemu-$REL; \
rm -rf /root/packages/qemu-$REL/*; \
cd /root/packages/qemu-$REL; \
@@ -52,9 +58,9 @@ RUN set -e; \
cd /root/packages/qemu-$REL/qemu-*/; \
quilt push -a; \
quilt add block/vitastor.c; \
cp /root/vitastor/src/qemu_driver.c block/vitastor.c; \
cp /root/qemu_driver.c block/vitastor.c; \
quilt refresh; \
V=$(head -n1 debian/changelog | perl -pe 's/5\.2\+dfsg-9/5.2+dfsg-11/; s/^.*\((.*?)(~bpo[\d\+]*)?\).*$/$1/')+vitastor4; \
V=$(head -n1 debian/changelog | perl -pe 's/5\.2\+dfsg-9/5.2+dfsg-11/; s/^.*\((.*?)(\+deb\d+u\d+)?(~bpo[\d\+]*)?\).*$/$1/')+vitastor5; \
if [ "$REL" = bullseye ]; then V=${V}bullseye; fi; \
DEBEMAIL="Vitaliy Filippov <vitalif@yourcmc.ru>" dch -D $REL -v $V 'Plug Vitastor block driver'; \
DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage --jobs=auto -sa; \

debian/rules vendored

@@ -4,6 +4,14 @@ export DH_VERBOSE = 1
%:
dh $@
override_dh_install:
perl -pe 's!prefix=/usr!prefix='`pwd`'/debian/tmp/usr!' < obj-x86_64-linux-gnu/src/client/vitastor.pc > node-binding/vitastor.pc
cd node-binding && PKG_CONFIG_PATH=./ PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1 npm install --unsafe-perm || exit 1
mkdir -p debian/tmp/usr/lib/x86_64-linux-gnu/nodejs/vitastor/build/Release
cp -v node-binding/package.json node-binding/index.js node-binding/addon.cc node-binding/addon.h node-binding/client.cc node-binding/client.h debian/tmp/usr/lib/x86_64-linux-gnu/nodejs/vitastor
cp -v node-binding/build/Release/addon.node debian/tmp/usr/lib/x86_64-linux-gnu/nodejs/vitastor/build/Release
dh_install
override_dh_installdeb:
cat debian/fio_version >> debian/vitastor-fio.substvars
[ -f debian/qemu_version ] && (cat debian/qemu_version >> debian/vitastor-qemu.substvars) || true


@@ -3,4 +3,6 @@ usr/bin/vitastor-cli
usr/bin/vitastor-rm
usr/bin/vitastor-nbd
usr/bin/vitastor-nfs
usr/bin/vitastor-kv
usr/bin/vitastor-kv-stress
usr/lib/*/libvitastor*.so*


@@ -1,2 +1,3 @@
mon usr/lib/vitastor
mon/vitastor-mon.service /lib/systemd/system
mon usr/lib/vitastor/
mon/scripts/make-etcd usr/lib/vitastor/mon
mon/scripts/vitastor-mon.service /lib/systemd/system


@@ -6,4 +6,6 @@ if [ "$1" = "configure" ]; then
addgroup --system --quiet vitastor
adduser --system --quiet --ingroup vitastor --no-create-home --home /nonexistent vitastor
mkdir -p /etc/vitastor
mkdir -p /var/lib/vitastor
chown vitastor:vitastor /var/lib/vitastor
fi

debian/vitastor-opennebula.install vendored Normal file

@@ -0,0 +1,3 @@
opennebula/remotes var/lib/one/
opennebula/sudoers.d etc/
opennebula/install.sh var/lib/one/remotes/datastore/vitastor/

debian/vitastor-opennebula.postinst vendored Normal file

@@ -0,0 +1,7 @@
#!/bin/sh
set -e
if [ "$1" = "configure" ]; then
/var/lib/one/remotes/datastore/vitastor/install.sh
fi

debian/vitastor-opennebula.triggers vendored Normal file

@@ -0,0 +1,4 @@
interest /var/lib/one/remotes/datastore/downloader.sh
interest /etc/one/oned.conf
interest /etc/one/vmm_exec/vmm_execrc
interest /etc/apparmor.d/local/abstractions/libvirt-qemu


@@ -1,6 +1,6 @@
usr/bin/vitastor-osd
usr/bin/vitastor-disk
usr/bin/vitastor-dump-journal
mon/vitastor-osd@.service /lib/systemd/system
mon/vitastor.target /lib/systemd/system
mon/90-vitastor.rules /lib/udev/rules.d
mon/scripts/vitastor-osd@.service /lib/systemd/system
mon/scripts/vitastor.target /lib/systemd/system
mon/scripts/90-vitastor.rules /lib/udev/rules.d


@@ -1,29 +1,31 @@
# Build Vitastor packages for Debian inside a container
# cd ..; podman build --build-arg REL=bullseye -v `pwd`/packages:/root/packages -f debian/vitastor.Dockerfile .
# cd ..; podman build --build-arg DISTRO=debian --build-arg REL=bullseye -v `pwd`/packages:/root/packages -f debian/vitastor.Dockerfile .
ARG DISTRO=debian
ARG REL=
FROM debian:$REL
FROM $DISTRO:$REL
ARG DISTRO=debian
ARG REL=
WORKDIR /root
RUN if [ "$REL" = "buster" -o "$REL" = "bullseye" ]; then \
echo "deb http://deb.debian.org/debian $REL-backports main" >> /etc/apt/sources.list; \
echo >> /etc/apt/preferences; \
echo 'Package: *' >> /etc/apt/preferences; \
echo "Pin: release a=$REL-backports" >> /etc/apt/preferences; \
echo 'Pin-Priority: 500' >> /etc/apt/preferences; \
RUN set -e -x; \
if [ "$REL" = "buster" ]; then \
apt-get update; \
apt-get -y install wget; \
wget https://vitastor.io/debian/pubkey.gpg -O /etc/apt/trusted.gpg.d/vitastor.gpg; \
echo "deb https://vitastor.io/debian $REL main" >> /etc/apt/sources.list; \
fi; \
grep '^deb ' /etc/apt/sources.list | perl -pe 's/^deb/deb-src/' >> /etc/apt/sources.list; \
perl -i -pe 's/Types: deb$/Types: deb deb-src/' /etc/apt/sources.list.d/debian.sources || true; \
echo 'APT::Install-Recommends false;' >> /etc/apt/apt.conf; \
echo 'APT::Install-Suggests false;' >> /etc/apt/apt.conf
RUN apt-get update
RUN apt-get -y install fio liburing-dev libgoogle-perftools-dev devscripts
RUN apt-get -y build-dep fio
RUN apt-get --download-only source fio
RUN apt-get update && apt-get -y install libjerasure-dev cmake libibverbs-dev libisal-dev
RUN apt-get update && \
apt-get -y install fio liburing-dev libgoogle-perftools-dev devscripts libjerasure-dev cmake \
libibverbs-dev librdmacm-dev libisal-dev libnl-3-dev libnl-genl-3-dev curl nodejs npm node-nan node-bindings && \
apt-get -y build-dep fio && \
apt-get --download-only source fio
ADD . /root/vitastor
RUN set -e -x; \
@@ -35,8 +37,10 @@ RUN set -e -x; \
mkdir -p /root/packages/vitastor-$REL; \
rm -rf /root/packages/vitastor-$REL/*; \
cd /root/packages/vitastor-$REL; \
cp -r /root/vitastor vitastor-1.4.3; \
cd vitastor-1.4.3; \
FULLVER=$(head -n1 /root/vitastor/debian/changelog | perl -pe 's/^.*\((.*?)\).*$/$1/'); \
VER=${FULLVER%%-*}; \
cp -r /root/vitastor vitastor-$VER; \
cd vitastor-$VER; \
ln -s /root/fio-build/fio-*/ ./fio; \
FIO=$(head -n1 fio/debian/changelog | perl -pe 's/^.*\((.*?)\).*$/$1/'); \
ls /usr/include/linux/raw.h || cp ./debian/raw.h /usr/include/linux/raw.h; \
@@ -48,10 +52,14 @@ RUN set -e -x; \
echo fio-headers.patch >> debian/patches/series; \
rm -rf a b; \
echo "dep:fio=$FIO" > debian/fio_version; \
cd /root/packages/vitastor-$REL/vitastor-$VER; \
mkdir mon/node_modules; \
cd mon/node_modules; \
curl -s https://git.yourcmc.ru/vitalif/antietcd/archive/master.tar.gz | tar -zx; \
curl -s https://git.yourcmc.ru/vitalif/tinyraft/archive/master.tar.gz | tar -zx; \
cd /root/packages/vitastor-$REL; \
tar --sort=name --mtime='2020-01-01' --owner=0 --group=0 --exclude=debian -cJf vitastor_1.4.3.orig.tar.xz vitastor-1.4.3; \
cd vitastor-1.4.3; \
V=$(head -n1 debian/changelog | perl -pe 's/^.*\((.*?)\).*$/$1/'); \
DEBFULLNAME="Vitaliy Filippov <vitalif@yourcmc.ru>" dch -D $REL -v "$V""$REL" "Rebuild for $REL"; \
tar --sort=name --mtime='2020-01-01' --owner=0 --group=0 --exclude=debian -cJf vitastor_$VER.orig.tar.xz vitastor-$VER; \
cd vitastor-$VER; \
DEBFULLNAME="Vitaliy Filippov <vitalif@yourcmc.ru>" dch -D $REL -v "$FULLVER""$REL" "Rebuild for $REL"; \
DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage --jobs=auto -sa; \
rm -rf /root/packages/vitastor-$REL/vitastor-*/


@@ -1,9 +1,11 @@
# Build Docker image with Vitastor packages
FROM debian:bullseye
FROM debian:bookworm
ADD vitastor.list /etc/apt/sources.list.d
ADD vitastor.gpg /etc/apt/trusted.gpg.d
ADD vitastor.pref /etc/apt/preferences.d
ADD apt.conf /etc/apt/
RUN apt-get update && apt-get -y install vitastor qemu-system-x86 qemu-system-common && apt-get clean
ADD etc/apt /etc/apt/
RUN apt-get update && apt-get -y install vitastor udev systemd qemu-system-x86 qemu-system-common qemu-block-extra qemu-utils jq nfs-common && apt-get clean
ADD sleep.sh /usr/bin/
ADD install.sh /usr/bin/
ADD scripts /opt/scripts/
ADD etc /etc/
RUN ln -s /usr/lib/vitastor/mon/make-etcd /usr/bin/make-etcd

docker/Makefile Normal file

@@ -0,0 +1,9 @@
VITASTOR_VERSION ?= v2.2.2
all: build push
build:
@docker build --no-cache --rm -t vitalif/vitastor:$(VITASTOR_VERSION) .
push:
@docker push vitalif/vitastor:$(VITASTOR_VERSION)


@@ -0,0 +1,2 @@
deb http://vitastor.io/debian bookworm main
deb http://http.debian.net/debian/ bookworm-backports main


@@ -0,0 +1,27 @@
[Unit]
Description=Containerized etcd for Vitastor
After=network-online.target local-fs.target time-sync.target docker.service vitastor-host.service
Wants=network-online.target local-fs.target time-sync.target docker.service vitastor-host.service
PartOf=vitastor.target
[Service]
Environment=GOGC=50
EnvironmentFile=/etc/vitastor/docker.conf
EnvironmentFile=/etc/vitastor/etcd.conf
SyslogIdentifier=etcd
ExecStart=bash -c 'docker run --rm -i -v /var/lib/vitastor/etcd:/data \
--log-driver none --network host $CONTAINER_OPTIONS --name vitastor-etcd \
$ETCD_IMAGE /usr/local/bin/etcd --name "$ETCD_NAME" --data-dir /data \
--snapshot-count 10000 --advertise-client-urls http://$ETCD_IP:2379 --listen-client-urls http://$ETCD_IP:2379 \
--initial-advertise-peer-urls http://$ETCD_IP:2380 --listen-peer-urls http://$ETCD_IP:2380 \
--initial-cluster-token vitastor-etcd-1 --initial-cluster "$ETCD_INITIAL_CLUSTER" \
--initial-cluster-state new --max-txn-ops=100000 --max-request-bytes=104857600 \
--auto-compaction-retention=10 --auto-compaction-mode=revision'
ExecStop=docker stop vitastor-etcd
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,23 @@
[Unit]
Description=Empty container for running Vitastor commands
After=network-online.target local-fs.target time-sync.target docker.service
Wants=network-online.target local-fs.target time-sync.target docker.service
PartOf=vitastor.target
[Service]
EnvironmentFile=/etc/vitastor/docker.conf
ExecStart=bash -c 'docker run --rm -i -v /etc/vitastor:/etc/vitastor -v /dev:/dev -v /run:/run \
--security-opt seccomp=unconfined --privileged --pid=host --log-driver none --network host --name vitastor vitastor:$VITASTOR_VERSION \
sleep.sh'
ExecStartPost=udevadm trigger
ExecStop=docker stop vitastor
WorkingDirectory=/
PrivateTmp=false
TasksMax=infinity
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,23 @@
[Unit]
Description=Containerized Vitastor monitor
After=network-online.target local-fs.target time-sync.target docker.service
Wants=network-online.target local-fs.target time-sync.target docker.service
PartOf=vitastor.target
[Service]
EnvironmentFile=/etc/vitastor/docker.conf
SyslogIdentifier=vitastor-mon
ExecStart=bash -c 'docker run --rm -i -v /etc/vitastor:/etc/vitastor -v /var/lib/vitastor:/var/lib/vitastor -v /dev:/dev \
--log-driver none --network host $CONTAINER_OPTIONS --name vitastor-mon vitastor:$VITASTOR_VERSION \
node /usr/lib/vitastor/mon/mon-main.js'
ExecStop=docker stop vitastor-mon
WorkingDirectory=/
PrivateTmp=false
TasksMax=infinity
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,28 @@
[Unit]
Description=Containerized Vitastor object storage daemon osd.%i
After=network-online.target local-fs.target time-sync.target docker.service vitastor-host.service
Wants=network-online.target local-fs.target time-sync.target docker.service vitastor-host.service
PartOf=vitastor.target
[Service]
LimitNOFILE=1048576
LimitNPROC=1048576
LimitMEMLOCK=infinity
EnvironmentFile=/etc/vitastor/docker.conf
SyslogIdentifier=vitastor-osd%i
ExecStart=bash -c 'docker run --rm -i -v /etc/vitastor:/etc/vitastor -v /dev:/dev \
$(for i in $(ls /dev/vitastor/osd%i-*); do echo --device $i:$i; done) \
--log-driver none --network host --ulimit nofile=1048576 --ulimit memlock=-1 \
--security-opt seccomp=unconfined $CONTAINER_OPTIONS --name vitastor-osd%i \
vitastor:$VITASTOR_VERSION vitastor-disk exec-osd /dev/vitastor/osd%i-data'
ExecStartPre=+docker exec vitastor vitastor-disk pre-exec /dev/vitastor/osd%i-data
ExecStop=docker stop vitastor-osd%i
WorkingDirectory=/
PrivateTmp=false
TasksMax=infinity
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=vitastor.target


@@ -0,0 +1,7 @@
SUBSYSTEM=="block", ENV{ID_PART_ENTRY_TYPE}=="e7009fac-a5a1-4d72-af72-53de13059903", \
OWNER="vitastor", GROUP="vitastor", \
IMPORT{program}="/usr/bin/docker exec vitastor vitastor-disk udev $devnode", \
SYMLINK+="vitastor/$env{VITASTOR_ALIAS}"
ENV{VITASTOR_OSD_NUM}!="", ACTION=="add", RUN{program}+="/usr/bin/systemctl enable --now --no-block vitastor-osd@$env{VITASTOR_OSD_NUM}"
ENV{VITASTOR_OSD_NUM}!="", ACTION=="remove", RUN{program}+="/usr/bin/systemctl disable --now --no-block vitastor-osd@$env{VITASTOR_OSD_NUM}"


@@ -0,0 +1,11 @@
#
# Configuration file for containerized Vitastor installation
# (non-Kubernetes, with systemd and udev-based orchestration)
#
# Desired Vitastor version
VITASTOR_VERSION=v2.2.2
# Additional arguments for all containers
# For example, you may want to specify a custom logging driver here
CONTAINER_OPTIONS=""


@@ -0,0 +1,4 @@
ETCD_IMAGE=quay.io/coreos/etcd:v3.5.18
ETCD_NAME=""
ETCD_IP=""
ETCD_INITIAL_CLUSTER=""


@@ -0,0 +1,2 @@
{
}

docker/install.sh Executable file

@@ -0,0 +1,9 @@
#!/bin/bash
set -e
cp -urv /etc/default /host-etc/
cp -urv /etc/systemd /host-etc/
cp -urv /etc/udev /host-etc/
cp -urnv /etc/vitastor /host-etc/
cp -urnv /opt/scripts/* /host-bin/

docker/scripts/vitastor-cli Executable file

@@ -0,0 +1,3 @@
#!/bin/bash
docker exec -it vitastor vitastor-cli "$@"

docker/scripts/vitastor-disk Executable file

@@ -0,0 +1,3 @@
#!/bin/bash
docker exec -it vitastor vitastor-disk "$@"

docker/scripts/vitastor-fio Executable file

@@ -0,0 +1,3 @@
#!/bin/bash
docker exec -it vitastor fio "$@"

docker/scripts/vitastor-nbd Executable file

@@ -0,0 +1,3 @@
#!/bin/bash
docker exec -it vitastor vitastor-nbd "$@"

docker/sleep.sh Executable file
View File

@@ -0,0 +1,3 @@
#!/bin/bash
while :; do sleep infinity; done


@@ -1 +0,0 @@
deb http://vitastor.io/debian bullseye main


@@ -13,7 +13,7 @@ Vitastor configuration consists of:
- [Separate OSD settings](config/pool.en.md#osd-settings)
- [Inode configuration](config/inode.en.md) i.e. image metadata like name, size and parent reference
Configuration parameters can be set in 3 places:
Configuration parameters can be set in 4 places:
- Configuration file (`/etc/vitastor/vitastor.conf` or other path)
- etcd key `/vitastor/config/global`. Most variables can be set there, but etcd
connection parameters should obviously be set in the configuration file.


@@ -14,7 +14,7 @@
- [Inode configuration](config/inode.ru.md), i.e. image metadata such as name, size and references to
the parent image
Configuration parameters can be set in 3 places:
Configuration parameters can be set in 4 places:
- The configuration file (`/etc/vitastor/vitastor.conf` or another path)
- The etcd key `/vitastor/config/global`. Most parameters can be set there,
except, naturally, the etcd connection parameters themselves,


@@ -9,6 +9,11 @@
These parameters apply only to Vitastor clients (QEMU, fio, NBD and so on) and
affect their interaction with the cluster.
- [client_iothread_count](#client_iothread_count)
- [client_retry_interval](#client_retry_interval)
- [client_eio_retry_interval](#client_eio_retry_interval)
- [client_retry_enospc](#client_retry_enospc)
- [client_wait_up_timeout](#client_wait_up_timeout)
- [client_max_dirty_bytes](#client_max_dirty_bytes)
- [client_max_dirty_ops](#client_max_dirty_ops)
- [client_enable_writeback](#client_enable_writeback)
@@ -18,6 +23,67 @@ affect their interaction with the cluster.
- [nbd_timeout](#nbd_timeout)
- [nbd_max_devices](#nbd_max_devices)
- [nbd_max_part](#nbd_max_part)
- [osd_nearfull_ratio](#osd_nearfull_ratio)
- [hostname](#hostname)
## client_iothread_count
- Type: integer
- Default: 0
Number of separate threads for handling TCP network I/O on the client library
side. Enabling 4 threads usually allows you to increase the peak performance of
each client from approx. 2-3 to 7-8 GByte/s of linear read/write and from approx.
100-150 to 400 thousand iops, but at the same time it increases latency.
The latency increase depends on the CPU: with CPU power saving disabled, latency
only increases by ~10 us (equivalent to a Q=1 iops decrease from 10500 to 9500);
with CPU power saving enabled it may be as high as 500 us (equivalent to a Q=1
iops decrease from 2000 to 1000). RDMA isn't affected by this option.
It's recommended to enable client I/O threads if you don't use RDMA and want
to increase peak client performance.
## client_retry_interval
- Type: milliseconds
- Default: 50
- Minimum: 10
- Can be changed online: yes
Retry time for I/O requests failed due to inactive PGs or network
connectivity errors.
## client_eio_retry_interval
- Type: milliseconds
- Default: 1000
- Can be changed online: yes
Retry time for I/O requests that failed due to data corruption or unfinished
EC object deletions (has_incomplete PG state). Setting it to 0 disables such
retries; clients are then not blocked and just get an EIO error code instead.
## client_retry_enospc
- Type: boolean
- Default: true
- Can be changed online: yes
Retry writes that fail with out-of-space errors, i.e. wait until some space
is freed on the OSDs.
## client_wait_up_timeout
- Type: seconds
- Default: 16
- Can be changed online: yes
Wait for this number of seconds until PGs are up when doing operations
which require all PGs to be up. Currently only used by object listings
in delete and merge-based commands ([vitastor-cli rm](../usage/cli.en.md#rm), merge and so on).
The default value is calculated as `1 + OSD lease timeout`, which is
`1 + etcd_report_interval + max_etcd_attempts*2*etcd_quick_timeout`.
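As a worked example (a sketch assuming the defaults `etcd_report_interval = 5` seconds,
`max_etcd_attempts = 5` and `etcd_quick_timeout = 1000` ms), the formula yields the
default of 16 seconds:
```
package main

import "fmt"

func main() {
	// Assumed defaults: etcd_report_interval = 5 s, max_etcd_attempts = 5,
	// etcd_quick_timeout = 1000 ms (= 1 s)
	etcdReportInterval := 5.0
	maxEtcdAttempts := 5.0
	etcdQuickTimeout := 1.0
	// client_wait_up_timeout = 1 + OSD lease timeout
	timeout := 1 + etcdReportInterval + maxEtcdAttempts*2*etcdQuickTimeout
	fmt.Println(timeout) // 16
}
```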
## client_max_dirty_bytes
@@ -135,3 +201,27 @@ Maximum number of NBD devices in the system. This value is passed as
Maximum number of partitions per NBD device. This value is passed as the
`max_part` parameter to the nbd kernel module when vitastor-nbd autoloads it.
Note that (nbds_max)*(1+max_part) usually can't exceed 256.
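For example, assuming the defaults `nbd_max_devices = 64` and `nbd_max_part = 3`,
the limit is hit exactly (a sketch, not actual vitastor-nbd code):
```
package main

import "fmt"

func main() {
	nbdsMax := 64 // assumed default of nbd_max_devices
	maxPart := 3  // assumed default of nbd_max_part
	// one whole-device node plus max_part partition nodes per device
	fmt.Println(nbdsMax * (1 + maxPart)) // 256, exactly at the limit
}
```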
## osd_nearfull_ratio
- Type: number
- Default: 0.95
- Can be changed online: yes
Ratio of used space on an OSD at which it is treated as "almost full" in
vitastor-cli status output.
Remember that some client writes may hang or complete with an error if even
just one OSD becomes 100% full!
However, unlike in Ceph, 100% full Vitastor OSDs don't crash (in Ceph they're
unable to start at all), so you'll be able to recover from "out of space" errors
without destroying and recreating OSDs.
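For illustration, the check reduces to a simple ratio comparison (hypothetical
helper, not the actual vitastor-cli code):
```
package main

import "fmt"

// nearfull reports whether an OSD should be flagged "almost full",
// mirroring the described check.
func nearfull(usedBytes, totalBytes uint64, nearfullRatio float64) bool {
	return float64(usedBytes) >= nearfullRatio*float64(totalBytes)
}

func main() {
	// 960 GB used out of 1000 GB with the default ratio 0.95 -> nearfull
	fmt.Println(nearfull(960, 1000, 0.95)) // true
}
```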
## hostname
- Type: string
- Can be changed online: yes
Clients use the host name to determine their distance to OSDs when [localized reads](pool.en.md#local_reads)
are enabled. By default, the standard [gethostname](https://man7.org/linux/man-pages/man2/gethostname.2.html)
function is used to determine the host name, but you can also override it with this parameter.
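A minimal sketch of this fallback logic (hypothetical helper, not the actual
client code):
```
package main

import (
	"fmt"
	"os"
)

// effectiveHostname returns the configured override if set, otherwise
// the OS host name via os.Hostname() (which wraps gethostname(2)).
func effectiveHostname(configured string) string {
	if configured != "" {
		return configured
	}
	name, _ := os.Hostname()
	return name
}

func main() {
	fmt.Println(effectiveHostname(""))          // falls back to gethostname
	fmt.Println(effectiveHostname("storage-1")) // explicit override wins
}
```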


@@ -9,6 +9,11 @@
These parameters apply only to Vitastor clients (QEMU, fio, NBD and so on) and
affect the logic of their interaction with the cluster.
- [client_iothread_count](#client_iothread_count)
- [client_retry_interval](#client_retry_interval)
- [client_eio_retry_interval](#client_eio_retry_interval)
- [client_retry_enospc](#client_retry_enospc)
- [client_wait_up_timeout](#client_wait_up_timeout)
- [client_max_dirty_bytes](#client_max_dirty_bytes)
- [client_max_dirty_ops](#client_max_dirty_ops)
- [client_enable_writeback](#client_enable_writeback)
@@ -18,6 +23,69 @@
- [nbd_timeout](#nbd_timeout)
- [nbd_max_devices](#nbd_max_devices)
- [nbd_max_part](#nbd_max_part)
- [osd_nearfull_ratio](#osd_nearfull_ratio)
- [hostname](#hostname)
## client_iothread_count
- Type: integer
- Default: 0
Number of separate threads for handling TCP network I/O on the client library
side. Enabling 4 threads usually allows you to raise the peak performance of each
client from approx. 2-3 to 7-8 GByte/s of linear read/write and from approx.
100-150 to 400 thousand iops, but it also worsens latency. The latency increase
depends on the CPU: with CPU power saving disabled it is only ~10 microseconds
(equivalent to a Q=1 iops drop from 10500 to 9500), and with power saving enabled
it may be as high as 500 microseconds (equivalent to a Q=1 iops drop from 2000
to 1000). This option does not affect RDMA.
It's recommended to enable client I/O threads if you don't use RDMA and want
to raise peak client performance.
## client_retry_interval
- Type: milliseconds
- Default: 50
- Minimum: 10
- Can be changed online: yes
Retry time for I/O requests that failed due to inactive PGs or network errors.
## client_eio_retry_interval
- Type: milliseconds
- Default: 1000
- Can be changed online: yes
Retry time for I/O requests that failed due to data corruption or unfinished
EC object deletions (has_incomplete PG state). Setting it to 0 disables such
retries; clients are then not blocked and just get an EIO error code instead.
## client_retry_enospc
- Type: boolean (yes/no)
- Default: true
- Can be changed online: yes
Retry write requests that finished with out-of-space errors, i.e. wait
until space is freed on the OSDs.
## client_wait_up_timeout
- Type: seconds
- Default: 16
- Can be changed online: yes
Time to wait for PGs to come up during operations that require all PGs to be
active. Currently used by object listings in commands that perform deletion
and merging ([vitastor-cli rm](../usage/cli.ru.md#rm), merge and similar).
The default value is calculated as `1 + OSD lease time`, which equals
`1 + etcd_report_interval + max_etcd_attempts*2*etcd_quick_timeout`.
## client_max_dirty_bytes
@@ -135,3 +203,30 @@
Maximum number of partitions per NBD device. This value is passed to the
nbd kernel module as the `max_part` parameter when vitastor-nbd loads it.
Note that (nbds_max)*(1+max_part) usually can't exceed 256.
## osd_nearfull_ratio
- Type: number
- Default: 0.95
- Can be changed online: yes
Ratio of used space on an OSD starting from which it is considered "almost
full" in vitastor-cli status output.
Remember that some client requests may hang or complete with an error if
even a single OSD becomes 100% full!
However, unlike in Ceph, Vitastor OSDs that are 100% full don't crash (in
Ceph, 100% full OSDs can't even start), so you will be able to recover the
cluster from out-of-space errors without destroying and recreating OSDs.
## hostname
- Type: string
- Can be changed online: yes
Clients use the host name to determine their distance to OSDs when
[localized reads](pool.ru.md#local_reads) are enabled. By default, the standard
[gethostname](https://man7.org/linux/man-pages/man2/gethostname.2.html) function
is used to determine the host name, but you can also set it manually with this
parameter.


@@ -56,14 +56,24 @@ Can't be smaller than the OSD data device sector.
## immediate_commit
- Type: string
- Default: false
- Default: all
Another parameter which is really important for performance.
One of "none", "all" or "small". Global value, may be overriden [at pool level](pool.en.md#immediate_commit).
This parameter is also really important for performance.
TLDR: default "all" is optimal for server-grade SSDs with supercapacitor-based
power loss protection (nonvolatile write-through cache) and also for most HDDs.
"none" or "small" should be only selected if you use desktop SSDs without
capacitors or drives with slow write-back cache that can't be disabled. Check
immediate_commit of your OSDs in [ls-osd](../usage/cli.en.md#ls-osd).
Detailed explanation:
Desktop SSDs are very fast (100000+ iops) for simple random writes
without cache flush. However, they are really slow (only around 1000 iops)
if you try to fsync() each write, that is, when you want to guarantee that
each change gets immediately persisted to the physical media.
if you try to fsync() each write, that is, if you want to guarantee that
each change gets actually persisted to the physical media.
Server-grade SSDs with "Advanced/Enhanced Power Loss Protection" or with
"Supercapacitor-based Power Loss Protection", on the other hand, are equally
@@ -75,8 +85,8 @@ really slow when used with desktop SSDs. Vitastor, however, can also
efficiently utilize desktop SSDs by postponing fsync until the client calls
it explicitly.
This is what this parameter regulates. When it's set to "all" the whole
Vitastor cluster commits each change to disks immediately and clients just
This is what this parameter regulates. When it's set to "all" Vitastor
cluster commits each change to disks immediately and clients just
ignore fsyncs because they know for sure that they're unneeded. This reduces
the amount of network roundtrips performed by clients and improves
performance. So it's always better to use server grade SSDs with
@@ -96,12 +106,8 @@ SSD cache or "media-cache" - for example, a lot of Seagate EXOS drives have
it (they have internal SSD cache even though it's not stated in datasheets).
Setting this parameter to "all" or "small" in OSD parameters requires enabling
[disable_journal_fsync](layout-osd.en.yml#disable_journal_fsync) and
[disable_meta_fsync](layout-osd.en.yml#disable_meta_fsync), setting it to
"all" also requires enabling [disable_data_fsync](layout-osd.en.yml#disable_data_fsync).
TLDR: For optimal performance, set immediate_commit to "all" if you only use
SSDs with supercapacitor-based power loss protection (nonvolatile
write-through cache) for both data and journals in the whole Vitastor
cluster. Set it to "small" if you only use such SSDs for journals. Leave
empty if your drives have write-back cache.
[disable_journal_fsync](layout-osd.en.md#disable_journal_fsync) and
[disable_meta_fsync](layout-osd.en.md#disable_meta_fsync), setting it to
"all" also requires enabling [disable_data_fsync](layout-osd.en.md#disable_data_fsync).
vitastor-disk tries to do that by default, first checking/disabling the drive cache.
If it can't disable the drive cache, OSDs get initialized with "none".
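To illustrate the client-side effect, a simplified sketch (illustrative only,
not the actual client code):
```
package main

import "fmt"

// fsyncWork sketches what a client still has to flush before honouring an
// fsync, depending on the effective immediate_commit mode.
func fsyncWork(immediateCommit string) string {
	switch immediateCommit {
	case "all":
		return "nothing" // every write is already on stable media
	case "small":
		return "data" // journal/metadata writes are durable, data writes are not
	default: // "none"
		return "data and journal"
	}
}

func main() {
	fmt.Println(fsyncWork("all"))
}
```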


@@ -57,9 +57,18 @@ amplification) and load distribution efficiency
## immediate_commit
- Type: string
- Default: false
- Default: all
Another parameter which is important for performance.
One of "none", "small" or "all". A global value, which may be overridden
[at the pool level](pool.ru.md#immediate_commit).
This parameter is also really important for performance.
In short: the default "all" is optimal for all server-grade SSDs with
supercapacitors and also for most HDDs. "none" and "small" only make sense
with desktop-class SSDs without supercapacitors, or with drives that have a
slow write-back cache that can't be disabled.
Check the immediate_commit setting of your OSDs in the output of the [ls-osd](../usage/cli.ru.md#ls-osd) command.
Desktop-class SSD models are very fast (100000+ operations per second) at
simple random writes without cache flushes. However, they are very
@@ -80,7 +89,7 @@ Power Loss Protection" are equally fast with and without flushes
efficiently utilize desktop SSDs.
This parameter affects exactly that. When it is set to "all",
the whole Vitastor cluster immediately commits every change to the physical
the Vitastor cluster immediately commits every change to the physical
media, and clients can simply ignore fsync requests because they know for
sure that fsyncs are not needed. This reduces the number of network requests
to OSDs and improves performance. So even with Vitastor it's always better
@@ -103,13 +112,6 @@ HDDs with an internal SSD or "media" cache - for example,
stated in the specifications).
Specifying "all" or "small" in OSD settings / command line requires enabling
[disable_journal_fsync](layout-osd.ru.yml#disable_journal_fsync) and
[disable_meta_fsync](layout-osd.ru.yml#disable_meta_fsync); the value "all"
also requires enabling [disable_data_fsync](layout-osd.ru.yml#disable_data_fsync).
To sum up: for optimal performance, set immediate_commit to "all" if you
only use SSDs with supercapacitors in the cluster, both for data and for
journals. If you use such SSDs for all journals but not for data, you can
set the parameter to "small". If any of the journal drives have a volatile
write cache, leave the parameter empty.
[disable_journal_fsync](layout-osd.ru.md#disable_journal_fsync) and
[disable_meta_fsync](layout-osd.ru.md#disable_meta_fsync); the value "all"
also requires enabling [disable_data_fsync](layout-osd.ru.md#disable_data_fsync).


@@ -118,12 +118,13 @@ Physical block size of the journal device. Must be a multiple of
- Type: boolean
- Default: false
Do not issue fsyncs to the data device, i.e. do not flush its cache.
Safe ONLY if your data device has write-through cache. If you disable
the cache yourself using `hdparm` or `scsi_disk/cache_type` then make sure
that the cache disable command is run every time before starting Vitastor
OSD, for example, in the systemd unit. See also `immediate_commit` option
for the instructions to disable cache and how to benefit from it.
Do not issue fsyncs to the data device, i.e. do not force it to flush its cache.
Safe ONLY if your data device has write-through cache or if write-back
cache is disabled. If you disable drive cache manually with `hdparm` or
writing to `/sys/.../scsi_disk/cache_type` then make sure that you do it
every time before starting Vitastor OSD (vitastor-disk does it automatically).
See also [immediate_commit](layout-cluster.en.md#immediate_commit)
for information about how to benefit from disabled cache.
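For illustration, disabling the write-back cache through sysfs boils down to a
single write (a sketch; the SCSI address below is just an example):
```
package main

import "os"

// disableWriteBackCache switches a SCSI disk cache to write-through mode by
// writing to sysfs - the same effect vitastor-disk ensures automatically.
// "0:0:0:0" is an example address; look up the one of your actual disk.
func disableWriteBackCache(scsiAddr string) error {
	return os.WriteFile("/sys/class/scsi_disk/"+scsiAddr+"/cache_type",
		[]byte("write through"), 0644)
}

func main() {
	if err := disableWriteBackCache("0:0:0:0"); err != nil {
		panic(err)
	}
}
```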
## disable_meta_fsync
@@ -171,8 +172,7 @@ size, it actually has to write the whole 4 KB sector.
Because of this it can actually be beneficial to use SSDs which work well
with 512 byte sectors and use 512 byte disk_alignment, journal_block_size
and meta_block_size. But the only SSD that may fit into this category is
Intel Optane (probably, not tested yet).
and meta_block_size. But at the moment, no such SSDs are known...
Clients don't need to be aware of disk_alignment, so it's not required to
put a modified value into etcd key /vitastor/config/global.


@@ -122,13 +122,14 @@ an SSD, otherwise performance will suffer
- Type: boolean (yes/no)
- Default: false
Do not send fsyncs to the data device, i.e. do not flush its cache.
Do not send fsyncs to the data device, i.e. do not force it to flush its cache.
Safe ONLY if your data device has a write-through
cache. If you disable the cache via `hdparm` or
`scsi_disk/cache_type`, then make sure the cache disabling command
is executed before every start of Vitastor OSD, for example, in the systemd unit.
See also the `immediate_commit` option for instructions on disabling the cache
and on how to benefit from it.
cache or if the write-back cache is disabled.
If you disable the cache manually via `hdparm` or by writing to `/sys/.../scsi_disk/cache_type`,
then make sure you do it every time before starting Vitastor OSD
(vitastor-disk does it automatically). See also the
[immediate_commit](layout-cluster.ru.md#immediate_commit) option for information
on how to benefit from a disabled cache.
## disable_meta_fsync
@@ -179,9 +180,8 @@ SSDs and HDDs use 4 KB physical sectors
Therefore it may actually be beneficial to find SSDs that work well with
smaller, 512-byte, blocks and to use 512-byte disk_alignment,
journal_block_size and meta_block_size. However, the only SSDs that could
theoretically fall into this category are Intel Optane (though even that
hasn't been verified by the author yet).
journal_block_size and meta_block_size. However, at the moment no such SSDs
are known...
Clients don't need to know about disk_alignment, so there is no need to put
this parameter's value into etcd at /vitastor/config/global.


@@ -8,6 +8,14 @@
These parameters only apply to Monitors.
- [use_antietcd](#use_antietcd)
- [enable_prometheus](#enable_prometheus)
- [mon_http_port](#mon_http_port)
- [mon_http_ip](#mon_http_ip)
- [mon_https_cert](#mon_https_cert)
- [mon_https_key](#mon_https_key)
- [mon_https_client_auth](#mon_https_client_auth)
- [mon_https_ca](#mon_https_ca)
- [etcd_mon_ttl](#etcd_mon_ttl)
- [etcd_mon_timeout](#etcd_mon_timeout)
- [etcd_mon_retries](#etcd_mon_retries)
@@ -15,12 +23,95 @@ These parameters only apply to Monitors.
- [mon_stats_timeout](#mon_stats_timeout)
- [osd_out_time](#osd_out_time)
- [placement_levels](#placement_levels)
- [use_old_pg_combinator](#use_old_pg_combinator)
- [osd_backfillfull_ratio](#osd_backfillfull_ratio)
## use_antietcd
- Type: boolean
- Default: false
Enable experimental built-in etcd replacement (clustered key-value database):
[antietcd](https://git.yourcmc.ru/vitalif/antietcd/).
When set to true, monitor runs internal antietcd automatically if it finds
a network interface with an IP address matching one of addresses in the
`etcd_address` configuration option (in `/etc/vitastor/vitastor.conf` or in
the monitor command line). If there are multiple matching addresses, it also
checks `antietcd_port` and antietcd is started for address with matching port.
By default, antietcd accepts connection on the selected IP address, but it
can also be overridden manually in the `antietcd_ip` option.
When antietcd is started, the monitor stores cluster metadata itself and exposes
an etcd-compatible REST API. On disk, this metadata is stored in
`/var/lib/vitastor/mon_2379.json.gz` (the path can be overridden with the antietcd_data_file
or antietcd_data_dir options). All other antietcd parameters
(see [here](https://git.yourcmc.ru/vitalif/antietcd/)) except node_id,
cluster, cluster_key, persist_filter and stale_read can also be set in
the Vitastor configuration with the `antietcd_` prefix.
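As a hedged illustration of how these options fit together, a monitor host's `/etc/vitastor/vitastor.conf` might look like this (the addresses, port and file path are placeholders chosen for this sketch, not defaults prescribed by this document):
```
{
  "etcd_address": ["10.0.0.1:2379", "10.0.0.2:2379", "10.0.0.3:2379"],
  "use_antietcd": true,
  "antietcd_port": 2379,
  "antietcd_data_file": "/var/lib/vitastor/mon_2379.json.gz"
}
```
With a config like this, the monitor on the host owning 10.0.0.1 would start antietcd on that address.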
You can dump/load data to or from antietcd using the Antietcd `anticli` tool:
```
npm exec anticli -e http://etcd:2379/v3 get --prefix '' --no-temp > dump.json
npm exec anticli -e http://antietcd:2379/v3 load < dump.json
```
## enable_prometheus
- Type: boolean
- Default: true
Enable the built-in Prometheus metrics exporter at mon_http_port (8060 by default).
Note that only the active (master) monitor exposes metrics; the others return
HTTP 503, so you should add all monitor URLs to your Prometheus job configuration.
A Grafana dashboard suitable for this exporter is here: [Vitastor-Grafana-6+.json](../../mon/scripts/Vitastor-Grafana-6+.json).
## mon_http_port
- Type: integer
- Default: 8060
HTTP port for monitors to listen on (including the metrics exporter)
## mon_http_ip
- Type: string
IP address for monitors to listen on (all addresses by default)
## mon_https_cert
- Type: string
Path to PEM SSL certificate file for monitor to listen using HTTPS
## mon_https_key
- Type: string
Path to PEM SSL private key file for monitor to listen using HTTPS
## mon_https_client_auth
- Type: boolean
- Default: false
Enable HTTPS client certificate-based authorization for monitor connections
## mon_https_ca
- Type: string
Path to CA certificate for client HTTPS authorization
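For example, a hypothetical HTTPS-enabled monitor configuration combining the parameters above could look like this (all paths and the address are placeholders for illustration):
```
{
  "mon_http_ip": "192.168.0.10",
  "mon_http_port": 8060,
  "mon_https_cert": "/etc/vitastor/ssl/mon.crt",
  "mon_https_key": "/etc/vitastor/ssl/mon.key",
  "mon_https_client_auth": true,
  "mon_https_ca": "/etc/vitastor/ssl/ca.crt"
}
```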
## etcd_mon_ttl
- Type: seconds
- Default: 30
- Minimum: 10
- Default: 1
- Minimum: 5
Monitor etcd lease refresh interval in seconds
@@ -77,3 +168,26 @@ values. Smaller priority means higher level in tree. For example,
levels are always predefined and can't be removed. If one of them is not
present in the configuration, then it is defined with the default priority
(100 for "host", 101 for "osd").
## use_old_pg_combinator
- Type: boolean
- Default: false
Use the old PG combination generator, which doesn't support [level_placement](pool.en.md#level_placement)
and [raw_placement](pool.en.md#raw_placement), for pools that don't use these features.
## osd_backfillfull_ratio
- Type: number
- Default: 0.99
Monitors try to prevent OSDs becoming 100% full during rebalance or recovery by
calculating how much space will be occupied on every OSD after all rebalance
and recovery operations finish, and pausing rebalance and recovery if that
amount of space exceeds OSD capacity multiplied by the value of this
configuration parameter.
Future used space is calculated by summing space used by all user data blocks
(objects) in all PGs placed on a specific OSD, even if some of these objects
currently reside on a different set of OSDs.
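As a worked example of this rule: with the default ratio of 0.99, an OSD with 1000 GiB of capacity has rebalance/recovery paused once its projected post-recovery usage exceeds 990 GiB. A stricter, purely illustrative setting would be:
```
{
  "osd_backfillfull_ratio": 0.95
}
```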

View File

@@ -8,6 +8,14 @@
Данные параметры используются только мониторами Vitastor.
- [use_antietcd](#use_antietcd)
- [enable_prometheus](#enable_prometheus)
- [mon_http_port](#mon_http_port)
- [mon_http_ip](#mon_http_ip)
- [mon_https_cert](#mon_https_cert)
- [mon_https_key](#mon_https_key)
- [mon_https_client_auth](#mon_https_client_auth)
- [mon_https_ca](#mon_https_ca)
- [etcd_mon_ttl](#etcd_mon_ttl)
- [etcd_mon_timeout](#etcd_mon_timeout)
- [etcd_mon_retries](#etcd_mon_retries)
@@ -15,12 +23,97 @@
- [mon_stats_timeout](#mon_stats_timeout)
- [osd_out_time](#osd_out_time)
- [placement_levels](#placement_levels)
- [use_old_pg_combinator](#use_old_pg_combinator)
- [osd_backfillfull_ratio](#osd_backfillfull_ratio)
## use_antietcd
- Тип: булево (да/нет)
- Значение по умолчанию: false
Включить экспериментальный встроенный заменитель etcd (кластерную БД ключ-значение):
[antietcd](https://git.yourcmc.ru/vitalif/antietcd/).
Если параметр установлен в true, монитор запускает antietcd автоматически,
если обнаруживает сетевой интерфейс с одним из адресов, указанных в опции
конфигурации `etcd_address` (в `/etc/vitastor/vitastor.conf` или в опциях
командной строки монитора). Если таких адресов несколько, также проверяется
опция `antietcd_port` и antietcd запускается для адреса с соответствующим
портом. По умолчанию antietcd принимает подключения по выбранному совпадающему
IP, но его также можно определить вручную опцией `antietcd_ip`.
При запуске antietcd монитор сам хранит центральные метаданные кластера и
выставляет etcd-совместимое REST API. На диске эти метаданные хранятся в файле
`/var/lib/vitastor/mon_2379.json.gz` (можно переопределить параметрами
antietcd_data_file или antietcd_data_dir). Все остальные параметры antietcd
(смотрите [по ссылке](https://git.yourcmc.ru/vitalif/antietcd/)), за исключением
node_id, cluster, cluster_key, persist_filter, stale_read также можно задавать
в конфигурации Vitastor с префиксом `antietcd_`.
Вы можете выгружать/загружать данные в или из antietcd с помощью его инструмента
`anticli`:
```
npm exec anticli -e http://etcd:2379/v3 get --prefix '' --no-temp > dump.json
npm exec anticli -e http://antietcd:2379/v3 load < dump.json
```
## enable_prometheus
- Тип: булево (да/нет)
- Значение по умолчанию: true
Включить встроенный Prometheus-экспортер метрик на порту mon_http_port (по умолчанию 8060).
Обратите внимание, что метрики выставляет только активный (главный) монитор, остальные
возвращают статус HTTP 503, поэтому вам следует добавлять адреса всех мониторов
в задание по сбору метрик Prometheus.
Дашборд для Grafana, подходящий для этого экспортера: [Vitastor-Grafana-6+.json](../../mon/scripts/Vitastor-Grafana-6+.json).
## mon_http_port
- Тип: целое число
- Значение по умолчанию: 8060
Порт, на котором мониторы принимают HTTP-соединения (в том числе для отдачи метрик)
## mon_http_ip
- Тип: строка
IP-адрес, на котором мониторы принимают HTTP-соединения (по умолчанию все адреса)
## mon_https_cert
- Тип: строка
Путь к PEM-файлу SSL-сертификата для монитора, чтобы принимать соединения через HTTPS
## mon_https_key
- Тип: строка
Путь к PEM-файлу секретного SSL-ключа для монитора, чтобы принимать соединения через HTTPS
## mon_https_client_auth
- Тип: булево (да/нет)
- Значение по умолчанию: false
Включить в HTTPS-сервере монитора авторизацию по клиентским сертификатам
## mon_https_ca
- Тип: строка
Путь к удостоверяющему сертификату для авторизации клиентских HTTPS соединений
## etcd_mon_ttl
- Тип: секунды
- Значение по умолчанию: 30
- Минимальное значение: 10
- Значение по умолчанию: 1
- Минимальное значение: 5
Интервал обновления etcd резервации (lease) монитором
@@ -78,3 +171,27 @@ OSD перед обновлением агрегированной статис
"host" и "osd" являются предопределёнными и не могут быть удалены. Если
один из них отсутствует в конфигурации, он доопределяется с приоритетом по
умолчанию (100 для уровня "host", 101 для "osd").
## use_old_pg_combinator
- Тип: булево (да/нет)
- Значение по умолчанию: false
Использовать старый генератор комбинаций PG, не поддерживающий [level_placement](pool.ru.md#level_placement)
и [raw_placement](pool.ru.md#raw_placement) для пулов, которые не используют данные функции.
## osd_backfillfull_ratio
- Тип: число
- Значение по умолчанию: 0.99
Мониторы стараются предотвратить 100% заполнение OSD в процессе ребаланса
или восстановления, рассчитывая, сколько места будет занято на каждом OSD после
завершения всех операций ребаланса и восстановления, и приостанавливая
ребаланс и восстановление, если рассчитанный объём превышает ёмкость OSD,
умноженную на значение данного параметра.
Будущее занятое место рассчитывается сложением места, занятого всеми
пользовательскими блоками данных (объектами) во всех PG, расположенных
на конкретном OSD, даже если часть этих объектов в данный момент находится
на другом наборе OSD.

View File

@@ -9,9 +9,11 @@
These parameters apply to clients and OSDs and affect network connection logic
between clients, OSDs and etcd.
- [tcp_header_buffer_size](#tcp_header_buffer_size)
- [use_sync_send_recv](#use_sync_send_recv)
- [osd_network](#osd_network)
- [osd_cluster_network](#osd_cluster_network)
- [use_rdma](#use_rdma)
- [use_rdmacm](#use_rdmacm)
- [disable_tcp](#disable_tcp)
- [rdma_device](#rdma_device)
- [rdma_port_num](#rdma_port_num)
- [rdma_gid_index](#rdma_gid_index)
@@ -25,55 +27,85 @@ between clients, OSDs and etcd.
- [peer_connect_timeout](#peer_connect_timeout)
- [osd_idle_timeout](#osd_idle_timeout)
- [osd_ping_timeout](#osd_ping_timeout)
- [up_wait_retry_interval](#up_wait_retry_interval)
- [max_etcd_attempts](#max_etcd_attempts)
- [etcd_quick_timeout](#etcd_quick_timeout)
- [etcd_slow_timeout](#etcd_slow_timeout)
- [etcd_keepalive_timeout](#etcd_keepalive_timeout)
- [etcd_ws_keepalive_timeout](#etcd_ws_keepalive_timeout)
- [etcd_ws_keepalive_interval](#etcd_ws_keepalive_interval)
- [etcd_min_reload_interval](#etcd_min_reload_interval)
- [tcp_header_buffer_size](#tcp_header_buffer_size)
- [min_zerocopy_send_size](#min_zerocopy_send_size)
- [use_sync_send_recv](#use_sync_send_recv)
## tcp_header_buffer_size
## osd_network
- Type: integer
- Default: 65536
- Type: string or array of strings
Size of the buffer used to read data using an additional copy. Vitastor
packet headers are 128 bytes, payload is always at least 4 KB, so it is
usually beneficial to try to read multiple packets at once even though
it requires copying the data an additional time. The rest of each packet
is received without an additional copy. You can try to play with this
parameter and see how it affects random iops and linear bandwidth if you
want.
Network mask of public OSD network(s) (IPv4 or IPv6). Each OSD listens on all
addresses of UP + RUNNING interfaces matching one of these networks, on the
same port. The port is auto-selected unless [bind_port](osd.en.md#bind_port) is
explicitly specified. Bind address(es) may also be overridden manually by
specifying [bind_address](osd.en.md#bind_address). If OSD networks are not specified
at all, the OSD just listens on a wildcard address (0.0.0.0).
## use_sync_send_recv
## osd_cluster_network
- Type: boolean
- Default: false
- Type: string or array of strings
If true, synchronous send/recv syscalls are used instead of io_uring for
socket communication. Useless for OSDs because they require io_uring anyway,
but may be required for clients with old kernel versions.
Network mask of separate network(s) (IPv4 or IPv6) to use for OSD
cluster connections. I.e. OSDs will always attempt to use these networks
to connect to other OSDs, while clients will attempt to use networks from
[osd_network](#osd_network).
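A sketch of separating client and cluster traffic with these two parameters (the subnets are examples only; both the string and the array-of-strings forms are accepted per the types above):
```
{
  "osd_network": "10.0.0.0/24",
  "osd_cluster_network": ["10.1.0.0/24"]
}
```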
## use_rdma
- Type: boolean
- Default: true
Try to use RDMA for communication if it's available. Disable if you don't
want Vitastor to use RDMA. TCP-only clients can also talk to an RDMA-enabled
cluster, so disabling RDMA may be needed if clients have RDMA devices,
but they are not connected to the cluster.
Try to use RDMA through libibverbs for communication if it's available.
Disable if you don't want Vitastor to use RDMA. TCP-only clients can also
talk to an RDMA-enabled cluster, so disabling RDMA may be needed if clients
have RDMA devices, but they are not connected to the cluster.
`use_rdma` works with RoCEv1/RoCEv2 networks, but not with iWARP and,
possibly, not with some Infiniband configurations which require RDMA-CM.
Consider `use_rdmacm` for such networks.
## use_rdmacm
- Type: boolean
- Default: true
Use an alternative implementation of RDMA through RDMA-CM (Connection
Manager). Works with all RDMA networks: Infiniband, iWARP and
RoCEv1/RoCEv2, and even allows disabling TCP and running only over RDMA.
OSDs always use random port numbers for RDMA-CM listeners, different
from their TCP ports. `use_rdma` is automatically disabled when
`use_rdmacm` is enabled.
## disable_tcp
- Type: boolean
- Default: true
Fully disable TCP and only use RDMA-CM for OSD communication.
## rdma_device
- Type: string
RDMA device name to use for Vitastor OSD communications (for example,
"rocep5s0f0"). Now Vitastor supports all adapters, even ones without
ODP support, like Mellanox ConnectX-3 and non-Mellanox cards.
"rocep5s0f0"). If not specified, Vitastor will try to find an RoCE
device matching [osd_network](osd.en.md#osd_network), preferring RoCEv2,
or choose the first available RDMA device if no RoCE devices are
found or if `osd_network` is not specified. Note that auto-selection is
unsupported with old libibverbs < v32, such as in Debian 10 Buster or
CentOS 7.
Versions up to Vitastor 1.2.0 required ODP which is only present in
Mellanox ConnectX >= 4. See also [rdma_odp](#rdma_odp).
Vitastor supports all adapters, even ones without ODP support, like
Mellanox ConnectX-3 and non-Mellanox cards. Versions up to Vitastor
1.2.0 required ODP which is only present in Mellanox ConnectX >= 4.
See also [rdma_odp](#rdma_odp).
Run `ibv_devinfo -v` as root to list available RDMA devices and their
features.
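If auto-selection is unavailable (for example, with old libibverbs as noted above), the device can be pinned explicitly; a minimal sketch reusing the example device name from this section:
```
{
  "use_rdma": true,
  "rdma_device": "rocep5s0f0"
}
```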
@@ -87,32 +119,36 @@ PFC (Priority Flow Control) and ECN (Explicit Congestion Notification).
## rdma_port_num
- Type: integer
- Default: 1
RDMA device port number to use. Only for devices that have more than 1 port.
See `phys_port_cnt` in `ibv_devinfo -v` output to determine how many ports
your device has.
Not relevant for RDMA-CM (use_rdmacm).
## rdma_gid_index
- Type: integer
- Default: 0
Global address identifier index of the RDMA device to use. Different GID
indexes may correspond to different protocols like RoCEv1, RoCEv2 and iWARP.
Search for "GID" in `ibv_devinfo -v` output to determine which GID index
you need.
**IMPORTANT:** If you want to use RoCEv2 (as recommended) then the correct
rdma_gid_index is usually 1 (IPv6) or 3 (IPv4).
If not specified, Vitastor will try to auto-select a RoCEv2 IPv4 GID, then
RoCEv2 IPv6 GID, then RoCEv1 IPv4 GID, then RoCEv1 IPv6 GID, then IB GID.
GID auto-selection is unsupported with libibverbs < v32.
A correct rdma_gid_index for RoCEv2 is usually 1 (IPv6) or 3 (IPv4).
Not relevant for RDMA-CM (use_rdmacm).
## rdma_mtu
- Type: integer
- Default: 4096
RDMA Path MTU to use. Must be 1024, 2048 or 4096. There is usually no
sense to change it from the default 4096.
RDMA Path MTU to use. Must be 1024, 2048 or 4096. Default is to use the
RDMA device's MTU.
## rdma_max_sge
@@ -212,17 +248,6 @@ Maximum time to wait for OSD keepalive responses. If an OSD doesn't respond
within this time, the connection to it is dropped and a reconnection attempt
is scheduled.
## up_wait_retry_interval
- Type: milliseconds
- Default: 500
- Minimum: 50
- Can be changed online: yes
OSDs respond to clients with a special error code when they receive I/O
requests for a PG that's not synchronized and started. This parameter sets
the time for the clients to wait before re-attempting such I/O requests.
## max_etcd_attempts
- Type: integer
@@ -257,11 +282,71 @@ Timeout for etcd requests which are allowed to wait for some time.
Timeout for etcd connection HTTP Keep-Alive. Should be higher than
etcd_report_interval to guarantee that keepalive actually works.
## etcd_ws_keepalive_timeout
## etcd_ws_keepalive_interval
- Type: seconds
- Default: 30
- Default: 5
- Can be changed online: yes
etcd websocket ping interval required to keep the connection alive and
detect disconnections quickly.
## etcd_min_reload_interval
- Type: milliseconds
- Default: 1000
- Can be changed online: yes
Minimum interval for full etcd state reload. Introduced to prevent
excessive load on etcd during outages when etcd can't keep up with event
streams and cancels them.
## tcp_header_buffer_size
- Type: integer
- Default: 65536
Size of the buffer used to read data using an additional copy. Vitastor
packet headers are 128 bytes, payload is always at least 4 KB, so it is
usually beneficial to try to read multiple packets at once even though
it requires copying the data an additional time. The rest of each packet
is received without an additional copy. You can try to play with this
parameter and see how it affects random iops and linear bandwidth if you
want.
## min_zerocopy_send_size
- Type: integer
- Default: 32768
OSDs and clients will attempt to use io_uring-based zero-copy TCP send
for buffers larger than this number of bytes. Zero-copy send with io_uring is
supported since Linux kernel version 6.1. Support is auto-detected and disabled
automatically when not available. It can also be disabled explicitly by setting
this parameter to a negative value.
Warning! Zero-copy send performance may vary greatly from CPU to CPU and from
one kernel version to another. Generally, it tends to be beneficial only with larger
messages. With smaller messages (say, 4 KB), it may actually be slower. 32 KB is
enough for almost all CPUs, but for some of them even smaller values are optimal.
For example, 4 KB is OK for EPYC Milan/Genoa and 12 KB is OK for Xeon Ice Lake
(but verify it yourself please).
Verification instructions:
1. Add `iommu=pt` into your Linux kernel command line and reboot.
2. Upgrade your kernel. For example, it's very important to use 6.11+ with recent AMD EPYCs.
3. Run some tests with the [send-zerocopy liburing example](https://github.com/axboe/liburing/blob/master/examples/send-zerocopy.c)
to find the minimal message size for which zero-copy is optimal.
Use `./send-zerocopy tcp -4 -R` at the server side and
`time ./send-zerocopy tcp -4 -b 0 -s BUFFER_SIZE -D SERVER_IP` at the client side with
`-z 0` (no zero-copy) and `-z 1` (zero-copy), and compare MB/s and used CPU time
(user+system).
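Whatever threshold the benchmark suggests then goes into the configuration; for example, the 12 KB figure mentioned above for Xeon Ice Lake would be written as follows (illustrative only, verify on your own hardware):
```
{
  "min_zerocopy_send_size": 12288
}
```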
## use_sync_send_recv
- Type: boolean
- Default: false
If true, synchronous send/recv syscalls are used instead of io_uring for
socket communication. Useless for OSDs because they require io_uring anyway,
but may be required for clients with old kernel versions.

View File

@@ -9,9 +9,11 @@
Данные параметры используются клиентами и OSD и влияют на логику сетевого
взаимодействия между клиентами, OSD, а также etcd.
- [tcp_header_buffer_size](#tcp_header_buffer_size)
- [use_sync_send_recv](#use_sync_send_recv)
- [osd_network](#osd_network)
- [osd_cluster_network](#osd_cluster_network)
- [use_rdma](#use_rdma)
- [use_rdmacm](#use_rdmacm)
- [disable_tcp](#disable_tcp)
- [rdma_device](#rdma_device)
- [rdma_port_num](#rdma_port_num)
- [rdma_gid_index](#rdma_gid_index)
@@ -25,59 +27,85 @@
- [peer_connect_timeout](#peer_connect_timeout)
- [osd_idle_timeout](#osd_idle_timeout)
- [osd_ping_timeout](#osd_ping_timeout)
- [up_wait_retry_interval](#up_wait_retry_interval)
- [max_etcd_attempts](#max_etcd_attempts)
- [etcd_quick_timeout](#etcd_quick_timeout)
- [etcd_slow_timeout](#etcd_slow_timeout)
- [etcd_keepalive_timeout](#etcd_keepalive_timeout)
- [etcd_ws_keepalive_timeout](#etcd_ws_keepalive_timeout)
- [etcd_ws_keepalive_interval](#etcd_ws_keepalive_interval)
- [etcd_min_reload_interval](#etcd_min_reload_interval)
- [tcp_header_buffer_size](#tcp_header_buffer_size)
- [min_zerocopy_send_size](#min_zerocopy_send_size)
- [use_sync_send_recv](#use_sync_send_recv)
## tcp_header_buffer_size
## osd_network
- Тип: целое число
- Значение по умолчанию: 65536
- Тип: строка или массив строк
Размер буфера для чтения данных с дополнительным копированием. Пакеты
Vitastor содержат 128-байтные заголовки, за которыми следуют данные размером
от 4 КБ и для мелких операций ввода-вывода обычно выгодно за 1 вызов читать
сразу несколько пакетов, даже несмотря на то, что это требует лишний раз
скопировать данные. Часть каждого пакета за пределами значения данного
параметра читается без дополнительного копирования. Вы можете попробовать
поменять этот параметр и посмотреть, как он влияет на производительность
случайного и линейного доступа.
Маски подсетей (IPv4 или IPv6) публичной сети или сетей OSD. Каждый OSD слушает
один и тот же порт на всех адресах поднятых (UP + RUNNING) сетевых интерфейсов,
соответствующих одной из указанных сетей. Порт выбирается автоматически, если
только [bind_port](osd.ru.md#bind_port) не задан явно. Адреса для подключений можно
также переопределить явно, задав [bind_address](osd.ru.md#bind_address). Если сети OSD
не заданы вообще, OSD слушает все адреса (0.0.0.0).
## use_sync_send_recv
## osd_cluster_network
- Тип: булево (да/нет)
- Значение по умолчанию: false
- Тип: строка или массив строк
Если установлено в истину, то вместо io_uring для передачи данных по сети
будут использоваться обычные синхронные системные вызовы send/recv. Для OSD
это бессмысленно, так как OSD в любом случае нуждается в io_uring, но, в
принципе, это может применяться для клиентов со старыми версиями ядра.
Маски подсетей (IPv4 или IPv6) отдельной кластерной сети или сетей OSD.
То есть, OSD будут всегда стараться использовать эти сети для соединений
с другими OSD, а клиенты будут стараться использовать сети из [osd_network](#osd_network).
## use_rdma
- Тип: булево (да/нет)
- Значение по умолчанию: true
Пытаться использовать RDMA для связи при наличии доступных устройств.
Отключите, если вы не хотите, чтобы Vitastor использовал RDMA.
TCP-клиенты также могут работать с RDMA-кластером, так что отключать
RDMA может быть нужно только если у клиентов есть RDMA-устройства,
но они не имеют соединения с кластером Vitastor.
Попробовать использовать RDMA через libibverbs для связи при наличии
доступных устройств. Отключите, если вы не хотите, чтобы Vitastor
использовал RDMA. TCP-клиенты также могут работать с RDMA-кластером,
так что отключать RDMA может быть нужно, только если у клиентов есть
RDMA-устройства, но они не имеют соединения с кластером Vitastor.
`use_rdma` работает с RoCEv1/RoCEv2 сетями, но не работает с iWARP и
может не работать с частью конфигураций Infiniband, требующих RDMA-CM.
Рассмотрите включение `use_rdmacm` для таких сетей.
## use_rdmacm
- Тип: булево (да/нет)
- Значение по умолчанию: true
Использовать альтернативную реализацию RDMA на основе RDMA-CM (Connection
Manager). Работает со всеми типами RDMA-сетей: Infiniband, iWARP и
RoCEv1/RoCEv2, и даже позволяет полностью отключить TCP и работать
только на RDMA. OSD используют случайные номера портов для ожидания
соединений через RDMA-CM, отличающиеся от их TCP-портов. Также при
включении `use_rdmacm` автоматически отключается опция `use_rdma`.
## disable_tcp
- Тип: булево (да/нет)
- Значение по умолчанию: true
Полностью отключить TCP и использовать только RDMA-CM для соединений с OSD.
## rdma_device
- Тип: строка
Название RDMA-устройства для связи с Vitastor OSD (например, "rocep5s0f0").
Сейчас Vitastor поддерживает все модели адаптеров, включая те, у которых
нет поддержки ODP, то есть вы можете использовать RDMA с ConnectX-3 и
картами производства не Mellanox.
Если не указано, Vitastor попробует найти RoCE-устройство, соответствующее
[osd_network](osd.en.md#osd_network), предпочитая RoCEv2, или выбрать первое
попавшееся RDMA-устройство, если RoCE-устройств нет или если сеть `osd_network`
не задана. Также автовыбор не поддерживается со старыми версиями библиотеки
libibverbs < v32, например в Debian 10 Buster или CentOS 7.
Версии Vitastor до 1.2.0 включительно требовали ODP, который есть только
на Mellanox ConnectX 4 и более новых. См. также [rdma_odp](#rdma_odp).
Vitastor поддерживает все модели адаптеров, включая те, у которых
нет поддержки ODP, то есть вы можете использовать RDMA с ConnectX-3 и
картами производства не Mellanox. Версии Vitastor до 1.2.0 включительно
требовали ODP, который есть только на Mellanox ConnectX 4 и более новых.
См. также [rdma_odp](#rdma_odp).
Запустите `ibv_devinfo -v` от имени суперпользователя, чтобы посмотреть
список доступных RDMA-устройств, их параметры и возможности.
@@ -92,33 +120,38 @@ Control) и ECN (Explicit Congestion Notification).
## rdma_port_num
- Тип: целое число
- Значение по умолчанию: 1
Номер порта RDMA-устройства, который следует использовать. Имеет смысл
только для устройств, у которых более 1 порта. Чтобы узнать, сколько портов
у вашего адаптера, посмотрите `phys_port_cnt` в выводе команды
`ibv_devinfo -v`.
Опция неприменима к RDMA-CM (use_rdmacm).
## rdma_gid_index
- Тип: целое число
- Значение по умолчанию: 0
Номер глобального идентификатора адреса RDMA-устройства, который следует
использовать. Разным gid_index могут соответствовать разные протоколы связи:
RoCEv1, RoCEv2, iWARP. Чтобы понять, какой нужен вам - смотрите строчки со
словом "GID" в выводе команды `ibv_devinfo -v`.
**ВАЖНО:** Если вы хотите использовать RoCEv2 (как мы и рекомендуем), то
правильный rdma_gid_index, как правило, 1 (IPv6) или 3 (IPv4).
Если не указан, Vitastor попробует автоматически выбрать сначала GID,
соответствующий RoCEv2 IPv4, потом RoCEv2 IPv6, потом RoCEv1 IPv4, потом
RoCEv1 IPv6, потом IB. Авто-выбор GID не поддерживается со старыми версиями
libibverbs < v32.
Правильный rdma_gid_index для RoCEv2, как правило, 1 (IPv6) или 3 (IPv4).
Опция неприменима к RDMA-CM (use_rdmacm).
## rdma_mtu
- Тип: целое число
- Значение по умолчанию: 4096
Максимальная единица передачи (Path MTU) для RDMA. Должно быть равно 1024,
2048 или 4096. Обычно нет смысла менять значение по умолчанию, равное 4096.
2048 или 4096. По умолчанию используется значение MTU RDMA-устройства.
## rdma_max_sge
@@ -221,19 +254,6 @@ OSD в любом случае согласовывают реальное зн
Если OSD не отвечает за это время, соединение отключается и производится
повторная попытка соединения.
## up_wait_retry_interval
- Тип: миллисекунды
- Значение по умолчанию: 500
- Минимальное значение: 50
- Можно менять на лету: да
Когда OSD получают от клиентов запросы ввода-вывода, относящиеся к не
поднятым на данный момент на них PG, либо к PG в процессе синхронизации,
они отвечают клиентам специальным кодом ошибки, означающим, что клиент
должен некоторое время подождать перед повторением запроса. Именно это время
ожидания задаёт данный параметр.
## max_etcd_attempts
- Тип: целое число
@@ -270,10 +290,72 @@ OSD в любом случае согласовывают реальное зн
Таймаут для HTTP Keep-Alive в соединениях к etcd. Должен быть больше, чем
etcd_report_interval, чтобы keepalive гарантированно работал.
## etcd_ws_keepalive_timeout
## etcd_ws_keepalive_interval
- Тип: секунды
- Значение по умолчанию: 30
- Значение по умолчанию: 5
- Можно менять на лету: да
Интервал проверки живости вебсокет-подключений к etcd.
## etcd_min_reload_interval
- Тип: миллисекунды
- Значение по умолчанию: 1000
- Можно менять на лету: да
Минимальный интервал полной перезагрузки состояния из etcd. Добавлено для
предотвращения избыточной нагрузки на etcd во время отказов, когда etcd не
успевает рассылать потоки событий и отменяет их.
## tcp_header_buffer_size
- Тип: целое число
- Значение по умолчанию: 65536
Размер буфера для чтения данных с дополнительным копированием. Пакеты
Vitastor содержат 128-байтные заголовки, за которыми следуют данные размером
от 4 КБ и для мелких операций ввода-вывода обычно выгодно за 1 вызов читать
сразу несколько пакетов, даже несмотря на то, что это требует лишний раз
скопировать данные. Часть каждого пакета за пределами значения данного
параметра читается без дополнительного копирования. Вы можете попробовать
поменять этот параметр и посмотреть, как он влияет на производительность
случайного и линейного доступа.
## min_zerocopy_send_size
- Тип: целое число
- Значение по умолчанию: 32768
OSD и клиенты будут пробовать использовать TCP-отправку без копирования (zero-copy) на
основе io_uring для буферов, больших, чем это число байт. Отправка без копирования
поддерживается в io_uring, начиная с версии ядра Linux 6.1. Наличие поддержки
проверяется автоматически и zero-copy отключается, когда поддержки нет. Также
её можно отключить явно, установив данный параметр в отрицательное значение.
Внимание! Производительность данной функции может сильно отличаться на разных
процессорах и на разных версиях ядра Linux. В целом, zero-copy обычно быстрее с
большими сообщениями, а с мелкими (например, 4 КБ) zero-copy может быть даже
медленнее. 32 КБ достаточно почти для всех процессоров, но для каких-то можно
использовать даже меньшие значения. Например, для EPYC Milan/Genoa подходит 4 КБ,
а для Xeon Ice Lake - 12 КБ (но, пожалуйста, перепроверьте это сами).
Инструкция по проверке:
1. Добавьте `iommu=pt` в командную строку загрузки вашего ядра Linux и перезагрузитесь.
2. Обновите ядро. Например, для AMD EPYC очень важно использовать версию 6.11+.
3. Позапускайте тесты с помощью [send-zerocopy из примеров liburing](https://github.com/axboe/liburing/blob/master/examples/send-zerocopy.c),
чтобы найти минимальный размер сообщения, для которого zero-copy отправка оптимальна.
Запускайте `./send-zerocopy tcp -4 -R` на стороне сервера и
`time ./send-zerocopy tcp -4 -b 0 -s РАЗМЕР_БУФЕРА -D АДРЕС_СЕРВЕРА` на стороне клиента
с опцией `-z 0` (обычная отправка) и `-z 1` (отправка без копирования), и сравнивайте
скорость в МБ/с и занятое процессорное время (user+system).
## use_sync_send_recv
- Тип: булево (да/нет)
- Значение по умолчанию: false
Если установлено в истину, то вместо io_uring для передачи данных по сети
будут использоваться обычные синхронные системные вызовы send/recv. Для OSD
это бессмысленно, так как OSD в любом случае нуждается в io_uring, но, в
принципе, это может применяться для клиентов со старыми версиями ядра.

View File

@@ -7,15 +7,15 @@
# Runtime OSD Parameters
These parameters only apply to OSDs, are not fixed at the moment of OSD drive
initialization and can be changed - either with an OSD restart or, for some of
them, even without restarting by updating configuration in etcd.
initialization and can be changed - in /etc/vitastor/vitastor.conf or [vitastor-disk update-sb](../usage/disk.en.md#update-sb)
with an OSD restart or, for some of them, even without restarting by updating configuration in etcd.
- [bind_address](#bind_address)
- [bind_port](#bind_port)
- [osd_iothread_count](#osd_iothread_count)
- [etcd_report_interval](#etcd_report_interval)
- [etcd_stats_interval](#etcd_stats_interval)
- [run_primary](#run_primary)
- [osd_network](#osd_network)
- [bind_address](#bind_address)
- [bind_port](#bind_port)
- [autosync_interval](#autosync_interval)
- [autosync_writes](#autosync_writes)
- [recovery_queue_depth](#recovery_queue_depth)
@@ -59,6 +59,41 @@ them, even without restarting by updating configuration in etcd.
- [recovery_tune_client_util_high](#recovery_tune_client_util_high)
- [recovery_tune_agg_interval](#recovery_tune_agg_interval)
- [recovery_tune_sleep_min_us](#recovery_tune_sleep_min_us)
- [recovery_tune_sleep_cutoff_us](#recovery_tune_sleep_cutoff_us)
- [discard_on_start](#discard_on_start)
- [min_discard_size](#min_discard_size)
- [allow_net_split](#allow_net_split)
- [enable_pg_locks](#enable_pg_locks)
- [pg_lock_retry_interval_ms](#pg_lock_retry_interval_ms)
## bind_address
- Type: string or array of strings
Instead of the network masks ([osd_network](network.en.md#osd_network) and
[osd_cluster_network](network.en.md#osd_cluster_network)), you can also set
OSD listen addresses explicitly using this parameter. May be useful if you
want to start OSDs on interfaces that are not UP + RUNNING.
## bind_port
- Type: integer
By default, OSDs pick random ports to use for incoming connections
automatically. With this option you can set a specific port for a specific
OSD by hand.
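A hedged sketch combining both options for one OSD host (the address and port are placeholders, not recommendations):
```
{
  "bind_address": "10.0.0.5",
  "bind_port": 43051
}
```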
## osd_iothread_count
- Type: integer
- Default: 0
TCP network I/O thread count for OSD. When non-zero, a single OSD process
can handle more TCP I/O, but at the cost of increased latency caused by
thread switching overhead. RDMA isn't affected by this option.
Because of this latency cost, instead of enabling OSD I/O threads it's
recommended to simply create multiple OSDs per disk, or to use RDMA.
## etcd_report_interval
@@ -90,34 +125,6 @@ debugging purposes. It's possible to implement additional feature for the
monitor which would allow separating primary and secondary OSDs, but it's
unclear why anyone would need it, so it's not implemented.
## osd_network
- Type: string or array of strings
Network mask of the network (IPv4 or IPv6) to use for OSDs. Note that
although it's possible to specify multiple networks here, this does not
mean that OSDs will create multiple listening sockets - they'll only
pick the first matching address of an UP + RUNNING interface. Separate
networks for cluster and client connections are also not implemented, but
they are mostly useless anyway, so it's not a big deal.
## bind_address
- Type: string
- Default: 0.0.0.0
Instead of the network mask, you can also set OSD listen address explicitly
using this parameter. May be useful if you want to start OSDs on interfaces
that are not UP + RUNNING.
## bind_port
- Type: integer
By default, OSDs pick random ports to use for incoming connections
automatically. With this option you can set a specific port for a specific
OSD by hand.
## autosync_interval
- Type: seconds
@@ -302,7 +309,7 @@ for hot data and slower disks - HDDs and maybe SATA SSDs - but will slightly
decrease write performance for fast disks because page cache is an overhead
itself.
Choose "directsync" to use [immediate_commit](layout-cluster.ru.md#immediate_commit)
Choose "directsync" to use [immediate_commit](layout-cluster.en.md#immediate_commit)
(which requires disable_data_fsync) with drives having write-back cache
which can't be turned off, for example, Intel Optane. Also note that *some*
desktop SSDs (for example, HP EX950) may ignore O_SYNC thus making
@@ -604,5 +611,58 @@ is usually fine.
- Default: 10
- Can be changed online: yes
Minimum possible value for auto-tuned recovery_sleep_us. Values lower
than this value are changed to 0.
Minimum possible value for auto-tuned recovery_sleep_us. Lower values
are changed to 0.
## recovery_tune_sleep_cutoff_us
- Type: microseconds
- Default: 10000000
- Can be changed online: yes
Maximum possible value for auto-tuned recovery_sleep_us. Higher values
are treated as outliers and ignored in aggregation.
## discard_on_start
- Type: boolean
Discard (SSD TRIM) unused data device blocks on every OSD startup.
## min_discard_size
- Type: integer
- Default: 1048576
Minimum size of a consecutive unused block for it to be trimmed.
## allow_net_split
- Type: boolean
- Default: false
Allow "safe" cases of network splits/partitions - allow PGs to start without
connections to some OSDs currently registered as alive in etcd, if the number
of actually connected PG OSDs is at least pg_minsize. That is, allow some OSDs to lose
connectivity with some other OSDs as long as it doesn't break pg_minsize guarantees.
The downside is that it increases the probability of writing data into just pg_minsize
OSDs during failover, which can lead to PGs becoming incomplete after additional outages.
The old behaviour in versions up to 2.0.0 was equivalent to allow_net_split being enabled.
## enable_pg_locks
- Type: boolean
Vitastor 2.2.0 introduces a new split-brain prevention layer in addition to
etcd: PG locks. They prevent split-brain even in abnormal theoretical cases
when etcd is extremely laggy. As this is a new feature, by default PG locks are only
enabled for pools where they're required - pools with [localized reads](pool.en.md#local_reads).
Use this parameter to enable or disable this function for all pools.
## pg_lock_retry_interval_ms
- Type: milliseconds
- Default: 100
Retry interval for failed PG lock attempts.
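For instance, forcing PG locks on for all pools while keeping the documented retry interval could look like this in the configuration (illustrative):
```
{
  "enable_pg_locks": true,
  "pg_lock_retry_interval_ms": 100
}
```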

View File

@@ -8,15 +8,15 @@
Данные параметры используются только OSD, но, в отличие от дисковых параметров,
не фиксируются в момент инициализации дисков OSD и могут быть изменены в любой
момент с помощью перезапуска OSD, а некоторые и без перезапуска, с помощью
изменения конфигурации в etcd.
момент с перезапуском OSD в /etc/vitastor/vitastor.conf или [vitastor-disk update-sb](../usage/disk.ru.md#update-sb),
а некоторые и без перезапуска, с помощью изменения конфигурации в etcd.
- [bind_address](#bind_address)
- [bind_port](#bind_port)
- [osd_iothread_count](#osd_iothread_count)
- [etcd_report_interval](#etcd_report_interval)
- [etcd_stats_interval](#etcd_stats_interval)
- [run_primary](#run_primary)
- [osd_network](#osd_network)
- [bind_address](#bind_address)
- [bind_port](#bind_port)
- [autosync_interval](#autosync_interval)
- [autosync_writes](#autosync_writes)
- [recovery_queue_depth](#recovery_queue_depth)
@@ -60,6 +60,42 @@
- [recovery_tune_client_util_high](#recovery_tune_client_util_high)
- [recovery_tune_agg_interval](#recovery_tune_agg_interval)
- [recovery_tune_sleep_min_us](#recovery_tune_sleep_min_us)
- [recovery_tune_sleep_cutoff_us](#recovery_tune_sleep_cutoff_us)
- [discard_on_start](#discard_on_start)
- [min_discard_size](#min_discard_size)
- [allow_net_split](#allow_net_split)
- [enable_pg_locks](#enable_pg_locks)
- [pg_lock_retry_interval_ms](#pg_lock_retry_interval_ms)
## bind_address
- Тип: строка или массив строк
Вместо использования масок подсети ([osd_network](network.ru.md#osd_network) и
[osd_cluster_network](network.ru.md#osd_cluster_network)), вы также можете явно
задать адрес(а), на которых будут ожидать соединений OSD, с помощью данного
параметра. Это может быть полезно, например, чтобы запускать OSD на неподнятых
интерфейсах (не UP + RUNNING).
## bind_port
- Тип: целое число
По умолчанию OSD сами выбирают случайные порты для входящих подключений.
С помощью данной опции вы можете задать порт для отдельного OSD вручную.
## osd_iothread_count
- Тип: целое число
- Значение по умолчанию: 0
Число отдельных потоков для обработки ввода-вывода через TCP-сеть на
стороне OSD. Включение опции позволяет каждому отдельному OSD передавать
по сети больше данных, но ухудшает задержку из-за накладных расходов
переключения потоков. На работу RDMA опция не влияет.
Из-за задержек вместо включения потоков ввода-вывода OSD рекомендуется
просто создавать по несколько OSD на каждом диске, или использовать RDMA.
## etcd_report_interval
@@ -92,34 +128,6 @@ max_etcd_attempts * etcd_quick_timeout.
первичные OSD от вторичных, но пока не понятно, зачем это может кому-то
понадобиться, поэтому это не реализовано.
## osd_network
- Тип: строка или массив строк
Маска подсети (IPv4 или IPv6) для использования для соединений с OSD.
Имейте в виду, что хотя сейчас и можно передать в этот параметр несколько
подсетей, это не означает, что OSD будут создавать несколько слушающих
сокетов - они лишь будут выбирать адрес первого поднятого (состояние UP +
RUNNING), подходящий под заданную маску. Также не реализовано разделение
кластерной и публичной сетей OSD. Правда, от него обычно всё равно довольно
мало толку, так что особенной проблемы в этом нет.
## bind_address
- Тип: строка
- Значение по умолчанию: 0.0.0.0
Этим параметром можно явным образом задать адрес, на котором будет ожидать
соединений OSD (вместо использования маски подсети). Может быть полезно,
например, чтобы запускать OSD на неподнятых интерфейсах (не UP + RUNNING).
## bind_port
- Тип: целое число
По умолчанию OSD сами выбирают случайные порты для входящих подключений.
С помощью данной опции вы можете задать порт для отдельного OSD вручную.
## autosync_interval
- Тип: секунды
@@ -634,4 +642,60 @@ EC (кодов коррекции ошибок) с более, чем 1 диск
- Можно менять на лету: да
Минимальное возможное значение авто-подстроенного recovery_sleep_us.
Значения ниже данного заменяются на 0.
Меньшие значения заменяются на 0.
## recovery_tune_sleep_cutoff_us
- Тип: микросекунды
- Значение по умолчанию: 10000000
- Можно менять на лету: да
Максимальное возможное значение авто-подстроенного recovery_sleep_us.
Большие значения считаются случайными выбросами и игнорируются в
усреднении.
## discard_on_start
- Тип: булево (да/нет)
Освобождать (SSD TRIM) неиспользуемые блоки диска данных при каждом запуске OSD.
## min_discard_size
- Тип: целое число
- Значение по умолчанию: 1048576
Минимальный размер последовательного блока данных, чтобы освобождать его через TRIM.
## allow_net_split
- Тип: булево (да/нет)
- Значение по умолчанию: false
Разрешить "безопасные" случаи разделений сети - разрешить активировать PG без
соединений к некоторым OSD, помеченным активными в etcd, если общее число активных
OSD в PG составляет как минимум pg_minsize. То есть, разрешать некоторым OSD терять
соединения с некоторыми другими OSD, если это не нарушает гарантий pg_minsize.
Минус такого разрешения в том, что оно повышает вероятность записи данных ровно в
pg_minsize OSD во время переключений, что может потом привести к тому, что PG станут
неполными (incomplete), если упадут ещё какие-то OSD.
Старое поведение в версиях до 2.0.0 было идентично включённому allow_net_split.
## enable_pg_locks
- Тип: булево (да/нет)
В Vitastor 2.2.0 появился новый слой защиты от сплитбрейна в дополнение к etcd -
блокировки PG. Они гарантируют порядок даже в теоретических ненормальных случаях,
когда etcd очень сильно тормозит. Так как функция новая, по умолчанию она включается
только для пулов, в которых она необходима - а именно, в пулах с включёнными
[локальными чтениями](pool.ru.md#local_reads). Ну а с помощью данного параметра
можно включить блокировки PG для всех пулов.
## pg_lock_retry_interval_ms
- Тип: миллисекунды
- Значение по умолчанию: 100
Интервал повтора неудачных попыток блокировки PG.

View File

@@ -32,6 +32,9 @@ Parameters:
- [pg_minsize](#pg_minsize)
- [pg_count](#pg_count)
- [failure_domain](#failure_domain)
- [level_placement](#level_placement)
- [raw_placement](#raw_placement)
- [local_reads](#local_reads)
- [max_osd_combinations](#max_osd_combinations)
- [block_size](#block_size)
- [bitmap_granularity](#bitmap_granularity)
@@ -41,6 +44,7 @@ Parameters:
- [osd_tags](#osd_tags)
- [primary_affinity_tags](#primary_affinity_tags)
- [scrub_interval](#scrub_interval)
- [used_for_app](#used_for_app)
Examples:
@@ -52,7 +56,7 @@ Examples:
OSD placement tree is set in a separate etcd key `/vitastor/config/node_placement`
in the following JSON format:
`
```
{
"<node name or OSD number>": {
"level": "<level>",
@@ -60,7 +64,7 @@ in the following JSON format:
},
...
}
`
```
Here, if a node name is a number then it is assumed to refer to an OSD.
Level of the OSD is always "osd" and cannot be overriden. You may only
@@ -83,7 +87,11 @@ Parent node reference is required for intermediate tree nodes.
Separate OSD settings are set in etc keys `/vitastor/config/osd/<number>`
in JSON format `{"<key>":<value>}`.
As of now, two settings are supported:
As of now, the following settings are supported:
- [reweight](#reweight)
- [tags](#tags)
- [noout](#noout)
## reweight
@@ -106,6 +114,14 @@ subsets and then use a specific subset for pool instead of all OSDs.
For example you can mark SSD OSDs with tag "ssd" and HDD OSDs with "hdd" and
such tags will work as device classes.
## noout
- Type: boolean
- Default: false
If set to true, [osd_out_time](monitor.en.md#osd_out_time) is ignored for this
OSD and it's never removed from data distribution by the monitor.
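So a hypothetical per-OSD key `/vitastor/config/osd/7` combining all three supported settings could contain (the values are examples only):
```
{
  "reweight": 0.5,
  "tags": ["ssd"],
  "noout": true
}
```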
# Pool parameters
## name
@@ -118,8 +134,8 @@ Pool name.
## scheme
- Type: string
- Required
- One of: "replicated", "xor", "ec" or "jerasure"
- Required
Redundancy scheme used for data in this pool. "jerasure" is an alias for "ec",
both use Reed-Solomon-Vandermonde codes based on ISA-L or jerasure libraries.
@@ -154,6 +170,29 @@ That is, if it becomes impossible to place PG data on at least (pg_minsize)
OSDs, PG is deactivated for both read and write. So you know that a fresh
write always goes to at least (pg_minsize) OSDs (disks).
For example, the difference between pg_minsize 2 and 1 in a 3-way replicated
pool (pg_size=3) is:
- If 2 hosts go down with pg_minsize=2, the pool becomes inactive and remains
inactive for [osd_out_time](monitor.en.md#osd_out_time) (10 minutes). After
this timeout, the monitor selects replacement hosts/OSDs and the pool comes
up and starts to heal. Therefore, if you don't have replacement OSDs, i.e.
if you only have 3 hosts with OSDs and 2 of them are down, the pool remains
inactive until you add or return at least 1 host (or change failure_domain
to "osd").
- If 2 hosts go down with pg_minsize=1, the pool only experiences a short
I/O pause until the monitor notices that OSDs are down (5-10 seconds with
the default [etcd_report_interval](osd.en.md#etcd_report_interval)). After
this pause, I/O resumes, but new data is temporarily written in only 1 copy.
Then, after osd_out_time, the monitor also selects replacement OSDs and the
pool starts to heal.
So, pg_minsize regulates the number of failures that a pool can tolerate
without temporary downtime for [osd_out_time](monitor.en.md#osd_out_time),
but at a cost of slightly reduced storage reliability.
See also [allow_net_split](osd.en.md#allow_net_split) and
[PG state descriptions](../usage/admin.en.md#pg-states).
FIXME: pg_minsize behaviour may be changed in the future to only make PGs
read-only instead of deactivating them.
@@ -165,8 +204,8 @@ read-only instead of deactivating them.
Number of PGs for this pool. The value should be big enough for the monitor /
LP solver to be able to optimize data placement.
"Enough" is usually around 64-128 PGs per OSD, i.e. you set pg_count for pool
to (total OSD count * 100 / pg_size). You can round it to the closest power of 2,
"Enough" is usually around 10-100 PGs per OSD, i.e. you set pg_count for pool
to (total OSD count * 10 / pg_size). You can round it to the closest power of 2,
because it makes it easier to reduce or increase PG count later by dividing or
multiplying it by 2.
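As a worked example of this rule of thumb: a hypothetical cluster of 38 OSDs with 3-way replication gives 38 * 10 / 3 ≈ 127, which rounds to the nearest power of 2 as 128, so the pool definition would carry:
```
{
  "pg_size": 3,
  "pg_count": 128
}
```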
@@ -188,6 +227,93 @@ never put on OSDs in the same failure domain (for example, on the same host).
So failure domain specifies the unit which failure you are protecting yourself
from.
## level_placement
- Type: string
Additional failure domain rules, applied in conjunction with failure_domain.
Must be specified in the following form:
`<placement level>=<sequence of characters>, <level2>=<sequence2>, ...`
Each sequence should be exactly [pg_size](#pg_size) characters long. Each character
corresponds to an OSD in the PG of this pool. Equal characters mean that the
corresponding items of the PG should be placed into the same placement tree
item at this level; different characters mean that they should be placed into
different items.
For example, if you want an EC 4+2 pool with every 2 chunks stored in their own
datacenter and each chunk stored on a different host, you should set
`level_placement` to `dc=112233 host=123456`.
Or you can set `level_placement` to `dc=112233` and leave `failure_domain` empty,
because `host` is the default `failure_domain` and it will be applied anyway.
Without this rule, it may happen that 3 chunks are stored on OSDs in the
same datacenter, in which case the data becomes inaccessible if that datacenter
goes down.
Of course, you should group your hosts into datacenters before applying the rule,
by setting [placement_levels](monitor.en.md#placement_levels) to something like
`{"dc":90,"host":100,"osd":110}` and adding DCs to [node_placement](#placement-tree),
for example `{"dc1":{"level":"dc"},"host1":{"parent":"dc1"},...}`.
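Putting it together, an abridged sketch of an EC 4+2 pool definition using the datacenter rule from the example above (the name and pg_count are illustrative, and other required parameters are omitted for brevity):
```
{
  "name": "ec42",
  "scheme": "ec",
  "pg_size": 6,
  "parity_chunks": 2,
  "pg_count": 256,
  "level_placement": "dc=112233"
}
```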
## raw_placement
- Type: string
Raw PG placement rules, specified in the form of a DSL (domain-specific language).
Use only if you really know what you're doing :)
DSL specification:
```
dsl := item | item ("\n" | ",") items
item := "any" | rules
rules := rule | rule rules
rule := level operator arg
level := /\w+/
operator := "!=" | "=" | ">" | "?="
arg := value | "(" values ")"
values := value | value "," values
value := item_ref | constant_id
item_ref := /\d+/
constant_id := /"([^"]+)"/
```
"?=" operator means "preferred". I.e. `dc ?= "meow"` means "prefer datacenter meow
for this chunk, but put into another dc if it's unavailable".
Examples:
- Simple 3 replicas with failure_domain=host: `any, host!=1, host!=(1,2)`
- EC 4+2 in 3 DC: `any, dc=1 host!=1, dc!=1, dc=3 host!=3, dc!=(1,3), dc=5 host!=5`
- 1 replica in fixed DC + 2 in random DCs: `dc?=meow, dc!=1, dc!=(1,2)`
## local_reads
- Type: string
- One of: "primary", "nearest" or "random"
- Default: primary
By default, Vitastor serves all read and write requests from the primary OSD of each PG.
But it can also serve read requests for replicated pools from secondary OSDs in clean PGs
(active or active+left_on_dead) which may be useful if you have OSDs with different network
latency to the client - for example, if you have a cross-datacenter setup.
If you set this parameter to "nearest", clients will try to read from the nearest OSD
in the [Placement Tree](#placement-tree), i.e. from an OSD from the same host or datacenter.
Distance to different OSDs will be calculated based on client hostname, determined
automatically or set manually in the [hostname](client.en.md#hostname) parameter.
If you set this parameter to "random", clients will try to distribute read requests over
all available secondary OSDs. This mode is mainly useful for tests, but, probably, not
really required in production setups.
[PG locks](osd.en.md#enable_pg_locks) are required for local reads to function. However,
PG locks are enabled automatically by default for pools with enabled local reads, so you
don't have to enable them explicitly.
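For a replicated pool stretched across datacenters as described above, the relevant fragment of a pool definition might be (abridged and illustrative):
```
{
  "name": "stretched",
  "scheme": "replicated",
  "pg_size": 3,
  "local_reads": "nearest"
}
```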
## max_osd_combinations
- Type: integer
@@ -223,7 +349,8 @@ Read more about this parameter in [Cluster-Wide Disk Layout Parameters](layout-c
## immediate_commit
- Type: string, one of "all", "small" and "none"
- Type: string
- One of: "all", "small" or "none"
- Default: none
Immediate commit setting for this pool. The value from /vitastor/config/global
@@ -279,6 +406,38 @@ of the OSDs containing a data chunk for a PG.
Automatic scrubbing interval for this pool. Overrides
[global scrub_interval setting](osd.en.md#scrub_interval).
## used_for_app
- Type: string
If non-empty, the pool is marked as used for a separate application, for example,
VitastorFS or S3, which allocates Vitastor volume IDs by itself and does not use
image/inode metadata in etcd.
When a pool is marked as used for such an app, regular block volume creation in it
is disabled (vitastor-cli refuses to create images without --force) to protect
the user from collisions between block volume and FS/S3 volume IDs, and thus from data loss.
Also, such pools do not calculate per-inode space usage statistics in etcd, because
use by an external application implies that the pool may contain a very large
number of volumes whose statistics could take too much space in etcd.
Setting used_for_app to `fs:<name>` tells Vitastor that the pool is used for VitastorFS
with VitastorKV metadata base stored in a block image (regular Vitastor volume) named
`<name>`.
[vitastor-nfs](../usage/nfs.en.md), in its turn, refuses to use pools not marked
for the corresponding FS when starting. This also implies that you can use one
pool only for one VitastorFS.
If you plan to use the pool for S3, set its used_for_app to `s3:<name>`. `<name>` may
be basically anything you want (for example, `s3:standard`) - it's not validated
by Vitastor S3 components in any way.
All other values, except those prefixed with `fs:` or `s3:`, may be used freely and
don't mean anything special to Vitastor core components. For now, you can use them
as you wish.
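For example, a pool reserved for a hypothetical VitastorFS named `myfs` would be marked like this (an S3 pool analogously, with something like `s3:standard`):
```
{
  "used_for_app": "fs:myfs"
}
```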
# Examples
## Replicated pool

View File

@@ -31,6 +31,9 @@
- [pg_minsize](#pg_minsize)
- [pg_count](#pg_count)
- [failure_domain](#failure_domain)
- [level_placement](#level_placement)
- [raw_placement](#raw_placement)
- [local_reads](#local_reads)
- [max_osd_combinations](#max_osd_combinations)
- [block_size](#block_size)
- [bitmap_granularity](#bitmap_granularity)
@@ -40,6 +43,7 @@
- [osd_tags](#osd_tags)
- [primary_affinity_tags](#primary_affinity_tags)
- [scrub_interval](#scrub_interval)
- [used_for_app](#used_for_app)
Примеры:
@@ -51,7 +55,7 @@
Дерево размещения OSD задаётся в отдельном ключе etcd `/vitastor/config/node_placement`
в следующем JSON-формате:
`
```
{
"<имя узла или номер OSD>": {
"level": "<уровень>",
@@ -59,7 +63,7 @@
},
...
}
`
```
Здесь, если название узла - число, считается, что это OSD. Уровень OSD
всегда равен "osd" и не может быть переопределён. Для OSD вы можете только
@@ -82,10 +86,11 @@
Настройки отдельных OSD задаются в ключах etcd `/vitastor/config/osd/<number>`
в JSON-формате `{"<key>":<value>}`.
На данный момент поддерживаются две настройки:
На данный момент поддерживаются следующие настройки:
- [reweight](#reweight)
- [tags](#tags)
- [noout](#noout)
## reweight
@@ -109,6 +114,14 @@
всех. Можно, например, пометить SSD OSD тегом "ssd", а HDD тегом "hdd", в
этом смысле теги работают аналогично классам устройств.
## noout
- Тип: булево (да/нет)
- Значение по умолчанию: false
Если установлено в true, то [osd_out_time](monitor.ru.md#osd_out_time) для этого
OSD игнорируется и OSD не удаляется из распределения данных монитором.
# Параметры
## name
@@ -121,8 +134,8 @@
## scheme
- Тип: строка
- Обязательный
- Возможные значения: "replicated", "xor", "ec" или "jerasure"
- Обязательный
Схема избыточности, используемая в данном пуле. "jerasure" - синоним для "ec",
в обеих схемах используются коды Рида-Соломона-Вандермонда, реализованные на
@@ -157,6 +170,26 @@
OSD, PG деактивируется на чтение и запись. Иными словами, всегда известно,
что новые блоки данных всегда записываются как минимум на pg_minsize дисков.
Для примера, разница между pg_minsize 2 и 1 в реплицированном пуле с 3 копиями
данных (pg_size=3) проявляется следующим образом:
- Если 2 сервера отключаются при pg_minsize=2, пул становится неактивным и
остаётся неактивным в течение [osd_out_time](monitor.ru.md#osd_out_time)
(10 минут), после чего монитор назначает другие OSD/серверы на замену, пул
поднимается и начинает восстанавливать недостающие копии данных. Соответственно,
если OSD на замену нет - то есть, если у вас всего 3 сервера с OSD и 2 из них
недоступны - пул так и остаётся недоступным до тех пор, пока вы не вернёте
или не добавите хотя бы 1 сервер (или не переключите failure_domain на "osd").
- Если 2 сервера отключаются при pg_minsize=1, ввод-вывод лишь приостанавливается
на короткое время, до тех пор, пока монитор не поймёт, что OSD отключены
(что занимает 5-10 секунд при стандартном [etcd_report_interval](osd.ru.md#etcd_report_interval)).
После этого ввод-вывод восстанавливается, но новые данные временно пишутся
всего в 1 копии. Когда же проходит osd_out_time, монитор точно так же назначает
другие OSD на замену выбывшим и пул начинает восстанавливать копии данных.
То есть, pg_minsize регулирует число отказов, которые пул может пережить без
временной остановки обслуживания на [osd_out_time](monitor.ru.md#osd_out_time),
но ценой немного пониженных гарантий надёжности.
FIXME: Поведение pg_minsize может быть изменено в будущем с полной деактивации
PG на перевод их в режим только для чтения.
@@ -168,8 +201,8 @@ PG на перевод их в режим только для чтения.
Число PG для данного пула. Число должно быть достаточно большим, чтобы монитор
мог равномерно распределить по ним данные.
Обычно это означает примерно 64-128 PG на 1 OSD, т.е. pg_count можно устанавливать
равным (общему числу OSD * 100 / pg_size). Значение можно округлить до ближайшей
Обычно это означает примерно 10-100 PG на 1 OSD, т.е. pg_count можно устанавливать
равным (общему числу OSD * 10 / pg_size). Значение можно округлить до ближайшей
степени 2, чтобы потом было легче уменьшать или увеличивать число PG, умножая
или деля его на 2.
@@ -190,6 +223,95 @@ PG в Vitastor эферемерны, то есть вы можете менят
Иными словами, домен отказа - это то, от отказа чего вы защищаете себя избыточным
хранением.
## level_placement
- Тип: строка
Правила дополнительных доменов отказа, применяемые вместе с failure_domain.
Должны задаваться в следующем виде:
`<уровень>=<последовательность символов>, <уровень2>=<последовательность2>, ...`
Каждая `<последовательность>` должна состоять ровно из [pg_size](#pg_size) символов.
Каждый символ соответствует одному OSD (размещению одной части PG) этого пула.
Одинаковые символы означают, что соответствующие части размещаются в один и тот же
узел дерева OSD на заданном `<уровне>`. Разные символы означают, что части
размещаются в разные узлы.
Например, если вы хотите сделать пул EC 4+2 и хотите поместить каждые 2 части
данных в свой датацентр, и также вы хотите, чтобы каждая часть размещалась на
другом хосте, то вы должны задать `level_placement` равным `dc=112233 host=123456`.
Либо вы просто можете задать `level_placement` равным `dc=112233` и оставить
`failure_domain` пустым, т.к. `host` это его значение по умолчанию и оно также
применится автоматически.
Без этого правила может получиться так, что в одном из датацентров окажется
3 части данных одной PG и данные окажутся недоступными при временном отключении
этого датацентра.
Естественно, перед установкой правила вам нужно сгруппировать ваши хосты в
датацентры, установив [placement_levels](monitor.ru.md#placement_levels) во что-то
типа `{"dc":90,"host":100,"osd":110}` и добавив датацентры в [node_placement](#дерево-размещения),
примерно так: `{"dc1":{"level":"dc"},"host1":{"parent":"dc1"},...}`.
## raw_placement
- Тип: строка
Низкоуровневые правила генерации PG в форме DSL (доменно-специфичного языка).
Используйте, только если действительно знаете, зачем вам это надо :)
Спецификация DSL:
```
dsl := item | item ("\n" | ",") items
item := "any" | rules
rules := rule | rule rules
rule := level operator arg
level := /\w+/
operator := "!=" | "=" | ">" | "?="
arg := value | "(" values ")"
values := value | value "," values
value := item_ref | constant_id
item_ref := /\d+/
constant_id := /"([^"]+)"/
```
Оператор "?=" означает "предпочитаемый". Т.е. `dc ?= "meow"` означает "предпочитать
датацентр meow для этой части данных, но разместить её в другом датацентре, если
meow недоступен".
Примеры:
- Простые 3 реплики с failure_domain=host: `any, host!=1, host!=(1,2)`
- EC 4+2 в 3 датацентрах: `any, dc=1 host!=1, dc!=1, dc=3 host!=3, dc!=(1,3), dc=5 host!=5`
- 1 копия в фиксированном ДЦ + 2 в других ДЦ: `dc?="meow", dc!=1, dc!=(1,2)`
## local_reads
- Тип: строка
- Возможные значения: "primary", "nearest" или "random"
- По умолчанию: primary
По умолчанию Vitastor обслуживает все запросы чтения и записи с первичного OSD каждой PG.
Однако, в чистых PG (active или active+left_on_dead) реплицированных пулов также есть
возможность обслуживать запросы чтения с вторичных OSD, что может быть полезно, если
у вас сильно отличается время сетевого обращения от клиента к разным OSD - например,
если у вас несколько дата-центров.
Если данный параметр установлен в значение "nearest", клиенты будут стараться читать с
ближайших по [Дереву размещения](#дерево-размещения) OSD, то есть, с OSD с того же хоста
или датацентра. Расстояние до разных OSD будет рассчитываться с помощью имени хоста клиента,
определяемого автоматически или заданного вручную параметром [hostname](client.ru.md#hostname).
Если данный параметр установлен в значение "random", клиенты будут стараться распределять
запросы чтения по всем доступным вторичным OSD. Этот режим в основном полезен для тестов,
но, скорее всего, редко нужен в реальных инсталляциях.
Для работы локальных чтений требуются [блокировки PG](osd.ru.md#enable_pg_locks). Включать
их явно не нужно - они включаются автоматически для пулов с включёнными локальными чтениями.
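Гипотетический фрагмент конфигурации реплицированного пула с локальными чтениями в
`/vitastor/config/pools` (ID пула, имя и pg_count условные):
```
{
  "1": {
    "name": "replicated-nearest",
    "scheme": "replicated",
    "pg_size": 3,
    "pg_minsize": 2,
    "pg_count": 256,
    "local_reads": "nearest"
  }
}
```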
## max_osd_combinations
- Тип: целое число
@@ -227,7 +349,8 @@ PG в Vitastor эферемерны, то есть вы можете менят
## immediate_commit
- Тип: строка "all", "small" или "none"
- Тип: строка
- Возможные значения: "all", "small" или "none"
- По умолчанию: none
Настройка мгновенного коммита для данного пула. Если не задана, используется
@@ -286,6 +409,43 @@ OSD с "all".
Интервал скраба, то есть, автоматической фоновой проверки данных для данного пула.
Переопределяет [глобальную настройку scrub_interval](osd.ru.md#scrub_interval).
## used_for_app
- Тип: строка
Если непусто, пул помечается как используемый для отдельного приложения, например,
для VitastorFS или S3, которое распределяет ID образов в пуле само и не использует
метаданные образов/инодов в etcd.
Когда пул помечается используемым для такого приложения, создание обычных блочных
образов в нём запрещается (vitastor-cli отказывается создавать образы без --force),
чтобы защитить пользователя от коллизий ID блочных образов и томов ФС/S3, и,
таким образом, от потери данных.
Также для таких пулов отключается передача статистики в etcd по отдельным инодам,
так как использование для внешнего приложения подразумевает, что пул может содержать
очень много томов и их статистика может занять слишком много места в etcd.
Установка used_for_app в значение `fs:<name>` сообщает о том, что пул используется
для VitastorFS с базой метаданных VitastorKV, хранимой в блочном образе с именем
`<name>`.
[vitastor-nfs](../usage/nfs.ru.md), в свою очередь, при запуске отказывается
использовать для ФС пулы, не помеченные как используемые для неё. Это также
означает, что один пул может использоваться только для одной VitastorFS.
Если же вы планируете использовать пул для данных S3, установите его used_for_app
в значение `s3:<name>`, где `<name>` - любое название по вашему усмотрению
(например, `s3:standard`) - конкретное содержимое `<name>` пока никак не проверяется
компонентами Vitastor S3.
Смотрите также [allow_net_split](osd.ru.md#allow_net_split) и
[документацию по состояниям PG](../usage/admin.ru.md#состояния-pg).
Все остальные значения used_for_app, кроме начинающихся на `fs:` или `s3:`, не
означают ничего особенного для основных компонентов Vitastor. Поэтому сейчас вы
можете использовать их свободно любым желаемым способом.
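Например, гипотетический пул для VitastorFS с базой метаданных в образе `myfs`
(имя образа, ID пула и остальные значения здесь условные) мог бы выглядеть в
`/vitastor/config/pools` так:
```
{
  "3": {
    "name": "fs-meta-pool",
    "scheme": "replicated",
    "pg_size": 2,
    "pg_minsize": 1,
    "pg_count": 128,
    "used_for_app": "fs:myfs"
  }
}
```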
# Примеры
## Реплицированный пул

View File

@@ -1,3 +1,84 @@
- name: client_iothread_count
type: int
default: 0
online: false
info: |
Number of separate threads for handling TCP network I/O at client library
side. Enabling 4 threads usually allows you to increase peak performance of each
client from approx. 2-3 to 7-8 GByte/s linear read/write and from approx.
100-150 to 400 thousand iops, but at the same time it increases latency.
Latency increase depends on CPU: with CPU power saving disabled latency
only increases by ~10 us (equivalent to Q=1 iops decrease from 10500 to 9500),
with CPU power saving enabled it may be as high as 500 us (equivalent to Q=1
iops decrease from 2000 to 1000). RDMA isn't affected by this option.
It's recommended to enable client I/O threads if you don't use RDMA and want
to increase peak client performance.
info_ru: |
Число отдельных потоков для обработки ввода-вывода через TCP сеть на стороне
клиентской библиотеки. Включение 4 потоков обычно позволяет поднять пиковую
производительность каждого клиента примерно с 2-3 до 7-8 Гбайт/с линейного
чтения/записи и примерно с 100-150 до 400 тысяч операций ввода-вывода в
секунду, но ухудшает задержку. Увеличение задержки зависит от процессора:
при отключённом энергосбережении CPU это всего ~10 микросекунд (равносильно
падению iops с Q=1 с 10500 до 9500), а при включённом это может быть
и 500 микросекунд (равносильно падению iops с Q=1 с 2000 до 1000). На работу
RDMA данная опция не влияет.
Рекомендуется включать клиентские потоки ввода-вывода, если вы не используете
RDMA и хотите повысить пиковую производительность клиентов.
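As an illustration - a minimal hypothetical `/etc/vitastor/vitastor.conf` fragment
enabling 4 client I/O threads (the value 4 is just the example from the text above,
not a universal recommendation):
```
{
  "client_iothread_count": 4
}
```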
- name: client_retry_interval
type: ms
min: 10
default: 50
online: true
info: |
Retry time for I/O requests that failed due to inactive PGs or network
connectivity errors.
info_ru: |
Время повтора запросов ввода-вывода, неудачных из-за неактивных PG или
ошибок сети.
- name: client_eio_retry_interval
type: ms
default: 1000
online: true
info: |
Retry time for I/O requests that failed due to data corruption or unfinished
EC object deletions (the has_incomplete PG state). 0 disables such retries:
clients are not blocked and just get an EIO error code instead.
info_ru: |
Время повтора запросов ввода-вывода, неудачных из-за повреждения данных
или незавершённых удалений EC-объектов (состояния PG has_incomplete).
0 отключает повторы таких запросов и клиенты не блокируются, а вместо
этого просто получают код ошибки EIO.
- name: client_retry_enospc
type: bool
default: true
online: true
info: |
Retry writes that failed with out-of-space errors, i.e. wait until some space
is freed on OSDs.
info_ru: |
Повторять запросы записи, завершившиеся с ошибками нехватки места, т.е.
ожидать, пока на OSD не освободится место.
- name: client_wait_up_timeout
type: sec
default: 16
online: true
info: |
Wait for this number of seconds until PGs are up when doing operations
which require all PGs to be up. Currently only used by object listings
in delete and merge-based commands ([vitastor-cli rm](../usage/cli.en.md#rm), merge and so on).
The default value is calculated as `1 + OSD lease timeout`, which is
`1 + etcd_report_interval + max_etcd_attempts*2*etcd_quick_timeout`.
info_ru: |
Время ожидания поднятия PG при операциях, требующих активности всех PG.
В данный момент используется листингами объектов в командах, использующих
удаление и слияние ([vitastor-cli rm](../usage/cli.ru.md#rm), merge и подобные).
Значение по умолчанию вычисляется как `1 + время lease OSD`, равное
`1 + etcd_report_interval + max_etcd_attempts*2*etcd_quick_timeout`.
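For reference: with the default values of the referenced parameters
(etcd_report_interval = 5 s, max_etcd_attempts = 5, etcd_quick_timeout = 1000 ms),
the formula works out to 1 + 5 + 5*2*1 = 16 seconds, matching the default above.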
- name: client_max_dirty_bytes
type: int
default: 33554432
@@ -166,3 +247,39 @@
Максимальное число разделов на одном NBD-устройстве. Данное значение передаётся
модулю ядра nbd как параметр `max_part`, когда его загружает vitastor-nbd.
Имейте в виду, что (nbds_max)*(1+max_part) обычно не может превышать 256.
- name: osd_nearfull_ratio
type: float
default: 0.95
online: true
info: |
Ratio of used space on OSD to treat it as "almost full" in vitastor-cli status output.
Remember that some client writes may hang or complete with an error if even
just one OSD becomes 100% full!
However, unlike in Ceph, 100% full Vitastor OSDs don't crash (in Ceph they're
unable to start at all), so you'll be able to recover from "out of space" errors
without destroying and recreating OSDs.
info_ru: |
Доля занятого места на OSD, начиная с которой он считается "почти заполненным" в
выводе vitastor-cli status.
Помните, что часть клиентских запросов может зависнуть или завершиться с ошибкой,
если на 100% заполнится хотя бы 1 OSD!
Однако, в отличие от Ceph, заполненные на 100 % OSD Vitastor не падают (в Ceph
заполненные на 100% OSD вообще не могут стартовать), так что вы сможете
восстановить работу кластера после ошибок отсутствия свободного места
без уничтожения и пересоздания OSD.
- name: hostname
type: string
online: true
info: |
Clients use host name to find their distance to OSDs when [localized reads](pool.en.md#local_reads)
are enabled. By default, standard [gethostname](https://man7.org/linux/man-pages/man2/gethostname.2.html)
function is used to determine host name, but you can also override it with this parameter.
info_ru: |
Клиенты используют имя хоста для определения расстояния до OSD, когда включены
[локальные чтения](pool.ru.md#local_reads). По умолчанию для определения имени
хоста используется стандартная функция [gethostname](https://man7.org/linux/man-pages/man2/gethostname.2.html),
но вы также можете задать имя хоста вручную данным параметром.
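A minimal hypothetical client configuration fragment overriding the host name
(the name itself is made up):
```
{
  "hostname": "client-in-dc1"
}
```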

View File

@@ -14,8 +14,12 @@
{{../../installation/packages.en.md}}
{{../../installation/docker.en.md}}
{{../../installation/proxmox.en.md}}
{{../../installation/opennebula.en.md}}
{{../../installation/openstack.en.md}}
{{../../installation/kubernetes.en.md}}
@@ -56,6 +60,8 @@
{{../../usage/nfs.en.md}}
{{../../usage/admin.en.md}}
## Performance
{{../../performance/understanding.en.md}}
@@ -64,4 +70,6 @@
{{../../performance/comparison1.en.md}}
{{../../performance/bench2.en.md}}
{{../../intro/author.en.md|indent=1}}

View File

@@ -14,8 +14,12 @@
{{../../installation/packages.ru.md}}
{{../../installation/docker.ru.md}}
{{../../installation/proxmox.ru.md}}
{{../../installation/opennebula.ru.md}}
{{../../installation/openstack.ru.md}}
{{../../installation/kubernetes.ru.md}}
@@ -56,6 +60,8 @@
{{../../usage/nfs.ru.md}}
{{../../usage/admin.ru.md}}
## Производительность
{{../../performance/understanding.ru.md}}
@@ -64,4 +70,6 @@
{{../../performance/comparison1.ru.md}}
{{../../performance/bench2.ru.md}}
{{../../intro/author.ru.md|indent=1}}

View File

@@ -47,14 +47,24 @@
Не может быть меньше размера сектора дисков данных OSD.
- name: immediate_commit
type: string
default: false
default: all
info: |
Another parameter which is really important for performance.
One of "none", "all" or "small". Global value, may be overriden [at pool level](pool.en.md#immediate_commit).
This parameter is also really important for performance.
TLDR: default "all" is optimal for server-grade SSDs with supercapacitor-based
power loss protection (nonvolatile write-through cache) and also for most HDDs.
"none" or "small" should be only selected if you use desktop SSDs without
capacitors or drives with slow write-back cache that can't be disabled. Check
immediate_commit of your OSDs in [ls-osd](../usage/cli.en.md#ls-osd).
Detailed explanation:
Desktop SSDs are very fast (100000+ iops) for simple random writes
without cache flush. However, they are really slow (only around 1000 iops)
if you try to fsync() each write, that is, when you want to guarantee that
each change gets immediately persisted to the physical media.
if you try to fsync() each write, that is, if you want to guarantee that
each change gets actually persisted to the physical media.
Server-grade SSDs with "Advanced/Enhanced Power Loss Protection" or with
"Supercapacitor-based Power Loss Protection", on the other hand, are equally
@@ -66,8 +76,8 @@
efficiently utilize desktop SSDs by postponing fsync until the client calls
it explicitly.
This is what this parameter regulates. When it's set to "all" the whole
Vitastor cluster commits each change to disks immediately and clients just
This is what this parameter regulates. When it's set to "all" Vitastor
cluster commits each change to disks immediately and clients just
ignore fsyncs because they know for sure that they're unneeded. This reduces
the amount of network roundtrips performed by clients and improves
performance. So it's always better to use server grade SSDs with
@@ -87,17 +97,22 @@
it (they have internal SSD cache even though it's not stated in datasheets).
Setting this parameter to "all" or "small" in OSD parameters requires enabling
[disable_journal_fsync](layout-osd.en.yml#disable_journal_fsync) and
[disable_meta_fsync](layout-osd.en.yml#disable_meta_fsync), setting it to
"all" also requires enabling [disable_data_fsync](layout-osd.en.yml#disable_data_fsync).
TLDR: For optimal performance, set immediate_commit to "all" if you only use
SSDs with supercapacitor-based power loss protection (nonvolatile
write-through cache) for both data and journals in the whole Vitastor
cluster. Set it to "small" if you only use such SSDs for journals. Leave
empty if your drives have write-back cache.
[disable_journal_fsync](layout-osd.en.md#disable_journal_fsync) and
[disable_meta_fsync](layout-osd.en.md#disable_meta_fsync), setting it to
"all" also requires enabling [disable_data_fsync](layout-osd.en.md#disable_data_fsync).
vitastor-disk tries to do that by default, first checking/disabling drive cache.
If it can't disable the drive cache, the OSD gets initialized with "none".
info_ru: |
Ещё один важный для производительности параметр.
Одно из значений "none", "small" или "all". Глобальное значение, может быть
переопределено [на уровне пула](pool.ru.md#immediate_commit).
Данный параметр тоже важен для производительности.
Вкратце: значение по умолчанию "all" оптимально для всех серверных SSD с
суперконденсаторами и также для большинства HDD. "none" и "small" имеет смысл
устанавливать только при использовании SSD настольного класса без
суперконденсаторов или дисков с медленным неотключаемым кэшем записи.
Проверьте настройку immediate_commit своих OSD в выводе команды [ls-osd](../usage/cli.ru.md#ls-osd).
Модели SSD для настольных компьютеров очень быстрые (100000+ операций в
секунду) при простой случайной записи без сбросов кэша. Однако они очень
@@ -118,7 +133,7 @@
эффективно утилизировать настольные SSD.
Данный параметр влияет как раз на это. Когда он установлен в значение "all",
весь кластер Vitastor мгновенно фиксирует каждое изменение на физические
кластер Vitastor мгновенно фиксирует каждое изменение на физические
носители и клиенты могут просто игнорировать запросы fsync, т.к. они точно
знают, что fsync-и не нужны. Это уменьшает число необходимых обращений к OSD
по сети и улучшает производительность. Поэтому даже с Vitastor лучше всегда
@@ -141,13 +156,6 @@
указано в спецификациях).
Указание "all" или "small" в настройках / командной строке OSD требует
включения [disable_journal_fsync](layout-osd.ru.yml#disable_journal_fsync) и
[disable_meta_fsync](layout-osd.ru.yml#disable_meta_fsync), значение "all"
также требует включения [disable_data_fsync](layout-osd.ru.yml#disable_data_fsync).
Итого, вкратце: для оптимальной производительности установите
immediate_commit в значение "all", если вы используете в кластере только SSD
с суперконденсаторами и для данных, и для журналов. Если вы используете
такие SSD для всех журналов, но не для данных - можете установить параметр
в "small". Если и какие-то из дисков журналов имеют волатильный кэш записи -
оставьте параметр пустым.
включения [disable_journal_fsync](layout-osd.ru.md#disable_journal_fsync) и
[disable_meta_fsync](layout-osd.ru.md#disable_meta_fsync), значение "all"
также требует включения [disable_data_fsync](layout-osd.ru.md#disable_data_fsync).

View File

@@ -110,20 +110,22 @@
type: bool
default: false
info: |
Do not issue fsyncs to the data device, i.e. do not flush its cache.
Safe ONLY if your data device has write-through cache. If you disable
the cache yourself using `hdparm` or `scsi_disk/cache_type` then make sure
that the cache disable command is run every time before starting Vitastor
OSD, for example, in the systemd unit. See also `immediate_commit` option
for the instructions to disable cache and how to benefit from it.
Do not issue fsyncs to the data device, i.e. do not force it to flush cache.
Safe ONLY if your data device has write-through cache or if write-back
cache is disabled. If you disable drive cache manually with `hdparm` or
writing to `/sys/.../scsi_disk/cache_type` then make sure that you do it
every time before starting Vitastor OSD (vitastor-disk does it automatically).
See also [immediate_commit](layout-cluster.en.md#immediate_commit)
for information about how to benefit from disabled cache.
info_ru: |
Не отправлять fsync-и устройству данных, т.е. не сбрасывать его кэш.
Не отправлять fsync-и устройству данных, т.е. не заставлять его сбрасывать кэш.
Безопасно, ТОЛЬКО если ваше устройство данных имеет кэш со сквозной
записью (write-through). Если вы отключаете кэш через `hdparm` или
`scsi_disk/cache_type`, то удостоверьтесь, что команда отключения кэша
выполняется перед каждым запуском Vitastor OSD, например, в systemd unit-е.
Смотрите также опцию `immediate_commit` для инструкций по отключению кэша
и о том, как из этого извлечь выгоду.
записью (write-through) или если кэш с отложенной записью (write-back) отключён.
Если вы отключаете кэш вручную через `hdparm` или запись в `/sys/.../scsi_disk/cache_type`,
то удостоверьтесь, что вы делаете это каждый раз перед запуском Vitastor OSD
(vitastor-disk делает это автоматически). Смотрите также опцию
[immediate_commit](layout-cluster.ru.md#immediate_commit) для информации о том,
как извлечь выгоду из отключённого кэша.
- name: disable_meta_fsync
type: bool
default: false
@@ -179,8 +181,7 @@
Because of this it can actually be beneficial to use SSDs which work well
with 512 byte sectors and use 512 byte disk_alignment, journal_block_size
and meta_block_size. But the only SSD that may fit into this category is
Intel Optane (probably, not tested yet).
and meta_block_size. But at the moment, no such SSDs are known...
Clients don't need to be aware of disk_alignment, so it's not required to
put a modified value into etcd key /vitastor/config/global.
@@ -198,9 +199,8 @@
Поэтому, на самом деле, может быть выгодно найти SSD, хорошо работающие с
меньшими, 512-байтными, блоками и использовать 512-байтные disk_alignment,
journal_block_size и meta_block_size. Однако единственные SSD, которые
теоретически могут попасть в эту категорию - это Intel Optane (но и это
пока не проверялось автором).
journal_block_size и meta_block_size. Однако на данный момент такие SSD
не известны...
Клиентам не обязательно знать про disk_alignment, так что помещать значение
этого параметра в etcd в /vitastor/config/global не нужно.

View File

@@ -1,3 +1,103 @@
- name: use_antietcd
type: bool
default: false
info: |
Enable experimental built-in etcd replacement (clustered key-value database):
[antietcd](https://git.yourcmc.ru/vitalif/antietcd/).
When set to true, the monitor runs internal antietcd automatically if it finds
a network interface with an IP address matching one of the addresses in the
`etcd_address` configuration option (in `/etc/vitastor/vitastor.conf` or in
the monitor command line). If there are multiple matching addresses, it also
checks `antietcd_port` and starts antietcd for the address with the matching port.
By default, antietcd accepts connections on the selected IP address, but the
address can also be overridden manually with the `antietcd_ip` option.
When antietcd is started, the monitor stores cluster metadata itself and exposes
an etcd-compatible REST API. On disk, this metadata is stored in
`/var/lib/vitastor/mon_2379.json.gz` (the path can be overridden with the
antietcd_data_file or antietcd_data_dir options). All other antietcd parameters
(see [here](https://git.yourcmc.ru/vitalif/antietcd/)) except node_id,
cluster, cluster_key, persist_filter, stale_read can also be set in
Vitastor configuration with `antietcd_` prefix.
You can dump/load data to or from antietcd using the Antietcd `anticli` tool:
```
npm exec anticli -e http://etcd:2379/v3 get --prefix '' --no-temp > dump.json
npm exec anticli -e http://antietcd:2379/v3 load < dump.json
```
info_ru: |
Включить экспериментальный встроенный заменитель etcd (кластерную БД ключ-значение):
[antietcd](https://git.yourcmc.ru/vitalif/antietcd/).
Если параметр установлен в true, монитор запускает antietcd автоматически,
если обнаруживает сетевой интерфейс с одним из адресов, указанных в опции
конфигурации `etcd_address` (в `/etc/vitastor/vitastor.conf` или в опциях
командной строки монитора). Если таких адресов несколько, также проверяется
опция `antietcd_port` и antietcd запускается для адреса с соответствующим
портом. По умолчанию antietcd принимает подключения по выбранному совпадающему
IP, но его также можно определить вручную опцией `antietcd_ip`.
При запуске antietcd монитор сам хранит центральные метаданные кластера и
выставляет etcd-совместимое REST API. На диске эти метаданные хранятся в файле
`/var/lib/vitastor/mon_2379.json.gz` (можно переопределить параметрами
antietcd_data_file или antietcd_data_dir). Все остальные параметры antietcd
(смотрите [по ссылке](https://git.yourcmc.ru/vitalif/antietcd/)), за исключением
node_id, cluster, cluster_key, persist_filter, stale_read также можно задавать
в конфигурации Vitastor с префиксом `antietcd_`.
Вы можете выгружать/загружать данные в или из antietcd с помощью его инструмента
`anticli`:
```
npm exec anticli -e http://etcd:2379/v3 get --prefix '' --no-temp > dump.json
npm exec anticli -e http://antietcd:2379/v3 load < dump.json
```
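A minimal sketch of enabling antietcd in `/etc/vitastor/vitastor.conf` (the address
is hypothetical and must match one of the local interfaces):
```
{
  "etcd_address": ["10.0.0.10:2379"],
  "use_antietcd": true
}
```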
- name: enable_prometheus
type: bool
default: true
info: |
Enable built-in Prometheus metrics exporter at mon_http_port (8060 by default).
Note that only the active (master) monitor exposes metrics, others return
HTTP 503. So you should add all monitor URLs to your Prometheus job configuration.
Grafana dashboard suitable for this exporter is here: [Vitastor-Grafana-6+.json](../../mon/scripts/Vitastor-Grafana-6+.json).
info_ru: |
Включить встроенный Prometheus-экспортер метрик на порту mon_http_port (по умолчанию 8060).
Обратите внимание, что метрики выставляет только активный (главный) монитор, остальные
возвращают статус HTTP 503, поэтому вам следует добавлять адреса всех мониторов
в задание по сбору метрик Prometheus.
Дашборд для Grafana, подходящий для этого экспортера: [Vitastor-Grafana-6+.json](../../mon/scripts/Vitastor-Grafana-6+.json).
- name: mon_http_port
type: int
default: 8060
info: HTTP port for monitors to listen to (including metrics exporter)
info_ru: Порт, на котором мониторы принимают HTTP-соединения (в том числе для отдачи метрик)
- name: mon_http_ip
type: string
info: IP address for monitors to listen to (all addresses by default)
info_ru: IP-адрес, на котором мониторы принимают HTTP-соединения (по умолчанию все адреса)
- name: mon_https_cert
type: string
info: Path to PEM SSL certificate file for monitor to listen using HTTPS
info_ru: Путь к PEM-файлу SSL-сертификата для монитора, чтобы принимать соединения через HTTPS
- name: mon_https_key
type: string
info: Path to PEM SSL private key file for monitor to listen using HTTPS
info_ru: Путь к PEM-файлу секретного SSL-ключа для монитора, чтобы принимать соединения через HTTPS
- name: mon_https_client_auth
type: bool
default: false
info: Enable HTTPS client certificate-based authorization for monitor connections
info_ru: Включить в HTTPS-сервере монитора авторизацию по клиентским сертификатам
- name: mon_https_ca
type: string
info: Path to CA certificate for client HTTPS authorization
info_ru: Путь к удостоверяющему сертификату для авторизации клиентских HTTPS соединений
- name: etcd_mon_ttl
type: sec
min: 5
@@ -63,3 +163,36 @@
"host" и "osd" являются предопределёнными и не могут быть удалены. Если
один из них отсутствует в конфигурации, он доопределяется с приоритетом по
умолчанию (100 для уровня "host", 101 для "osd").
- name: use_old_pg_combinator
type: bool
default: false
info: |
Use the old PG combination generator which doesn't support [level_placement](pool.en.md#level_placement)
and [raw_placement](pool.en.md#raw_placement) for pools which don't use these features.
info_ru: |
Использовать старый генератор комбинаций PG, не поддерживающий [level_placement](pool.ru.md#level_placement)
и [raw_placement](pool.ru.md#raw_placement) для пулов, которые не используют данные функции.
- name: osd_backfillfull_ratio
type: float
default: 0.99
info: |
Monitors try to prevent OSDs from becoming 100% full during rebalance or recovery by
calculating how much space will be occupied on every OSD after all rebalance
and recovery operations finish, and pausing rebalance and recovery if that
amount of space exceeds OSD capacity multiplied by the value of this
configuration parameter.
Future used space is calculated by summing space used by all user data blocks
(objects) in all PGs placed on a specific OSD, even if some of these objects
currently reside on a different set of OSDs.
info_ru: |
Мониторы стараются предотвратить 100% заполнение OSD в процессе ребаланса
или восстановления, рассчитывая, сколько места будет занято на каждом OSD после
завершения всех операций ребаланса и восстановления, и приостанавливая
ребаланс и восстановление, если рассчитанный объём превышает ёмкость OSD,
умноженную на значение данного параметра.
Будущее занятое место рассчитывается сложением места, занятого всеми
пользовательскими блоками данных (объектами) во всех PG, расположенных
на конкретном OSD, даже если часть этих объектов в данный момент находится
на другом наборе OSD.

View File

@@ -1,58 +1,93 @@
- name: tcp_header_buffer_size
type: int
default: 65536
- name: osd_network
type: string or array of strings
type_ru: строка или массив строк
info: |
Size of the buffer used to read data using an additional copy. Vitastor
packet headers are 128 bytes, payload is always at least 4 KB, so it is
usually beneficial to try to read multiple packets at once even though
it requires copying the data an additional time. The rest of each packet
is received without an additional copy. You can try to play with this
parameter and see how it affects random iops and linear bandwidth if you
want.
Network mask of public OSD network(s) (IPv4 or IPv6). Each OSD listens to all
addresses of UP + RUNNING interfaces matching one of these networks, on the
same port. Port is auto-selected except if [bind_port](osd.en.md#bind_port) is
explicitly specified. Bind address(es) may also be overridden manually by
specifying [bind_address](osd.en.md#bind_address). If OSD networks are not specified
at all, OSD just listens to a wildcard address (0.0.0.0).
info_ru: |
Размер буфера для чтения данных с дополнительным копированием. Пакеты
Vitastor содержат 128-байтные заголовки, за которыми следуют данные размером
от 4 КБ и для мелких операций ввода-вывода обычно выгодно за 1 вызов читать
сразу несколько пакетов, даже несмотря на то, что это требует лишний раз
скопировать данные. Часть каждого пакета за пределами значения данного
параметра читается без дополнительного копирования. Вы можете попробовать
поменять этот параметр и посмотреть, как он влияет на производительность
случайного и линейного доступа.
- name: use_sync_send_recv
type: bool
default: false
Маски подсетей (IPv4 или IPv6) публичной сети или сетей OSD. Каждый OSD слушает
один и тот же порт на всех адресах поднятых (UP + RUNNING) сетевых интерфейсов,
соответствующих одной из указанных сетей. Порт выбирается автоматически, если
только [bind_port](osd.ru.md#bind_port) не задан явно. Адреса для подключений можно
также переопределить явно, задав [bind_address](osd.ru.md#bind_address). Если сети OSD
не заданы вообще, OSD слушает все адреса (0.0.0.0).
- name: osd_cluster_network
type: string or array of strings
type_ru: строка или массив строк
info: |
If true, synchronous send/recv syscalls are used instead of io_uring for
socket communication. Useless for OSDs because they require io_uring anyway,
but may be required for clients with old kernel versions.
Network mask of separate network(s) (IPv4 or IPv6) to use for OSD
cluster connections. I.e. OSDs will always attempt to use these networks
to connect to other OSDs, while clients will attempt to use networks from
[osd_network](#osd_network).
info_ru: |
Если установлено в истину, то вместо io_uring для передачи данных по сети
будут использоваться обычные синхронные системные вызовы send/recv. Для OSD
это бессмысленно, так как OSD в любом случае нуждается в io_uring, но, в
принципе, это может применяться для клиентов со старыми версиями ядра.
Маски подсетей (IPv4 или IPv6) отдельной кластерной сети или сетей OSD.
То есть, OSD будут всегда стараться использовать эти сети для соединений
с другими OSD, а клиенты будут стараться использовать сети из [osd_network](#osd_network).
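A hypothetical `/etc/vitastor/vitastor.conf` fragment separating client and cluster
traffic (both subnets are made up):
```
{
  "osd_network": "10.0.0.0/24",
  "osd_cluster_network": "10.10.0.0/24"
}
```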
- name: use_rdma
type: bool
default: true
info: |
Try to use RDMA for communication if it's available. Disable if you don't
want Vitastor to use RDMA. TCP-only clients can also talk to an RDMA-enabled
cluster, so disabling RDMA may be needed if clients have RDMA devices,
but they are not connected to the cluster.
Try to use RDMA through libibverbs for communication if it's available.
Disable if you don't want Vitastor to use RDMA. TCP-only clients can also
talk to an RDMA-enabled cluster, so disabling RDMA may be needed if clients
have RDMA devices, but they are not connected to the cluster.
`use_rdma` works with RoCEv1/RoCEv2 networks, but not with iWARP and,
possibly, not with some Infiniband configurations which require RDMA-CM.
Consider `use_rdmacm` for such networks.
info_ru: |
Пытаться использовать RDMA для связи при наличии доступных устройств.
Отключите, если вы не хотите, чтобы Vitastor использовал RDMA.
TCP-клиенты также могут работать с RDMA-кластером, так что отключать
RDMA может быть нужно только если у клиентов есть RDMA-устройства,
но они не имеют соединения с кластером Vitastor.
Попробовать использовать RDMA через libibverbs для связи при наличии
доступных устройств. Отключите, если вы не хотите, чтобы Vitastor
использовал RDMA. TCP-клиенты также могут работать с RDMA-кластером,
так что отключать RDMA может быть нужно, только если у клиентов есть
RDMA-устройства, но они не имеют соединения с кластером Vitastor.
`use_rdma` работает с RoCEv1/RoCEv2 сетями, но не работает с iWARP и
может не работать с частью конфигураций Infiniband, требующих RDMA-CM.
Рассмотрите включение `use_rdmacm` для таких сетей.
- name: use_rdmacm
type: bool
default: true
info: |
Use an alternative implementation of RDMA through RDMA-CM (Connection
Manager). Works with all RDMA networks: Infiniband, iWARP and
RoCEv1/RoCEv2, and even allows to disable TCP and run only with RDMA.
OSDs always use random port numbers for RDMA-CM listeners, different
from their TCP ports. `use_rdma` is automatically disabled when
`use_rdmacm` is enabled.
info_ru: |
Использовать альтернативную реализацию RDMA на основе RDMA-CM (Connection
Manager). Работает со всеми типами RDMA-сетей: Infiniband, iWARP и
RoCEv1/RoCEv2, и даже позволяет полностью отключить TCP и работать
только на RDMA. OSD используют случайные номера портов для ожидания
соединений через RDMA-CM, отличающиеся от их TCP-портов. Также при
включении `use_rdmacm` автоматически отключается опция `use_rdma`.
- name: disable_tcp
type: bool
default: true
info: |
Fully disable TCP and only use RDMA-CM for OSD communication.
info_ru: |
Полностью отключить TCP и использовать только RDMA-CM для соединений с OSD.
- name: rdma_device
type: string
info: |
RDMA device name to use for Vitastor OSD communications (for example,
"rocep5s0f0"). Now Vitastor supports all adapters, even ones without
ODP support, like Mellanox ConnectX-3 and non-Mellanox cards.
"rocep5s0f0"). If not specified, Vitastor will try to find an RoCE
device matching [osd_network](osd.en.md#osd_network), preferring RoCEv2,
or choose the first available RDMA device if no RoCE devices are
found or if `osd_network` is not specified. Auto-selection is also
unsupported with old libibverbs < v32, like in Debian 10 Buster or
CentOS 7.
Versions up to Vitastor 1.2.0 required ODP which is only present in
Mellanox ConnectX >= 4. See also [rdma_odp](#rdma_odp).
Vitastor supports all adapters, even ones without ODP support, like
Mellanox ConnectX-3 and non-Mellanox cards. Versions up to Vitastor
1.2.0 required ODP which is only present in Mellanox ConnectX >= 4.
See also [rdma_odp](#rdma_odp).
Run `ibv_devinfo -v` as root to list available RDMA devices and their
features.
@@ -64,12 +99,17 @@
PFC (Priority Flow Control) and ECN (Explicit Congestion Notification).
info_ru: |
Название RDMA-устройства для связи с Vitastor OSD (например, "rocep5s0f0").
Сейчас Vitastor поддерживает все модели адаптеров, включая те, у которых
нет поддержки ODP, то есть вы можете использовать RDMA с ConnectX-3 и
картами производства не Mellanox.
Если не указано, Vitastor попробует найти RoCE-устройство, соответствующее
[osd_network](#osd_network), предпочитая RoCEv2, или выбрать первое
попавшееся RDMA-устройство, если RoCE-устройств нет или если сеть `osd_network`
не задана. Также автовыбор не поддерживается со старыми версиями библиотеки
libibverbs < v32, например в Debian 10 Buster или CentOS 7.
Версии Vitastor до 1.2.0 включительно требовали ODP, который есть только
на Mellanox ConnectX 4 и более новых. См. также [rdma_odp](#rdma_odp).
Vitastor поддерживает все модели адаптеров, включая те, у которых
нет поддержки ODP, то есть вы можете использовать RDMA с ConnectX-3 и
картами производства не Mellanox. Версии Vitastor до 1.2.0 включительно
требовали ODP, который есть только на Mellanox ConnectX 4 и более новых.
См. также [rdma_odp](#rdma_odp).
Запустите `ibv_devinfo -v` от имени суперпользователя, чтобы посмотреть
список доступных RDMA-устройств, их параметры и возможности.
@@ -82,44 +122,56 @@
Control) и ECN (Explicit Congestion Notification).
- name: rdma_port_num
type: int
default: 1
info: |
RDMA device port number to use. Only for devices that have more than 1 port.
See `phys_port_cnt` in `ibv_devinfo -v` output to determine how many ports
your device has.
Not relevant for RDMA-CM (use_rdmacm).
info_ru: |
Номер порта RDMA-устройства, который следует использовать. Имеет смысл
только для устройств, у которых более 1 порта. Чтобы узнать, сколько портов
у вашего адаптера, посмотрите `phys_port_cnt` в выводе команды
`ibv_devinfo -v`.
Опция неприменима к RDMA-CM (use_rdmacm).
- name: rdma_gid_index
type: int
default: 0
info: |
Global address identifier index of the RDMA device to use. Different GID
indexes may correspond to different protocols like RoCEv1, RoCEv2 and iWARP.
Search for "GID" in `ibv_devinfo -v` output to determine which GID index
you need.
**IMPORTANT:** If you want to use RoCEv2 (as recommended) then the correct
rdma_gid_index is usually 1 (IPv6) or 3 (IPv4).
If not specified, Vitastor will try to auto-select a RoCEv2 IPv4 GID, then
RoCEv2 IPv6 GID, then RoCEv1 IPv4 GID, then RoCEv1 IPv6 GID, then IB GID.
GID auto-selection is unsupported with libibverbs < v32.
A correct rdma_gid_index for RoCEv2 is usually 1 (IPv6) or 3 (IPv4).
Not relevant for RDMA-CM (use_rdmacm).
info_ru: |
Номер глобального идентификатора адреса RDMA-устройства, который следует
использовать. Разным gid_index могут соответствовать разные протоколы связи:
RoCEv1, RoCEv2, iWARP. Чтобы понять, какой нужен вам - смотрите строчки со
словом "GID" в выводе команды `ibv_devinfo -v`.
**ВАЖНО:** Если вы хотите использовать RoCEv2 (как мы и рекомендуем), то
правильный rdma_gid_index, как правило, 1 (IPv6) или 3 (IPv4).
Если не указан, Vitastor попробует автоматически выбрать сначала GID,
соответствующий RoCEv2 IPv4, потом RoCEv2 IPv6, потом RoCEv1 IPv4, потом
RoCEv1 IPv6, потом IB. Авто-выбор GID не поддерживается со старыми версиями
libibverbs < v32.
Правильный rdma_gid_index для RoCEv2, как правило, 1 (IPv6) или 3 (IPv4).
Опция неприменима к RDMA-CM (use_rdmacm).
- name: rdma_mtu
type: int
default: 4096
info: |
RDMA Path MTU to use. Must be 1024, 2048 or 4096. There is usually no
sense to change it from the default 4096.
RDMA Path MTU to use. Must be 1024, 2048 or 4096. Default is to use the
RDMA device's MTU.
info_ru: |
Максимальная единица передачи (Path MTU) для RDMA. Должно быть равно 1024,
2048 или 4096. Обычно нет смысла менять значение по умолчанию, равное 4096.
2048 или 4096. По умолчанию используется значение MTU RDMA-устройства.
- name: rdma_max_sge
type: int
default: 128
@@ -243,21 +295,6 @@
Максимальное время ожидания ответа на запрос проверки состояния соединения.
Если OSD не отвечает за это время, соединение отключается и производится
повторная попытка соединения.
- name: up_wait_retry_interval
type: ms
min: 10
default: 50
online: true
info: |
OSDs respond to clients with a special error code when they receive I/O
requests for a PG that's not synchronized and started. This parameter sets
the time for the clients to wait before re-attempting such I/O requests.
info_ru: |
Когда OSD получают от клиентов запросы ввода-вывода, относящиеся к не
поднятым на данный момент на них PG, либо к PG в процессе синхронизации,
они отвечают клиентам специальным кодом ошибки, означающим, что клиент
должен некоторое время подождать перед повторением запроса. Именно это время
ожидания задаёт данный параметр.
- name: max_etcd_attempts
type: int
default: 5
@@ -295,12 +332,105 @@
info_ru: |
Таймаут для HTTP Keep-Alive в соединениях к etcd. Должен быть больше, чем
etcd_report_interval, чтобы keepalive гарантированно работал.
- name: etcd_ws_keepalive_timeout
- name: etcd_ws_keepalive_interval
type: sec
default: 30
default: 5
online: true
info: |
etcd websocket ping interval required to keep the connection alive and
detect disconnections quickly.
info_ru: |
Интервал проверки живости вебсокет-подключений к etcd.
- name: etcd_min_reload_interval
type: ms
default: 1000
online: true
info: |
Minimum interval for full etcd state reload. Introduced to prevent
excessive load on etcd during outages when etcd can't keep up with event
streams and cancels them.
info_ru: |
Минимальный интервал полной перезагрузки состояния из etcd. Добавлено для
предотвращения избыточной нагрузки на etcd во время отказов, когда etcd не
успевает рассылать потоки событий и отменяет их.
- name: tcp_header_buffer_size
type: int
default: 65536
info: |
Size of the buffer used to read data using an additional copy. Vitastor
packet headers are 128 bytes, payload is always at least 4 KB, so it is
usually beneficial to try to read multiple packets at once even though
it requires copying the data an additional time. The rest of each packet
is received without an additional copy. You can try to play with this
parameter and see how it affects random iops and linear bandwidth if you
want.
info_ru: |
Размер буфера для чтения данных с дополнительным копированием. Пакеты
Vitastor содержат 128-байтные заголовки, за которыми следуют данные размером
от 4 КБ и для мелких операций ввода-вывода обычно выгодно за 1 вызов читать
сразу несколько пакетов, даже несмотря на то, что это требует лишний раз
скопировать данные. Часть каждого пакета за пределами значения данного
параметра читается без дополнительного копирования. Вы можете попробовать
поменять этот параметр и посмотреть, как он влияет на производительность
случайного и линейного доступа.
- name: min_zerocopy_send_size
type: int
default: 32768
info: |
OSDs and clients will attempt to use io_uring-based zero-copy TCP send
for buffers larger than this number of bytes. Zero-copy send with io_uring is
supported since Linux kernel version 6.1. Support is auto-detected and disabled
automatically when not available. It can also be disabled explicitly by setting
this parameter to a negative value.
⚠️ Warning! Zero-copy send performance may vary greatly from CPU to CPU and from
one kernel version to another. Generally, it tends to be beneficial only with larger
messages. With smaller messages (say, 4 KB), it may actually be slower. 32 KB is
enough for almost all CPUs, but even smaller values are optimal for some of them.
For example, 4 KB is OK for EPYC Milan/Genoa and 12 KB is OK for Xeon Ice Lake
(but verify it yourself please).
Verification instructions:
1. Add `iommu=pt` into your Linux kernel command line and reboot.
2. Upgrade your kernel. For example, it's very important to use 6.11+ with recent AMD EPYCs.
3. Run some tests with the [send-zerocopy liburing example](https://github.com/axboe/liburing/blob/master/examples/send-zerocopy.c)
to find the minimal message size for which zero-copy is optimal.
Use `./send-zerocopy tcp -4 -R` at the server side and
`time ./send-zerocopy tcp -4 -b 0 -s BUFFER_SIZE -D SERVER_IP` at the client side with
`-z 0` (no zero-copy) and `-z 1` (zero-copy), and compare MB/s and used CPU time
(user+system).
info_ru: |
OSD и клиенты будут пробовать использовать TCP-отправку без копирования (zero-copy) на
основе io_uring для буферов, больших, чем это число байт. Отправка без копирования
поддерживается в io_uring, начиная с версии ядра Linux 6.1. Наличие поддержки
проверяется автоматически и zero-copy отключается, когда поддержки нет. Также
её можно отключить явно, установив данный параметр в отрицательное значение.
⚠️ Внимание! Производительность данной функции может сильно отличаться на разных
процессорах и на разных версиях ядра Linux. В целом, zero-copy обычно быстрее с
большими сообщениями, а с мелкими (например, 4 КБ) zero-copy может быть даже
медленнее. 32 КБ достаточно почти для всех процессоров, но для каких-то можно
использовать даже меньшие значения. Например, для EPYC Milan/Genoa подходит 4 КБ,
а для Xeon Ice Lake - 12 КБ (но, пожалуйста, перепроверьте это сами).
Инструкция по проверке:
1. Добавьте `iommu=pt` в командную строку загрузки вашего ядра Linux и перезагрузитесь.
2. Обновите ядро. Например, для AMD EPYC очень важно использовать версию 6.11+.
3. Позапускайте тесты с помощью [send-zerocopy из примеров liburing](https://github.com/axboe/liburing/blob/master/examples/send-zerocopy.c),
чтобы найти минимальный размер сообщения, для которого zero-copy отправка оптимальна.
Запускайте `./send-zerocopy tcp -4 -R` на стороне сервера и
`time ./send-zerocopy tcp -4 -b 0 -s РАЗМЕР_БУФЕРА -D АДРЕС_СЕРВЕРА` на стороне клиента
с опцией `-z 0` (обычная отправка) и `-z 1` (отправка без копирования), и сравнивайте
скорость в МБ/с и занятое процессорное время (user+system).
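A minimal hypothetical `/etc/vitastor/vitastor.conf` fragment disabling zero-copy
send explicitly (as noted above, any negative value turns it off):
```
{
  "min_zerocopy_send_size": -1
}
```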
- name: use_sync_send_recv
type: bool
default: false
info: |
If true, synchronous send/recv syscalls are used instead of io_uring for
socket communication. Useless for OSDs because they require io_uring anyway,
but may be required for clients with old kernel versions.
info_ru: |
Если установлено в истину, то вместо io_uring для передачи данных по сети
будут использоваться обычные синхронные системные вызовы send/recv. Для OSD
это бессмысленно, так как OSD в любом случае нуждается в io_uring, но, в
принципе, это может применяться для клиентов со старыми версиями ядра.

View File

@@ -1,5 +1,5 @@
# Runtime OSD Parameters
These parameters only apply to OSDs, are not fixed at the moment of OSD drive
initialization and can be changed - either with an OSD restart or, for some of
them, even without restarting by updating configuration in etcd.
initialization and can be changed - in /etc/vitastor/vitastor.conf or [vitastor-disk update-sb](../usage/disk.en.md#update-sb)
with an OSD restart or, for some of them, even without restarting by updating configuration in etcd.

View File

@@ -2,5 +2,5 @@
Данные параметры используются только OSD, но, в отличие от дисковых параметров,
не фиксируются в момент инициализации дисков OSD и могут быть изменены в любой
момент с помощью перезапуска OSD, а некоторые и без перезапуска, с помощью
изменения конфигурации в etcd.
момент с перезапуском OSD в /etc/vitastor/vitastor.conf или [vitastor-disk update-sb](../usage/disk.ru.md#update-sb),
а некоторые и без перезапуска, с помощью изменения конфигурации в etcd.

View File

@@ -1,3 +1,44 @@
- name: bind_address
type: string or array of strings
type_ru: строка или массив строк
info: |
Instead of the network masks ([osd_network](network.en.md#osd_network) and
[osd_cluster_network](network.en.md#osd_cluster_network)), you can also set
OSD listen addresses explicitly using this parameter. May be useful if you
want to start OSDs on interfaces that are not UP + RUNNING.
info_ru: |
Вместо использования масок подсети ([osd_network](network.ru.md#osd_network) и
[osd_cluster_network](network.ru.md#osd_cluster_network)), вы также можете явно
задать адрес(а), на которых будут ожидать соединений OSD, с помощью данного
параметра. Это может быть полезно, например, чтобы запускать OSD на неподнятых
интерфейсах (не UP + RUNNING).
- name: bind_port
type: int
info: |
By default, OSDs pick random ports to use for incoming connections
automatically. With this option you can set a specific port for a specific
OSD by hand.
info_ru: |
По умолчанию OSD сами выбирают случайные порты для входящих подключений.
С помощью данной опции вы можете задать порт для отдельного OSD вручную.
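A hypothetical override for a host running a single OSD, pinning both the listen
address and the port (the values are made up):
```
{
  "bind_address": "10.0.0.5",
  "bind_port": 5000
}
```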
- name: osd_iothread_count
type: int
default: 0
info: |
TCP network I/O thread count for OSD. When non-zero, a single OSD process
may handle more TCP I/O, but at the cost of increased latency due to
thread switching overhead. RDMA isn't affected by this option.
Because of latency, instead of enabling OSD I/O threads it's recommended to
just create multiple OSDs per disk, or use RDMA.
info_ru: |
Число отдельных потоков для обработки ввода-вывода через TCP-сеть на
стороне OSD. Включение опции позволяет каждому отдельному OSD передавать
по сети больше данных, но ухудшает задержку из-за накладных расходов
переключения потоков. На работу RDMA опция не влияет.
Из-за задержек вместо включения потоков ввода-вывода OSD рекомендуется
просто создавать по несколько OSD на каждом диске, или использовать RDMA.
- name: etcd_report_interval
type: sec
default: 5
@@ -38,44 +79,6 @@
реализовать дополнительный режим для монитора, который позволит отделять
первичные OSD от вторичных, но пока не понятно, зачем это может кому-то
понадобиться, поэтому это не реализовано.
- name: osd_network
type: string or array of strings
type_ru: строка или массив строк
info: |
Network mask of the network (IPv4 or IPv6) to use for OSDs. Note that
although it's possible to specify multiple networks here, this does not
mean that OSDs will create multiple listening sockets - they'll only
pick the first matching address of an UP + RUNNING interface. Separate
networks for cluster and client connections are also not implemented, but
they are mostly useless anyway, so it's not a big deal.
info_ru: |
Маска подсети (IPv4 или IPv6) для использования для соединений с OSD.
Имейте в виду, что хотя сейчас и можно передать в этот параметр несколько
подсетей, это не означает, что OSD будут создавать несколько слушающих
сокетов - они лишь будут выбирать адрес первого поднятого (состояние UP +
RUNNING), подходящий под заданную маску. Также не реализовано разделение
кластерной и публичной сетей OSD. Правда, от него обычно всё равно довольно
мало толку, так что особенной проблемы в этом нет.
- name: bind_address
type: string
default: "0.0.0.0"
info: |
Instead of the network mask, you can also set OSD listen address explicitly
using this parameter. May be useful if you want to start OSDs on interfaces
that are not UP + RUNNING.
info_ru: |
Этим параметром можно явным образом задать адрес, на котором будет ожидать
соединений OSD (вместо использования маски подсети). Может быть полезно,
например, чтобы запускать OSD на неподнятых интерфейсах (не UP + RUNNING).
- name: bind_port
type: int
info: |
By default, OSDs pick random ports to use for incoming connections
automatically. With this option you can set a specific port for a specific
OSD by hand.
info_ru: |
По умолчанию OSD сами выбирают случайные порты для входящих подключений.
С помощью данной опции вы можете задать порт для отдельного OSD вручную.
- name: autosync_interval
type: sec
default: 5
@@ -297,7 +300,7 @@
decrease write performance for fast disks because page cache is an overhead
itself.
Choose "directsync" to use [immediate_commit](layout-cluster.ru.md#immediate_commit)
Choose "directsync" to use [immediate_commit](layout-cluster.en.md#immediate_commit)
(which requires disable_data_fsync) with drives having write-back cache
which can't be turned off, for example, Intel Optane. Also note that *some*
desktop SSDs (for example, HP EX950) may ignore O_SYNC thus making
@@ -731,8 +734,70 @@
default: 10
online: true
info: |
Minimum possible value for auto-tuned recovery_sleep_us. Values lower
than this value are changed to 0.
Minimum possible value for auto-tuned recovery_sleep_us. Lower values
are changed to 0.
info_ru: |
Минимальное возможное значение авто-подстроенного recovery_sleep_us.
Значения ниже данного заменяются на 0.
Меньшие значения заменяются на 0.
- name: recovery_tune_sleep_cutoff_us
type: us
default: 10000000
online: true
info: |
Maximum possible value for auto-tuned recovery_sleep_us. Higher values
are treated as outliers and ignored in aggregation.
info_ru: |
Максимальное возможное значение авто-подстроенного recovery_sleep_us.
Большие значения считаются случайными выбросами и игнорируются в
усреднении.
- name: discard_on_start
type: bool
info: Discard (SSD TRIM) unused data device blocks on every OSD startup.
info_ru: Освобождать (SSD TRIM) неиспользуемые блоки диска данных при каждом запуске OSD.
- name: min_discard_size
type: int
default: 1048576
info: Minimum consecutive block size to TRIM it.
info_ru: Минимальный размер последовательного блока данных, чтобы освобождать его через TRIM.
- name: allow_net_split
type: bool
default: false
info: |
Allow "safe" cases of network splits/partitions - allow to start PGs without
connections to some OSDs currently registered as alive in etcd, if the number
of actually connected PG OSDs is at least pg_minsize. That is, allow some OSDs to lose
connectivity with some other OSDs as long as it doesn't break pg_minsize guarantees.
The downside is that it increases the probability of writing data into just pg_minsize
OSDs during failover which can lead to PGs becoming incomplete after additional outages.
The old behaviour in versions up to 2.0.0 was equal to enabled allow_net_split.
info_ru: |
Разрешить "безопасные" случаи разделений сети - разрешить активировать PG без
соединений к некоторым OSD, помеченным активными в etcd, если общее число активных
OSD в PG составляет как минимум pg_minsize. То есть, разрешать некоторым OSD терять
соединения с некоторыми другими OSD, если это не нарушает гарантий pg_minsize.
Минус такого разрешения в том, что оно повышает вероятность записи данных ровно в
pg_minsize OSD во время переключений, что может потом привести к тому, что PG станут
неполными (incomplete), если упадут ещё какие-то OSD.
Старое поведение в версиях до 2.0.0 было идентично включённому allow_net_split.
- name: enable_pg_locks
type: bool
info: |
Vitastor 2.2.0 introduces a new layer of split-brain prevention in
addition to etcd: PG locks. They prevent split-brain even in abnormal theoretical cases
when etcd is extremely laggy. As a new feature, by default, PG locks are only enabled
for pools where they're required - pools with [localized reads](pool.en.md#local_reads).
Use this parameter to enable or disable this function for all pools.
info_ru: |
В Vitastor 2.2.0 появился новый слой защиты от сплитбрейна в дополнение к etcd -
блокировки PG. Они гарантируют порядок даже в теоретических ненормальных случаях,
когда etcd очень сильно тормозит. Так как функция новая, по умолчанию она включается
только для пулов, в которых она необходима - а именно, в пулах с включёнными
[локальными чтениями](pool.ru.md#local_reads). Ну а с помощью данного параметра
можно включить блокировки PG для всех пулов.
- name: pg_lock_retry_interval_ms
type: ms
default: 100
info: Retry interval for failed PG lock attempts.
info_ru: Интервал повтора неудачных попыток блокировки PG.

View File

@@ -0,0 +1,60 @@
[Documentation](../../README.md#documentation) → Installation → Dockerized Installation
-----
[Читать на русском](docker.ru.md)
# Dockerized Installation
Vitastor may be installed in Docker/Podman. In such setups etcd, monitors and OSDs
all run in containers, but everything else looks as close as possible to a usual
setup with packages:
- host network is used
- auto-start is implemented through udev and systemd
- logs are written to journald (not docker json log files)
- command-line wrapper scripts are installed to the host system to call vitastor-disk,
vitastor-cli and others through the container
Such installations may be useful when it's impossible or inconvenient to install
Vitastor from packages, for example, in exotic Linux distributions.
If you want more than just a simple containerized installation, you can also take a look
at the Vitastor Kubernetes operator: https://github.com/Antilles7227/vitastor-operator
## Installing Containers
The instructions are very simple.
1. Download a Docker image of the desired version: \
`docker pull vitastor:v2.2.2`
2. Install scripts to the host system: \
`docker run --rm -it -v /etc:/host-etc -v /usr/bin:/host-bin vitastor:v2.2.2 install.sh`
3. Reload udev rules: \
`udevadm control --reload-rules`
And you can return to [Quick Start](../intro/quickstart.en.md).
## Upgrading Containers
First make sure to check the topic [Upgrading Vitastor](../usage/admin.en.md#upgrading-vitastor)
to figure out if you need any additional steps.
Then, to upgrade a containerized installation, you just need to change the `VITASTOR_VERSION`
option in `/etc/vitastor/docker.conf` and restart all Vitastor services:
`systemctl restart vitastor.target`
## QEMU
The Vitastor Docker image also contains QEMU, qemu-img and qemu-storage-daemon built with Vitastor support.
However, running QEMU in Docker is harder to set up, and the setup depends on the virtualization UI
in use (OpenNebula, Proxmox and so on). Some of them also require a patched Libvirt.
That's why the containerized installation of Vitastor doesn't contain a ready-made QEMU setup and it's
recommended to install QEMU from packages or build it manually.
## fio
The Vitastor Docker image also contains fio and installs a wrapper called `vitastor-fio` to use it from
the host system.

View File

@@ -0,0 +1,60 @@
[Документация](../../README-ru.md#документация) → Установка → Установка в Docker
-----
[Read in English](docker.en.md)
# Установка в Docker
Vitastor можно установить в Docker/Podman. При этом etcd, мониторы и OSD запускаются
в контейнерах, но всё остальное выглядит максимально приближенно к установке из пакетов:
- используется сеть хост-системы
- для автозапуска используются udev и systemd
- журналы записываются в journald (не в json-файлы журналов docker)
- в хост-систему устанавливаются обёртки для вызова консольных инструментов vitastor-disk,
vitastor-cli и других через контейнер
Такая установка полезна тогда, когда установка из пакетов невозможна или неудобна,
например, в нестандартных Linux-дистрибутивах.
Если вам нужна не просто контейнеризованная инсталляция, вы также можете обратить внимание
на Vitastor Kubernetes-оператор: https://github.com/Antilles7227/vitastor-operator
## Установка контейнеров
Инструкция по установке максимально простая.
1. Скачайте Docker-образ желаемой версии: \
`docker pull vitastor:v2.2.2`
2. Установите скрипты в хост-систему командой: \
`docker run --rm -it -v /etc:/host-etc -v /usr/bin:/host-bin vitastor:v2.2.2 install.sh`
3. Перезагрузите правила udev: \
`udevadm control --reload-rules`
После этого вы можете возвращаться к разделу [Быстрый старт](../intro/quickstart.ru.md).
## Обновление контейнеров
Сначала обязательно проверьте раздел [Обновление Vitastor](../usage/admin.ru.md#обновление-vitastor),
чтобы понять, не требуются ли вам какие-то дополнительные действия.
После этого для обновления Docker-инсталляции вам нужно просто поменять опцию `VITASTOR_VERSION`
в файле `/etc/vitastor/docker.conf` и перезапустить все сервисы Vitastor командой:
`systemctl restart vitastor.target`
## QEMU
В Docker-образ также входят QEMU, qemu-img и qemu-storage-daemon, собранные с поддержкой Vitastor.
Однако настроить запуск QEMU в Docker сложнее и способ запуска зависит от используемого интерфейса
виртуализации (OpenNebula, Proxmox и т.п.). Также для OpenNebula, например, требуется патченый
Libvirt.
Поэтому по умолчанию Docker-сборка пока что не включает в себя готового способа запуска QEMU
и QEMU рекомендуется устанавливать из пакетов или собирать самостоятельно.
## fio
fio также входит в Docker-контейнер vitastor, и в хост-систему устанавливается обёртка `vitastor-fio`
для запуска fio в контейнере.

View File

@@ -6,9 +6,18 @@
# Kubernetes CSI
Vitastor has a CSI plugin for Kubernetes which supports block-based and VitastorFS-based volumes.
Block-based volumes may be formatted and mounted with a normal FS (ext4 or xfs). Such volumes
only support RWO (ReadWriteOnce) mode.
Block-based volumes may also be left without FS and attached into the container as a block
device. Such volumes also support RWX (ReadWriteMany) mode.
VitastorFS-based volumes use a clustered file system and support FS-based RWX (ReadWriteMany)
mode. However, such volumes don't support quotas and snapshots.
To deploy the CSI plugin, take manifests from [csi/deploy/](../../csi/deploy/) directory, put your
Vitastor configuration in [001-csi-config-map.yaml](../../csi/deploy/001-csi-config-map.yaml),
configure storage class in [009-storage-class.yaml](../../csi/deploy/009-storage-class.yaml)
and apply all `NNN-*.yaml` manifests to your Kubernetes installation:
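A typical way to apply them in order might be (a sketch, assuming the manifests are in the current directory):
```
# Apply all numbered manifests in lexical (= numeric) order
for i in ./[0-9][0-9][0-9]-*.yaml; do kubectl apply -f "$i"; done
```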
@@ -23,16 +32,16 @@ After that you'll be able to create PersistentVolumes.
kernel modules enabled (vdpa, vduse, virtio-vdpa). If your distribution doesn't
have them pre-built - build them yourself ([instructions](../usage/qemu.en.md#vduse)),
I promise it's worth it :-). When VDUSE is unavailable, CSI driver uses [NBD](../usage/nbd.en.md)
to map Vitastor devices. NBD is slower and, with kernels older than 5.19, unmountable
if the cluster becomes unresponsive.
## Features
Vitastor CSI supports:
- Kubernetes starting with 1.20 (or 1.17 for older vitastor-csi <= 1.1.0)
- Block-based FS-formatted RWO (ReadWriteOnce) volumes. Example: [PVC](../../csi/deploy/example-pvc.yaml), [pod](../../csi/deploy/example-test-pod.yaml)
- Raw block RWX (ReadWriteMany) volumes (see the sketch below). Example: [PVC](../../csi/deploy/example-pvc-block.yaml), [pod](../../csi/deploy/example-test-pod-block.yaml)
- VitastorFS-based RWX (ReadWriteMany) volumes. Example: [storage class](../../csi/deploy/example-storage-class-fs.yaml)
- Volume expansion
- Volume snapshots. Example: [snapshot class](../../csi/deploy/example-snapshot-class.yaml), [snapshot](../../csi/deploy/example-snapshot.yaml), [clone](../../csi/deploy/example-snapshot-clone.yaml)
- [VDUSE](../usage/qemu.en.md#vduse) (preferred) and [NBD](../usage/nbd.en.md) device mapping methods
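To make the raw block RWX mode concrete, such a claim might look roughly like this (a sketch with placeholder names; the linked example-pvc-block.yaml is the authoritative version):
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vitastor-block-pvc
spec:
  storageClassName: vitastor       # the class defined in 009-storage-class.yaml
  volumeMode: Block                # raw block device, no FS - this is what allows RWX
  accessModes: [ "ReadWriteMany" ]
  resources:
    requests:
      storage: 10Gi
```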

View File

@@ -6,7 +6,17 @@
# Kubernetes CSI
Vitastor has a CSI plugin for Kubernetes which supports block-based volumes and volumes based on
the clustered FS VitastorFS.
Block-based volumes may be formatted and mounted with a standard FS (ext4 or xfs).
Such volumes only support the RWO mode (ReadWriteOnce, simultaneous access from one node).
Block-based volumes may also be left unformatted and attached into the container as a block device.
In that case they can be attached in the RWX mode (ReadWriteMany, simultaneous access from many nodes).
VitastorFS-based volumes use the clustered FS and therefore also support the RWX (ReadWriteMany)
mode. However, such volumes don't support size limits and snapshots.
To deploy the plugin, take the manifests from the [csi/deploy/](../../csi/deploy/) directory, put
your Vitastor connection configuration into [csi/deploy/001-csi-config-map.yaml](../../csi/deploy/001-csi-config-map.yaml),
@@ -33,6 +43,7 @@ The Vitastor CSI plugin supports:
- Kubernetes versions starting with 1.20 (or 1.17 for older vitastor-csi <= 1.1.0)
- File-based RWO (ReadWriteOnce) volumes. Example: [PVC](../../csi/deploy/example-pvc.yaml), [pod](../../csi/deploy/example-test-pod.yaml)
- Raw block RWX (ReadWriteMany) volumes. Example: [PVC](../../csi/deploy/example-pvc-block.yaml), [pod](../../csi/deploy/example-test-pod-block.yaml)
- VitastorFS-based RWX (ReadWriteMany) volumes. Example: [storage class](../../csi/deploy/example-storage-class-fs.yaml)
- Volume expansion
- Volume snapshots. Example: [snapshot class](../../csi/deploy/example-snapshot-class.yaml), [snapshot](../../csi/deploy/example-snapshot.yaml), [snapshot clone](../../csi/deploy/example-snapshot-clone.yaml)
- [VDUSE](../usage/qemu.ru.md#vduse) (preferred) and [NBD](../usage/nbd.ru.md) device mapping methods

View File

@@ -0,0 +1,186 @@
[Documentation](../../README.md#documentation) → Installation → OpenNebula
-----
[Read in Russian](opennebula.ru.md)
# OpenNebula
## Automatic Installation
The OpenNebula plugin has been packaged as the `vitastor-opennebula` Debian and RPM package since Vitastor 1.9.0. So:
- Run `apt-get install vitastor-opennebula` or `yum install vitastor-opennebula` after installing OpenNebula on all nodes
- Check that it prints "OK, Vitastor OpenNebula patches successfully applied" or "OK, Vitastor OpenNebula patches are already applied"
- If it does not, refer to [Manual Installation](#manual-installation) and apply configuration file changes manually
- Make sure that the Vitastor-patched versions of QEMU and libvirt are installed
  (`dpkg -l qemu-system-x86`, `dpkg -l | grep libvirt`, `rpm -qa | grep qemu`, `rpm -qa | grep libvirt-libs` should show "vitastor" in version names)
- [Block VM access to Vitastor cluster](#block-vm-access-to-vitastor-cluster)
## Manual Installation
Install OpenNebula. Then, on each node:
- Copy [opennebula/remotes](../../opennebula/remotes) into `/var/lib/one` recursively: `cp -r opennebula/remotes /var/lib/one/`
- Copy [opennebula/sudoers.d](../../opennebula/sudoers.d) to `/etc`: `cp -r opennebula/sudoers.d /etc/`
- Apply [downloader-vitastor.sh.diff](../../opennebula/remotes/datastore/vitastor/downloader-vitastor.sh.diff) to `/var/lib/one/remotes/datastore/downloader.sh`:
`patch /var/lib/one/remotes/datastore/downloader.sh < opennebula/remotes/datastore/vitastor/downloader-vitastor.sh.diff` - or read the patch and apply the same change manually
- Add `kvm-vitastor` to `LIVE_DISK_SNAPSHOTS` in `/etc/one/vmm_exec/vmm_execrc`
- If on Debian or Ubuntu (and AppArmor is used), add Vitastor config file path(s) to `/etc/apparmor.d/local/abstractions/libvirt-qemu`: for example,
`echo ' "/etc/vitastor/vitastor.conf" r,' >> /etc/apparmor.d/local/abstractions/libvirt-qemu`
- Apply changes to `/etc/one/oned.conf`
### oned.conf changes
1. Add deploy script override in kvm VM_MAD: add `-l deploy.vitastor` to ARGUMENTS.
```diff
VM_MAD = [
NAME = "kvm",
SUNSTONE_NAME = "KVM",
EXECUTABLE = "one_vmm_exec",
- ARGUMENTS = "-t 15 -r 0 kvm -p",
+ ARGUMENTS = "-t 15 -r 0 kvm -p -l deploy=deploy.vitastor",
DEFAULT = "vmm_exec/vmm_exec_kvm.conf",
TYPE = "kvm",
KEEP_SNAPSHOTS = "yes",
LIVE_RESIZE = "yes",
SUPPORT_SHAREABLE = "yes",
IMPORTED_VMS_ACTIONS = "terminate, terminate-hard, hold, release, suspend,
resume, delete, reboot, reboot-hard, resched, unresched, disk-attach,
disk-detach, nic-attach, nic-detach, snapshot-create, snapshot-delete,
resize, updateconf, update"
]
```
Optional: if you also want to save VM RAM checkpoints to Vitastor, use
`-l deploy=deploy.vitastor,save=save.vitastor,restore=restore.vitastor`
instead of just `-l deploy=deploy.vitastor`.
2. Add `vitastor` to TM_MAD.ARGUMENTS and DATASTORE_MAD.ARGUMENTS:
```diff
TM_MAD = [
EXECUTABLE = "one_tm",
- ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt"
+ ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,vitastor,dev,vcenter,iscsi_libvirt"
]
DATASTORE_MAD = [
EXECUTABLE = "one_datastore",
- ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
+ ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,vitastor,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,vitastor,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
]
```
3. Add INHERIT_DATASTORE_ATTR for two Vitastor attributes:
```
INHERIT_DATASTORE_ATTR = "VITASTOR_CONF"
INHERIT_DATASTORE_ATTR = "IMAGE_PREFIX"
```
4. Add TM_MAD_CONF and DS_MAD_CONF for Vitastor:
```
TM_MAD_CONF = [
NAME = "vitastor", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "YES",
DS_MIGRATE = "NO", DRIVER = "raw", ALLOW_ORPHANS="format",
TM_MAD_SYSTEM = "ssh,shared", LN_TARGET_SSH = "SYSTEM", CLONE_TARGET_SSH = "SYSTEM",
DISK_TYPE_SSH = "FILE", LN_TARGET_SHARED = "NONE",
CLONE_TARGET_SHARED = "SELF", DISK_TYPE_SHARED = "FILE"
]
DS_MAD_CONF = [
NAME = "vitastor",
REQUIRED_ATTRS = "DISK_TYPE,BRIDGE_LIST",
PERSISTENT_ONLY = "NO",
MARKETPLACE_ACTIONS = "export"
]
```
## Create Datastores
Example Image and System Datastore definitions:
[opennebula/vitastor-imageds.conf](../../opennebula/vitastor-imageds.conf) and
[opennebula/vitastor-systemds.conf](../../opennebula/vitastor-systemds.conf).
Change the parameters as needed (an illustrative image datastore definition is sketched below):
- POOL_NAME is the name of the Vitastor pool to store images in.
- IMAGE_PREFIX is a string prepended to all Vitastor image names.
- BRIDGE_LIST is a list of hosts with access to Vitastor cluster, mostly used for image (not system) datastore operations.
- VITASTOR_CONF is the path to cluster configuration. Note that it should be also added to `/etc/apparmor.d/local/abstractions/libvirt-qemu` if you use AppArmor.
- STAGING_DIR is a temporary directory used when importing external images. Should have free space sufficient for downloading external images.
Then create datastores using `onedatastore create vitastor-imageds.conf` and `onedatastore create vitastor-systemds.conf` (or use UI).
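For orientation, an image datastore definition combining these parameters might look roughly like this (every value is a placeholder and `DISK_TYPE` in particular is an assumption; the example files referenced above are the authoritative versions):
```
NAME          = "vitastor-imageds"
DS_MAD        = "vitastor"
TM_MAD        = "vitastor"
TYPE          = "IMAGE_DS"
DISK_TYPE     = "FILE"             # assumption - check the shipped example file
POOL_NAME     = "testpool"
IMAGE_PREFIX  = "one"
BRIDGE_LIST   = "hv1 hv2"
VITASTOR_CONF = "/etc/vitastor/vitastor.conf"
STAGING_DIR   = "/var/tmp"
```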
## Block VM access to Vitastor cluster
Vitastor doesn't support any authentication yet, so you MUST block VM guest access to the Vitastor cluster at the network level.
If you use VLAN networking for VMs - make sure you use different VLANs for VMs and hypervisor/storage network and
block access between them using your firewall/switch configuration.
If you use something more stupid like bridged networking, you'll probably have to set up the
firewall/iptables manually to allow access to Vitastor only from hypervisor IPs.
You also need to switch the network to "Bridged & Security Groups" and enable IP spoofing filters in OpenNebula.
The problem is that OpenNebula's IP spoofing filter doesn't affect the local interfaces of the hypervisor, i.e. when
it's enabled a VM can't talk to other VMs or to the outer world using a spoofed IP, but it CAN talk to the
hypervisor if it takes an IP from its subnet. To fix that, you also need some more iptables rules.
So the complete "stupid" bridged network filter setup could look like the following
(here `10.0.3.0/24` is the VM subnet and `10.0.2.0/24` is the hypervisor subnet):
```
# Allow incoming traffic from physical device
iptables -A INPUT -m physdev --physdev-in eth0 -j ACCEPT
# Drop incoming traffic from VMs with source IPs outside the VM subnet (spoofed)
iptables -A INPUT ! -s 10.0.3.0/24 -i onebr0 -j DROP
# Drop traffic from VMs to hypervisor/storage subnet
iptables -I FORWARD 1 -s 10.0.3.0/24 -d 10.0.2.0/24 -j DROP
```
## Testing
The OpenNebula plugin includes quite a few bash scripts, so here's a description of each to give an idea of what they actually do.
| Script | Action | How to Test |
| ----------------------- | ----------------------------------------- | ------------------------------------------------------------------------------------ |
| vmm/kvm/deploy.vitastor | Start a VM | Create and start a VM with Vitastor disk(s): persistent / non-persistent / volatile. |
| vmm/kvm/save.vitastor | Save VM memory checkpoint | Stop a VM using "Stop" command. |
| vmm/kvm/restore.vitastor| Restore VM memory checkpoint | Start a VM back after stopping it. |
| datastore/clone | Copy an image as persistent | Create a VM template and instantiate it as persistent. |
| datastore/cp | Import an external image | Import a VM template with images from Marketplace. |
| datastore/export | Export an image as URL | Probably: export a VM template with images to Marketplace. |
| datastore/mkfs | Create an image with FS | Storage → Images → Create → Type: Datablock, Location: Empty disk image, Filesystem: Not empty. |
| datastore/monitor | Monitor used space in image datastore | Check reported used/free space in image datastore list. |
| datastore/rm | Remove a persistent image | Storage → Images → Select an image → Delete. |
| datastore/snap_delete | Delete a snapshot of a persistent image | Storage → Images → Select an image → Select a snapshot → Delete; <br> To create an image with snapshot: attach a persistent image to a VM; create a snapshot; detach the image. |
| datastore/snap_flatten | Revert an image to snapshot and delete other snapshots | Storage → Images → Select an image → Select a snapshot → Flatten. |
| datastore/snap_revert | Revert an image to snapshot | Storage → Images → Select an image → Select a snapshot → Revert. |
| datastore/stat | Get virtual size of an image in MB | No idea. Seems to be unused both in Vitastor and Ceph datastores. |
| tm/clone | Clone a non-persistent image to a VM disk | Attach a non-persistent image to a VM. |
| tm/context | Generate a contextualisation VM disk | Create a VM with contextualisation enabled (default). The common host-FS-based version is used in the Vitastor and Ceph datastores. |
| tm/cpds | Copy a VM disk / its snapshot to an image | Select a VM → Select a disk → Optionally select a snapshot → Save as. |
| tm/delete | Delete a cloned or volatile VM disk | Detach a volatile disk or a non-persistent image from a VM. |
| tm/failmigrate | Handle live migration failure | No action. Script is empty in Vitastor and Ceph. In other datastores, it should roll back actions done by tm/premigrate. |
| tm/ln | Attach a persistent image to a VM | No action. Script is empty in Vitastor and Ceph. |
| tm/mkimage | Create a volatile disk, maybe with FS | Attach a volatile disk to a VM, with or without file system. |
| tm/mkswap | Create a volatile swap disk | Attach a volatile disk to a VM, formatted as swap. |
| tm/monitor | Monitor used space in system datastore | Check reported used/free space in system datastore list. |
| tm/mv | Move a migrated VM disk between hosts | Migrate a VM between hosts. In Vitastor and Ceph datastores, doesn't do any storage action. |
| tm/mvds | Detach a persistent image from a VM | No action. The opposite of tm/ln. Script is empty in Vitastor and Ceph. In other datastores, script may copy the image from VM host back to the datastore. |
| tm/postbackup | Executed after backup | Seems that the script just removes temporary files after backup. Perform a VM backup and check that temporary files are cleaned up. |
| tm/postbackup_live | Executed after backup of a running VM | Same as tm/postbackup, but for a running VM. |
| tm/postmigrate | Executed after VM live migration | No action. Only executed for the system datastore, so the script tries to call other TMs for other disks. Apart from that, the script does nothing in Vitastor and Ceph datastores. |
| tm/prebackup | Actual backup script: backup VM disks | Set up "rsync" backup datastore → Backup a VM to it. |
| tm/prebackup_live | Backup VM disks of a running VM | Same as tm/prebackup, but also does fsfreeze/thaw. So perform a live backup, restore it and check that disks are consistent. |
| tm/premigrate | Executed before live migration | No action. Only executed for the system datastore, so the script tries to call other TMs for other disks. Apart from that, the script does nothing in Vitastor and Ceph datastores. |
| tm/resize | Resize a VM disk | Select a VM → Select a non-persistent disk → Resize. |
| tm/restore | Restore VM disks from backup | Set up "rsync" backup datastore → Backup a VM to it → Restore it back. |
| tm/snap_create | Create a VM disk snapshot | Select a VM → Select a disk → Create snapshot. |
| tm/snap_create_live | Create a VM disk snapshot for a live VM | Select a running VM → Select a disk → Create snapshot. |
| tm/snap_delete | Delete a VM disk snapshot | Select a VM → Select a disk → Select a snapshot → Delete. |
| tm/snap_revert | Revert a VM disk to a snapshot | Select a VM → Select a disk → Select a snapshot → Revert. |

View File

@@ -0,0 +1,189 @@
[Documentation](../../README-ru.md#документация) → Installation → OpenNebula
-----
[Read in English](opennebula.en.md)
# OpenNebula
## Automatic Installation
The Vitastor OpenNebula plugin has been distributed as the `vitastor-opennebula` Debian and RPM package since Vitastor 1.9.0. So:
- Run `apt-get install vitastor-opennebula` or `yum install vitastor-opennebula` after installing OpenNebula on all servers
- Check that it prints "OK, Vitastor OpenNebula patches successfully applied" or "OK, Vitastor OpenNebula patches are already applied" during installation
- If the message is not printed, follow the steps of the [Manual Installation](#ручная-установка) section and apply the configuration file changes manually
- Make sure that the Vitastor-patched versions of QEMU and libvirt are installed
  (`dpkg -l qemu-system-x86`, `dpkg -l | grep libvirt`, `rpm -qa | grep qemu`, `rpm -qa | grep libvirt-libs` should show "vitastor" in the version number)
- [Block VM access to Vitastor](#блокировка-доступа-вм-в-vitastor)
## Manual Installation
First install OpenNebula itself. Then, on each server:
- Copy the [opennebula/remotes](../../opennebula/remotes) directory into `/var/lib/one`: `cp -r opennebula/remotes /var/lib/one/`
- Copy the [opennebula/sudoers.d](../../opennebula/sudoers.d) directory to `/etc`: `cp -r opennebula/sudoers.d /etc/`
- Apply the patch [downloader-vitastor.sh.diff](../../opennebula/remotes/datastore/vitastor/downloader-vitastor.sh.diff) to `/var/lib/one/remotes/datastore/downloader.sh`:
  `patch /var/lib/one/remotes/datastore/downloader.sh < opennebula/remotes/datastore/vitastor/downloader-vitastor.sh.diff` - or read the patch and apply the same change manually
- Add `kvm-vitastor` to the `LIVE_DISK_SNAPSHOTS` list in `/etc/one/vmm_exec/vmm_execrc`
- If you use Debian or Ubuntu (and AppArmor), add the Vitastor configuration file path(s) to `/etc/apparmor.d/local/abstractions/libvirt-qemu`: for example,
  `echo ' "/etc/vitastor/vitastor.conf" r,' >> /etc/apparmor.d/local/abstractions/libvirt-qemu`
- Apply the changes to `/etc/one/oned.conf`
### oned.conf changes
1. Add a deploy script override to the kvm VM_MAD by adding `-l deploy.vitastor` to `ARGUMENTS`:
```diff
VM_MAD = [
NAME = "kvm",
SUNSTONE_NAME = "KVM",
EXECUTABLE = "one_vmm_exec",
- ARGUMENTS = "-t 15 -r 0 kvm -p",
+ ARGUMENTS = "-t 15 -r 0 kvm -p -l deploy=deploy.vitastor",
DEFAULT = "vmm_exec/vmm_exec_kvm.conf",
TYPE = "kvm",
KEEP_SNAPSHOTS = "yes",
LIVE_RESIZE = "yes",
SUPPORT_SHAREABLE = "yes",
IMPORTED_VMS_ACTIONS = "terminate, terminate-hard, hold, release, suspend,
resume, delete, reboot, reboot-hard, resched, unresched, disk-attach,
disk-detach, nic-attach, nic-detach, snapshot-create, snapshot-delete,
resize, updateconf, update"
]
```
Optional: if you also want to save VM RAM snapshots to Vitastor, use
`-l deploy=deploy.vitastor,save=save.vitastor,restore=restore.vitastor`
instead of just `-l deploy=deploy.vitastor`.
2. Add `vitastor` to the TM_MAD.ARGUMENTS and DATASTORE_MAD.ARGUMENTS values:
```diff
TM_MAD = [
EXECUTABLE = "one_tm",
- ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt"
+ ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,vitastor,dev,vcenter,iscsi_libvirt"
]
DATASTORE_MAD = [
EXECUTABLE = "one_datastore",
- ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
+ ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,vitastor,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,vitastor,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
]
```
3. Add INHERIT_DATASTORE_ATTR lines for the two Vitastor datastore attributes:
```
INHERIT_DATASTORE_ATTR = "VITASTOR_CONF"
INHERIT_DATASTORE_ATTR = "IMAGE_PREFIX"
```
4. Add TM_MAD_CONF and DS_MAD_CONF for Vitastor:
```
TM_MAD_CONF = [
NAME = "vitastor", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "YES",
DS_MIGRATE = "NO", DRIVER = "raw", ALLOW_ORPHANS="format",
TM_MAD_SYSTEM = "ssh,shared", LN_TARGET_SSH = "SYSTEM", CLONE_TARGET_SSH = "SYSTEM",
DISK_TYPE_SSH = "FILE", LN_TARGET_SHARED = "NONE",
CLONE_TARGET_SHARED = "SELF", DISK_TYPE_SHARED = "FILE"
]
DS_MAD_CONF = [
NAME = "vitastor",
REQUIRED_ATTRS = "DISK_TYPE,BRIDGE_LIST",
PERSISTENT_ONLY = "NO",
MARKETPLACE_ACTIONS = "export"
]
```
## Create Datastores
Example configurations for the image and system (VM disk) datastores:
[opennebula/vitastor-imageds.conf](../../opennebula/vitastor-imageds.conf) and
[opennebula/vitastor-systemds.conf](../../opennebula/vitastor-systemds.conf).
Copy the settings and change the following parameters as you need:
- POOL_NAME - name of the Vitastor pool to store disk images in.
- IMAGE_PREFIX - a string prepended to disk image names.
- BRIDGE_LIST - a list of servers with access to the Vitastor cluster, used for image (not system) datastore operations.
- VITASTOR_CONF - path to the Vitastor configuration. Note that this path also has to be added to `/etc/apparmor.d/local/abstractions/libvirt-qemu` if you use AppArmor.
- STAGING_DIR - path to a temporary directory used when importing external images. It should have enough free space to hold the downloaded images.
After that, create the datastores with `onedatastore create vitastor-imageds.conf` and `onedatastore create vitastor-systemds.conf` (or through the UI).
## Blocking VM Access to Vitastor
Vitastor doesn't support any authentication yet, so you MUST block guest VM access
to the Vitastor cluster at the network level.
If you use VLAN networks for VMs - make sure that the VMs and the hypervisor/storage network
are put into separate VLANs isolated from each other.
If you use something more primitive, like bridges, you will most likely have to configure
iptables / the firewall manually so that Vitastor is only accessible from hypervisor IPs.
In this case you will also have to switch the plain bridges to "Bridged & Security Groups" and
enable the IP spoofing filter in OpenNebula. However, the implementation of this filter is not
complete yet, and it doesn't block access to the local interfaces of the hypervisor. That is,
with the IP spoofing filter enabled, a VM can't send traffic with spoofed IPs to other VMs or
to the outside world, but it can still send it directly to the hypervisor. To fix that,
additional iptables rules are needed.
So a more or less complete blocking setup for a simple bridged network may look like this
(here `10.0.3.0/24` is the VM subnet and `10.0.2.0/24` is the hypervisor subnet):
```
# Allow incoming traffic from the physical device
iptables -A INPUT -m physdev --physdev-in eth0 -j ACCEPT
# Drop incoming traffic from VMs with source IPs outside the VM subnet (spoofed)
iptables -A INPUT ! -s 10.0.3.0/24 -i onebr0 -j DROP
# Drop traffic from VMs to the hypervisor subnet
iptables -I FORWARD 1 -s 10.0.3.0/24 -d 10.0.2.0/24 -j DROP
```
## Testing
The OpenNebula plugin mostly consists of bash scripts, and to make it clearer what they
actually do, below are descriptions of the procedures you can use to test each of them.
| Script | Description | How to Test |
| ----------------------- | --------------------------------------------- | ------------------------------------------------------------------------------------ |
| vmm/kvm/deploy.vitastor | Start a virtual machine | Create and start a VM with Vitastor disks: persistent / non-persistent / volatile (temporary). |
| vmm/kvm/save.vitastor | Save a VM memory snapshot | Stop a VM using the "Stop" command. |
| vmm/kvm/restore.vitastor| Restore a VM memory snapshot | Start the VM back after stopping it. |
| datastore/clone | Copy an image as "persistent" | Create a VM template and instantiate a persistent VM from it. |
| datastore/cp | Import an external image | Import a VM template with disk images from the OpenNebula Marketplace. |
| datastore/export | Export an image as a URL | Probably: export a VM template with images to the Marketplace. |
| datastore/mkfs | Create an image with a file system | Storage → Images → Create → Type: Datablock, Location: Empty disk image, Filesystem: any non-empty. |
| datastore/monitor | Report used space in the image datastore | Check the reported used/free space in the image datastore list. |
| datastore/rm | Delete a "persistent" image | Storage → Images → Select an image → Delete. |
| datastore/snap_delete | Delete a snapshot of a "persistent" image | Storage → Images → Select an image → Select a snapshot → Delete; <br> To create an image with a snapshot: attach a persistent image to a VM, create a snapshot, detach the image. |
| datastore/snap_flatten | Revert an image to a snapshot, deleting other snapshots | Storage → Images → Select an image → Select a snapshot → Flatten. |
| datastore/snap_revert | Revert an image to a snapshot | Storage → Images → Select an image → Select a snapshot → Revert. |
| datastore/stat | Show the virtual size of an image in MB | Unknown. Apparently unused in both the Vitastor and Ceph plugins. |
| tm/clone | Clone a "non-persistent" image into a VM disk | Attach a "non-persistent" image to a VM. |
| tm/context | Create a VM contextualisation disk | Create a VM with contextualisation, as usual. There is not much to test though: in the Vitastor and Ceph plugins the context image is stored in the local hypervisor FS. |
| tm/cpds | Copy a VM disk / its snapshot to a new image | Select a VM → Select a disk → Optionally select a snapshot → Save as. |
| tm/delete | Delete a cloned or volatile VM disk | Detach a volatile disk or a non-persistent image from a VM. |
| tm/failmigrate | Handle a failed migration | Nothing to test. The script is empty in the Vitastor and Ceph plugins. In other plugins it should roll back the actions of tm/premigrate. |
| tm/ln | Attach a "persistent" image to a VM | Nothing to test. The script is empty in the Vitastor and Ceph plugins. |
| tm/mkimage | Create a volatile disk, with or without FS | Attach a volatile disk to a VM, with or without a file system. |
| tm/mkswap | Create a volatile swap disk | Attach a volatile disk to a VM, formatted as swap. |
| tm/monitor | Report used space in the system datastore | Check the reported used/free space in the system datastore list. |
| tm/mv | Migrate a VM disk between hosts | Migrate a VM between servers. From the storage point of view, though, this script does nothing in the Vitastor and Ceph plugins. |
| tm/mvds | Detach a "persistent" image from a VM | Nothing to test. The script is empty in the Vitastor and Ceph plugins. In general it is the opposite of tm/ln, and in other datastores it may, for example, copy the VM image from the hypervisor disk back to the datastore. |
| tm/postbackup | Executed after a backup | Apparently the script just removes temporary files after a backup. So you can run a backup and check that no temporary files are left on the servers. |
| tm/postbackup_live | Executed after a backup of a running VM | Same as tm/postbackup, but for a running VM. |
| tm/postmigrate | Executed after a VM migration | Nothing to test. OpenNebula only runs the script for the system datastore, so it calls the corresponding scripts of the other disks' datastores of the same VM. Apart from that, the script does nothing in the Vitastor and Ceph plugins. |
| tm/prebackup | Actually back up VM disks | Create a backup datastore of the "rsync" type → Back up a VM to it. |
| tm/prebackup_live | The same for a running VM | Same as tm/prebackup, but also runs fsfreeze/thaw (pausing disk access). So the point of the test is to perform a live backup and check that the data was copied consistently. |
| tm/premigrate | Executed before a VM migration | Nothing to test. Like tm/postmigrate, it is only run for the system datastore. |
| tm/resize | Resize a VM disk | Select a VM → Select a non-persistent disk → Resize it. |
| tm/restore | Restore VM disks from a backup | Create a backup datastore → Back up a VM to it → Restore it back. |
| tm/snap_create | Create a VM disk snapshot | Select a VM → Select a disk → Create a snapshot. |
| tm/snap_create_live | Create a disk snapshot of a running VM | Select a running VM → Select a disk → Create a snapshot. |
| tm/snap_delete | Delete a VM disk snapshot | Select a VM → Select a disk → Select a snapshot → Delete. |
| tm/snap_revert | Revert a VM disk to a snapshot | Select a VM → Select a disk → Select a snapshot → Revert. |

View File

@@ -14,10 +14,9 @@
- Debian 12 (Bookworm/Sid): `deb https://vitastor.io/debian bookworm main`
- Debian 11 (Bullseye): `deb https://vitastor.io/debian bullseye main`
- Debian 10 (Buster): `deb https://vitastor.io/debian buster main`
- Ubuntu 22.04 (Jammy): `deb https://vitastor.io/debian jammy main`
- Add `-oldstable` to bookworm/bullseye/buster in this line to install the last
stable version from 0.9.x branch instead of 1.x
- For Debian 10 (Buster) also enable backports repository:
`deb http://deb.debian.org/debian buster-backports main`
- Install packages: `apt update; apt install vitastor lp-solve etcd linux-image-amd64 qemu-system-x86`
## CentOS

View File

@@ -14,10 +14,9 @@
- Debian 12 (Bookworm/Sid): `deb https://vitastor.io/debian bookworm main`
- Debian 11 (Bullseye): `deb https://vitastor.io/debian bullseye main`
- Debian 10 (Buster): `deb https://vitastor.io/debian buster main`
- Ubuntu 22.04 (Jammy): `deb https://vitastor.io/debian jammy main`
- Add `-oldstable` to bookworm/bullseye/buster in this line to install the latest
  stable version from the 0.9.x branch instead of 1.x
- For Debian 10 (Buster) also enable the backports repository:
  `deb http://deb.debian.org/debian buster-backports main`
- Install the packages: `apt update; apt install vitastor lp-solve etcd linux-image-amd64 qemu-system-x86`
## CentOS

View File

@@ -6,10 +6,10 @@
# Proxmox VE
To enable Vitastor support in Proxmox Virtual Environment (6.4-8.x are supported):
- Add the corresponding Vitastor Debian repository into sources.list on Proxmox hosts:
bookworm for 8.1+, pve8.0 for 8.0, bullseye for 7.4, pve7.3 for 7.3, pve7.2 for 7.2, pve7.1 for 7.1, buster for 6.4
- Install vitastor-client, pve-qemu-kvm, pve-storage-vitastor (* or see note) packages from Vitastor repository
- Define storage in `/etc/pve/storage.cfg` (see below)
- Block network access from VMs to Vitastor network (to OSDs and etcd),
@@ -17,10 +17,10 @@ To enable Vitastor support in Proxmox Virtual Environment (6.4-8.1 are supported
- Restart pvedaemon: `systemctl restart pvedaemon`
`/etc/pve/storage.cfg` example (the only required option is vitastor_pool, all others
are listed below with their default values; `vitastor_ssd` is the Proxmox storage pool id):
```
vitastor: vitastor_ssd
# pool to put new images into
vitastor_pool testpool
# path to the configuration file

View File

@@ -6,20 +6,20 @@
# Proxmox VE
To enable Vitastor support in Proxmox Virtual Environment (versions 6.4-8.x are supported):
- Add the corresponding Vitastor Debian repository into sources.list on the Proxmox hosts:
  bookworm for 8.1+, pve8.0 for 8.0, bullseye for 7.4, pve7.3 for 7.3, pve7.2 for 7.2, pve7.1 for 7.1, buster for 6.4
- Install the vitastor-client, pve-qemu-kvm and pve-storage-vitastor packages (* or see the note) from the Vitastor repository
- Define the storage type in `/etc/pve/storage.cfg` (see below)
- Make sure to block access from virtual machines to the Vitastor network (OSDs and etcd), as Vitastor doesn't support authentication (yet)
- Restart the Proxmox daemon: `systemctl restart pvedaemon`
Example `/etc/pve/storage.cfg` (the only required option is vitastor_pool, all the others are
listed below to show their default values; `vitastor_ssd` is the storage name in Proxmox):
```
vitastor: vitastor_ssd
# Pool to put new disk images into
vitastor_pool testpool
# Path to the configuration file

docs/installation/s3.en.md Normal file
View File

@@ -0,0 +1,191 @@
[Documentation](../../README.md#documentation) → Installation → S3 for Vitastor
-----
[Read in Russian](s3.ru.md)
# S3 for Vitastor
The moment has come - the Vitastor S3 implementation based on Zenko CloudServer is released.
## Highlights
- Zenko CloudServer is implemented in node.js.
- Object metadata is stored in MongoDB.
- A modified version of Zenko CloudServer is used for Vitastor. It differs slightly from
  the original: the build is optimised and unneeded dependencies are stripped off.
- Object data is stored in Vitastor block volumes, but the volume metadata is stored in
the same MongoDB, not in Vitastor etcd.
- Objects are written to volumes sequentially one after another. The space is allocated
with rounding to the sector size (4 KB), so each object takes at least 4 KB.
- An important property of such a storage scheme is that small objects aren't chunked into
parts in Vitastor EC N+K pools and thus don't require reads from all N disks when
downloading.
- Deleted objects are marked as deleted, but the space is only actually freed during an
  asynchronously executed "defragmentation" process. Defragmentation runs automatically
  in the background when a volume reaches a configured amount of "garbage" (20% by default).
  Defragmentation copies the live objects to new volume(s) and then removes the old volume.
  Defragmentation can be configured in locationConfig.json.
## Plans for future development
- User account storage in the DB instead of a static file. The original Zenko uses
  a separate closed-source "Scality Vault" service for it, which is why we use
  a static file for now.
- More detailed documentation.
- Support for other (and faster) key-value DBMS for object metadata storage.
- Other performance optimisations, for example, related to the hash function used -
  MD5, kept for Amazon compatibility purposes, is relatively slow.
- Object Lifecycle support. There is a Lifecycle implementation for Zenko called
[Backbeat](https://github.com/scality/backbeat) but it's not adapted for Vitastor yet.
- Quota support. The original Zenko uses a separate "SCUBA" service for quotas, but
  it's also proprietary and not available publicly.
## Installation
In a few words:
- Install MongoDB, create a user for the S3 metadata DB.
- Create a Vitastor pool for S3 data.
- Download and set up the Docker container `vitalif/vitastor-zenko`.
### Setup MongoDB
You can set up MongoDB yourself, following the [MongoDB manual](https://www.mongodb.com/docs/manual/installation/).
Or you can follow the instructions below - they describe a simple example of a MongoDB setup
in Docker (through docker-compose) with 3 replicas.
1. On each host, create a file `docker-compose.yml` with the content listed below.
Replace `<YOUR_PASSWORD>` with your future mongodb administrator password, and optionally
replace `0.0.0.0` with `localhost,<server_IP>`. It's recommended to either use a private IP
or [setup TLS](https://www.mongodb.com/docs/manual/tutorial/configure-ssl/) afterwards.
```
version: '3.1'
services:
mongo:
container_name: mongo
image: mongo:7-jammy
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: <YOUR_PASSWORD>
network_mode: host
volumes:
- ./keyfile:/opt/keyfile
- ./mongo-data/db:/data/db
- ./mongo-data/configdb:/data/configdb
entrypoint: /bin/bash -c
command: [ "chown mongodb /opt/keyfile && chmod 600 /opt/keyfile && . /usr/local/bin/docker-entrypoint.sh mongod --replSet rs0 --keyFile /opt/keyfile --bind_ip 0.0.0.0" ]
```
2. Generate a shared cluster key using `openssl rand -base64 756 > ./keyfile` and copy
that `keyfile` to all hosts.
3. Start MongoDB on all hosts with `docker compose up -d mongo`.
4. Enter Mongo Shell with `docker exec -it mongo mongosh -u root -p <YOUR_PASSWORD> localhost/admin`
and execute the following command (replace IP addresses `10.10.10.{1,2,3}` with your host IPs):
`rs.initiate({ _id: 'rs0', members: [
{ _id: 1, host: '10.10.10.1:27017' },
{ _id: 2, host: '10.10.10.2:27017' },
{ _id: 3, host: '10.10.10.3:27017' }
] })`
5. Stay in Mongo Shell and create a user for the future S3 database:
`db.createUser({ user: 's3', pwd: '<YOUR_S3_PASSWORD>', roles: [
{ role: 'readWrite', db: 's3' },
{ role: 'dbAdmin', db: 's3' },
{ role: 'readWrite', db: 'vitastor' },
{ role: 'dbAdmin', db: 'vitastor' }
] })`
### Setup Vitastor
Create a pool in Vitastor for S3 object data, for example:
`vitastor-cli create-pool --ec 2+1 -n 512 s3-data --used_for_app s3:standard`
The `--used_for_app` option works as fool-proofing: it prevents you from
accidentally creating a regular block volume in the S3 pool and overwriting some S3 data.
It also hides inode space statistics from Vitastor etcd.
Retrieve the ID of your pool with `vitastor-cli ls-pools s3-data --detail`.
### Setup Vitastor S3
1. Add the following lines to `docker-compose.yml` (instead of `network_mode: host`,
you can use `ports: [ "8000:8000", "8002:8002" ]`):
```
zenko:
container_name: zenko
image: vitalif/vitastor-zenko
restart: always
security_opt:
- seccomp:unconfined
ulimits:
memlock: -1
network_mode: host
volumes:
- /etc/vitastor:/etc/vitastor
- /etc/vitastor/s3:/conf
```
2. Download Docker image: `docker pull vitalif/vitastor-zenko`
3. Extract configuration file examples from the Docker image:
```
docker run --rm -it -v /etc/vitastor:/etc/vitastor -v /etc/vitastor/s3:/conf vitalif/vitastor-zenko configure.sh
```
4. Edit configuration files in `/etc/vitastor/s3/`:
- `config.json` - common settings.
- `authdata.json` - user accounts and access keys.
- `locationConfig.json` - S3 storage class list with placement settings.
Note: it actually contains storage classes (like STANDARD, COLD, etc)
instead of "locations" (zones like us-east-1) as in the original Zenko CloudServer.
- Put your MongoDB connection data into `config.json` and `locationConfig.json`.
- Put your Vitastor pool ID into `locationConfig.json`.
- For now, the complete list of Vitastor backend settings is only available [in the code](https://git.yourcmc.ru/vitalif/zenko-arsenal/src/branch/master/lib/storage/data/vitastor/VitastorBackend.ts#L94).
### Start Zenko
Start the S3 server with:
```
docker run --restart always --security-opt seccomp:unconfined --ulimit memlock=-1 --network=host \
-v /etc/vitastor:/etc/vitastor -v /etc/vitastor/s3:/conf --name zenko vitalif/vitastor-zenko
```
If you use default settings, Zenko CloudServer starts on port 8000.
The default access key is `accessKey1` with a secret key of `verySecretKey1`.
Now you can access your S3 with, for example, [s3cmd](https://s3tools.org/s3cmd):
```
s3cmd --access_key=accessKey1 --secret_key=verySecretKey1 --host=http://localhost:8000 mb s3://testbucket
```
Or even mount it with [GeeseFS](https://github.com/yandex-cloud/geesefs):
```
AWS_ACCESS_KEY_ID=accessKey1 \
AWS_SECRET_ACCESS_KEY=verySecretKey1 \
geesefs --endpoint http://localhost:8000 testbucket mountdir
```
## Author & License
- [Zenko CloudServer](https://s3-server.readthedocs.io/en/latest/) author is Scality,
licensed under [Apache License, version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- [Vitastor](https://git.yourcmc.ru/vitalif/vitastor/) and Zenko Vitastor backend author is
Vitaliy Filippov, licensed under [VNPL-1.1](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/VNPL-1.1.txt)
(a "network copyleft" license based on AGPL/SSPL, but worded in a better way)
- Vitastor S3 repository: https://git.yourcmc.ru/vitalif/zenko-cloudserver-vitastor
- Vitastor S3 backend code: https://git.yourcmc.ru/vitalif/zenko-arsenal/src/branch/master/lib/storage/data/vitastor/VitastorBackend.ts

docs/installation/s3.ru.md Normal file
View File

@@ -0,0 +1,171 @@
[Documentation](../../README-ru.md#документация) → Installation → S3 based on Vitastor
-----
[Read in English](s3.en.md)
# S3 based on Vitastor
So, it has finally happened - the Vitastor S3 implementation based on Zenko CloudServer
has reached a state ready for publication and use.
## Key Features
- Zenko CloudServer is implemented in node.js.
- Object metadata is stored in MongoDB.
- A modified version of Zenko CloudServer is shipped: it is decoupled from unneeded dependencies,
  has an optimised build and differs slightly from the original.
- Object data is stored in Vitastor block volumes, but the information about the volumes
  themselves is stored not in Vitastor etcd, but in the same MongoDB-based database.
- Objects are written to volumes sequentially one after another. The space is allocated with
  rounding to the sector size (4 kilobytes), so each object takes at least 4 KB.
- Thanks to this write scheme, small objects aren't chunked into parts and thus don't
  require reads from N data disks in Vitastor EC N+K pools.
- On deletion, objects are marked as deleted, but the space is not freed immediately; it is
  freed by an asynchronously started "defragmentation" process. Defragmentation starts
  automatically in the background when the configured amount of "garbage" in a volume is
  reached (20% by default), copies the live objects to new volumes, and then cleans the old
  volume up completely. Defragmentation can be configured in locationConfig.json.
## Development Plans
- Storing user accounts in the DB rather than in a static file (the original Zenko uses
  a separate closed-source "Scality Vault" service for this).
- More detailed documentation.
- Support for other (and faster) key-value DBMS for metadata storage.
- Other performance optimisations, for example, in the area of the hash function used
  (the MD5 hash, used for compatibility, is relatively slow).
- Object Lifecycle support. A Lifecycle implementation for Zenko exists and is called
  [Backbeat](https://github.com/scality/backbeat), but it is not yet adapted for Vitastor.
- Quotas. The original Zenko uses a separate "SCUBA" service for this, but it is also
  closed-source and not available for public use.
## Installation
In short:
- Install MongoDB, create a user for the S3 metadata DB.
- Create a pool in Vitastor for object data.
- Download and configure the `vitalif/vitastor-zenko` Docker container.
### Installing MongoDB
You can install MongoDB yourself, following the [official MongoDB manual](https://www.mongodb.com/docs/manual/installation/).
Or you can follow the instructions below - they describe the simplest example of a MongoDB
setup in Docker (docker-compose) with 3 replicas.
1. On all 3 servers, create a file `docker-compose.yml` with the content listed below,
replacing `<YOUR_PASSWORD>` with your future mongodb administrator password and, optionally,
replacing `0.0.0.0` with `localhost,<server_IP>` - it's advisable to either use an IP that is
not publicly accessible or to [set up TLS](https://www.mongodb.com/docs/manual/tutorial/configure-ssl/) afterwards.
```
version: '3.1'
services:
mongo:
container_name: mongo
image: mongo:7-jammy
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: <YOUR_PASSWORD>
network_mode: host
volumes:
- ./keyfile:/opt/keyfile
- ./mongo-data/db:/data/db
- ./mongo-data/configdb:/data/configdb
entrypoint: /bin/bash -c
command: [ "chown mongodb /opt/keyfile && chmod 600 /opt/keyfile && . /usr/local/bin/docker-entrypoint.sh mongod --replSet rs0 --keyFile /opt/keyfile --bind_ip 0.0.0.0" ]
```
2. In the same directory, generate a shared cluster key with `openssl rand -base64 756 > ./keyfile`
and copy that file to all 3 servers.
3. Start MongoDB on all 3 servers with `docker compose up -d mongo`.
4. Enter the Mongo Shell with `docker exec -it mongo mongosh -u root -p <YOUR_PASSWORD> localhost/admin`
and execute the following command there (replacing the IP addresses `10.10.10.{1,2,3}` with your server addresses):
`rs.initiate({ _id: 'rs0', members: [
{ _id: 1, host: '10.10.10.1:27017' },
{ _id: 2, host: '10.10.10.2:27017' },
{ _id: 3, host: '10.10.10.3:27017' }
] })`
5. While still in the Mongo Shell, create a user with access to the future S3 database:
`db.createUser({ user: 's3', pwd: '<YOUR_S3_PASSWORD>', roles: [
{ role: 'readWrite', db: 's3' },
{ role: 'dbAdmin', db: 's3' },
{ role: 'readWrite', db: 'vitastor' },
{ role: 'dbAdmin', db: 'vitastor' }
] })`
### Setting up Vitastor
Create a separate pool for S3 object data in Vitastor, for example:
`vitastor-cli create-pool --ec 2+1 -n 512 s3-data --used_for_app s3:standard`
The `--used_for_app` option works as fool-proofing: it prevents you from accidentally creating
a regular block volume in this pool and overwriting some S3 data with it, and it also hides
the space statistics of S3 volumes from etcd.
Get the ID of your pool with `vitastor-cli ls-pools --detail`.
### Installing Vitastor S3
1. Add the following lines to `docker-compose.yml` (alternatively, instead of `network_mode: host`,
you can use `ports: [ "8000:8000", "8002:8002" ]`):
```
zenko:
container_name: zenko
image: vitalif/vitastor-zenko
restart: always
security_opt:
- seccomp:unconfined
ulimits:
memlock: -1
network_mode: host
volumes:
- /etc/vitastor:/etc/vitastor
- /etc/vitastor/s3:/conf
```
2. Extract the example configuration files from the Vitastor Docker image:
`docker run --rm -it -v /etc/vitastor:/etc/vitastor -v /etc/vitastor/s3:/conf vitalif/vitastor-zenko configure.sh`
3. Edit the configuration files in `/etc/vitastor/s3/`:
- `config.json` - common settings.
- `authdata.json` - accounts and access keys.
- `locationConfig.json` - the list of S3 storage classes with placement settings.
  Note: in this version it really is a list of S3 storage classes (STANDARD, COLD and so on),
  not of zones (like us-east-1) as in the original Zenko CloudServer.
- Put your MongoDB connection data into `config.json` and `locationConfig.json`.
- Put the ID of the Vitastor data pool into `locationConfig.json`.
- For now, the full list of Vitastor backend settings can only be viewed [in the code](https://git.yourcmc.ru/vitalif/zenko-arsenal/src/branch/master/lib/storage/data/vitastor/VitastorBackend.ts#L94).
### Start
Start the S3 server with: `docker-compose up -d zenko`
Done! You now have an S3 server running on port 8000.
You can try talking to it with, for example, [s3cmd](https://s3tools.org/s3cmd):
`s3cmd --host-bucket= --no-ssl --access_key=accessKey1 --secret_key=verySecretKey1 --host=http://localhost:8000 mb s3://testbucket`
Or mount it with [GeeseFS](https://github.com/yandex-cloud/geesefs):
`AWS_ACCESS_KEY_ID=accessKey1 AWS_SECRET_ACCESS_KEY=verySecretKey1 geesefs --endpoint http://localhost:8000 testbucket /mnt/geesefs`
## License
- The author of [Zenko CloudServer](https://s3-server.readthedocs.io/en/latest/) is Scality, licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- The Vitastor S3 backend, like Vitastor itself, is licensed under [VNPL 1.1](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/VNPL-1.1.txt)
- Build repository: https://git.yourcmc.ru/vitalif/zenko-cloudserver-vitastor
- Data storage backend: https://git.yourcmc.ru/vitalif/zenko-arsenal/src/branch/master/lib/storage/data/vitastor/VitastorBackend.ts

View File

@@ -16,7 +16,7 @@
designated initializers support from C++20
- CMake
- liburing, jerasure headers and libraries
- ISA-L, libibverbs and librdmacm headers and libraries (optional)
- tcmalloc (google-perftools-dev)
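On Debian-based systems the dependencies above can be installed roughly like this (a sketch; package names vary between distributions and releases):
```
# Build dependencies for Vitastor (Debian/Ubuntu package names, illustrative)
apt install g++ cmake pkg-config liburing-dev libjerasure-dev libisal-dev \
    libibverbs-dev librdmacm-dev libgoogle-perftools-dev
```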
## Basic instructions
@@ -41,7 +41,7 @@ It's recommended to build the QEMU driver (qemu_driver.c) in-tree, as a part of
QEMU build process. To do that:
- Install vitastor client library headers (from source or from vitastor-client-dev package)
- Take a corresponding patch from `patches/qemu-*-vitastor.patch` and apply it to QEMU source
- Copy `src/client/qemu_driver.c` to QEMU source directory as `block/vitastor.c`
- Build QEMU as usual
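Put together, the in-tree build steps might look roughly like this (a sketch; the patch file name and QEMU version are illustrative):
```
cd qemu
# Apply the matching patch from the Vitastor source tree
patch -p1 < ../vitastor/patches/qemu-8.1-vitastor.patch
# Drop the Vitastor driver into the QEMU block layer
cp ../vitastor/src/client/qemu_driver.c block/vitastor.c
./configure && make -j$(nproc)
```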
But it is also possible to build it out-of-tree. To do that:

Some files were not shown because too many files have changed in this diff