Compare commits

..

265 Commits

Author SHA1 Message Date
69cbe7bbb2 Release 1.11.0
New features:

- Support containerized Vitastor installations: http://vitastor.io/docs/installation/docker.html
- Add new functions to the node.js binding: delete(), get_immediate_commit(), on_ready(),
  get_min_io_size(), get_max_atomic_write_size()
- S3 (Zenko Cloudserver with Vitastor support) is coming shortly and will be released separately

Bug fixes:

- Use IP-derived etcd node names in make-etcd
- Set short name of the OSD process to display in `top`
- Fix snap-create without pool_id failing when there are multiple pools
- Several bugs were fixed in the write-back cache; it should now be stable:
  - Fix incorrect snapshot reads from dirty write-back cache
  - Do not try to repeat pending writebacks on OSD reconnections
  - Fix client hangs with multiple SYNCs in the writeback queue
  - Fix client hangs due to incorrect calculation of the writeback queue size
- Several improvements for NBD mapping/unmapping:
  - Add a workaround for race condition in the Linux kernel NBD driver leading
    to vitastor-nbd sometimes breaking a previously mapped device instead of
    setting up a new one
  - Check if the device is actually mapped in vitastor-nbd unmap
  - Fix device name/number validation in vitastor-nbd
- Fix OSD crashes after starting with corrupted metadata - from now on, the OSD will skip
  corrupted metadata entries and heal itself
- Fix scrubbing of misplaced objects and object state recalculation after
  vitastor-cli fix - previously, an OSD restart could be required to fix object states
- Make primary OSD distribution more stable by using murmur3 hash instead of the old pseudo-rng
- Fix monitor sometimes racing with itself - do not touch /pool/stats from stats
  aggregation if PG recheck is active
- Sort vitastor-cli ls output by name by default
- Update antietcd to 1.1.2
2025-03-01 13:39:42 +03:00
4950a1636c Allow "infinite" startup for clients if explicitly requested 2025-03-01 13:39:42 +03:00
2eb20dff28 Do not crash on io_uring initialization failure in node-vitastor 2025-03-01 13:29:48 +03:00
59f0b0427c Support containerized Vitastor installations 2025-02-27 20:06:15 +03:00
124162ad38 Use IP-derived etcd node names in make-etcd 2025-02-26 11:54:37 +03:00
391c92af1a Set OSD process name 2025-02-26 11:54:37 +03:00
c3d8fdd855 Fix snap-create without pool_id ID generation with multiple pools 2025-02-26 11:54:28 +03:00
9ccf3af97b Add qemu-block-extra and qemu-utils 2025-02-23 15:08:16 +03:00
568a209f0d Update docker image to debian bookworm 2025-02-23 13:27:32 +03:00
b151013201 Fix snapshot reads from a dirty write-back cache 2025-02-23 02:31:19 +03:00
4a763725fe Add free() to bindiff.c 2025-02-22 16:52:19 +03:00
b8d83cd7f4 No, it's not a good idea to destroy client in the child nbd process
Should probably have been an obvious side effect :-)

The child process gets open file descriptors to the parent's epoll/timerfd,
and it's totally OK to just close() all of them, but it's absolutely NOT
OK to run destructors - they modify the kernel state of the epoll/timerfd
objects before destroying them. So, basically, when we destroy the client in the
child process, we break it in the parent too. This also means that cluster_client_t
doesn't support fork(). :-)
2025-02-22 15:10:27 +03:00
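A minimal standalone sketch of the failure mode described above (an illustration written for this log, not actual Vitastor code): after fork(), parent and child share the same kernel epoll instance, so an EPOLL_CTL_DEL issued from a destructor in the child silently breaks the parent's event loop, while a plain close() would be harmless.

```
// Illustration only (assumption: not Vitastor source) of why running
// epoll-modifying destructors in a forked child breaks the parent.
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main()
{
    int epfd = epoll_create1(0);
    int evfd = eventfd(0, 0);
    epoll_event ev = {};
    ev.events = EPOLLIN;
    ev.data.fd = evfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, evfd, &ev);
    if (fork() == 0)
    {
        // What a destructor effectively does. Both processes share the same
        // kernel epoll object, so this deregisters evfd in the parent too.
        // A bare close(epfd); close(evfd); would NOT have this effect.
        epoll_ctl(epfd, EPOLL_CTL_DEL, evfd, NULL);
        _exit(0);
    }
    wait(NULL);
    uint64_t one = 1;
    write(evfd, &one, sizeof(one));
    // Returns 0 (timeout) instead of 1: the parent's registration is gone.
    int n = epoll_wait(epfd, &ev, 1, 100);
    printf("epoll_wait returned %d (expected 1 without the child's DEL)\n", n);
    return 0;
}
```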
2e9ee2fe20 Do not try to repeat pending writebacks 2025-02-22 14:16:44 +03:00
508ae852e4 Fix trap in test_rebalance_verify 2025-02-22 02:18:41 +03:00
97ee400505 Add a workaround for race condition in the Linux kernel NBD driver
Do all NBD configuration in the child process, after the last fork.
Why? Because there is a race condition in the Linux kernel nbd driver
in nbd_add_socket() - it saves the `current` task pointer as `nbd->task_setup` and
then rechecks whether the new `current` is the same. The problem is that if that process
is already dead, `current` may be freed and then replaced by another process
with the same pointer value, so the check passes and NBD allows a different process
to set up a device which is already set up. A proper fix would have to be made in the
kernel code, but the workaround is obviously to perform NBD setup from the process
which will then actually call NBD_DO_IT. That process stays alive for the whole
lifetime of the NBD device, so the (nbd->task_setup != current) check always
works correctly, and we don't accidentally break previously mapped NBD devices while
setting up a new one. Forking to check every device is of course rather slow, so we also
do an additional check by calling list_mapped() before searching for a free NBD device.
2025-02-21 13:17:37 +03:00
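For reference, a rough sketch of the workaround pattern (the ioctl sequence follows the standard NBD protocol; this is an assumed simplified shape, not the actual vitastor-nbd source): every NBD ioctl, including NBD_SET_SOCK, is issued by the same long-lived child that then blocks in NBD_DO_IT, so the kernel's nbd->task_setup always refers to a live task.

```
// Sketch of "configure NBD only in the process that runs NBD_DO_IT".
// Assumptions: error handling and socketpair setup omitted for brevity.
#include <linux/nbd.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>

void map_nbd(const char *dev_path, int kernel_sock, unsigned long size_bytes)
{
    if (fork() == 0)
    {
        // Child: set up AND serve the device in one process, so the
        // task_setup check in the kernel's nbd_add_socket() sees a live
        // task for the whole lifetime of the device.
        int nbd = open(dev_path, O_RDWR);
        ioctl(nbd, NBD_SET_BLKSIZE, 4096UL);
        ioctl(nbd, NBD_SET_SIZE, size_bytes);
        ioctl(nbd, NBD_CLEAR_SOCK);
        ioctl(nbd, NBD_SET_SOCK, kernel_sock); // task_setup = this process
        ioctl(nbd, NBD_DO_IT);                 // blocks until unmap
        ioctl(nbd, NBD_CLEAR_QUE);
        ioctl(nbd, NBD_CLEAR_SOCK);
        close(nbd);
        _exit(0);
    }
    // Parent: never touches the NBD ioctls; it only returns to the caller.
}
```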
5ee4894fab Check if mapped in vitastor-nbd unmap 2025-02-21 01:28:06 +03:00
125dcafb11 Prevent OSD crashes when metadata is corrupted 2025-02-20 02:19:32 +03:00
9f44cf71df Fix device name/number validation in vitastor-nbd 2025-02-20 01:33:11 +03:00
df3c63ca7f Sort vitastor-cli ls by name by default 2025-02-20 01:32:49 +03:00
be66edd09f Prevent infinite loops on syncs in writeback_overflow 2025-02-19 01:44:12 +03:00
ccbc0c5928 Add assert !writeback_bytes 2025-02-19 01:15:46 +03:00
78ca4538bf Fix qemu docker build for ubuntu 2025-02-18 23:44:16 +03:00
86b5760ec1 Fix writeback incorrectly calculating queue size which was leading to client hangs 2025-02-18 23:42:55 +03:00
27f3803d2f Add vitastor_c_delete() and delete() to the node.js binding 2025-02-15 18:27:17 +03:00
2ead06e126 Add ubuntu jammy to docs 2025-02-12 15:32:35 +03:00
a5d5559f8e Add get_immediate_commit() to the node.js binding 2025-02-06 01:35:48 +03:00
e8e7ba8fde Add FIXME for CAS in non-immediate_commit mode 2025-02-06 01:35:48 +03:00
6fd831a299 Add on_ready(), get_min_io_size(), get_max_atomic_write_size() to the node.js binding 2025-02-06 01:35:48 +03:00
069808dfce Fix --config_path option in docs 2025-01-24 17:21:11 +03:00
bcefa42bc0 Scrub all chunks, not just 1 chunk per position 2025-01-23 02:02:55 +03:00
4636e02d43 Remove scheme, pg_size, pg_data_size from op_data 2025-01-23 01:20:31 +03:00
e4c7d1c147 s/3/4/ 2025-01-23 01:20:31 +03:00
a4677f3e69 Mention P5530 2025-01-23 01:20:31 +03:00
7cbf207d65 Use murmur3 to select primary OSD instead of old pseudo-rng 2025-01-18 12:28:54 +03:00
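As an aside, one standard way to get this kind of stability is rendezvous (highest-random-weight) hashing with a murmur3-style mixer, sketched below. This is an illustrative assumption about the general technique, not a claim about Vitastor's exact selection code.

```
// Illustrative rendezvous-hash primary selection (assumed technique, not
// Vitastor source): a hash-based choice only changes for a PG when that
// PG's own OSD set changes, unlike a sequential pseudo-rng.
#include <cstdint>
#include <vector>

// murmur3 32-bit finalizer: a cheap, well-mixing deterministic hash step
static inline uint32_t fmix32(uint32_t h)
{
    h ^= h >> 16; h *= 0x85ebca6bu;
    h ^= h >> 13; h *= 0xc2b2ae35u;
    h ^= h >> 16;
    return h;
}

// Assumes OSD numbers are non-zero; returns the OSD with the highest
// pseudo-random weight for this PG.
static uint64_t select_primary(uint32_t pg_num, const std::vector<uint64_t> & osd_set)
{
    uint64_t best_osd = 0;
    uint32_t best_hash = 0;
    for (uint64_t osd: osd_set)
    {
        // Each (pg, osd) pair gets an independent deterministic weight
        uint32_t h = fmix32(fmix32(pg_num) ^ fmix32((uint32_t)osd));
        if (!best_osd || h > best_hash)
        {
            best_hash = h;
            best_osd = osd;
        }
    }
    return best_osd;
}
```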
7c9711af20 Do not touch /pool/stats from stats aggregation if PG recheck is active 2025-01-16 20:41:16 +03:00
33ef701464 Update antietcd to 1.1.2 2025-01-04 02:13:36 +03:00
61ededa230 Release 1.10.1
New features:

- Add "deleted" image flag which is set when vitastor-cli rm starts to delete an image,
  but can't delete it fully due to inactive PGs or stopped OSDs
- Support JSON output in vitastor-disk prepare and purge
- Show backfillfull pools in vitastor-cli status
- Make object listings consistent (used in vitastor-cli rm/rm-data/merge/etc).
  This means that there is now a guarantee that if a data block is present when you invoke rm,
  rm will attempt to delete it, even if rm is invoked while the PG is switching state.
  Previously, in such cases, rm could skip some objects and leave them behind as garbage,
  and merge could probably move data between snapshots incorrectly.
- Make deletions (rm/rm-data) consistent. This means that rm/rm-data will either complete
  successfully and delete all requested image data, or complete with an error if some objects
  could not be deleted or if there is a possibility that some data is left on stopped OSDs.
  Previously, when some PGs or OSDs were inactive at the moment of deletion, rm-data behaved
  incorrectly: it wasn't retrying deletions that failed due to dropped OSD connections,
  it could hang waiting for PGs to activate, and it could return a successful exit code
  while some garbage was still possibly left on some OSDs. Deletions are not fully atomic
  cluster-wide yet, which means that you still have to repeat the deletion request after you
  bring stopped OSDs back, but now you always know for sure whether you have to repeat it.

Bug fixes:

- Fix vitastor-cli rm --exact / --matching command not working
- Finally fix "Unexpected status" in the Proxmox plugin
- Fix vitastor-cli create-snap incorrectly linking multiple snapshots in a different pool
- Fix incomplete image parent_id loop check in OSD
- Fix reads from snapshots in a different pool not working if there are more than 2 snapshots
- Fix appending VITASTOR_CONF to cmdline in the OpenNebula prebackup script
- Fix OSDs crashing again when the cluster is full with EC (was meant to work since 1.6.0 but didn't)
- Improve logging of subop failures
2025-01-03 16:22:09 +03:00
d9d90d3183 Fix build for debian buster 2025-01-03 16:21:56 +03:00
9dbcdbcec9 Return left_on_dead OSD list in DELETE replies and use it in rm-data 2025-01-03 15:57:09 +03:00
a147f7e7dc Copy & repeat deletions too 2025-01-03 00:21:52 +03:00
0e6bf66734 Add bindiff for tests 2025-01-02 19:59:04 +03:00
ab822d3050 Support consistent listings in client (rm-data, merge and etc) 2025-01-02 18:07:12 +03:00
d5366a0767 Support listings from primary OSDs (for consistent deletions) 2025-01-02 11:07:24 +03:00
40b8a8b0da Add wait_up_timeout support to cluster_client and use it in vitastor-cli rm-data & merge 2025-01-01 17:57:58 +03:00
5c5119aba4 Pass min_offset/max_offset to list_inode() 2025-01-01 15:40:12 +03:00
4edda88903 Wait for OSDs to either connect or stop infinitely during listing, not for peer_connect_timeout 2025-01-01 15:29:42 +03:00
80dda3ca94 Remove separate list_inode_next() 2025-01-01 14:19:18 +03:00
c8decb32e8 Rename to client_wait_up_timeout 2025-01-01 11:26:57 +03:00
4995592e61 Retry listings on broken OSD connections
Some checks reported warnings
Test / test_rebalance_verify_ec (push) Successful in 1m52s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m54s
Test / test_write_no_same (push) Successful in 10s
Test / test_switch_primary (push) Successful in 35s
Test / test_write (push) Successful in 36s
Test / test_write_xor (push) Successful in 38s
Test / test_heal_antietcd (push) Has started running
Test / test_heal_csum_32k_dj (push) Has been cancelled
Test / test_heal_csum_32k (push) Has been cancelled
Test / test_heal_csum_4k_dmj (push) Has been cancelled
Test / test_heal_csum_4k_dj (push) Has been cancelled
Test / test_heal_csum_4k (push) Has been cancelled
Test / test_resize (push) Has been cancelled
Test / test_resize_auto (push) Has been cancelled
Test / test_snapshot_pool2 (push) Has been cancelled
Test / test_osd_tags (push) Has been cancelled
Test / test_enospc (push) Has been cancelled
Test / test_enospc_xor (push) Has been cancelled
Test / test_enospc_imm (push) Has been cancelled
Test / test_enospc_imm_xor (push) Has been cancelled
Test / test_scrub (push) Has been cancelled
Test / test_scrub_zero_osd_2 (push) Has been cancelled
Test / test_scrub_xor (push) Has been cancelled
Test / test_scrub_pg_size_3 (push) Has been cancelled
Test / test_heal_pg_size_2 (push) Has been cancelled
Test / test_heal_csum_32k_dmj (push) Has been cancelled
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Has been cancelled
Test / test_scrub_ec (push) Has been cancelled
Test / test_nfs (push) Has been cancelled
Test / test_heal_ec (push) Has been cancelled
2025-01-01 11:14:36 +03:00
d9f9b0bca5 Start listings consistently with the current PG state, add wait_up_timeout
Some checks reported warnings
Test / test_dd (push) Successful in 15s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m49s
Test / test_write_no_same (push) Successful in 10s
Test / test_switch_primary (push) Successful in 35s
Test / test_write (push) Successful in 34s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m19s
Test / test_heal_csum_32k_dmj (push) Successful in 2m22s
Test / test_heal_csum_32k_dj (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k (push) Successful in 2m24s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 15s
Test / test_osd_tags (push) Successful in 12s
Test / test_snapshot_pool2 (push) Successful in 18s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm_xor (push) Successful in 17s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 16s
Test / test_nfs (push) Has been cancelled
Test / test_heal_csum_4k (push) Has been cancelled
This still doesn't make listings 100% consistent; for that, listings would have
to be received only from the primary OSD, not from all peer OSDs. That issue
will be fixed separately.
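A minimal sketch of the intended shape, with hypothetical names (capture_listing_state(), pg_state_t and get_pg_state() are illustrative, not the actual client API): wait up to wait_up_timeout for the PG to become active, then snapshot the PG state once and run the whole listing against that snapshot.

```cpp
// Illustrative sketch only; all names are hypothetical, not Vitastor's real API.
#include <chrono>
#include <set>
#include <stdexcept>

struct pg_state_t { bool active; std::set<int> target_osds; };

// Wait until the PG is active (up to wait_up_timeout seconds), then capture
// its state once and run the whole listing against that captured OSD set,
// instead of re-reading the (possibly changed) PG state per sub-request.
pg_state_t capture_listing_state(pg_state_t (*get_pg_state)(), int wait_up_timeout)
{
    auto deadline = std::chrono::steady_clock::now()
        + std::chrono::seconds(wait_up_timeout);
    pg_state_t pg = get_pg_state();
    while (!pg.active)
    {
        if (std::chrono::steady_clock::now() >= deadline)
            throw std::runtime_error("PG did not become up within wait_up_timeout");
        // ...sleep / wait for a PG state change notification (omitted)...
        pg = get_pg_state();
    }
    return pg; // listing sub-requests are sent to pg.target_osds from this snapshot
}
```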
2025-01-01 10:58:22 +03:00
d0396267d0 Clear retry_timeout when the client is destroyed 2025-01-01 10:58:22 +03:00
b46d5db115 Support JSON output in vitastor-disk prepare and purge
Some checks failed
Test / test_rebalance_verify_ec (push) Successful in 1m50s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m50s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 34s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m19s
Test / test_heal_csum_32k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k_dj (push) Successful in 2m20s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m20s
Test / test_resize (push) Successful in 16s
Test / test_resize_auto (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_osd_tags (push) Successful in 9s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 2m20s
Test / test_scrub_xor (push) Successful in 16s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 17s
Test / test_nfs (push) Successful in 14s
Test / test_heal_ec (push) Failing after 10m10s
2024-12-29 15:19:44 +03:00
ecd92655fe Fix rm --exact / --matching not removing one uppermost image in each chain
All checks were successful
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 33s
Test / test_write_no_same (push) Successful in 11s
Test / test_write_xor (push) Successful in 35s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m42s
Test / test_heal_pg_size_2 (push) Successful in 2m18s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dmj (push) Successful in 2m18s
Test / test_heal_csum_32k_dj (push) Successful in 2m26s
Test / test_heal_csum_32k (push) Successful in 2m19s
Test / test_resize (push) Successful in 14s
Test / test_resize_auto (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_osd_tags (push) Successful in 10s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 12s
Test / test_heal_csum_4k_dj (push) Successful in 2m13s
Test / test_enospc_imm_xor (push) Successful in 17s
Test / test_heal_csum_4k_dmj (push) Successful in 2m19s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 2m12s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_nfs (push) Successful in 13s
Test / test_scrub_ec (push) Successful in 18s
Test / test_etcd_fail (push) Successful in 43s
2024-12-28 21:53:49 +03:00
383712148b Fix rm --exact / --matching not being invoked at all O_o
All checks were successful
Test / test_root_node (push) Successful in 12s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m51s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 33s
Test / test_write (push) Successful in 34s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m17s
Test / test_heal_antietcd (push) Successful in 2m19s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m23s
Test / test_heal_csum_32k (push) Successful in 2m12s
Test / test_heal_csum_4k_dmj (push) Successful in 2m15s
Test / test_resize (push) Successful in 15s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize_auto (push) Successful in 9s
Test / test_osd_tags (push) Successful in 9s
Test / test_snapshot_pool2 (push) Successful in 17s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 16s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 16s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m24s
2024-12-28 21:47:00 +03:00
42d40153ff Do not intercept STDERR in Proxmox plugin (finally fixes "unexpected status"!) 2024-12-28 21:18:49 +03:00
561b36a4c1 Use revision from txn response header, not from put subresponse
Some checks failed
Test / test_rebalance_verify_ec (push) Successful in 1m41s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m41s
Test / test_write_no_same (push) Successful in 10s
Test / test_switch_primary (push) Successful in 36s
Test / test_write (push) Successful in 32s
Test / test_write_xor (push) Successful in 38s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_ec (push) Failing after 2m22s
Test / test_heal_csum_32k_dmj (push) Successful in 2m18s
Test / test_heal_csum_32k_dj (push) Successful in 2m21s
Test / test_heal_csum_32k (push) Successful in 2m19s
Test / test_heal_csum_4k_dmj (push) Successful in 2m19s
Test / test_heal_csum_4k_dj (push) Successful in 2m18s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 9s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 2m10s
2024-12-28 21:01:15 +03:00
685af019f5 Allow :: and 0.0.0.0 as local IPs in antietcd_adapter 2024-12-28 20:52:27 +03:00
a31592d131 Print sizes in "Auto-selecting" as "4K", not "4 K"
All checks were successful
Test / test_root_node (push) Successful in 11s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m53s
Test / test_write_no_same (push) Successful in 10s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 34s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m19s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m19s
Test / test_heal_csum_32k_dmj (push) Successful in 2m18s
Test / test_heal_csum_32k_dj (push) Successful in 2m21s
Test / test_heal_csum_32k (push) Successful in 2m19s
Test / test_heal_csum_4k_dmj (push) Successful in 2m19s
Test / test_heal_csum_4k_dj (push) Successful in 2m18s
Test / test_resize (push) Successful in 14s
Test / test_resize_auto (push) Successful in 9s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_osd_tags (push) Successful in 11s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m18s
2024-12-28 19:15:23 +03:00
28b0a2597d Add a test for multiple snapshots in a second pool
All checks were successful
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m42s
Test / test_write_no_same (push) Successful in 10s
Test / test_switch_primary (push) Successful in 35s
Test / test_write (push) Successful in 33s
Test / test_write_xor (push) Successful in 38s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k_dj (push) Successful in 2m21s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m19s
Test / test_heal_csum_4k_dj (push) Successful in 2m19s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 15s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_scrub_xor (push) Successful in 17s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m15s
2024-12-28 18:57:30 +03:00
de6b345473 Fix create-snap taking parent_pool from incorrect key parent_pool_id 2024-12-28 18:53:29 +03:00
8bf52d6e96 Fix inode parent_id loop check 2024-12-28 18:40:17 +03:00
5623dca02c Fix vitastor client passing incorrect mod_revision for snapshotted images
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m37s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m41s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 32s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_ec (push) Successful in 2m20s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m29s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dmj (push) Successful in 2m20s
Test / test_heal_csum_4k_dj (push) Successful in 2m23s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 11s
Test / test_enospc (push) Successful in 12s
Test / test_snapshot_pool2 (push) Successful in 17s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 15s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 18s
Test / test_scrub_xor (push) Successful in 16s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 17s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 2m19s
This led to reads working only for the image itself and for its latest snapshot
2024-12-28 16:01:35 +03:00
abdc207297 Fix append of VITASTOR_CONF to cmdline in the opennebula prebackup script 2024-12-28 13:33:24 +03:00
044e621b62 Add test_rm_degraded to CI
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m40s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m42s
Test / test_write_no_same (push) Successful in 10s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 32s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m19s
Test / test_heal_ec (push) Successful in 2m18s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m20s
Test / test_heal_csum_32k (push) Successful in 2m19s
Test / test_heal_csum_4k_dmj (push) Successful in 2m19s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 9s
Test / test_enospc (push) Successful in 12s
Test / test_snapshot_pool2 (push) Successful in 18s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m11s
2024-12-27 18:31:58 +03:00
ba9aabf187 Return listing errors from list_inode_start(), abort merging and fail deletion on unsuccessful listings 2024-12-27 18:31:21 +03:00
5c890e4a12 Fix rm-data hanging when some OSDs are inactive, add a test for it
All checks were successful
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m39s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 33s
Test / test_write (push) Successful in 32s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_ec (push) Successful in 2m18s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m18s
Test / test_heal_csum_32k_dj (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m13s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dj (push) Successful in 2m19s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 10s
Test / test_enospc (push) Successful in 13s
Test / test_snapshot_pool2 (push) Successful in 18s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub (push) Successful in 17s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m14s
There's another case which also needs to be fixed: we shouldn't retry deletions
indefinitely if an OSD is stopped during deletion.
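A hedged sketch of the bounded-retry shape this implies (all names are hypothetical, not the actual vitastor-cli code): transient failures are retried only until a deadline, after which the deletion fails instead of hanging.

```cpp
// Illustrative sketch, not the actual vitastor-cli deletion code.
#include <chrono>

enum { DELETE_OK = 0, DELETE_RETRY = 1, DELETE_FATAL = 2 };

// Retry transient deletion failures (e.g. an OSD stopping mid-delete) only
// until the deadline expires, then give up with an error instead of hanging.
int delete_with_deadline(int (*try_delete)(), int timeout_sec)
{
    auto deadline = std::chrono::steady_clock::now()
        + std::chrono::seconds(timeout_sec);
    while (true)
    {
        int r = try_delete();
        if (r != DELETE_RETRY)
            return r; // DELETE_OK or DELETE_FATAL
        if (std::chrono::steady_clock::now() >= deadline)
            return DELETE_FATAL; // do not retry indefinitely
        // ...wait for reconnection / PG recovery (omitted)...
    }
}
```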
2024-12-27 16:29:33 +03:00
0b0c2afbce Implement "deleted" flag
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m33s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m34s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 33s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m18s
Test / test_heal_ec (push) Successful in 2m17s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k_dj (push) Successful in 2m21s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dmj (push) Successful in 2m21s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 15s
Test / test_osd_tags (push) Successful in 11s
Test / test_enospc (push) Successful in 12s
Test / test_snapshot_pool2 (push) Successful in 17s
Test / test_enospc_xor (push) Successful in 16s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m21s
2024-12-27 01:18:55 +03:00
651c055bd9 Show backfillfull pools in vitastor-cli status
Some checks reported warnings
Test / test_rm (push) Has been cancelled
Test / test_snapshot_chain (push) Has been cancelled
Test / test_snapshot_chain_ec (push) Has been cancelled
Test / test_snapshot_down (push) Has been cancelled
Test / test_snapshot_down_ec (push) Has been cancelled
Test / test_splitbrain (push) Has been cancelled
Test / test_rebalance_verify (push) Has been cancelled
Test / test_rebalance_verify_imm (push) Has been cancelled
Test / buildenv (push) Has been cancelled
Test / test_rebalance_verify_ec (push) Has been cancelled
Test / test_rebalance_verify_ec_imm (push) Has been cancelled
Test / test_dd (push) Has been cancelled
Test / test_root_node (push) Has been cancelled
Test / test_switch_primary (push) Has been cancelled
Test / test_write (push) Has been cancelled
Test / test_write_xor (push) Has been cancelled
Test / test_write_no_same (push) Has been cancelled
Test / test_heal_pg_size_2 (push) Has been cancelled
Test / test_heal_ec (push) Has been cancelled
Test / test_heal_antietcd (push) Has been cancelled
Test / test_heal_csum_32k_dmj (push) Has been cancelled
Test / test_heal_csum_32k_dj (push) Has been cancelled
Test / test_heal_csum_32k (push) Has been cancelled
Test / test_heal_csum_4k_dmj (push) Has been cancelled
Test / test_heal_csum_4k_dj (push) Has been cancelled
Test / test_heal_csum_4k (push) Has been cancelled
Test / test_resize (push) Has been cancelled
Test / test_resize_auto (push) Has been cancelled
Test / test_snapshot_pool2 (push) Has been cancelled
Test / test_osd_tags (push) Has been cancelled
2024-12-26 12:17:47 +03:00
42eebfc1bd Fix OSDs still crashing when the cluster is full with EC
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m37s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m39s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 33s
Test / test_write (push) Successful in 34s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_csum_32k_dmj (push) Successful in 2m13s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dj (push) Successful in 2m23s
Test / test_heal_csum_4k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k (push) Successful in 2m22s
Test / test_heal_csum_4k_dj (push) Successful in 2m22s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 14s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_osd_tags (push) Successful in 9s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm_xor (push) Successful in 15s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m26s
ENOSPC handling was introduced in 1.6.0, but it was not complete; now it is.

P.S.: See also client_retry_enospc (true by default).
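Roughly, what the option means on the client side (a sketch with hypothetical names, not the actual client code): with client_retry_enospc enabled, -ENOSPC replies are treated as retryable instead of being returned to the caller immediately.

```cpp
// Illustrative sketch only; the real client retry logic is more involved.
#include <cerrno>

struct client_config_t { bool client_retry_enospc = true; }; // default: true

// With client_retry_enospc enabled, an -ENOSPC reply is treated as retryable
// (space may be freed or rebalanced later); with it disabled, the error is
// returned to the caller immediately.
bool should_retry_write(const client_config_t & cfg, int retval)
{
    if (retval == -ENOSPC)
        return cfg.client_retry_enospc;
    return false; // other errors are handled by their own paths
}
```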
2024-12-26 01:56:33 +03:00
cef98052f5 Improve logging of subop failures 2024-12-26 01:54:40 +03:00
7fbb04fdfa Release 1.10.0
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m54s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m58s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 35s
Test / test_write (push) Successful in 34s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k_dj (push) Successful in 2m20s
Test / test_heal_csum_32k (push) Successful in 2m19s
Test / test_heal_csum_4k_dj (push) Successful in 2m16s
Test / test_heal_csum_4k_dmj (push) Successful in 2m22s
Test / test_resize_auto (push) Successful in 11s
Test / test_resize (push) Successful in 15s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_osd_tags (push) Successful in 8s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 13s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m19s
New features:

- Implement basic VitastorFS support in [CSI](https://vitastor.io/docs/installation/kubernetes.html)
- Implement [NFS RDMA](https://vitastor.io/docs/usage/nfs.html#rdma) support
- Pause pool rebalance when monitor detects that it can lead to any OSD becoming full ([osd_backfillfull_ratio](https://vitastor.io/docs/config/monitor.html#osd_backfillfull_ratio)); see the sketch after this list
- Auto-select correct [RDMA device and GID](https://vitastor.io/docs/config/network.html#rdma_device) based on osd_network and RoCEv2 priority
- Report slow ops in OSD stats in etcd and show them in vitastor-cli status

Bug fixes:

- Fix possibly incorrect linked list deserialization in NFS
- Fix possible crash in vitastor-nfs --block READDIR operation
- Map netlink after forking to show correct PID in vitastor-nbd ls
- Simplify and fix create-pool OSD count checks for the case of hosts split into sub-nodes
- Make monitor print "Waiting to become master" just once, not every 5s
- Take out_size from oimg if not specified in vitastor-cli dd
- Do not report OSDs with empty statistics as "full" in status
- Trigger double autosync when switching PG state to prevent leaving garbage in non-immediate_commit clusters
- Fix a lack of connection timeout for etcd websockets in OSD leading to slower etcd failover (~70s instead of ~10s)
- Fix a rare OSD crash during client disconnect
- Fix PGs sometimes sticking until OSD restart in the "has_unclean" state with EC pools
- Fix metadata partition zeroing in vitastor-disk prepare
- Add patches for qemu 9.1 and pve-qemu 9.0 and 9.1
- Fix libvirt 8 patch
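
For illustration, the backfillfull check referenced in the feature list can be sketched like this (hypothetical names; the real monitor is written in JavaScript and differs in detail):

```cpp
// Illustrative C++ sketch of the backfillfull condition; 0.99 is just an
// example value here, check the osd_backfillfull_ratio docs for the default.
struct osd_space_t { double used_bytes, total_bytes, incoming_bytes; };

// True if the data already planned to move onto this OSD would push it past
// osd_backfillfull_ratio; the monitor then pauses rebalance instead of
// letting the OSD fill up.
bool is_backfillfull(const osd_space_t & osd, double osd_backfillfull_ratio = 0.99)
{
    double projected = (osd.used_bytes + osd.incoming_bytes) / osd.total_bytes;
    return projected >= osd_backfillfull_ratio;
}
```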
2024-12-19 15:49:19 +03:00
63b85b6bfb Fix clang warnings/errors
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m52s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m54s
Test / test_write_no_same (push) Successful in 11s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 34s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_ec (push) Successful in 2m18s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dmj (push) Successful in 2m21s
Test / test_heal_csum_32k_dj (push) Successful in 2m21s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dmj (push) Successful in 2m20s
Test / test_heal_csum_4k_dj (push) Successful in 2m18s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 11s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 16s
Test / test_enospc_imm_xor (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub (push) Successful in 17s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 14s
Test / test_heal_csum_4k (push) Successful in 2m19s
2024-12-19 15:30:31 +03:00
2f5959e3fa Add pve-qemu 9.1 patch
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m48s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m47s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 37s
Test / test_write (push) Successful in 42s
Test / test_write_xor (push) Successful in 42s
Test / test_heal_pg_size_2 (push) Successful in 2m26s
Test / test_heal_ec (push) Successful in 2m22s
Test / test_heal_antietcd (push) Successful in 2m23s
Test / test_heal_csum_32k_dmj (push) Successful in 2m26s
Test / test_heal_csum_32k (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m33s
Test / test_resize (push) Successful in 17s
Test / test_heal_csum_4k_dmj (push) Successful in 2m35s
Test / test_heal_csum_4k_dj (push) Successful in 2m29s
Test / test_resize_auto (push) Successful in 10s
Test / test_osd_tags (push) Successful in 11s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 18s
Test / test_enospc_xor (push) Successful in 23s
Test / test_enospc_imm_xor (push) Successful in 23s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub (push) Successful in 19s
Test / test_scrub_xor (push) Successful in 18s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 19s
Test / test_scrub_ec (push) Successful in 19s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 2m23s
2024-12-19 14:05:12 +03:00
a4a286ed95 Document NFS-RDMA 2024-12-19 14:05:12 +03:00
b8009bad5e Add librdmacm-dev to build dockerfile 2024-12-19 14:05:12 +03:00
9be3d27dc9 Document VitastorFS-based CSI 2024-12-19 13:06:47 +03:00
a19d2066c2 Document osd_backfillfull_ratio 2024-12-19 02:15:02 +03:00
2a8780b4b5 Add a note about slow ops 2024-12-19 02:02:37 +03:00
109f51a015 Implement basic VitastorFS support in CSI
All checks were successful
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m43s
Test / test_write_no_same (push) Successful in 11s
Test / test_write (push) Successful in 32s
Test / test_switch_primary (push) Successful in 36s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m17s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m22s
Test / test_heal_csum_32k (push) Successful in 2m17s
Test / test_heal_csum_4k_dmj (push) Successful in 2m21s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_resize (push) Successful in 15s
Test / test_resize_auto (push) Successful in 9s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 17s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 17s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m8s
2024-12-17 02:26:23 +03:00
8a86c123c3 Allow auto-selecting and printing the port
All checks were successful
Test / test_root_node (push) Successful in 11s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m51s
Test / test_write_no_same (push) Successful in 9s
Test / test_write (push) Successful in 32s
Test / test_switch_primary (push) Successful in 36s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m18s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m21s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m21s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 17s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_osd_tags (push) Successful in 9s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_scrub_xor (push) Successful in 16s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m10s
2024-12-14 16:55:13 +03:00
b856524e0c Workaround for Linux bug: return post_op_attr for NFS-RDMA READ3
The Linux NFS RDMA transport has a stupid bug: when the reply doesn't contain
post_op_attr, the data gets offset by 84 bytes (the XDR-encoded size of the
fattr3 attributes) and the first 84 bytes are filled with probably random data.
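A sketch of the workaround using the RFC 1813 structure names (the helper and reply plumbing here are hypothetical, not the actual Vitastor NFS proxy code): always mark attributes as present in the READ3 reply and fill them, so the client-side 84-byte misparse cannot happen.

```cpp
// Illustrative sketch using RFC 1813 structure names; the helper and reply
// plumbing are hypothetical, not the actual Vitastor NFS proxy code.
#include <cstdint>
#include <cstring>

struct fattr3 { uint8_t raw[84]; };  // 84 = XDR-encoded size of fattr3
struct post_op_attr { bool attributes_follow; fattr3 attributes; };
struct READ3resok
{
    post_op_attr file_attributes;
    uint32_t count;
    bool eof;
    uint32_t data_len;
    const uint8_t *data;
};

// Hypothetical stub: a real server fills type/mode/size/times of the inode.
static void fill_fattr3(fattr3 *attrs) { std::memset(attrs->raw, 0, sizeof(attrs->raw)); }

// Workaround: always attach post_op_attr to the READ3 reply; when it is
// omitted, the buggy Linux NFS-RDMA client shifts the payload by 84 bytes.
void fill_read3_reply(READ3resok *res, const uint8_t *buf, uint32_t len, bool eof)
{
    res->file_attributes.attributes_follow = true; // never omit attributes
    fill_fattr3(&res->file_attributes.attributes);
    res->count = len;
    res->eof = eof;
    res->data_len = len;
    res->data = buf;
}
```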
2024-12-11 21:09:36 +03:00
ae3ca7451f Use per-connection RDMA device contexts 2024-12-11 21:09:36 +03:00
1dbbb0c3f8 Implement NFS RDMA support 2024-12-11 21:09:36 +03:00
64db31ec10 Fix slow op warning format
All checks were successful
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m48s
Test / test_write_no_same (push) Successful in 8s
Test / test_write (push) Successful in 32s
Test / test_switch_primary (push) Successful in 34s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_ec (push) Successful in 2m22s
Test / test_heal_csum_32k_dmj (push) Successful in 2m21s
Test / test_heal_csum_32k_dj (push) Successful in 2m22s
Test / test_heal_csum_32k (push) Successful in 2m13s
Test / test_heal_csum_4k_dmj (push) Successful in 2m13s
Test / test_resize_auto (push) Successful in 9s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize (push) Successful in 16s
Test / test_osd_tags (push) Successful in 9s
Test / test_enospc (push) Successful in 13s
Test / test_snapshot_pool2 (push) Successful in 19s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 16s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 17s
Test / test_nfs (push) Successful in 14s
Test / test_heal_csum_4k (push) Successful in 2m17s
2024-12-11 21:09:36 +03:00
76470686b3 Fix possibly incorrect linked list deserialization in NFS
All checks were successful
Test / test_root_node (push) Successful in 11s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m40s
Test / test_write_no_same (push) Successful in 10s
Test / test_write (push) Successful in 32s
Test / test_switch_primary (push) Successful in 35s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k_dj (push) Successful in 2m22s
Test / test_heal_csum_4k_dmj (push) Successful in 2m16s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize_auto (push) Successful in 9s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_xor (push) Successful in 13s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 16s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m17s
2024-12-08 02:54:13 +03:00
652ca631bb Fix possible crash in nfs_block readdir
All checks were successful
Test / test_write_no_same (push) Successful in 10s
Test / test_rebalance_verify_imm (push) Successful in 1m33s
Test / test_rebalance_verify_ec (push) Successful in 1m38s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dmj (push) Successful in 2m17s
Test / test_heal_csum_32k_dj (push) Successful in 2m17s
Test / test_heal_csum_32k (push) Successful in 2m17s
Test / test_heal_csum_4k_dmj (push) Successful in 2m18s
Test / test_resize (push) Successful in 13s
Test / test_heal_csum_4k_dj (push) Successful in 2m19s
Test / test_resize_auto (push) Successful in 9s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 13s
Test / test_enospc_imm (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 15s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m19s
Test / test_rebalance_verify (push) Successful in 1m57s
Test / test_rebalance_verify_ec_imm (push) Successful in 2m9s
2024-12-01 18:04:49 +03:00
2105f4b654 Add lost netlink daemonize
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m42s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m44s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 31s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m25s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m19s
Test / test_heal_csum_4k_dj (push) Successful in 2m18s
Test / test_resize_auto (push) Successful in 10s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 9s
Test / test_snapshot_pool2 (push) Successful in 14s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub_zero_osd_2 (push) Successful in 13s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 15s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 18s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m17s
2024-11-27 17:13:30 +03:00
0d01573da3 Fix typos 2024-11-26 14:31:47 +03:00
d84b84f58d Fix new backfillfull feature, add more logs
All checks were successful
Test / test_rebalance_verify_ec_imm (push) Successful in 1m41s
Test / test_write_no_same (push) Successful in 9s
Test / test_write (push) Successful in 31s
Test / test_switch_primary (push) Successful in 34s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m18s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m31s
Test / test_heal_csum_32k_dj (push) Successful in 2m17s
Test / test_heal_csum_32k (push) Successful in 2m21s
Test / test_heal_csum_4k_dmj (push) Successful in 2m21s
Test / test_resize_auto (push) Successful in 9s
Test / test_resize (push) Successful in 14s
Test / test_heal_csum_4k_dj (push) Successful in 2m18s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_imm (push) Successful in 9s
Test / test_enospc_xor (push) Successful in 15s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 16s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m17s
Test / test_etcd_fail_antietcd (push) Successful in 42s
2024-11-23 01:08:13 +03:00
8cfe705d7a Map netlink after forking to show correct PID in vitastor-nbd ls
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m35s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m36s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 33s
Test / test_write (push) Successful in 31s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m18s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m25s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m19s
Test / test_heal_csum_4k_dj (push) Successful in 2m18s
Test / test_resize_auto (push) Successful in 8s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 10s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 11s
Test / test_heal_csum_4k (push) Successful in 2m19s
2024-11-23 00:46:44 +03:00
66c9271cbd Radically simplify create-pool pg_size check
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m38s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m40s
Test / test_write_no_same (push) Successful in 10s
Test / test_write (push) Successful in 32s
Test / test_switch_primary (push) Successful in 34s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m20s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dmj (push) Successful in 2m20s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize_auto (push) Successful in 9s
Test / test_resize (push) Successful in 15s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 13s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 16s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m18s
2024-11-22 01:44:14 +03:00
7b37ba921d Pause pool rebalance when monitor detects that it can lead to any OSD becoming full
Some checks failed
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m39s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 32s
Test / test_write (push) Successful in 32s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_ec (push) Successful in 2m18s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k_dj (push) Successful in 2m27s
Test / test_heal_csum_32k (push) Successful in 2m17s
Test / test_heal_csum_4k_dmj (push) Successful in 2m19s
Test / test_heal_csum_4k_dj (push) Successful in 2m19s
Test / test_resize_auto (push) Successful in 9s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Failing after 11s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m19s
2024-11-22 01:01:07 +03:00
262c581400 Fix create-pool for the case of hosts split into sub-nodes
Some checks failed
Test / test_rebalance_verify_ec (push) Successful in 1m38s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m40s
Test / test_switch_primary (push) Successful in 23s
Test / test_write_no_same (push) Successful in 9s
Test / test_write (push) Successful in 31s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m18s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m19s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m19s
Test / test_heal_csum_4k_dj (push) Successful in 2m17s
Test / test_resize_auto (push) Successful in 9s
Test / test_resize (push) Successful in 16s
Test / test_snapshot_pool2 (push) Failing after 11s
Test / test_osd_tags (push) Successful in 7s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 13s
Test / test_scrub_zero_osd_2 (push) Successful in 13s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 11s
Test / test_heal_csum_4k (push) Successful in 2m14s
2024-11-22 01:01:07 +03:00
ad3b6b7267 Add a note about GID and RDMA device auto-selection 2024-11-21 23:54:05 +03:00
1f6a061283 Move ibv_query_gid under #ifdef to only build it with libibverbs 32+
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m39s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m39s
Test / test_write_no_same (push) Successful in 13s
Test / test_switch_primary (push) Successful in 33s
Test / test_write (push) Successful in 35s
Test / test_write_xor (push) Successful in 37s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m19s
Test / test_heal_csum_32k_dmj (push) Successful in 2m18s
Test / test_heal_csum_32k_dj (push) Successful in 2m21s
Test / test_heal_csum_32k (push) Successful in 2m21s
Test / test_heal_csum_4k_dmj (push) Successful in 2m18s
Test / test_heal_csum_4k_dj (push) Successful in 2m19s
Test / test_resize (push) Successful in 13s
Test / test_resize_auto (push) Successful in 9s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Successful in 14s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 10s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 12s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 15s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m26s
2024-11-21 23:47:57 +03:00
fc4d97da10 Print "Waiting to become master" just once
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m38s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m40s
Test / test_write_no_same (push) Successful in 8s
Test / test_write (push) Successful in 30s
Test / test_switch_primary (push) Successful in 33s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m26s
Test / test_heal_csum_4k_dmj (push) Successful in 2m18s
Test / test_heal_csum_32k (push) Successful in 2m22s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize_auto (push) Successful in 9s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 9s
Test / test_enospc (push) Successful in 11s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m18s
2024-11-21 00:55:22 +03:00
c7a4ce7341 Take out_size from dd oimg if not specified
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m41s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m43s
Test / test_write_no_same (push) Successful in 9s
Test / test_write (push) Successful in 32s
Test / test_switch_primary (push) Successful in 34s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Successful in 2m19s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m18s
Test / test_heal_csum_32k_dj (push) Successful in 2m18s
Test / test_heal_csum_32k (push) Successful in 2m19s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize_auto (push) Successful in 9s
Test / test_resize (push) Successful in 13s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Successful in 14s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_imm_xor (push) Successful in 12s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 13s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m8s
2024-11-19 02:13:34 +03:00
ddea31d86d Auto-select first RDMA device only if RoCE is not found, add rocev2->rocev1->ib priority
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m43s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m43s
Test / test_write_no_same (push) Successful in 10s
Test / test_write (push) Successful in 29s
Test / test_switch_primary (push) Successful in 32s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_ec (push) Successful in 2m15s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m19s
Test / test_heal_csum_32k (push) Successful in 2m17s
Test / test_heal_csum_4k_dmj (push) Successful in 2m19s
Test / test_heal_csum_4k_dj (push) Successful in 2m17s
Test / test_resize (push) Successful in 13s
Test / test_resize_auto (push) Successful in 9s
Test / test_osd_tags (push) Successful in 9s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 10s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub (push) Successful in 17s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m19s
2024-11-19 01:54:00 +03:00
156d005412 Add serialize_overlap to test_heal
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m44s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m44s
Test / test_write_no_same (push) Successful in 10s
Test / test_switch_primary (push) Successful in 33s
Test / test_write (push) Successful in 32s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m18s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m18s
Test / test_heal_csum_32k_dj (push) Successful in 2m18s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_heal_csum_4k_dj (push) Successful in 2m17s
Test / test_resize_auto (push) Successful in 8s
Test / test_resize (push) Successful in 13s
Test / test_osd_tags (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 13s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 14s
Test / test_heal_csum_4k (push) Successful in 2m22s
2024-11-17 01:26:35 +03:00
7e076c7049 Do not report OSDs with empty statistics as full
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m36s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m38s
Test / test_write_no_same (push) Successful in 7s
Test / test_switch_primary (push) Successful in 32s
Test / test_write (push) Successful in 32s
Test / test_write_xor (push) Successful in 34s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m18s
Test / test_heal_csum_32k_dmj (push) Successful in 2m18s
Test / test_heal_csum_32k_dj (push) Successful in 2m22s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dmj (push) Successful in 2m15s
Test / test_heal_csum_4k_dj (push) Successful in 2m14s
Test / test_resize_auto (push) Successful in 8s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 7s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 12s
Test / test_scrub_zero_osd_2 (push) Successful in 13s
Test / test_scrub_xor (push) Successful in 16s
Test / test_scrub_pg_size_3 (push) Successful in 15s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 16s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m19s
2024-11-16 23:36:16 +03:00
7de38250ad Auto-select RDMA device based on osd_network
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m49s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m50s
Test / test_write_no_same (push) Successful in 8s
Test / test_switch_primary (push) Successful in 32s
Test / test_write (push) Successful in 31s
Test / test_write_xor (push) Successful in 34s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m12s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dj (push) Successful in 2m19s
Test / test_heal_csum_32k (push) Successful in 2m16s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_heal_csum_4k_dj (push) Successful in 2m18s
Test / test_resize_auto (push) Successful in 9s
Test / test_resize (push) Successful in 15s
Test / test_osd_tags (push) Successful in 9s
Test / test_snapshot_pool2 (push) Successful in 17s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 13s
Test / test_scrub_xor (push) Successful in 13s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m16s
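With this change, the RDMA device no longer needs to be specified explicitly: it can be derived from osd_network. A minimal configuration sketch, assuming the standard /etc/vitastor/vitastor.conf location (the etcd address and subnet are placeholders):

```
# let the OSD pick the RDMA device whose IP falls into osd_network,
# instead of setting rdma_device explicitly
cat > /etc/vitastor/vitastor.conf <<'EOF'
{
  "etcd_address": "http://10.0.0.1:2379/v3",
  "osd_network": "10.0.0.0/24"
}
EOF
```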
2024-11-16 18:38:57 +03:00
9c59d30e83 Report slow ops in OSD stats in etcd and show them in vitastor-cli status
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m42s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m43s
Test / test_write_no_same (push) Successful in 8s
Test / test_write (push) Successful in 30s
Test / test_switch_primary (push) Successful in 33s
Test / test_write_xor (push) Successful in 34s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Successful in 2m17s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m21s
Test / test_heal_csum_32k_dj (push) Successful in 2m18s
Test / test_heal_csum_32k (push) Successful in 2m19s
Test / test_heal_csum_4k_dmj (push) Successful in 2m18s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_resize_auto (push) Successful in 9s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 8s
Test / test_enospc (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 10s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 12s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m9s
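After this change, slow operations reported by OSDs to etcd are visible in the regular status output:

```
# print the cluster summary; slow ops reported by OSDs are now shown here
vitastor-cli status
```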
2024-11-16 15:11:16 +03:00
5db02cdf6e Add pve-qemu 9.0 patch 2024-11-16 11:20:47 +03:00
8202ee9d74 Trigger double autosync when switching PG state to prevent leaving garbage in non-immediate_commit clusters
All checks were successful
Test / test_dd (push) Successful in 12s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m39s
Test / test_write_no_same (push) Successful in 8s
Test / test_write (push) Successful in 29s
Test / test_switch_primary (push) Successful in 33s
Test / test_write_xor (push) Successful in 33s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dj (push) Successful in 2m22s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_resize_auto (push) Successful in 8s
Test / test_resize (push) Successful in 12s
Test / test_heal_csum_4k_dj (push) Successful in 2m18s
Test / test_osd_tags (push) Successful in 7s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 13s
Test / test_enospc_imm (push) Successful in 10s
Test / test_enospc_imm_xor (push) Successful in 12s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 15s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 11s
Test / test_heal_csum_4k (push) Successful in 2m10s
Test / test_heal_csum_32k_dmj (push) Successful in 2m20s
2024-11-15 01:26:36 +03:00
5864bd067c Add missing connection timeout for etcd websockets in OSD
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m43s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m44s
Test / test_write_no_same (push) Successful in 7s
Test / test_switch_primary (push) Successful in 32s
Test / test_write (push) Successful in 30s
Test / test_write_xor (push) Successful in 34s
Test / test_heal_pg_size_2 (push) Successful in 2m14s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m17s
Test / test_heal_csum_32k_dj (push) Successful in 2m16s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_heal_csum_4k_dj (push) Successful in 2m18s
Test / test_resize_auto (push) Successful in 8s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 9s
Test / test_snapshot_pool2 (push) Successful in 14s
Test / test_enospc (push) Successful in 10s
Test / test_enospc_imm (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 14s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m11s
2024-11-12 02:28:07 +03:00
c312557ace Do not execute remaining operations if the client is stopped during read
All checks were successful
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m46s
Test / test_write_no_same (push) Successful in 10s
Test / test_write (push) Successful in 31s
Test / test_switch_primary (push) Successful in 33s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m20s
Test / test_heal_ec (push) Successful in 2m28s
Test / test_heal_csum_32k_dj (push) Successful in 2m17s
Test / test_heal_csum_32k (push) Successful in 2m15s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_resize (push) Successful in 13s
Test / test_heal_csum_4k_dj (push) Successful in 2m18s
Test / test_resize_auto (push) Successful in 9s
Test / test_osd_tags (push) Successful in 7s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 13s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m20s
2024-11-10 16:44:13 +03:00
5ce20116d8 Postpone trigger_nearest to prevent timer callbacks called from setTimer/clearTimer
All checks were successful
Test / test_root_node (push) Successful in 9s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m48s
Test / test_write_no_same (push) Successful in 8s
Test / test_write (push) Successful in 31s
Test / test_switch_primary (push) Successful in 34s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m17s
Test / test_heal_csum_32k (push) Successful in 2m19s
Test / test_heal_csum_4k_dmj (push) Successful in 2m18s
Test / test_heal_csum_4k_dj (push) Successful in 2m16s
Test / test_resize_auto (push) Successful in 9s
Test / test_resize (push) Successful in 12s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Successful in 14s
Test / test_enospc (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 15s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 15s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 10s
Test / test_heal_csum_4k (push) Successful in 2m18s
2024-11-10 15:51:16 +03:00
be66791e59 Add another note about 1.8 upgrade 2024-11-09 00:57:58 +03:00
141cec2383 Add missing refcounting for flush_batch errors 2024-11-09 00:46:38 +03:00
1ce4b1b417 Fix stop condition in osd_flush
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 2m1s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m59s
Test / test_write_no_same (push) Successful in 8s
Test / test_write (push) Successful in 32s
Test / test_switch_primary (push) Successful in 34s
Test / test_write_xor (push) Successful in 34s
Test / test_heal_pg_size_2 (push) Successful in 2m17s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m22s
Test / test_heal_csum_32k_dj (push) Successful in 2m19s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m18s
Test / test_heal_csum_4k_dj (push) Successful in 2m17s
Test / test_resize_auto (push) Successful in 8s
Test / test_resize (push) Successful in 13s
Test / test_osd_tags (push) Successful in 7s
Test / test_enospc (push) Successful in 9s
Test / test_snapshot_pool2 (push) Successful in 13s
Test / test_enospc_xor (push) Successful in 11s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 11s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m9s
Could probably lead to PGs hanging in the peering state after an OSD restart in EC pools,
fixable by restarting the primary OSD
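If a PG did get stuck in peering this way, restarting its primary OSD clears the state. A sketch assuming the standard systemd packaging (the OSD number is a placeholder):

```
# restart the primary OSD of the stuck PG (replace 3 with the actual OSD number)
systemctl restart vitastor-osd@3
```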
2024-11-08 00:30:40 +03:00
ebf24bac9a Fix partition zeroing during prepare
All checks were successful
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m46s
Test / test_write_no_same (push) Successful in 8s
Test / test_switch_primary (push) Successful in 33s
Test / test_write (push) Successful in 30s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m15s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m20s
Test / test_heal_csum_32k_dj (push) Successful in 2m18s
Test / test_heal_csum_32k (push) Successful in 2m16s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_resize_auto (push) Successful in 8s
Test / test_heal_csum_4k_dj (push) Successful in 2m17s
Test / test_resize (push) Successful in 13s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 13s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 13s
Test / test_scrub_pg_size_3 (push) Successful in 15s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 13s
Test / test_nfs (push) Successful in 10s
Test / test_heal_csum_4k (push) Successful in 2m12s
Previously it zeroed the area beginning at offset 0 instead of the actual metadata offset,
which led to non-zeroed metadata when the disk is very small
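Roughly, the corrected behavior corresponds to zeroing from the actual metadata offset rather than from byte 0. A hypothetical shell equivalent (vitastor-disk computes the real offsets itself; the values below are made up):

```
# zero the metadata area starting at its real offset, not at the start of the device
META_OFFSET=4096      # example metadata offset in bytes
META_LEN=1048576      # example metadata length in bytes
dd if=/dev/zero of=/dev/sdXN bs=4096 \
   seek=$((META_OFFSET / 4096)) count=$((META_LEN / 4096)) oflag=direct
```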
2024-11-08 00:14:37 +03:00
edd9051f81 Fix arch.en toc 2024-11-08 00:14:18 +03:00
662ca86dc0 Fix libvirt 8 patch 2024-11-07 12:21:32 +03:00
a1ca573168 Support QEMU 9.1
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m42s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m44s
Test / test_write_no_same (push) Successful in 7s
Test / test_switch_primary (push) Successful in 32s
Test / test_write (push) Successful in 33s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Successful in 2m15s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m13s
Test / test_heal_csum_32k_dj (push) Successful in 2m17s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_heal_csum_4k_dj (push) Successful in 2m16s
Test / test_heal_csum_4k_dmj (push) Successful in 2m21s
Test / test_resize (push) Successful in 12s
Test / test_resize_auto (push) Successful in 9s
Test / test_osd_tags (push) Successful in 9s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_imm (push) Successful in 9s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 13s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 15s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 11s
Test / test_heal_csum_4k (push) Successful in 2m11s
2024-11-07 12:21:13 +03:00
f69f801ffb Release 1.9.3
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m54s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m56s
Test / test_write_no_same (push) Successful in 8s
Test / test_switch_primary (push) Successful in 32s
Test / test_write (push) Successful in 31s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m16s
Test / test_heal_csum_32k_dj (push) Successful in 2m17s
Test / test_heal_csum_4k_dmj (push) Successful in 2m11s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dj (push) Successful in 2m13s
Test / test_resize_auto (push) Successful in 8s
Test / test_resize (push) Successful in 14s
Test / test_osd_tags (push) Successful in 7s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_xor (push) Successful in 11s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 13s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_nfs (push) Successful in 11s
Test / test_heal_csum_4k (push) Successful in 2m17s
- Support custom hybrid OSD creation (`vitastor-disk prepare --hybrid --fast-devices /dev/xxx,/dev/yyy`; see the usage sketch after this list)
- Auto-change partition paths to /dev/disk/by-partuuid/ in `vitastor-disk prepare`
- Allow selecting cached I/O in vitastor-disk commands
- Fix multiple bugs in vitastor-disk resize and add tests for them
- Fix vitastor-disk write-meta/write-journal writing to an incorrect device in superblock-based mode
- Fix vitastor-disk prepare again sometimes not seeing newly created partitions
- Clean up PG history and stats of deleted pools
- Fix "is already mounted" checks in CSI
2024-11-07 01:28:31 +03:00
af92cbdfcc Dynamic device size in test
All checks were successful
Test / test_rebalance_verify_ec (push) Successful in 1m52s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m53s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 33s
Test / test_write (push) Successful in 32s
Test / test_write_xor (push) Successful in 34s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m18s
Test / test_heal_csum_32k_dj (push) Successful in 2m19s
Test / test_heal_csum_4k_dmj (push) Successful in 2m22s
Test / test_heal_csum_32k (push) Successful in 2m25s
Test / test_heal_csum_4k_dj (push) Successful in 2m19s
Test / test_resize_auto (push) Successful in 8s
Test / test_resize (push) Successful in 13s
Test / test_osd_tags (push) Successful in 7s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 13s
Test / test_scrub_zero_osd_2 (push) Successful in 12s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 11s
Test / test_heal_csum_4k (push) Successful in 2m17s
2024-11-06 14:16:58 +03:00
a775db10cc Also allow cached I/O in dsk.open_*() in disk_tool
Some checks failed
Test / test_rebalance_verify_ec (push) Successful in 1m52s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m53s
Test / test_write_no_same (push) Successful in 9s
Test / test_write (push) Successful in 30s
Test / test_switch_primary (push) Successful in 32s
Test / test_write_xor (push) Successful in 34s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m17s
Test / test_heal_csum_32k (push) Successful in 2m17s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_heal_csum_4k_dj (push) Successful in 2m17s
Test / test_resize_auto (push) Failing after 8s
Test / test_resize (push) Successful in 13s
Test / test_osd_tags (push) Successful in 7s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 11s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 13s
Test / test_scrub_xor (push) Successful in 12s
Test / test_scrub_pg_size_3 (push) Successful in 15s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m8s
2024-11-06 13:52:25 +03:00
eafce26049 Add resize and resize-auto tests
Some checks failed
Test / test_rebalance_verify_ec (push) Successful in 1m46s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m48s
Test / test_write_no_same (push) Successful in 7s
Test / test_switch_primary (push) Successful in 32s
Test / test_write (push) Successful in 30s
Test / test_write_xor (push) Successful in 34s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m18s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m21s
Test / test_heal_csum_4k_dmj (push) Successful in 2m13s
Test / test_heal_csum_32k (push) Successful in 2m20s
Test / test_resize (push) Failing after 12s
Test / test_resize_auto (push) Failing after 9s
Test / test_heal_csum_4k_dj (push) Successful in 2m19s
Test / test_osd_tags (push) Successful in 7s
Test / test_enospc (push) Successful in 10s
Test / test_snapshot_pool2 (push) Successful in 14s
Test / test_enospc_xor (push) Successful in 11s
Test / test_enospc_imm (push) Successful in 10s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 13s
Test / test_scrub_zero_osd_2 (push) Successful in 12s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 15s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 13s
Test / test_nfs (push) Successful in 11s
Test / test_heal_csum_4k (push) Successful in 2m19s
2024-11-06 13:30:51 +03:00
625c74294f Support direct I/O 2024-11-06 13:30:12 +03:00
ef8c21ad6f Change %lu to %ju 2024-11-06 02:58:51 +03:00
2bb8e8999e Do not check length in "data alignment mismatch" 2024-11-06 02:58:26 +03:00
c2e7c28672 Fix calc_lengths data size recalc during auto-resize
All checks were successful
Test / test_dd (push) Successful in 12s
Test / test_root_node (push) Successful in 8s
Test / test_rebalance_verify_ec (push) Successful in 1m58s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m55s
Test / test_write_no_same (push) Successful in 8s
Test / test_switch_primary (push) Successful in 32s
Test / test_write (push) Successful in 31s
Test / test_write_xor (push) Successful in 34s
Test / test_heal_pg_size_2 (push) Successful in 2m14s
Test / test_heal_ec (push) Successful in 2m17s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m17s
Test / test_heal_csum_32k_dj (push) Successful in 2m17s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dmj (push) Successful in 2m26s
Test / test_heal_csum_4k_dj (push) Successful in 2m19s
Test / test_snapshot_pool2 (push) Successful in 14s
Test / test_osd_tags (push) Successful in 8s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 15s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 16s
Test / test_nfs (push) Successful in 10s
Test / test_heal_csum_4k (push) Successful in 2m13s
2024-11-06 02:27:17 +03:00
bd22beefb5 Auto-extend new_data_len if new_data_offset is changed too
All checks were successful
Test / test_dd (push) Successful in 11s
Test / test_root_node (push) Successful in 7s
Test / test_rebalance_verify_ec (push) Successful in 1m48s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m49s
Test / test_write_no_same (push) Successful in 7s
Test / test_switch_primary (push) Successful in 32s
Test / test_write (push) Successful in 31s
Test / test_write_xor (push) Successful in 34s
Test / test_heal_pg_size_2 (push) Successful in 2m14s
Test / test_heal_ec (push) Successful in 2m15s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m18s
Test / test_heal_csum_32k_dj (push) Successful in 2m16s
Test / test_heal_csum_4k_dmj (push) Successful in 2m12s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dj (push) Successful in 2m13s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 13s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 15s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 14s
Test / test_scrub_ec (push) Successful in 13s
Test / test_nfs (push) Successful in 11s
Test / test_heal_csum_4k (push) Successful in 2m22s
2024-11-06 02:13:30 +03:00
e7038ab99c Auto-change partition paths to /dev/disk/by-partuuid/
Some checks failed
Test / test_dd (push) Successful in 12s
Test / test_root_node (push) Successful in 8s
Test / test_rebalance_verify_ec (push) Successful in 1m54s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m57s
Test / test_write_no_same (push) Successful in 8s
Test / test_switch_primary (push) Successful in 33s
Test / test_write (push) Successful in 30s
Test / test_write_xor (push) Successful in 34s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Failing after 2m18s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m16s
Test / test_heal_csum_32k_dj (push) Successful in 2m17s
Test / test_heal_csum_32k (push) Successful in 2m16s
Test / test_heal_csum_4k_dmj (push) Successful in 2m16s
Test / test_heal_csum_4k_dj (push) Successful in 2m17s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Successful in 13s
Test / test_enospc (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub (push) Successful in 13s
Test / test_scrub_zero_osd_2 (push) Successful in 12s
Test / test_scrub_xor (push) Successful in 13s
Test / test_scrub_pg_size_3 (push) Successful in 13s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 15s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m18s
2024-11-06 01:04:05 +03:00
b6f75ebcfd Add missing I/O path description in english 2024-11-06 00:43:17 +03:00
9def199981 Auto-reduce new_data_len in resize
All checks were successful
Test / test_dd (push) Successful in 11s
Test / test_root_node (push) Successful in 8s
Test / test_rebalance_verify_ec (push) Successful in 1m45s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m46s
Test / test_write_no_same (push) Successful in 8s
Test / test_switch_primary (push) Successful in 33s
Test / test_write (push) Successful in 32s
Test / test_write_xor (push) Successful in 33s
Test / test_heal_pg_size_2 (push) Successful in 2m14s
Test / test_heal_ec (push) Successful in 2m17s
Test / test_heal_antietcd (push) Successful in 2m15s
Test / test_heal_csum_32k_dmj (push) Successful in 2m16s
Test / test_heal_csum_32k_dj (push) Successful in 2m17s
Test / test_heal_csum_32k (push) Successful in 2m17s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_heal_csum_4k_dj (push) Successful in 2m16s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Successful in 16s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 9s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 12s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 14s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 13s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m18s
2024-11-05 02:57:11 +03:00
c72e8e649e Support test mode for vitastor-disk
All checks were successful
Test / test_dd (push) Successful in 11s
Test / test_rebalance_verify_ec (push) Successful in 1m51s
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m54s
Test / test_write_no_same (push) Successful in 7s
Test / test_switch_primary (push) Successful in 32s
Test / test_write (push) Successful in 32s
Test / test_write_xor (push) Successful in 33s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Successful in 2m21s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m23s
Test / test_heal_csum_32k_dj (push) Successful in 2m18s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dmj (push) Successful in 2m18s
Test / test_osd_tags (push) Successful in 7s
Test / test_heal_csum_4k_dj (push) Successful in 2m18s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 10s
Test / test_enospc_imm (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 12s
Test / test_scrub (push) Successful in 12s
Test / test_scrub_zero_osd_2 (push) Successful in 13s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 14s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m19s
2024-11-05 02:43:55 +03:00
8bdb3e8786 Write meta/journal to correct device when used in superblock mode 2024-11-05 02:43:55 +03:00
a87e236c70 Fix resize --data-size, particularly when expanding the device
Some checks failed
Test / test_root_node (push) Successful in 8s
Test / test_dd (push) Successful in 12s
Test / test_rebalance_verify_ec (push) Successful in 1m47s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m47s
Test / test_write_no_same (push) Successful in 7s
Test / test_write (push) Successful in 30s
Test / test_switch_primary (push) Successful in 32s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m14s
Test / test_heal_antietcd (push) Successful in 2m15s
Test / test_heal_csum_32k_dmj (push) Successful in 2m15s
Test / test_heal_csum_32k_dj (push) Failing after 2m24s
Test / test_heal_csum_32k (push) Successful in 2m16s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_osd_tags (push) Successful in 7s
Test / test_heal_csum_4k_dj (push) Successful in 2m19s
Test / test_snapshot_pool2 (push) Successful in 14s
Test / test_enospc (push) Successful in 9s
Test / test_enospc_imm (push) Successful in 9s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 12s
Test / test_scrub_zero_osd_2 (push) Successful in 11s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 14s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 14s
Test / test_scrub_ec (push) Successful in 16s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k (push) Successful in 2m17s
2024-11-04 18:55:03 +03:00
16f67cf6f1 Fix missing metadata checksums after resize
All checks were successful
Test / test_dd (push) Successful in 12s
Test / test_root_node (push) Successful in 8s
Test / test_rebalance_verify_ec (push) Successful in 1m47s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m50s
Test / test_write_no_same (push) Successful in 7s
Test / test_write (push) Successful in 30s
Test / test_switch_primary (push) Successful in 34s
Test / test_write_xor (push) Successful in 33s
Test / test_heal_pg_size_2 (push) Successful in 2m22s
Test / test_heal_ec (push) Successful in 2m20s
Test / test_heal_antietcd (push) Successful in 2m19s
Test / test_heal_csum_32k_dmj (push) Successful in 2m17s
Test / test_heal_csum_32k_dj (push) Successful in 2m26s
Test / test_heal_csum_4k_dj (push) Successful in 2m21s
Test / test_heal_csum_32k (push) Successful in 2m25s
Test / test_heal_csum_4k_dmj (push) Successful in 2m26s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Successful in 14s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 11s
Test / test_enospc_imm (push) Successful in 12s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 13s
Test / test_scrub_xor (push) Successful in 13s
Test / test_scrub_pg_size_3 (push) Successful in 14s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 13s
Test / test_nfs (push) Successful in 10s
Test / test_heal_csum_4k (push) Successful in 2m16s
2024-11-04 18:36:35 +03:00
56de4a520d Support custom hybrid OSD creation (--hybrid --fast-devices /dev/xxx,/dev/yyy)
All checks were successful
Test / test_dd (push) Successful in 13s
Test / test_root_node (push) Successful in 8s
Test / test_rebalance_verify_ec (push) Successful in 1m41s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m42s
Test / test_write_no_same (push) Successful in 7s
Test / test_switch_primary (push) Successful in 32s
Test / test_write (push) Successful in 29s
Test / test_write_xor (push) Successful in 33s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Successful in 2m15s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m12s
Test / test_heal_csum_32k_dj (push) Successful in 2m18s
Test / test_heal_csum_32k (push) Successful in 2m16s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_heal_csum_4k_dj (push) Successful in 2m16s
Test / test_osd_tags (push) Successful in 7s
Test / test_snapshot_pool2 (push) Successful in 14s
Test / test_enospc (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 13s
Test / test_scrub_pg_size_3 (push) Successful in 16s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 13s
Test / test_nfs (push) Successful in 11s
Test / test_heal_csum_4k (push) Successful in 2m15s
2024-11-04 17:52:29 +03:00
adca162278 Note that osd_per_disk is also incompatible
All checks were successful
Test / test_dd (push) Successful in 12s
Test / test_root_node (push) Successful in 8s
Test / test_rebalance_verify_ec (push) Successful in 1m43s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m45s
Test / test_write_no_same (push) Successful in 7s
Test / test_switch_primary (push) Successful in 31s
Test / test_write (push) Successful in 30s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m14s
Test / test_heal_ec (push) Successful in 2m17s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m25s
Test / test_heal_csum_32k_dj (push) Successful in 2m19s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Successful in 14s
Test / test_heal_csum_4k_dj (push) Successful in 2m16s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_imm (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 12s
Test / test_scrub (push) Successful in 12s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 13s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 9s
Test / test_heal_csum_4k (push) Successful in 2m18s
2024-11-04 15:20:01 +03:00
490b314d72 Rework & fix new partition waiting code
All checks were successful
Test / test_dd (push) Successful in 11s
Test / test_root_node (push) Successful in 8s
Test / test_rebalance_verify_ec (push) Successful in 1m46s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m48s
Test / test_write_no_same (push) Successful in 7s
Test / test_switch_primary (push) Successful in 32s
Test / test_write (push) Successful in 30s
Test / test_write_xor (push) Successful in 34s
Test / test_heal_pg_size_2 (push) Successful in 2m14s
Test / test_heal_ec (push) Successful in 2m15s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m14s
Test / test_heal_csum_4k_dmj (push) Successful in 2m11s
Test / test_heal_csum_32k_dj (push) Successful in 2m24s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_osd_tags (push) Successful in 7s
Test / test_heal_csum_4k_dj (push) Successful in 2m18s
Test / test_snapshot_pool2 (push) Successful in 13s
Test / test_enospc (push) Successful in 11s
Test / test_enospc_imm (push) Successful in 9s
Test / test_enospc_xor (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 12s
Test / test_scrub_zero_osd_2 (push) Successful in 11s
Test / test_scrub (push) Successful in 13s
Test / test_scrub_xor (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 15s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 14s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m11s
2024-11-04 15:16:30 +03:00
9f52074e1e Delete PG history and stats of deleted pools
All checks were successful
Test / test_dd (push) Successful in 11s
Test / test_rebalance_verify_ec (push) Successful in 1m38s
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m40s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 32s
Test / test_write (push) Successful in 30s
Test / test_write_xor (push) Successful in 34s
Test / test_heal_pg_size_2 (push) Successful in 2m13s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m18s
Test / test_heal_csum_32k_dj (push) Successful in 2m17s
Test / test_heal_csum_32k (push) Successful in 2m17s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_heal_csum_4k_dj (push) Successful in 2m16s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Successful in 14s
Test / test_enospc (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 13s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_imm_xor (push) Successful in 11s
Test / test_scrub_zero_osd_2 (push) Successful in 12s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 13s
Test / test_scrub_pg_size_3 (push) Successful in 15s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 13s
Test / test_nfs (push) Successful in 11s
Test / test_heal_csum_4k (push) Successful in 2m17s
2024-11-01 02:38:31 +03:00
2b3e877546 Add notes about vitastor-disk in disable_data_fsync 2024-11-01 02:38:18 +03:00
01d55e5420 Merge pull request #64 from 0x00ace/fio_version_fix
use fio 3.35-1 for AlmaLinux 9
2024-10-31 11:55:40 +03:00
f5aa5cfdfe Fix "is already mounted" checks in CSI 2024-10-26 14:06:21 +03:00
2826bb9e7e Add more logging to CSI 2024-10-24 02:07:55 +03:00
30d1ad0f66 Add Intel D5-P4320 2024-10-22 23:22:48 +03:00
79719e44ac Release 1.9.2
All checks were successful
Test / test_root_node (push) Successful in 8s
Test / test_dd (push) Successful in 12s
Test / test_rebalance_verify_ec (push) Successful in 1m40s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m41s
Test / test_write_no_same (push) Successful in 7s
Test / test_switch_primary (push) Successful in 31s
Test / test_write (push) Successful in 30s
Test / test_write_xor (push) Successful in 36s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Successful in 2m15s
Test / test_heal_antietcd (push) Successful in 2m16s
Test / test_heal_csum_32k_dmj (push) Successful in 2m17s
Test / test_heal_csum_32k_dj (push) Successful in 2m16s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dmj (push) Successful in 2m17s
Test / test_heal_csum_4k_dj (push) Successful in 2m17s
Test / test_osd_tags (push) Successful in 7s
Test / test_snapshot_pool2 (push) Successful in 13s
Test / test_enospc (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 10s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub (push) Successful in 13s
Test / test_scrub_zero_osd_2 (push) Successful in 12s
Test / test_scrub_xor (push) Successful in 13s
Test / test_scrub_pg_size_3 (push) Successful in 14s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 15s
Test / test_scrub_ec (push) Successful in 13s
Test / test_nfs (push) Successful in 11s
Test / test_heal_csum_4k (push) Successful in 2m14s
New features:
- Support resizing normal vitastor-disk partitions and moving journal/metadata: [vitastor-disk resize](https://vitastor.io/docs/usage/disk.html#resize) (usage sketches follow this list)
- Support simple forms of vitastor-disk {dump,write}-{meta,journal} for OSD partitions

Bug fixes:
- Fix block RWX volumes broken after introducing stage/unstage support
- Do not allow creating non-block RWX volumes in CSI
- Fix vitastor-disk prepare not seeing the newly created partition in rare cases
- Fix non-array tags not showing up in ls-osd/osd-tree
- Make OpenNebula oned.conf patching during installation smarter
- Fix the iseek option in vitastor-cli dd not working
- Validate conv=, iflag=, oflag= options in vitastor-cli dd
- Fix vitastor-disk write-meta not writing the header checksum to the disk
- Fix JSON format in vitastor-disk dump-meta
- Fix read_chain_bitmap not working for a snapshot in another pool
- Fix a possible OSD crash during parallel read & write to an image with snapshots
- Several followups to the READ_CHAIN_BITMAP fix: avoid data reads, fix possible overflow in is_zero(), fix bitmap size
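Usage sketches for the new resize support and the dd fixes. The `--data-size` option appears elsewhere in this log; the argument order, the `iimg`/`of` spellings and all names, paths and sizes here are illustrative assumptions:

```
# move/resize the data area of an existing vitastor-disk partition
vitastor-disk resize --data-size 100G /dev/sdXN

# dd-style copy out of a Vitastor image, skipping 4096 bytes of input
vitastor-cli dd iimg=myimage of=./myimage.raw iseek=4096
```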
2024-10-20 01:49:13 +03:00
f5626655df Add new disk command docs 2024-10-20 01:47:46 +03:00
7e2dde2702 Fix block RWX volumes broken after introducing stage/unstage support 2024-10-19 11:56:56 +03:00
3b0ab317cf Validate non-block RWX in CSI 2024-10-18 01:55:38 +03:00
18eb99c494 Implement resizing partitions created with vitastor-disk 2024-10-18 01:55:19 +03:00
4e8a1a8895 Run partprobe in add_partition() if /dev/disk/by-partuuid symlink is not present 2024-10-12 18:07:53 +03:00
d27a8bdabc Make get_parent_device return full path 2024-10-12 13:44:52 +03:00
ebd616e42f Extract clear_osd_superblock() 2024-10-12 13:44:52 +03:00
b18d296e01 Extract check_existing_partition(), get_device_size() 2024-10-12 13:44:52 +03:00
a03508320e Move json_is_true/json_is_false to json_util.cpp
All checks were successful
Test / test_dd (push) Successful in 12s
Test / test_rebalance_verify_ec (push) Successful in 1m37s
Test / test_root_node (push) Successful in 9s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m40s
Test / test_write_no_same (push) Successful in 8s
Test / test_switch_primary (push) Successful in 31s
Test / test_write (push) Successful in 31s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m14s
Test / test_heal_ec (push) Successful in 2m15s
Test / test_heal_antietcd (push) Successful in 2m15s
Test / test_heal_csum_32k_dmj (push) Successful in 2m16s
Test / test_heal_csum_32k_dj (push) Successful in 2m16s
Test / test_heal_csum_32k (push) Successful in 2m18s
Test / test_heal_csum_4k_dmj (push) Successful in 2m14s
Test / test_osd_tags (push) Successful in 7s
Test / test_heal_csum_4k_dj (push) Successful in 2m17s
Test / test_snapshot_pool2 (push) Successful in 13s
Test / test_enospc (push) Successful in 10s
Test / test_enospc_imm (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 13s
Test / test_enospc_imm_xor (push) Successful in 13s
Test / test_scrub_zero_osd_2 (push) Successful in 11s
Test / test_scrub (push) Successful in 14s
Test / test_scrub_xor (push) Successful in 13s
Test / test_scrub_pg_size_3 (push) Successful in 14s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 14s
Test / test_scrub_ec (push) Successful in 13s
Test / test_nfs (push) Successful in 11s
Test / test_heal_csum_4k (push) Successful in 2m9s
2024-10-12 00:40:39 +03:00
c9ccc790ec Fix non-array tags not showing up in ls-osd/osd-tree
All checks were successful
Test / test_dd (push) Successful in 13s
Test / test_rebalance_verify_ec (push) Successful in 1m39s
Test / test_root_node (push) Successful in 9s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m41s
Test / test_write_no_same (push) Successful in 9s
Test / test_switch_primary (push) Successful in 33s
Test / test_write (push) Successful in 32s
Test / test_write_xor (push) Successful in 33s
Test / test_heal_pg_size_2 (push) Successful in 2m16s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m18s
Test / test_heal_csum_32k_dj (push) Successful in 2m18s
Test / test_heal_csum_32k (push) Successful in 2m16s
Test / test_heal_csum_4k_dmj (push) Successful in 2m18s
Test / test_heal_csum_4k_dj (push) Successful in 2m19s
Test / test_osd_tags (push) Successful in 8s
Test / test_snapshot_pool2 (push) Successful in 15s
Test / test_enospc (push) Successful in 10s
Test / test_enospc_xor (push) Successful in 11s
Test / test_enospc_imm (push) Successful in 10s
Test / test_enospc_imm_xor (push) Successful in 12s
Test / test_scrub_zero_osd_2 (push) Successful in 12s
Test / test_scrub (push) Successful in 15s
Test / test_scrub_xor (push) Successful in 13s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 14s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_ec (push) Successful in 13s
Test / test_nfs (push) Successful in 11s
Test / test_heal_csum_4k (push) Successful in 2m16s
2024-10-11 18:33:35 +03:00
db2d9c5b3d Fix tables in NFS doc 2024-10-08 00:20:10 +03:00
09f15f44c9 Fix Toshiba MG and VDUSE Debian kernel note in docs 2024-10-08 00:17:14 +03:00
c5a58c2e81 Support reading parameters automatically from the superblock in vitastor-disk {dump,write}-{meta,journal}
Some checks reported warnings
Test / test_dd (push) Has been cancelled
Test / test_root_node (push) Has been cancelled
Test / test_switch_primary (push) Has been cancelled
Test / test_write (push) Has been cancelled
Test / test_write_xor (push) Has been cancelled
Test / test_write_no_same (push) Has been cancelled
Test / test_heal_pg_size_2 (push) Has been cancelled
Test / build (push) Has been cancelled
Test / test_heal_ec (push) Has been cancelled
Test / test_heal_antietcd (push) Has been cancelled
Test / test_heal_csum_32k_dmj (push) Has been cancelled
Test / test_heal_csum_32k_dj (push) Has been cancelled
Test / test_heal_csum_32k (push) Has been cancelled
Test / test_heal_csum_4k_dmj (push) Has been cancelled
Test / test_heal_csum_4k_dj (push) Has been cancelled
Test / test_heal_csum_4k (push) Has been cancelled
Test / test_snapshot_pool2 (push) Has been cancelled
Test / test_osd_tags (push) Has been cancelled
Test / test_enospc (push) Has been cancelled
Test / test_enospc_xor (push) Has been cancelled
Test / test_add_osd (push) Has been cancelled
Test / test_enospc_imm (push) Has been cancelled
Test / test_enospc_imm_xor (push) Has been cancelled
Test / test_scrub (push) Has been cancelled
Test / test_scrub_zero_osd_2 (push) Has been cancelled
Test / test_scrub_xor (push) Has been cancelled
Test / test_scrub_pg_size_3 (push) Has been cancelled
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Has been cancelled
Test / test_scrub_ec (push) Has been cancelled
Test / test_nfs (push) Has been cancelled
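With this change, a simplified invocation should suffice for OSD partitions, since layout parameters are read from the superblock (the partition path is a placeholder):

```
# dump metadata of an OSD partition; offsets and sizes come from the superblock
vitastor-disk dump-meta /dev/sdXN
```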
2024-10-07 02:21:58 +03:00
30e7c2ad1e Add custom OpenNebula oned.conf patcher (it uses a SHITTY configuration file format) 2024-10-06 13:46:05 +03:00
2e76ceabbe Fix iseek option in vitastor-cli dd 2024-10-05 18:25:38 +03:00
3df088c207 Validate conv=, iflag=, oflag= options in vitastor-cli dd 2024-10-05 18:02:36 +03:00
d882a19eab Fix vitastor-disk write-meta not writing header checksum to the disk... 2024-10-05 17:32:55 +03:00
702be3da7a Fix JSON format in vitastor-disk dump-meta 2024-10-05 16:08:34 +03:00
99533e1c2f Fix .yml links 2024-10-02 00:38:07 +03:00
a6cceb43bf Fix read_chain_bitmap not working for snapshot in another pool
All checks were successful
Test / test_dd (push) Successful in 13s
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify_ec (push) Successful in 1m43s
Test / test_rebalance_verify_ec_imm (push) Successful in 1m45s
Test / test_write_no_same (push) Successful in 8s
Test / test_write (push) Successful in 31s
Test / test_switch_primary (push) Successful in 34s
Test / test_write_xor (push) Successful in 35s
Test / test_heal_pg_size_2 (push) Successful in 2m15s
Test / test_heal_ec (push) Successful in 2m16s
Test / test_heal_antietcd (push) Successful in 2m17s
Test / test_heal_csum_32k_dmj (push) Successful in 2m19s
Test / test_heal_csum_32k_dj (push) Successful in 2m20s
Test / test_heal_csum_4k_dmj (push) Successful in 2m14s
Test / test_heal_csum_32k (push) Successful in 2m21s
Test / test_osd_tags (push) Successful in 7s
Test / test_heal_csum_4k_dj (push) Successful in 2m20s
Test / test_snapshot_pool2 (push) Successful in 14s
Test / test_enospc (push) Successful in 12s
Test / test_enospc_imm (push) Successful in 11s
Test / test_enospc_xor (push) Successful in 14s
Test / test_enospc_imm_xor (push) Successful in 14s
Test / test_scrub_zero_osd_2 (push) Successful in 14s
Test / test_scrub (push) Successful in 16s
Test / test_scrub_xor (push) Successful in 15s
Test / test_scrub_pg_size_3 (push) Successful in 17s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 16s
Test / test_nfs (push) Successful in 13s
Test / test_heal_csum_4k (push) Successful in 2m17s
2024-10-02 00:24:48 +03:00
745d89459a Fix link, add title 2024-09-29 22:05:56 +03:00
48f023292d Fix extra data reads on read_chain
All checks were successful
2024-09-21 17:05:42 +03:00
b58bf3ada5 Fix possible OSD crash during parallel read & write to an image with snapshots
All checks were successful
OSDs could crash with the "assertion failed" message below (the crash didn't affect data and was caused by the OSD thinking upper blocks were full while they weren't). Reproducing it without introducing artificial delays is hard, because you have to force the OSD to read an object with an enqueued but not yet handled write which fills a previously non-full bitmap. O_o.

```
vitastor-osd: ./src/osd/osd_primary_chain.cpp:613: void osd_t::send_chained_read_results(pg_t&, osd_op_t*): Assertion `stripes[role].read_buf' failed.
```
2024-09-21 13:44:36 +03:00
f18a749324 READ_CHAIN fix was incomplete :-)
Some checks failed
Test / test_heal_csum_32k_dmj (push) Failing after 2m22s
2024-09-21 13:40:31 +03:00
6e9307c522 Fix possible overflow in is_zero() 2024-09-21 13:40:10 +03:00
99adbb9483 Release 1.9.1
All checks were successful
Hotfixes for OpenNebula and upgrade hotfix for 1.7

- Fix deploy.vitastor, save.vitastor, restore.vitastor scripts not working for nodes other than master oned
- Fix deploy.vitastor not working for VMs without Vitastor disks
- Disable clearing old PG configuration when upgrading from 1.7 or older versions (it was breaking old clients)
2024-09-14 19:17:30 +03:00
b489a611a9 Add 1.8 upgrade note 2024-09-14 19:17:30 +03:00
c6c0b8957a Stop updating old PG configuration when the user manually deletes it
All checks were successful
2024-09-14 19:15:40 +03:00
5d40d2a459 Fix oned.conf patch 2024-09-14 19:08:44 +03:00
f449c28c3b Always write the base64-decoded deployment file (otherwise it breaks VMs without Vitastor disks) 2024-09-14 15:25:02 +03:00
a6274f58cc Same fix for save/restore: they also need to ssh to the target node 2024-09-14 02:46:48 +03:00
ac29ffea6a Add ssh to the target node in deploy.vitastor - without it, VMs were always deployed on the oned host 2024-09-14 02:15:24 +03:00
bc06acc153 Disable clearing old PG configuration - we cannot be sure that old clients do not need it
All checks were successful
2024-09-13 19:00:12 +03:00
fe8e611e23 Release 1.9.0
All checks were successful
- OpenNebula support! [Installation instructions](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/installation/opennebula.en.md)
- Added the [vitastor-cli rm --exact|--matching](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/cli.en.md#rm) command
- Added the [vitastor-cli dd](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/cli.en.md#dd) command - copy data between Vitastor images, files and pipes (see the sketch after this list)
- Add a startup timeout to vitastor-cli so it does not wait for etcd indefinitely
- Fix non-working OSD_OP_READ_CHAIN_BITMAP O_o
- Autodetect block_size/bitmap_granularity/immediate_commit when creating pools
- Do not allow creating multiple pools with the same name from vitastor-cli
- Fix the skip_cache_check option not being applied due to a type issue (see GitHub issue #70)
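A minimal usage sketch of the new dd command, assuming dd-style key=value parameters (the iseek=/conv=/iflag=/oflag= options mentioned in the commits below imply this syntax); the iimg=/oimg= parameter names and all image/file names are illustrative assumptions:

```bash
# Hedged sketch: copy a Vitastor image to a local file and back again.
# iimg=/oimg= and the names used here are assumptions for illustration;
# bs= follows the dd-style options referenced in the commits below.
vitastor-cli dd iimg=testimg of=./testimg.raw bs=1M
vitastor-cli dd if=./testimg.raw oimg=testimg-restored bs=1M
```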
2024-09-06 01:46:16 +03:00
7636f9c726 Turn off brp-python-bytecompile in RPM specs 2024-09-06 01:44:44 +03:00
d5f7005ddd Add dd and rm --exact|--matching documentation 2024-09-05 02:22:05 +03:00
70d6fcd32a Add OpenNebula to README 2024-09-05 02:00:14 +03:00
17caaa59af vitastor-opennebula is probably more correct than opennebula-vitastor
All checks were successful
2024-09-05 01:44:16 +03:00
2dac6ee38b Fix OpenNebula reinstall
All checks were successful
2024-09-04 11:05:56 +03:00
8be67a2d5b Fix OpenNebula save/restore 2024-09-04 11:05:56 +03:00
9c2132882c Fix unaligned last block read/write in cli_dd 2024-09-04 11:05:56 +03:00
9f25bb059b Use just IMAGE_PREFIX, not IMAGE_PREFIX+"one" 2024-09-04 01:23:00 +03:00
ee3094c5e5 Add OpenNebula plugin docs 2024-09-04 01:22:39 +03:00
ba9f263b75 Add wildcard removal command
All checks were successful
2024-08-31 14:13:09 +03:00
30eaa1a8e6 Add vitastor-cli ls --exact 2024-08-31 02:36:25 +03:00
6a8daedbe2 rm --wildcard 2024-08-31 02:36:25 +03:00
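Together with ls --exact above, a hedged sketch of how these options fit the CLI; the 1.9.0 release notes name the removal flags --exact/--matching, and the image names and pattern here are made up:

```bash
# Hedged sketch: exact-name listing and pattern-based removal.
# Flag names follow the 1.9.0 notes above; names/pattern are illustrative.
vitastor-cli ls --exact testimg
vitastor-cli rm --matching 'backup-2024-08-*'
```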
2b96ac0b44 Implement OpenNebula driver 2024-08-30 23:46:37 +03:00
986cd11705 Implement CLI "dd" command - copy data between Vitastor images, files and pipes 2024-08-30 02:31:06 +03:00
b804051eaf Remove debug print in nbd-proxy 2024-08-30 02:31:06 +03:00
3cc326500e Fix non-working OSD_OP_READ_CHAIN_BITMAP O_o 2024-08-30 01:25:05 +03:00
f848c450a4 Clients should not wait indefinitely for etcd to start if it's unavailable 2024-08-28 02:03:35 +03:00
4121c66281 Autodetect block_size/bitmap_granularity/immediate_commit when creating pools 2024-08-28 02:03:35 +03:00
b3716fbe23 Validate pool name when creating a pool 2024-08-28 02:03:35 +03:00
97f49d7d94 Fix #70 from github - skip_cache_check type issue
All checks were successful
2024-08-14 01:35:43 +03:00
131de4b790 Disable trace in header 2024-08-13 11:21:35 +03:00
ce359c5a69 Release 1.8.0
All checks were successful
Bugfix release, would be 1.7.2, but etcd layout changes mandate it to be 1.8.0. :-)

- Change etcd layout: /config/pgs is now /pg/config, /pg/stats/* is now /pgstats/*
  (see the sketch after this list). This is required to fix a rare PG history tracking
  issue caused by non-atomic delivery of etcd events sometimes resulting in `incomplete`
  objects in EC pools after mass OSD restarts. Upgrading can be performed freely, but a
  downgrade requires an additional action: [1.8.0 to 1.7.1](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/admin.en.md#1-8-0-to-1-7-1)
- Fix a rare client hang on PG primary OSD switch
- Fix vitastor-nfs instances started via the mount command sometimes not stopping automatically after unmount
- Fix vitastor-nfs mounts started via the mount command sometimes hanging after daemonizing
- Fix merge/flatten into a pool with a different object size (the image migration between pools case)
- Do not print extra "PG disappeared after reload" verbose log messages for non-existing PGs
- Fix clustered Antietcd support and persistence filter
- Do not try to purge the same OSD multiple times if several of its devices are passed to purge
- Various node.js binding fixes
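For reference, a hedged sketch of inspecting the renamed keys after the upgrade, assuming the default /vitastor etcd_prefix:

```bash
# Hedged sketch: the 1.8.0 layout renames /config/pgs -> /pg/config and
# /pg/stats/* -> /pgstats/*. Assumes the default /vitastor etcd_prefix.
etcdctl get --prefix /vitastor/pg/config
etcdctl get --prefix /vitastor/pgstats/
```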
2024-08-11 14:28:31 +03:00
521e867b10 Run check_exit also on deferred stop. Now vitastor-nfs should finally always stop on umount
All checks were successful
2024-08-11 00:05:20 +03:00
333c54ebbf Clean up clients correctly during stop(). This was also affecting #67, but could also reproduce during normal operation
All checks were successful
2024-08-11 00:00:13 +03:00
58d3da95c8 Fix github issue #67 by closing active NFS sockets before daemonize()
All checks were successful
2024-08-10 20:13:37 +03:00
4e90e752eb Fix merge/flatten into a pool with different object size
All checks were successful
2024-08-10 19:23:26 +03:00
09342d7189 node.js binding fixes 2024-08-05 00:10:37 +03:00
eb3e8b8c19 Do not print "PG disappeared after reload" verbose log messages when the PG was not present before the reload either
All checks were successful
2024-08-04 01:42:05 +03:00
e2ca3ad99e Add a note about the storage ID to the Proxmox storage config doc 2024-07-31 01:19:44 +03:00
dd4b0aed2b Support scattered write in node.js binding 2024-07-31 01:17:06 +03:00
42851a061c Always continue operations so that resuming is not missed after a PG temporarily lacks a primary
Should fix spurious client hangs during PG primary switchover
2024-07-31 01:17:03 +03:00
8e0f242d30 Add downgrade docs 2024-07-31 01:15:37 +03:00
0daa8ea39b Support seamless upgrade to new PG config and stats etcd key names 2024-07-31 01:15:37 +03:00
b263d311ef Use separate watch revisions for different watchers 2024-07-31 01:15:37 +03:00
8720185780 Run tests in CI in memory (in tmpfs) 2024-07-31 01:15:37 +03:00
20584414d8 Report OSD version in /osd/state/ and /osd/stats/ (for the future) 2024-07-31 01:15:37 +03:00
306a3db7f3 Rename VERSION define to VITASTOR_VERSION 2024-07-31 01:15:37 +03:00
5b0aebada4 Rename /config/pgs to /pg/config and /pg/stats/* to /pgstats/* 2024-07-31 01:15:37 +03:00
d6f0b480c8 Fix broken link 2024-07-22 14:01:53 +03:00
f1f8531fd4 Make tests compatible with antietcd, add 2 antietcd tests to CI
All checks were successful
2024-07-20 02:16:38 +03:00
8d79d59964 Update antietcd to 1.1.0 2024-07-20 02:15:48 +03:00
551a209a50 Fix persistence filter initialization 2024-07-20 02:15:48 +03:00
06cafd7702 Do not merge the config an extra, unneeded time 2024-07-20 02:15:48 +03:00
3018352443 Fix clustered Antietcd support 2024-07-19 18:58:58 +03:00
f8edfb4a71 No need to check for PG intersection if a history set is smaller than EC data part count
All checks were successful
2024-07-18 19:29:05 +03:00
8239ea2356 Do not try to purge the same OSD multiple times if several of its devices are passed to purge 2024-07-16 16:48:16 +03:00
e898335b8d Release 1.7.1
All checks were successful
Some stupid hotfixes for 1.7.0 :)

- Fix NFS mount
- Fix modify-osd
- Fix use_antietcd not being read from /etc/vitastor/vitastor.conf (see the sketch after this list)
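A minimal sketch of enabling the setting, assuming a standalone JSON config with no other keys to preserve (a real setup should merge the key into the existing file instead of overwriting it):

```bash
# Hedged sketch: enable the experimental built-in etcd replacement (antietcd).
# Assumes /etc/vitastor/vitastor.conf contains nothing else worth keeping;
# otherwise merge the key into the existing JSON instead of overwriting it.
cat > /etc/vitastor/vitastor.conf <<'EOF'
{ "use_antietcd": true }
EOF
```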
2024-07-16 00:07:03 +03:00
e7869611fa Another stupid fix for NFS (no idea how it worked for me)
All checks were successful
2024-07-16 00:05:51 +03:00
e1c2500b60 Use modify-osd in the disk removal instruction 2024-07-16 00:01:42 +03:00
42cf3a11df Oops, fix reweight :)
Some checks reported warnings
2024-07-16 00:01:11 +03:00
4d9293f0e9 Fix QEMU 8.2 and 9.0 patches (add @location comments) 2024-07-15 16:30:14 +03:00
7a13f85ae2 Fix mon config merge
Some checks failed
Test / test_heal_ec (push) Failing after 10m11s
Test / test_heal_csum_32k (push) Failing after 10m17s
Test / test_nfs (push) Failing after 2m18s
2024-07-15 16:25:22 +03:00
fc219b8602 Add pg-list to docs 2024-07-15 13:29:22 +03:00
989d73f874 Release 1.7.0
Some checks failed
Test / test_nfs (push) Failing after 2m17s
Omnidirectional release

New features:

- Support handling TCP I/O in simple separate io_uring-based [I/O threads](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/config/client.en.md#client_iothread_count) - may increase linear performance to 7-8 GB/s
- Experimental internal etcd replacement - [antietcd](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/config/monitor.en.md#use_antietcd)
- Monitor now has a [built-in Prometheus exporter](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/config/monitor.en.md#enable_prometheus)
- Added a reference [Grafana dashboard](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/mon/scripts/Vitastor-Grafana-6+.json)
- Implement vitastor-cli [osd-tree](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/cli.en.md#osd-tree) and [ls-osd](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/cli.en.md#ls-osd) commands
- Implement vitastor-cli [modify-osd](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/cli.en.md#modify-osd) command (see the usage sketch after this list)
- Implement vitastor-cli [pg-list](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/cli.en.md#pg-list) command
- Implement [VitastorFS defragmentation](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/usage/nfs.en.md#defrag)
- Implement basic node.js binding (not published on npm yet)
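A hedged sketch of the new CLI commands in action; the OSD number and reweight value are made up, and the --reweight flag is inferred from the "Oops, fix reweight" commit further down:

```bash
# Hedged sketch of the new 1.7.0 CLI commands. The OSD number and reweight
# value are illustrative; --reweight is inferred from a commit below.
vitastor-cli osd-tree
vitastor-cli ls-osd
vitastor-cli pg-list
vitastor-cli modify-osd --reweight 0.5 5
```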

Changes:

- Make immediate_commit=all the default everywhere to match default vitastor-disk behaviour
- Make pool-create error message more obvious and add details to it
- Set the default etcd_ws_keepalive_interval to 5 seconds (speeds up client etcd failover)
- Support OpenStack 2023.2 in Nova and Cinder drivers/patches
- Add patches for libvirt 10.x
- Add patches for QEMU 8.2 and 9.0
- Implement internal restart / run_forever in monitor
- Some source tree refactoring - sources were moved into subdirectories and the monitor was split into multiple files
- Add vitastor_c_inode_get_immediate_commit in vitastor_c client library
- Make vitastor_kv.h header public

Bug fixes:

- Fix total statistics usec/count/bytes not being reported when delta (bps/iops/lat) is zero
- Prevent infinite loop in NFS on files with incorrect metadata pointing to an empty volume
- Fix READDIR offsets (cookies) in VitastorFS sometimes leading to client infinite loops when reading a directory
- Fix a rare infinite loop during OSD journal flushing (OSD hanging and eating 100 % CPU)
- Fix several bugs which could lead to lost writes in setups without immediate_commit:
  - Client library treated writes as completed before actually completing them, thus missing them in a subsequent fsync
  - Client library didn't repeat writes on the new PG primary when it changed
  - OSDs didn't drop peer connections with dirty writes when stopping PG
- Fix Block Pseudo-FS initialization leading to ENOENTs some time after start
- Fix vitastor-cli merge-based commands (merge/flatten/rm snapshot) slowing down and finally failing when using CAS optimistic locks
- Fix pool create/modify --block_size validation
- Fix TTL comparison for determining failed lease/keepalive requests in OSD
- Add support for size suffixes in pool-create --block_size and --bitmap_granularity values (see the sketch below)
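And a sketch of the size-suffix support; everything here except --block_size/--bitmap_granularity and the suffix syntax is an assumption about the pool-create invocation:

```bash
# Hedged sketch: "128k" instead of "131072" as a size value.
# All flags other than --block_size/--bitmap_granularity are assumptions.
vitastor-cli pool-create testpool --ec 2+2 --pg_count 256 \
    --block_size 128k --bitmap_granularity 4k
```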
2024-07-15 11:48:35 +03:00
f0630722ce Make pool-create error message more obvious, add details
Some checks failed
Test / test_nfs (push) Failing after 2m28s
2024-07-15 11:47:49 +03:00
93b0947720 Support size suffixes in pool-create --block_size / --bitmap_granularity 2024-07-15 11:47:05 +03:00
9c628646fa Remove bullseye-backports from build, remove buster-backports from docs 2024-07-15 11:47:05 +03:00
cf476a3b95 Add mkdir /var/lib/vitastor 2024-07-15 11:47:05 +03:00
23f9273ba3 Take use_antietcd setting from /etc/vitastor/vitastor.conf too
Some checks reported warnings
2024-07-15 02:02:56 +03:00
74b88bf8ba Use own repo instead of buster-backports as it is EOL 2024-07-14 20:25:44 +03:00
1254d5a0de Fix delta stats when counters may be hypothetically reset
Some checks failed
Test / test_heal_pg_size_2 (push) Failing after 10m29s
Test / test_nfs (push) Failing after 2m17s
2024-07-14 13:11:00 +03:00
f87bece253 Fix build with antietcd & tinyraft, remove some version hardcode 2024-07-14 13:04:25 +03:00
ba85d0ef16 Add vitastor_kv.h to RPM specs 2024-07-14 11:20:37 +03:00
17a909ea3a Stop metrics/future API HTTP server when closing Monitor instance
Some checks failed
Test / test_nfs (push) Failing after 2m17s
2024-07-14 11:16:41 +03:00
a4dfc220ab Implement basic node.js binding (not published on npm yet)
Some checks failed
Test / test_snapshot_chain_ec (push) Successful in 3m12s
Test / test_rebalance_verify_ec (push) Failing after 1m56s
Test / test_root_node (push) Successful in 13s
Test / test_rebalance_verify_imm (push) Successful in 3m15s
Test / test_rebalance_verify (push) Successful in 4m1s
Test / test_switch_primary (push) Successful in 37s
Test / test_write_no_same (push) Successful in 21s
Test / test_write (push) Successful in 59s
Test / test_write_xor (push) Successful in 1m29s
Test / test_rebalance_verify_ec_imm (push) Successful in 5m13s
Test / test_heal_pg_size_2 (push) Has started running
Test / test_heal_ec (push) Has started running
Test / test_heal_csum_32k_dj (push) Has started running
Test / test_heal_csum_32k (push) Has been cancelled
Test / test_heal_csum_4k_dmj (push) Has been cancelled
Test / test_heal_csum_4k_dj (push) Has been cancelled
Test / test_heal_csum_4k (push) Has been cancelled
Test / test_osd_tags (push) Has been cancelled
Test / test_enospc (push) Has been cancelled
Test / test_enospc_xor (push) Has been cancelled
Test / test_enospc_imm (push) Has been cancelled
Test / test_enospc_imm_xor (push) Has been cancelled
Test / test_scrub (push) Has been cancelled
Test / test_scrub_zero_osd_2 (push) Has been cancelled
Test / test_scrub_xor (push) Has been cancelled
Test / test_scrub_pg_size_3 (push) Has been cancelled
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Has been cancelled
Test / test_scrub_ec (push) Has been cancelled
Test / test_nfs (push) Has been cancelled
Test / test_heal_csum_32k_dmj (push) Has been cancelled
2024-07-14 10:58:38 +03:00
26426dd95e Return it back (8ad63465cd), but fix stats in another way
All checks were successful
Test / test_snapshot_chain_ec (push) Successful in 3m9s
Test / test_rebalance_verify_imm (push) Successful in 5m3s
Test / test_root_node (push) Successful in 12s
Test / test_rebalance_verify (push) Successful in 5m45s
Test / test_switch_primary (push) Successful in 37s
Test / test_write (push) Successful in 41s
Test / test_rebalance_verify_ec_imm (push) Successful in 4m4s
Test / test_write_no_same (push) Successful in 21s
Test / test_rebalance_verify_ec (push) Successful in 6m21s
Test / test_write_xor (push) Successful in 1m53s
Test / test_heal_pg_size_2 (push) Successful in 3m56s
Test / test_heal_ec (push) Successful in 4m18s
Test / test_heal_csum_32k_dmj (push) Successful in 5m46s
Test / test_heal_csum_32k_dj (push) Successful in 5m39s
Test / test_heal_csum_32k (push) Successful in 6m24s
Test / test_osd_tags (push) Successful in 30s
Test / test_enospc (push) Successful in 1m16s
Test / test_heal_csum_4k_dj (push) Successful in 5m40s
Test / test_enospc_xor (push) Successful in 51s
Test / test_enospc_imm (push) Successful in 54s
Test / test_heal_csum_4k (push) Successful in 6m48s
Test / test_enospc_imm_xor (push) Successful in 1m13s
Test / test_scrub (push) Successful in 30s
Test / test_scrub_zero_osd_2 (push) Successful in 31s
Test / test_scrub_xor (push) Successful in 28s
Test / test_scrub_pg_size_3 (push) Successful in 48s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 30s
Test / test_scrub_ec (push) Successful in 28s
Test / test_nfs (push) Successful in 17s
Test / test_heal_csum_4k_dmj (push) Successful in 4m27s
2024-07-13 19:14:34 +03:00
9f38b7e5c1 Fix osd_ping_time_remaining reset from 990c3ba7eb, leading to osd disconnections
All checks were successful
Test / test_snapshot_chain_ec (push) Successful in 3m0s
Test / test_rebalance_verify_imm (push) Successful in 4m26s
Test / test_root_node (push) Successful in 12s
Test / test_rebalance_verify (push) Successful in 4m59s
Test / test_rebalance_verify_ec_imm (push) Successful in 3m21s
Test / test_switch_primary (push) Successful in 37s
Test / test_write (push) Successful in 46s
Test / test_write_no_same (push) Successful in 21s
Test / test_write_xor (push) Successful in 1m52s
Test / test_rebalance_verify_ec (push) Successful in 7m1s
Test / test_heal_pg_size_2 (push) Successful in 3m51s
Test / test_heal_csum_32k_dmj (push) Successful in 5m38s
Test / test_heal_csum_32k_dj (push) Successful in 6m30s
Test / test_heal_csum_32k (push) Successful in 6m39s
Test / test_osd_tags (push) Successful in 21s
Test / test_enospc (push) Successful in 2m13s
Test / test_heal_csum_4k_dmj (push) Successful in 6m44s
Test / test_heal_csum_4k_dj (push) Successful in 6m5s
Test / test_enospc_imm (push) Successful in 1m39s
Test / test_enospc_xor (push) Successful in 2m43s
Test / test_scrub (push) Successful in 1m0s
Test / test_scrub_zero_osd_2 (push) Successful in 47s
Test / test_enospc_imm_xor (push) Successful in 1m19s
Test / test_scrub_xor (push) Successful in 33s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 35s
Test / test_scrub_pg_size_3 (push) Successful in 53s
Test / test_nfs (push) Successful in 21s
Test / test_scrub_ec (push) Successful in 29s
Test / test_heal_csum_4k (push) Successful in 9m4s
Test / test_heal_ec (push) Successful in 2m51s
2024-07-13 16:09:56 +03:00
20057defbe Revert 8ad63465cd
Some checks failed
Test / test_snapshot_chain_ec (push) Successful in 4m10s
Test / test_rebalance_verify_ec (push) Failing after 4m22s
Test / test_root_node (push) Successful in 18s
Test / test_switch_primary (push) Successful in 39s
Test / test_rebalance_verify_imm (push) Successful in 7m51s
Test / test_write (push) Successful in 1m19s
Test / test_write_xor (push) Successful in 1m22s
Test / test_write_no_same (push) Successful in 21s
Test / test_rebalance_verify (push) Failing after 10m15s
Test / test_rebalance_verify_ec_imm (push) Successful in 7m51s
Test / test_heal_pg_size_2 (push) Successful in 4m52s
Test / test_heal_csum_32k_dmj (push) Successful in 5m53s
Test / test_heal_ec (push) Successful in 6m41s
Test / test_heal_csum_32k_dj (push) Successful in 6m8s
Test / test_heal_csum_32k (push) Successful in 6m37s
Test / test_osd_tags (push) Successful in 51s
Test / test_heal_csum_4k_dmj (push) Successful in 6m36s
Test / test_heal_csum_4k_dj (push) Successful in 6m43s
Test / test_heal_csum_4k (push) Successful in 6m30s
Test / test_enospc (push) Successful in 1m46s
Test / test_enospc_imm (push) Successful in 54s
Test / test_enospc_xor (push) Successful in 1m30s
Test / test_enospc_imm_xor (push) Successful in 52s
Test / test_scrub (push) Successful in 25s
Test / test_scrub_zero_osd_2 (push) Successful in 29s
Test / test_scrub_xor (push) Successful in 37s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 41s
Test / test_scrub_pg_size_3 (push) Successful in 50s
Test / test_nfs (push) Successful in 18s
Test / test_scrub_ec (push) Successful in 28s
2024-07-13 15:34:34 +03:00
b4e9140755 Add defrag docs, fix trace message
Some checks failed
Test / test_interrupted_rebalance_ec (push) Successful in 6m48s
Test / test_root_node (push) Successful in 50s
Test / test_rebalance_verify_ec (push) Failing after 10m14s
Test / test_switch_primary (push) Successful in 56s
Test / test_write_no_same (push) Successful in 34s
Test / test_write (push) Successful in 1m38s
Test / test_rebalance_verify_ec_imm (push) Failing after 11m54s
Test / test_write_xor (push) Successful in 3m2s
Test / test_heal_pg_size_2 (push) Successful in 4m1s
Test / test_heal_csum_32k_dmj (push) Successful in 5m35s
Test / test_heal_csum_32k_dj (push) Successful in 6m34s
Test / test_heal_csum_32k (push) Successful in 6m44s
Test / test_osd_tags (push) Successful in 32s
Test / test_heal_csum_4k_dmj (push) Successful in 6m3s
Test / test_enospc (push) Successful in 1m56s
Test / test_heal_csum_4k_dj (push) Successful in 5m32s
Test / test_enospc_imm (push) Successful in 1m32s
Test / test_enospc_xor (push) Successful in 2m20s
Test / test_scrub (push) Successful in 1m9s
Test / test_scrub_zero_osd_2 (push) Successful in 38s
Test / test_enospc_imm_xor (push) Successful in 1m49s
Test / test_heal_csum_4k (push) Successful in 6m9s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 45s
Test / test_scrub_pg_size_3 (push) Successful in 55s
Test / test_scrub_ec (push) Successful in 24s
Test / test_nfs (push) Successful in 13s
Test / test_scrub_xor (push) Failing after 3m8s
Test / test_rebalance_verify (push) Successful in 3m35s
Test / test_rebalance_verify_imm (push) Successful in 3m50s
Test / test_heal_ec (push) Successful in 5m22s
2024-07-13 00:45:53 +03:00
413959e75a Prevent infinite loop in NFS - return EIO when an inode points to an incorrect volume position
Some checks failed
Test / test_snapshot_chain_ec (push) Failing after 6m21s
Test / test_rebalance_verify_ec (push) Failing after 5m19s
Test / test_rebalance_verify_imm (push) Successful in 7m19s
Test / test_rebalance_verify (push) Failing after 10m24s
Test / test_root_node (push) Successful in 3m21s
Test / test_switch_primary (push) Successful in 3m24s
Test / test_write_no_same (push) Successful in 41s
Test / test_write (push) Successful in 2m10s
Test / test_write_xor (push) Failing after 3m27s
Test / test_rebalance_verify_ec_imm (push) Successful in 10m32s
Test / test_heal_pg_size_2 (push) Successful in 5m42s
Test / test_heal_ec (push) Successful in 4m45s
Test / test_heal_csum_32k_dmj (push) Successful in 5m36s
Test / test_heal_csum_32k_dj (push) Successful in 6m42s
Test / test_heal_csum_4k_dmj (push) Successful in 6m58s
Test / test_heal_csum_32k (push) Successful in 7m2s
Test / test_osd_tags (push) Successful in 27s
Test / test_enospc (push) Successful in 1m57s
Test / test_enospc_xor (push) Successful in 2m13s
Test / test_heal_csum_4k_dj (push) Successful in 7m8s
Test / test_enospc_imm (push) Successful in 47s
Test / test_scrub (push) Successful in 1m11s
Test / test_scrub_zero_osd_2 (push) Successful in 1m18s
Test / test_enospc_imm_xor (push) Successful in 1m40s
Test / test_heal_csum_4k (push) Successful in 6m3s
Test / test_scrub_xor (push) Successful in 53s
Test / test_scrub_ec (push) Successful in 27s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 38s
Test / test_scrub_pg_size_3 (push) Successful in 56s
Test / test_nfs (push) Successful in 14s
2024-07-12 20:53:54 +03:00
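The fix pattern named in this commit message is to fail fast with EIO instead of retrying when stored metadata is inconsistent. The Go sketch below is a hypothetical illustration of that pattern (the inode layout and names are invented, this is not Vitastor's actual NFS code): a reader that validates an inode's volume position before use returns an error once, instead of looping forever on a position it can never read.

package main

import (
	"errors"
	"fmt"
)

// errIO stands in for an EIO-style error returned instead of retrying forever.
var errIO = errors.New("EIO: inode points outside of its volume")

type inode struct {
	volumePos  uint64 // byte offset of the inode's data inside the volume
	volumeSize uint64
}

// readAt validates the stored position before using it; without this check
// a reader that skips ahead and retries on bad positions can loop forever.
func readAt(ino inode, off uint64) (uint64, error) {
	pos := ino.volumePos + off
	if ino.volumePos >= ino.volumeSize || pos >= ino.volumeSize {
		return 0, errIO // fail fast instead of looping
	}
	return pos, nil
}

func main() {
	ino := inode{volumePos: 1 << 40, volumeSize: 1 << 20} // corrupted position
	if _, err := readAt(ino, 0); err != nil {
		fmt.Println(err)
	}
}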
8973982570 Delete keys from internal state instead of setting them to null on DELETE event in mon
Some checks failed
Test / test_snapshot_chain_ec (push) Failing after 6m9s
Test / test_rebalance_verify (push) Successful in 6m10s
Test / test_interrupted_rebalance_ec (push) Failing after 10m22s
Test / test_root_node (push) Successful in 55s
Test / test_switch_primary (push) Successful in 49s
Test / test_rebalance_verify_ec (push) Failing after 4m50s
Test / test_write (push) Successful in 2m19s
Test / test_write_no_same (push) Successful in 23s
Test / test_write_xor (push) Successful in 3m12s
Test / test_rebalance_verify_ec_imm (push) Successful in 7m11s
Test / test_heal_pg_size_2 (push) Successful in 4m27s
Test / test_heal_ec (push) Successful in 4m42s
Test / test_heal_csum_32k_dmj (push) Successful in 5m47s
Test / test_heal_csum_32k_dj (push) Successful in 6m36s
Test / test_heal_csum_32k (push) Successful in 7m9s
Test / test_heal_csum_4k_dmj (push) Successful in 6m50s
Test / test_osd_tags (push) Successful in 19s
Test / test_enospc (push) Successful in 1m52s
Test / test_enospc_xor (push) Successful in 2m29s
Test / test_heal_csum_4k_dj (push) Successful in 6m58s
Test / test_enospc_imm (push) Successful in 1m37s
Test / test_scrub (push) Successful in 48s
Test / test_enospc_imm_xor (push) Successful in 1m2s
Test / test_scrub_zero_osd_2 (push) Successful in 33s
Test / test_heal_csum_4k (push) Successful in 6m22s
Test / test_scrub_xor (push) Successful in 38s
Test / test_nfs (push) Successful in 21s
Test / test_scrub_ec (push) Successful in 34s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 36s
Test / test_scrub_pg_size_3 (push) Successful in 54s
2024-07-12 16:42:21 +03:00
990c3ba7eb Implement FS defragmentation
Some checks failed
Test / test_snapshot_chain_ec (push) Successful in 4m56s
Test / test_rebalance_verify_ec (push) Failing after 4m35s
Test / test_rebalance_verify_imm (push) Successful in 7m40s
Test / test_root_node (push) Successful in 30s
Test / test_switch_primary (push) Successful in 38s
Test / test_write (push) Successful in 54s
Test / test_write_no_same (push) Successful in 23s
Test / test_write_xor (push) Successful in 1m30s
Test / test_rebalance_verify_ec_imm (push) Successful in 6m21s
Test / test_rebalance_verify (push) Failing after 10m18s
Test / test_heal_pg_size_2 (push) Successful in 6m15s
Test / test_heal_csum_32k_dmj (push) Successful in 5m27s
Test / test_heal_csum_32k_dj (push) Successful in 5m8s
Test / test_heal_ec (push) Successful in 5m34s
Test / test_heal_csum_32k (push) Successful in 6m10s
Test / test_heal_csum_4k_dj (push) Successful in 6m4s
Test / test_heal_csum_4k_dmj (push) Successful in 6m6s
Test / test_heal_csum_4k (push) Successful in 6m2s
Test / test_osd_tags (push) Successful in 15s
Test / test_enospc (push) Successful in 54s
Test / test_enospc_imm (push) Successful in 51s
Test / test_enospc_xor (push) Successful in 58s
Test / test_enospc_imm_xor (push) Successful in 49s
Test / test_scrub_zero_osd_2 (push) Successful in 31s
Test / test_scrub (push) Successful in 34s
Test / test_scrub_xor (push) Successful in 33s
Test / test_nfs (push) Successful in 20s
Test / test_scrub_pg_size_3 (push) Successful in 51s
Test / test_scrub_ec (push) Successful in 27s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 30s
2024-07-12 16:11:35 +03:00
1771d2ef36 Fix READDIR cookie/offset bug
Some checks failed
Test / test_snapshot_chain_ec (push) Successful in 3m1s
Test / test_rebalance_verify_imm (push) Successful in 3m43s
Test / test_root_node (push) Successful in 12s
Test / test_rebalance_verify (push) Successful in 4m19s
Test / test_switch_primary (push) Successful in 42s
Test / test_write (push) Successful in 42s
Test / test_write_no_same (push) Successful in 22s
Test / test_rebalance_verify_ec_imm (push) Successful in 3m7s
Test / test_rebalance_verify_ec (push) Successful in 4m10s
Test / test_write_xor (push) Successful in 1m20s
Test / test_heal_pg_size_2 (push) Successful in 4m59s
Test / test_heal_csum_32k_dmj (push) Successful in 5m30s
Test / test_heal_csum_32k_dj (push) Successful in 5m28s
Test / test_heal_csum_32k (push) Successful in 5m10s
Test / test_heal_ec (push) Failing after 10m10s
Test / test_osd_tags (push) Successful in 13s
Test / test_enospc (push) Successful in 1m10s
Test / test_enospc_xor (push) Successful in 1m35s
Test / test_heal_csum_4k (push) Successful in 3m39s
Test / test_enospc_imm (push) Successful in 41s
Test / test_scrub (push) Successful in 43s
Test / test_enospc_imm_xor (push) Successful in 1m3s
Test / test_scrub_zero_osd_2 (push) Successful in 27s
Test / test_scrub_xor (push) Successful in 32s
Test / test_scrub_pg_size_3 (push) Successful in 50s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 35s
Test / test_heal_csum_4k_dj (push) Successful in 10m2s
Test / test_heal_csum_4k_dmj (push) Failing after 10m24s
Test / test_scrub_ec (push) Successful in 29s
Test / test_nfs (push) Successful in 25s
2024-07-12 16:11:35 +03:00
d88ab76636 Fix active mon stat
Some checks failed
Test / test_snapshot_chain_ec (push) Successful in 3m8s
Test / test_rebalance_verify_imm (push) Successful in 4m20s
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify (push) Successful in 4m59s
Test / test_switch_primary (push) Successful in 36s
Test / test_write (push) Successful in 45s
Test / test_rebalance_verify_ec_imm (push) Successful in 3m30s
Test / test_write_no_same (push) Successful in 20s
Test / test_write_xor (push) Successful in 1m20s
Test / test_rebalance_verify_ec (push) Successful in 7m3s
Test / test_heal_pg_size_2 (push) Successful in 3m49s
Test / test_heal_csum_32k_dmj (push) Successful in 5m11s
Test / test_heal_csum_32k_dj (push) Successful in 5m57s
Test / test_heal_csum_32k (push) Successful in 6m14s
Test / test_heal_ec (push) Failing after 10m14s
Test / test_osd_tags (push) Successful in 48s
Test / test_heal_csum_4k_dmj (push) Successful in 6m24s
Test / test_enospc (push) Successful in 2m20s
Test / test_heal_csum_4k_dj (push) Successful in 5m58s
Test / test_enospc_xor (push) Successful in 2m30s
Test / test_enospc_imm (push) Successful in 1m37s
Test / test_scrub (push) Successful in 48s
Test / test_heal_csum_4k (push) Successful in 5m40s
Test / test_scrub_zero_osd_2 (push) Successful in 44s
Test / test_enospc_imm_xor (push) Successful in 1m0s
Test / test_scrub_xor (push) Successful in 33s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 35s
Test / test_scrub_ec (push) Successful in 33s
Test / test_scrub_pg_size_3 (push) Successful in 43s
Test / test_nfs (push) Successful in 14s
2024-07-11 01:34:59 +03:00
c010a0aa54 Fix OSD "local write" latency sum
Some checks failed
Test / test_rebalance_verify (push) Failing after 2m2s
Test / test_snapshot_chain_ec (push) Successful in 2m45s
Test / test_root_node (push) Successful in 1m10s
Test / test_switch_primary (push) Successful in 1m58s
Test / test_rebalance_verify_imm (push) Successful in 5m10s
Test / test_write (push) Successful in 1m0s
Test / test_write_xor (push) Successful in 59s
Test / test_write_no_same (push) Successful in 16s
Test / test_rebalance_verify_ec_imm (push) Successful in 5m3s
Test / test_heal_pg_size_2 (push) Successful in 3m50s
Test / test_rebalance_verify_ec (push) Successful in 8m57s
Test / test_heal_ec (push) Successful in 4m27s
Test / test_heal_csum_32k (push) Successful in 5m20s
Test / test_heal_csum_4k_dmj (push) Successful in 5m21s
Test / test_heal_csum_32k_dmj (push) Failing after 10m21s
Test / test_osd_tags (push) Successful in 21s
Test / test_enospc (push) Successful in 1m55s
Test / test_heal_csum_32k_dj (push) Failing after 10m21s
Test / test_heal_csum_4k_dj (push) Successful in 4m36s
Test / test_heal_csum_4k (push) Successful in 4m43s
Test / test_enospc_xor (push) Successful in 1m25s
Test / test_enospc_imm (push) Successful in 47s
Test / test_enospc_imm_xor (push) Successful in 56s
Test / test_scrub (push) Successful in 33s
Test / test_scrub_zero_osd_2 (push) Successful in 29s
Test / test_scrub_xor (push) Successful in 29s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 36s
Test / test_nfs (push) Successful in 20s
Test / test_scrub_ec (push) Successful in 34s
Test / test_scrub_pg_size_3 (push) Successful in 43s
2024-07-11 01:30:03 +03:00
0d42712d29 Fix refresh in dashboard variable
All checks were successful
Test / test_snapshot_chain_ec (push) Successful in 3m3s
Test / test_rebalance_verify_imm (push) Successful in 4m17s
Test / test_root_node (push) Successful in 12s
Test / test_rebalance_verify (push) Successful in 4m59s
Test / test_switch_primary (push) Successful in 38s
Test / test_rebalance_verify_ec_imm (push) Successful in 3m22s
Test / test_write (push) Successful in 42s
Test / test_write_no_same (push) Successful in 22s
Test / test_rebalance_verify_ec (push) Successful in 5m52s
Test / test_write_xor (push) Successful in 1m53s
Test / test_heal_pg_size_2 (push) Successful in 3m50s
Test / test_heal_ec (push) Successful in 4m8s
Test / test_heal_csum_32k_dmj (push) Successful in 5m41s
Test / test_heal_csum_32k_dj (push) Successful in 5m40s
Test / test_heal_csum_32k (push) Successful in 6m40s
Test / test_heal_csum_4k_dmj (push) Successful in 6m23s
Test / test_osd_tags (push) Successful in 31s
Test / test_enospc (push) Successful in 1m44s
Test / test_enospc_xor (push) Successful in 2m35s
Test / test_heal_csum_4k_dj (push) Successful in 6m11s
Test / test_enospc_imm (push) Successful in 1m10s
Test / test_heal_csum_4k (push) Successful in 6m17s
Test / test_scrub_zero_osd_2 (push) Successful in 35s
Test / test_scrub (push) Successful in 41s
Test / test_scrub_xor (push) Successful in 34s
Test / test_enospc_imm_xor (push) Successful in 49s
Test / test_nfs (push) Successful in 24s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 32s
Test / test_scrub_ec (push) Successful in 31s
Test / test_scrub_pg_size_3 (push) Successful in 42s
2024-07-11 01:13:02 +03:00
66b438106a Add vitastor-cli pg-list command
All checks were successful
Test / test_snapshot_chain_ec (push) Successful in 2m59s
Test / test_rebalance_verify_imm (push) Successful in 3m25s
Test / test_root_node (push) Successful in 13s
Test / test_rebalance_verify (push) Successful in 4m6s
Test / test_switch_primary (push) Successful in 37s
Test / test_write (push) Successful in 42s
Test / test_write_no_same (push) Successful in 21s
Test / test_rebalance_verify_ec (push) Successful in 3m52s
Test / test_rebalance_verify_ec_imm (push) Successful in 3m2s
Test / test_write_xor (push) Successful in 1m24s
Test / test_heal_pg_size_2 (push) Successful in 4m31s
Test / test_heal_csum_32k_dmj (push) Successful in 4m46s
Test / test_heal_csum_32k_dj (push) Successful in 5m50s
Test / test_heal_csum_4k_dj (push) Successful in 5m1s
Test / test_osd_tags (push) Successful in 19s
Test / test_enospc (push) Successful in 46s
Test / test_enospc_xor (push) Successful in 1m38s
Test / test_heal_csum_4k (push) Successful in 4m7s
Test / test_enospc_imm (push) Successful in 56s
Test / test_enospc_imm_xor (push) Successful in 56s
Test / test_scrub (push) Successful in 28s
Test / test_scrub_zero_osd_2 (push) Successful in 30s
Test / test_scrub_xor (push) Successful in 30s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 34s
Test / test_nfs (push) Successful in 19s
Test / test_scrub_pg_size_3 (push) Successful in 48s
Test / test_scrub_ec (push) Successful in 26s
Test / test_heal_csum_32k (push) Successful in 4m51s
Test / test_heal_csum_4k_dmj (push) Successful in 4m48s
Test / test_heal_ec (push) Successful in 4m46s
2024-07-10 02:27:41 +03:00
3aef6682fb Add vitastor-cli modify-osd command
All checks were successful
Test / test_snapshot_chain_ec (push) Successful in 3m8s
Test / test_rebalance_verify_imm (push) Successful in 3m19s
Test / test_root_node (push) Successful in 12s
Test / test_rebalance_verify (push) Successful in 4m12s
Test / test_switch_primary (push) Successful in 41s
Test / test_write (push) Successful in 43s
Test / test_write_no_same (push) Successful in 20s
Test / test_rebalance_verify_ec_imm (push) Successful in 2m58s
Test / test_write_xor (push) Successful in 1m7s
Test / test_rebalance_verify_ec (push) Successful in 5m0s
Test / test_heal_pg_size_2 (push) Successful in 4m9s
Test / test_heal_csum_32k_dmj (push) Successful in 5m23s
Test / test_heal_ec (push) Successful in 7m37s
Test / test_heal_csum_32k (push) Successful in 5m7s
Test / test_heal_csum_4k_dmj (push) Successful in 5m10s
Test / test_osd_tags (push) Successful in 22s
Test / test_enospc (push) Successful in 2m2s
Test / test_heal_csum_4k_dj (push) Successful in 5m40s
Test / test_enospc_xor (push) Successful in 2m15s
Test / test_enospc_imm (push) Successful in 1m12s
Test / test_heal_csum_4k (push) Successful in 5m30s
Test / test_enospc_imm_xor (push) Successful in 1m16s
Test / test_scrub (push) Successful in 32s
Test / test_scrub_zero_osd_2 (push) Successful in 36s
Test / test_scrub_xor (push) Successful in 34s
Test / test_scrub_pg_size_3 (push) Successful in 39s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 38s
Test / test_nfs (push) Successful in 16s
Test / test_scrub_ec (push) Successful in 22s
Test / test_heal_csum_32k_dj (push) Successful in 3m9s
2024-07-09 16:52:19 +03:00
8535bccf4c Add a note about antietcd dump/load
Some checks failed
Test / test_rebalance_verify_imm (push) Successful in 2m34s
Test / test_rebalance_verify (push) Successful in 4m11s
Test / test_root_node (push) Successful in 12s
Test / test_switch_primary (push) Successful in 38s
Test / test_etcd_fail (push) Failing after 10m8s
Test / test_rebalance_verify_ec (push) Successful in 3m52s
Test / test_write_no_same (push) Successful in 24s
Test / test_rebalance_verify_ec_imm (push) Successful in 3m7s
Test / test_write (push) Successful in 1m12s
Test / test_write_xor (push) Successful in 2m26s
Test / test_heal_ec (push) Successful in 5m28s
Test / test_heal_csum_32k_dj (push) Successful in 6m2s
Test / test_heal_pg_size_2 (push) Failing after 10m20s
Test / test_heal_csum_32k_dmj (push) Failing after 10m20s
Test / test_heal_csum_4k_dj (push) Successful in 4m14s
Test / test_osd_tags (push) Successful in 14s
Test / test_enospc (push) Successful in 41s
Test / test_heal_csum_32k (push) Failing after 10m12s
Test / test_enospc_xor (push) Successful in 58s
Test / test_enospc_imm (push) Successful in 39s
Test / test_scrub (push) Successful in 39s
Test / test_enospc_imm_xor (push) Successful in 59s
Test / test_scrub_zero_osd_2 (push) Successful in 30s
Test / test_scrub_xor (push) Successful in 26s
Test / test_heal_csum_4k_dmj (push) Failing after 10m18s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 37s
Test / test_scrub_pg_size_3 (push) Successful in 54s
Test / test_scrub_ec (push) Successful in 28s
Test / test_nfs (push) Successful in 17s
Test / test_heal_csum_4k (push) Successful in 9m21s
2024-07-09 15:58:03 +03:00
0487b3b239 Add clusterid to Grafana dashboard 2024-07-09 15:58:03 +03:00
a54ef97f5d Add Grafana dashboard link 2024-07-09 15:37:25 +03:00
10434a9b2b Add notes about antietcd to documentation
Some checks failed
Test / test_snapshot_chain_ec (push) Successful in 3m9s
Test / test_rebalance_verify_imm (push) Successful in 5m50s
Test / test_root_node (push) Successful in 12s
Test / test_rebalance_verify (push) Successful in 6m29s
Test / test_switch_primary (push) Successful in 37s
Test / test_write (push) Successful in 45s
Test / test_rebalance_verify_ec_imm (push) Successful in 4m56s
Test / test_write_no_same (push) Successful in 20s
Test / test_write_xor (push) Successful in 1m37s
Test / test_rebalance_verify_ec (push) Successful in 7m58s
Test / test_heal_pg_size_2 (push) Successful in 3m57s
Test / test_heal_csum_32k_dmj (push) Successful in 5m18s
Test / test_heal_csum_32k (push) Successful in 5m55s
Test / test_heal_ec (push) Failing after 10m18s
Test / test_heal_csum_4k_dmj (push) Successful in 5m34s
Test / test_osd_tags (push) Successful in 22s
Test / test_heal_csum_32k_dj (push) Failing after 10m36s
Test / test_enospc (push) Successful in 1m47s
Test / test_heal_csum_4k_dj (push) Successful in 5m23s
Test / test_enospc_xor (push) Successful in 2m19s
Test / test_enospc_imm (push) Successful in 1m40s
Test / test_heal_csum_4k (push) Successful in 5m29s
Test / test_scrub (push) Successful in 46s
Test / test_enospc_imm_xor (push) Successful in 1m3s
Test / test_scrub_zero_osd_2 (push) Successful in 33s
Test / test_scrub_xor (push) Successful in 32s
Test / test_nfs (push) Successful in 19s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 37s
Test / test_scrub_pg_size_3 (push) Successful in 50s
Test / test_scrub_ec (push) Successful in 29s
2024-07-09 15:01:41 +03:00
c6be194508 Implement experimental antietcd-based version of monitor 2024-07-09 13:54:58 +03:00
df668286fb Add Grafana dashboard
Some checks failed
Test / test_snapshot_chain_ec (push) Successful in 2m58s
Test / test_rebalance_verify_imm (push) Successful in 4m27s
Test / test_root_node (push) Successful in 12s
Test / test_rebalance_verify (push) Successful in 5m7s
Test / test_switch_primary (push) Successful in 37s
Test / test_rebalance_verify_ec_imm (push) Successful in 3m24s
Test / test_write (push) Successful in 41s
Test / test_write_no_same (push) Successful in 23s
Test / test_write_xor (push) Successful in 2m2s
Test / test_rebalance_verify_ec (push) Successful in 6m4s
Test / test_heal_ec (push) Successful in 4m6s
Test / test_heal_csum_32k_dmj (push) Successful in 4m40s
Test / test_heal_csum_32k_dj (push) Successful in 5m13s
Test / test_heal_pg_size_2 (push) Failing after 10m31s
Test / test_heal_csum_32k (push) Successful in 6m6s
Test / test_osd_tags (push) Successful in 46s
Test / test_heal_csum_4k_dmj (push) Successful in 5m45s
Test / test_heal_csum_4k_dj (push) Successful in 5m56s
Test / test_enospc (push) Successful in 1m58s
Test / test_enospc_xor (push) Successful in 2m17s
Test / test_enospc_imm (push) Successful in 1m26s
Test / test_enospc_imm_xor (push) Successful in 1m57s
Test / test_scrub_zero_osd_2 (push) Successful in 39s
Test / test_scrub (push) Successful in 44s
Test / test_heal_csum_4k (push) Successful in 5m21s
Test / test_scrub_xor (push) Successful in 40s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 42s
Test / test_nfs (push) Successful in 18s
Test / test_scrub_pg_size_3 (push) Successful in 56s
Test / test_scrub_ec (push) Successful in 25s
2024-07-09 02:39:36 +03:00
667c5999c9 Report all PG states
Some checks failed
Test / test_snapshot_chain_ec (push) Successful in 3m1s
Test / test_rebalance_verify_imm (push) Successful in 6m25s
Test / test_root_node (push) Successful in 14s
Test / test_rebalance_verify (push) Successful in 7m1s
Test / test_switch_primary (push) Successful in 40s
Test / test_write (push) Successful in 43s
Test / test_rebalance_verify_ec_imm (push) Successful in 5m40s
Test / test_write_no_same (push) Successful in 19s
Test / test_write_xor (push) Successful in 1m21s
Test / test_rebalance_verify_ec (push) Successful in 8m11s
Test / test_heal_pg_size_2 (push) Successful in 3m51s
Test / test_heal_csum_32k_dj (push) Successful in 4m49s
Test / test_heal_csum_32k (push) Successful in 4m34s
Test / test_heal_ec (push) Failing after 10m27s
Test / test_heal_csum_4k_dmj (push) Successful in 4m27s
Test / test_heal_csum_32k_dmj (push) Failing after 10m28s
Test / test_osd_tags (push) Successful in 41s
Test / test_heal_csum_4k_dj (push) Successful in 4m33s
Test / test_enospc (push) Successful in 1m41s
Test / test_enospc_xor (push) Successful in 2m20s
Test / test_enospc_imm (push) Successful in 1m28s
Test / test_enospc_imm_xor (push) Successful in 1m54s
Test / test_scrub (push) Successful in 37s
Test / test_scrub_zero_osd_2 (push) Successful in 53s
Test / test_heal_csum_4k (push) Successful in 4m52s
Test / test_scrub_xor (push) Successful in 32s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 29s
Test / test_nfs (push) Successful in 20s
Test / test_scrub_ec (push) Successful in 28s
Test / test_scrub_pg_size_3 (push) Successful in 55s
2024-07-08 19:52:56 +03:00
8ad63465cd Do not wipe previous metrics at moments when difference is 0
Some checks failed
Test / test_snapshot_chain_ec (push) Successful in 2m48s
Test / test_rebalance_verify_imm (push) Successful in 3m24s
Test / test_root_node (push) Successful in 12s
Test / test_rebalance_verify (push) Successful in 4m3s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 41s
Test / test_write_no_same (push) Successful in 20s
Test / test_rebalance_verify_ec_imm (push) Successful in 2m59s
Test / test_write_xor (push) Successful in 1m16s
Test / test_rebalance_verify_ec (push) Successful in 5m58s
Test / test_heal_pg_size_2 (push) Successful in 4m7s
Test / test_heal_ec (push) Successful in 4m3s
Test / test_heal_csum_32k_dmj (push) Successful in 4m43s
Test / test_heal_csum_32k_dj (push) Successful in 6m10s
Test / test_heal_csum_4k_dmj (push) Successful in 6m28s
Test / test_osd_tags (push) Successful in 31s
Test / test_enospc (push) Successful in 56s
Test / test_enospc_xor (push) Successful in 1m21s
Test / test_enospc_imm (push) Successful in 43s
Test / test_heal_csum_32k (push) Failing after 10m22s
Test / test_scrub (push) Successful in 30s
Test / test_enospc_imm_xor (push) Successful in 51s
Test / test_scrub_xor (push) Successful in 31s
Test / test_scrub_zero_osd_2 (push) Successful in 34s
Test / test_heal_csum_4k_dj (push) Successful in 10m6s
Test / test_scrub_ec (push) Successful in 35s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 40s
Test / test_scrub_pg_size_3 (push) Successful in 50s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k (push) Successful in 8m28s
2024-07-08 02:20:12 +03:00
976290e6a9 Implement built-in Prometheus exporter in monitor 2024-07-08 02:20:12 +03:00
79f1d1969b Make immediate_commit=all the default
All checks were successful
Test / test_snapshot_chain_ec (push) Successful in 3m5s
Test / test_rebalance_verify_imm (push) Successful in 3m22s
Test / test_root_node (push) Successful in 10s
Test / test_rebalance_verify (push) Successful in 4m10s
Test / test_switch_primary (push) Successful in 42s
Test / test_write (push) Successful in 42s
Test / test_write_no_same (push) Successful in 19s
Test / test_rebalance_verify_ec_imm (push) Successful in 2m54s
Test / test_write_xor (push) Successful in 1m12s
Test / test_rebalance_verify_ec (push) Successful in 4m58s
Test / test_heal_pg_size_2 (push) Successful in 4m19s
Test / test_heal_ec (push) Successful in 4m39s
Test / test_heal_csum_32k_dmj (push) Successful in 5m29s
Test / test_heal_csum_32k_dj (push) Successful in 6m14s
Test / test_heal_csum_32k (push) Successful in 6m46s
Test / test_heal_csum_4k_dmj (push) Successful in 6m20s
Test / test_osd_tags (push) Successful in 36s
Test / test_enospc (push) Successful in 1m57s
Test / test_heal_csum_4k_dj (push) Successful in 6m59s
Test / test_enospc_xor (push) Successful in 2m16s
Test / test_heal_csum_4k (push) Successful in 6m18s
Test / test_enospc_imm (push) Successful in 1m5s
Test / test_enospc_imm_xor (push) Successful in 1m10s
Test / test_scrub_zero_osd_2 (push) Successful in 31s
Test / test_scrub (push) Successful in 40s
Test / test_scrub_xor (push) Successful in 29s
Test / test_nfs (push) Successful in 19s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 27s
Test / test_scrub_ec (push) Successful in 26s
Test / test_scrub_pg_size_3 (push) Successful in 54s
2024-07-07 11:45:18 +03:00
918e1f83b0 Add JSON output for ls-osd
All checks were successful
Test / test_snapshot_chain_ec (push) Successful in 3m7s
Test / test_rebalance_verify_imm (push) Successful in 3m22s
Test / test_root_node (push) Successful in 2m24s
Test / test_rebalance_verify (push) Successful in 6m17s
Test / test_switch_primary (push) Successful in 37s
Test / test_write (push) Successful in 42s
Test / test_write_no_same (push) Successful in 20s
Test / test_rebalance_verify_ec_imm (push) Successful in 5m3s
Test / test_write_xor (push) Successful in 1m6s
Test / test_rebalance_verify_ec (push) Successful in 6m41s
Test / test_heal_pg_size_2 (push) Successful in 4m9s
Test / test_heal_csum_32k_dmj (push) Successful in 5m27s
Test / test_heal_csum_32k_dj (push) Successful in 5m50s
Test / test_heal_csum_32k (push) Successful in 5m40s
Test / test_osd_tags (push) Successful in 43s
Test / test_heal_csum_4k_dj (push) Successful in 5m19s
Test / test_enospc (push) Successful in 1m19s
Test / test_enospc_imm (push) Successful in 1m7s
Test / test_enospc_xor (push) Successful in 1m44s
Test / test_scrub (push) Successful in 42s
Test / test_heal_csum_4k (push) Successful in 4m47s
Test / test_enospc_imm_xor (push) Successful in 1m3s
Test / test_scrub_zero_osd_2 (push) Successful in 23s
Test / test_scrub_xor (push) Successful in 22s
Test / test_scrub_ec (push) Successful in 38s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 41s
Test / test_scrub_pg_size_3 (push) Successful in 53s
Test / test_nfs (push) Successful in 12s
Test / test_heal_csum_4k_dmj (push) Successful in 3m9s
Test / test_heal_ec (push) Successful in 2m50s
2024-07-07 02:24:36 +03:00
abbba6ade4 Support handling TCP I/O in simple separate io_uring-based I/O threads
Some checks failed
Test / test_snapshot_chain_ec (push) Successful in 3m4s
Test / test_rebalance_verify_imm (push) Successful in 5m52s
Test / test_root_node (push) Successful in 12s
Test / test_rebalance_verify (push) Successful in 6m28s
Test / test_switch_primary (push) Successful in 34s
Test / test_write (push) Successful in 46s
Test / test_rebalance_verify_ec_imm (push) Successful in 5m2s
Test / test_write_no_same (push) Successful in 22s
Test / test_rebalance_verify_ec (push) Successful in 6m58s
Test / test_write_xor (push) Successful in 2m38s
Test / test_heal_pg_size_2 (push) Successful in 3m53s
Test / test_heal_ec (push) Successful in 5m20s
Test / test_heal_csum_32k_dmj (push) Successful in 5m19s
Test / test_heal_csum_32k_dj (push) Successful in 6m38s
Test / test_heal_csum_32k (push) Successful in 7m12s
Test / test_osd_tags (push) Successful in 37s
Test / test_heal_csum_4k_dmj (push) Successful in 6m51s
Test / test_enospc (push) Successful in 1m26s
Test / test_enospc_imm (push) Successful in 1m5s
Test / test_enospc_xor (push) Successful in 1m43s
Test / test_heal_csum_4k (push) Successful in 5m57s
Test / test_scrub (push) Successful in 50s
Test / test_scrub_zero_osd_2 (push) Successful in 29s
Test / test_enospc_imm_xor (push) Successful in 1m14s
Test / test_scrub_xor (push) Successful in 27s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Successful in 39s
Test / test_scrub_ec (push) Successful in 37s
Test / test_scrub_pg_size_3 (push) Successful in 50s
Test / test_nfs (push) Successful in 15s
Test / test_heal_csum_4k_dj (push) Failing after 10m17s
Required mainly for clients: it allows parallel client I/O over TCP to scale
from 100-150k iops to ~400k iops and from 2-3 GB/s to at least 7-8 GB/s
with 4 I/O threads, while increasing Q=1 latency by twice the thread
switching delay, which is ~10 us when CPU powersaving is disabled and may
be as high as 200 us when it's enabled.
2024-07-04 13:29:20 +03:00
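The trade-off quoted in this commit (throughput scaling versus Q=1 latency growing by twice the thread switching delay) presumably follows from each request crossing a thread boundary twice: once to hand the operation to an I/O thread and once to report completion. Vitastor's actual implementation is C++ with io_uring; the Go sketch below, with invented names, only illustrates the hand-off pattern, not the real code.

package main

import (
	"fmt"
	"sync"
)

// op is one queued socket read/write request.
type op struct {
	data []byte
	done chan struct{} // completion wakeup back to the submitting thread
}

// ioThread drains the submission queue and performs I/O on behalf of clients.
// Each request costs two wakeups (submit + complete), which is why Q=1
// latency grows by roughly 2x the thread switching delay.
func ioThread(queue <-chan *op, wg *sync.WaitGroup) {
	defer wg.Done()
	for o := range queue {
		// ... the actual socket read/write would happen here ...
		close(o.done) // wakeup #2: notify the submitter
	}
}

func main() {
	queue := make(chan *op, 128)
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // 4 I/O threads, as in the quoted numbers
		wg.Add(1)
		go ioThread(queue, &wg)
	}
	o := &op{data: make([]byte, 4096), done: make(chan struct{})}
	queue <- o // wakeup #1: hand the op to an I/O thread
	<-o.done
	fmt.Println("op completed")
	close(queue)
	wg.Wait()
}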
b85dab8583 use fio 3.35-1 for AlmaLinux 9 2024-05-18 21:17:16 +03:00
348 changed files with 22145 additions and 3709 deletions

View File

@@ -22,7 +22,7 @@ RUN apt-get update
RUN apt-get -y install etcd qemu-system-x86 qemu-block-extra qemu-utils fio libasan5 \
liburing1 liburing-dev libgoogle-perftools-dev devscripts libjerasure-dev cmake libibverbs-dev libisal-dev
RUN apt-get -y build-dep fio qemu=`dpkg -s qemu-system-x86|grep ^Version:|awk '{print $2}'`
RUN apt-get -y install jq lp-solve sudo nfs-common
RUN apt-get update && apt-get -y install jq lp-solve sudo nfs-common fdisk parted
RUN apt-get --download-only source fio qemu=`dpkg -s qemu-system-x86|grep ^Version:|awk '{print $2}'`
RUN set -ex; \

View File

@@ -16,6 +16,7 @@ env:
BUILDENV_IMAGE: git.yourcmc.ru/vitalif/vitastor/buildenv
TEST_IMAGE: git.yourcmc.ru/vitalif/vitastor/test
OSD_ARGS: '--etcd_quick_timeout 2000'
USE_RAMDISK: 1
concurrency:
group: ci-${{ github.ref }}
@@ -197,6 +198,24 @@ jobs:
echo ""
done
test_etcd_fail_antietcd:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 10
run: ANTIETCD=1 /root/vitastor/tests/test_etcd_fail.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_interrupted_rebalance:
runs-on: ubuntu-latest
needs: build
@@ -269,6 +288,24 @@ jobs:
echo ""
done
test_create_halfhost:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_create_halfhost.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_failure_domain:
runs-on: ubuntu-latest
needs: build
@@ -377,6 +414,24 @@ jobs:
echo ""
done
test_rm_degraded:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_rm_degraded.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_snapshot_chain:
runs-on: ubuntu-latest
needs: build
@@ -539,6 +594,24 @@ jobs:
echo ""
done
test_dd:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_dd.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_root_node:
runs-on: ubuntu-latest
needs: build
@@ -665,6 +738,24 @@ jobs:
echo ""
done
test_heal_antietcd:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 10
run: ANTIETCD=1 /root/vitastor/tests/test_heal.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_heal_csum_32k_dmj:
runs-on: ubuntu-latest
needs: build
@@ -773,6 +864,60 @@ jobs:
echo ""
done
test_resize:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_resize.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_resize_auto:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_resize_auto.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_snapshot_pool2:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_snapshot_pool2.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_osd_tags:
runs-on: ubuntu-latest
needs: build

View File

@@ -34,6 +34,10 @@ for my $line (<>)
{
$test_name .= '_imm';
}
elsif ($1 eq 'ANTIETCD')
{
$test_name .= '_antietcd';
}
else
{
$test_name .= '_'.lc($1).'_'.$2;

View File

@@ -2,6 +2,6 @@ cmake_minimum_required(VERSION 2.8.12)
project(vitastor)
set(VERSION "1.6.1")
set(VITASTOR_VERSION "1.11.0")
add_subdirectory(src)

View File

@@ -1,4 +1,4 @@
## Vitastor
# Vitastor
[Read English version](README.md)
@@ -19,10 +19,10 @@ Vitastor is aimed primarily at SSD and SSD+HDD cl
TCP and RDMA and on good hardware can achieve 4 KB read and write latency at the ~0.1 ms level,
which is roughly 10 times faster than Ceph and other popular software-defined storage systems.
Vitastor supports a QEMU driver, NBD and NFS protocols, and OpenStack, Proxmox, Kubernetes drivers.
Vitastor supports a QEMU driver, NBD and NFS protocols, and OpenStack, OpenNebula, Proxmox, Kubernetes drivers.
Other drivers can also be implemented easily.
See the documentation links below for details.
See the documentation links for details. You can start from here: [Quick Start](docs/intro/quickstart.ru.md).
## Talks and presentations
@@ -41,7 +41,9 @@ Vitastor supports a QEMU driver, NBD and NFS protocols, and
- [Author and license](docs/intro/author.ru.md)
- Installation
- [Packages](docs/installation/packages.ru.md)
- [Docker](docs/installation/docker.ru.md)
- [Proxmox](docs/installation/proxmox.ru.md)
- [OpenNebula](docs/installation/opennebula.ru.md)
- [OpenStack](docs/installation/openstack.ru.md)
- [Kubernetes CSI](docs/installation/kubernetes.ru.md)
- [Building from source](docs/installation/source.ru.md)
@@ -50,7 +52,7 @@ Vitastor supports a QEMU driver, NBD and NFS protocols, and
- Parameters
- [Common](docs/config/common.ru.md)
- [Network](docs/config/network.ru.md)
- [Client code](docs/config/client.en.md)
- [Client code](docs/config/client.ru.md)
- [Global disk layout parameters](docs/config/layout-cluster.ru.md)
- [OSD disk layout parameters](docs/config/layout-osd.ru.md)
- [Other OSD parameters](docs/config/osd.ru.md)

View File

@@ -19,10 +19,10 @@ supports TCP and RDMA and may achieve 4 KB read and write latency as low as ~0.1
with proper hardware which is ~10 times faster than other popular SDS's like Ceph
or internal systems of public clouds.
Vitastor supports QEMU, NBD, NFS protocols, OpenStack, Proxmox, Kubernetes drivers.
Vitastor supports QEMU, NBD, NFS protocols, OpenStack, OpenNebula, Proxmox, Kubernetes drivers.
More drivers may be created easily.
Read more details below in the documentation.
Read more details in the documentation. You can start from here: [Quick Start](docs/intro/quickstart.en.md).
## Talks and presentations
@@ -41,7 +41,9 @@ Read more details below in the documentation.
- [Author and license](docs/intro/author.en.md)
- Installation
- [Packages](docs/installation/packages.en.md)
- [Docker](docs/installation/docker.en.md)
- [Proxmox](docs/installation/proxmox.en.md)
- [OpenNebula](docs/installation/opennebula.en.md)
- [OpenStack](docs/installation/openstack.en.md)
- [Kubernetes CSI](docs/installation/kubernetes.en.md)
- [Building from Source](docs/installation/source.en.md)

View File

@@ -22,6 +22,8 @@ RUN apt-get update && \
(echo "APT::Install-Recommends false;" > /etc/apt/apt.conf) && \
apt-get update && \
apt-get install -y e2fsprogs xfsprogs kmod iproute2 \
# NFS mount dependencies
nfs-common netbase \
# dependencies of qemu-storage-daemon
libnuma1 liburing2 libglib2.0-0 libfuse3-3 libaio1 libzstd1 libnettle8 \
libgmp10 libhogweed6 libp11-kit0 libidn2-0 libunistring2 libtasn1-6 libpcre2-8-0 libffi8 && \

View File

@@ -1,9 +1,9 @@
VERSION ?= v1.6.1
VITASTOR_VERSION ?= v1.11.0
all: build push
build:
@docker build --rm -t vitalif/vitastor-csi:$(VERSION) .
@docker build --rm -t vitalif/vitastor-csi:$(VITASTOR_VERSION) .
push:
@docker push vitalif/vitastor-csi:$(VERSION)
@docker push vitalif/vitastor-csi:$(VITASTOR_VERSION)

View File

@@ -49,7 +49,7 @@ spec:
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
image: vitalif/vitastor-csi:v1.6.1
image: vitalif/vitastor-csi:v1.11.0
args:
- "--node=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"

View File

@@ -121,7 +121,7 @@ spec:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
image: vitalif/vitastor-csi:v1.6.1
image: vitalif/vitastor-csi:v1.11.0
args:
- "--node=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"

View File

@@ -9,8 +9,16 @@ metadata:
provisioner: csi.vitastor.io
volumeBindingMode: Immediate
parameters:
etcdVolumePrefix: ""
poolId: "1"
# CSI driver can create block-based volumes and VitastorFS-based volumes
# only VitastorFS-based volumes and raw block volumes (without FS) support ReadWriteMany mode
# set this parameter to VitastorFS metadata volume name to use VitastorFS
# if unset, block-based volumes will be created
vitastorfs: ""
# for block-based storage classes, pool ID may be either a string (name) or a number (ID)
# for vitastorFS-based storage classes it must be a string - name of the default pool for FS data
poolId: "testpool"
# volume name prefix for block-based storage classes or NFS subdirectory (including /) for FS-based volumes
volumePrefix: ""
# you can choose other configuration file if you have it in the config map
# different etcd URLs and prefixes should also be put in the config
#configPath: "/etc/vitastor/vitastor.conf"

View File

@@ -0,0 +1,25 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
namespace: vitastor-system
name: vitastor
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vitastor.io
volumeBindingMode: Immediate
parameters:
# CSI driver can create block-based volumes and VitastorFS-based volumes
# only VitastorFS-based volumes and raw block volumes (without FS) support ReadWriteMany mode
# set this parameter to VitastorFS metadata volume name to use VitastorFS
# if unset, block-based volumes will be created
vitastorfs: "testfs"
# for block-based storage classes, pool ID may be either a string (name) or a number (ID)
# for vitastorFS-based storage classes it must be a string - name of the default pool for FS data
poolId: "testpool"
# volume name prefix for block-based storage classes or NFS subdirectory (including /) for FS-based volumes
volumePrefix: "k8s/"
# you can choose other configuration file if you have it in the config map
# different etcd URLs and prefixes should also be put in the config
#configPath: "/etc/vitastor/vitastor.conf"
allowVolumeExpansion: true
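As a hypothetical usage sketch for the manifest above (the class name vitastor comes from that example; the claim name below is made up): per the comments in the storage class, only a VitastorFS-backed class like this one can satisfy a ReadWriteMany claim for filesystem volumes, so a PVC against it could look like this.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: vitastor-system
  name: test-vitastorfs-pvc
spec:
  storageClassName: vitastor
  accessModes:
    - ReadWriteMany   # allowed because the class is VitastorFS-based
  resources:
    requests:
      storage: 10Gi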

View File

@@ -3,10 +3,10 @@ module vitastor.io/csi
go 1.15
require (
github.com/container-storage-interface/spec v1.4.0
github.com/container-storage-interface/spec v1.8.0
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b
github.com/kubernetes-csi/csi-lib-utils v0.9.1
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb
golang.org/x/net v0.7.0
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/grpc v1.33.1
google.golang.org/protobuf v1.24.0

View File

@@ -41,8 +41,8 @@ github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWR
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/container-storage-interface/spec v1.2.0/go.mod h1:6URME8mwIBbpVyZV93Ce5St17xBiQJQY67NDsuohiy4=
github.com/container-storage-interface/spec v1.4.0 h1:ozAshSKxpJnYUfmkpZCTYyF/4MYeYlhdXbAvPvfGmkg=
github.com/container-storage-interface/spec v1.4.0/go.mod h1:6URME8mwIBbpVyZV93Ce5St17xBiQJQY67NDsuohiy4=
github.com/container-storage-interface/spec v1.8.0 h1:D0vhF3PLIZwlwZEf2eNbpujGCNwspwTYf2idJRJx4xI=
github.com/container-storage-interface/spec v1.8.0/go.mod h1:ROLik+GhPslwwWRNFF1KasPzroNARibH2rfz1rkg4H0=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -182,6 +182,7 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
@@ -195,6 +196,7 @@ golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -213,6 +215,7 @@ golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCc
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -228,8 +231,10 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb h1:eBmm0M9fYhWpKZLjQUUKka/LtIxf46G4fxeEz5KJr9U=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -240,6 +245,7 @@ golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -259,13 +265,22 @@ golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f h1:+Nyd8tzPX9R7BWHguqsrbFdRx3WQ/1ib8I44HXV5yTA=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -286,8 +301,10 @@ golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgw
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

View File

@@ -5,7 +5,7 @@ package vitastor
const (
vitastorCSIDriverName = "csi.vitastor.io"
vitastorCSIDriverVersion = "1.6.1"
vitastorCSIDriverVersion = "1.11.0"
)
// Config struct fills the parameters of request or user input

View File

@@ -8,11 +8,8 @@ import (
"encoding/json"
"fmt"
"strings"
"bytes"
"strconv"
"time"
"os"
"os/exec"
"io/ioutil"
"github.com/kubernetes-csi/csi-lib-utils/protosanitizer"
@@ -70,9 +67,10 @@ func GetConnectionParams(params map[string]string) (map[string]string, error)
{
configPath = "/etc/vitastor/vitastor.conf"
}
else
ctxVars["configPath"] = configPath
if (params["vitastorfs"] != "")
{
ctxVars["configPath"] = configPath
ctxVars["vitastorfs"] = params["vitastorfs"]
}
config := make(map[string]interface{})
configFD, err := os.Open(configPath)
@@ -114,22 +112,6 @@ func GetConnectionParams(params map[string]string) (map[string]string, error)
return ctxVars, nil
}
func system(program string, args ...string) ([]byte, []byte, error)
{
klog.Infof("Running "+program+" "+strings.Join(args, " "))
c := exec.Command(program, args...)
var stdout, stderr bytes.Buffer
c.Stdout, c.Stderr = &stdout, &stderr
err := c.Run()
if (err != nil)
{
stdoutStr, stderrStr := string(stdout.Bytes()), string(stderr.Bytes())
klog.Errorf(program+" "+strings.Join(args, " ")+" failed: %s, status %s\n", stdoutStr+stderrStr, err)
return nil, nil, status.Error(codes.Internal, stdoutStr+stderrStr+" (status "+err.Error()+")")
}
return stdout.Bytes(), stderr.Bytes(), nil
}
func invokeCLI(ctxVars map[string]string, args []string) ([]byte, error)
{
if (ctxVars["configPath"] != "")
@@ -158,27 +140,57 @@ func (cs *ControllerServer) CreateVolume(ctx context.Context, req *csi.CreateVol
return nil, status.Error(codes.InvalidArgument, "volume capabilities is a required field")
}
etcdVolumePrefix := req.Parameters["etcdVolumePrefix"]
poolId, _ := strconv.ParseUint(req.Parameters["poolId"], 10, 64)
if (poolId == 0)
{
return nil, status.Error(codes.InvalidArgument, "poolId is missing in storage class configuration")
}
volName := etcdVolumePrefix + req.GetName()
volSize := 1 * GB
if capRange := req.GetCapacityRange(); capRange != nil
{
volSize = ((capRange.GetRequiredBytes() + MB - 1) / MB) * MB
}
ctxVars, err := GetConnectionParams(req.Parameters)
if (err != nil)
{
return nil, err
}
args := []string{ "create", volName, "-s", fmt.Sprintf("%v", volSize), "--pool", fmt.Sprintf("%v", poolId) }
err = cs.checkCaps(volumeCapabilities, ctxVars["vitastorfs"] != "")
if (err != nil)
{
return nil, err
}
pool := req.Parameters["poolId"]
if (pool == "")
{
return nil, status.Error(codes.InvalidArgument, "poolId is missing in storage class configuration")
}
volumePrefix := req.Parameters["volumePrefix"]
if (volumePrefix == "")
{
// Old name
volumePrefix = req.Parameters["etcdVolumePrefix"]
}
volName := volumePrefix + req.GetName()
volSize := 1 * GB
if capRange := req.GetCapacityRange(); capRange != nil
{
volSize = ((capRange.GetRequiredBytes() + MB - 1) / MB) * MB
}
if (ctxVars["vitastorfs"] != "")
{
// Nothing to create, subdirectories are created during mounting
// FIXME: It would be cool to support quotas some day and set it here
if (req.VolumeContentSource.GetSnapshot() != nil)
{
return nil, status.Error(codes.InvalidArgument, "VitastorFS doesn't support snapshots")
}
ctxVars["name"] = volName
ctxVars["pool"] = pool
volumeIdJson, _ := json.Marshal(ctxVars)
return &csi.CreateVolumeResponse{
Volume: &csi.Volume{
// Ugly, but VolumeContext isn't passed to DeleteVolume :-(
VolumeId: string(volumeIdJson),
CapacityBytes: volSize,
},
}, nil
}
args := []string{ "create", volName, "-s", fmt.Sprintf("%v", volSize), "--pool", pool }
// Support creation from snapshot
var src *csi.VolumeContentSource
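A standalone sketch (not part of the patch; all values are hypothetical placeholders) of the volume-ID round trip used here — ctxVars serialized to JSON because, as the comment above notes, VolumeContext isn't passed to DeleteVolume:
```
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Encode connection/volume parameters into the CSI volume ID,
	// mirroring how CreateVolume marshals ctxVars above.
	ctxVars := map[string]string{
		"configPath": "/etc/vitastor/vitastor.conf",
		"vitastorfs": "testfs",  // set only for VitastorFS-backed volumes
		"name":       "pvc-123", // volumePrefix + req.GetName()
		"pool":       "1",
	}
	volumeID, _ := json.Marshal(ctxVars)

	// DeleteVolume and the node calls later recover the same map from the ID.
	decoded := make(map[string]string)
	if err := json.Unmarshal(volumeID, &decoded); err != nil {
		panic(err) // would surface as "volume ID not in JSON format"
	}
	fmt.Println(decoded["name"], decoded["vitastorfs"])
}
```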
@@ -261,6 +273,12 @@ func (cs *ControllerServer) DeleteVolume(ctx context.Context, req *csi.DeleteVol
return nil, err
}
if (ctxVars["vitastorfs"] != "")
{
// FIXME: Delete FS subdirectory
return &csi.DeleteVolumeResponse{}, nil
}
_, err = invokeCLI(ctxVars, []string{ "rm", volName })
if (err != nil)
{
@@ -295,19 +313,72 @@ func (cs *ControllerServer) ValidateVolumeCapabilities(ctx context.Context, req
{
return nil, status.Error(codes.InvalidArgument, "volumeId is nil")
}
volVars := make(map[string]string)
err := json.Unmarshal([]byte(volumeID), &volVars)
if (err != nil)
{
return nil, status.Error(codes.Internal, "volume ID not in JSON format")
}
ctxVars, err := GetConnectionParams(volVars)
if (err != nil)
{
return nil, err
}
volumeCapabilities := req.GetVolumeCapabilities()
if (volumeCapabilities == nil)
{
return nil, status.Error(codes.InvalidArgument, "volumeCapabilities is nil")
}
err = cs.checkCaps(volumeCapabilities, ctxVars["vitastorfs"] != "")
if (err != nil)
{
return nil, err
}
return &csi.ValidateVolumeCapabilitiesResponse{
Confirmed: &csi.ValidateVolumeCapabilitiesResponse_Confirmed{
VolumeCapabilities: req.VolumeCapabilities,
},
}, nil
}
func (cs *ControllerServer) checkCaps(volumeCapabilities []*csi.VolumeCapability, fs bool) error
{
var volumeCapabilityAccessModes []*csi.VolumeCapability_AccessMode
for _, mode := range []csi.VolumeCapability_AccessMode_Mode{
csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER,
csi.VolumeCapability_AccessMode_SINGLE_NODE_READER_ONLY,
csi.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY,
csi.VolumeCapability_AccessMode_SINGLE_NODE_SINGLE_WRITER,
csi.VolumeCapability_AccessMode_SINGLE_NODE_MULTI_WRITER,
} {
volumeCapabilityAccessModes = append(volumeCapabilityAccessModes, &csi.VolumeCapability_AccessMode{Mode: mode})
}
for _, capability := range volumeCapabilities
{
if (capability.GetBlock() != nil)
{
if (fs)
{
return status.Errorf(codes.InvalidArgument, "%v not supported with FS-based volumes", capability)
}
for _, mode := range []csi.VolumeCapability_AccessMode_Mode{
csi.VolumeCapability_AccessMode_MULTI_NODE_SINGLE_WRITER,
csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER,
} {
volumeCapabilityAccessModes = append(volumeCapabilityAccessModes, &csi.VolumeCapability_AccessMode{Mode: mode})
}
break
}
}
if (fs)
{
// All access modes including RWX are supported with FS-based volumes
return nil
}
capabilitySupport := false
for _, capability := range volumeCapabilities
@@ -323,14 +394,10 @@ func (cs *ControllerServer) ValidateVolumeCapabilities(ctx context.Context, req
if (!capabilitySupport)
{
return nil, status.Errorf(codes.NotFound, "%v not supported", req.GetVolumeCapabilities())
return status.Errorf(codes.InvalidArgument, "%v not supported", volumeCapabilities)
}
return &csi.ValidateVolumeCapabilitiesResponse{
Confirmed: &csi.ValidateVolumeCapabilitiesResponse_Confirmed{
VolumeCapabilities: req.VolumeCapabilities,
},
}, nil
return nil
}
// ListVolumes returns a list of volumes
@@ -419,6 +486,12 @@ func (cs *ControllerServer) CreateSnapshot(ctx context.Context, req *csi.CreateS
{
return nil, status.Error(codes.Internal, "volume ID not in JSON format")
}
if (ctxVars["vitastorfs"] != "")
{
return nil, status.Error(codes.InvalidArgument, "VitastorFS doesn't support snapshots")
}
volName := ctxVars["name"]
// Create image using vitastor-cli
@@ -477,6 +550,11 @@ func (cs *ControllerServer) DeleteSnapshot(ctx context.Context, req *csi.DeleteS
return nil, err
}
if (ctxVars["vitastorfs"] != "")
{
return nil, status.Error(codes.InvalidArgument, "VitastorFS doesn't support snapshots")
}
_, err = invokeCLI(ctxVars, []string{ "rm", volName+"@"+snapName })
if (err != nil)
{
@@ -508,6 +586,11 @@ func (cs *ControllerServer) ListSnapshots(ctx context.Context, req *csi.ListSnap
return nil, err
}
if (ctxVars["vitastorfs"] != "")
{
return nil, status.Error(codes.InvalidArgument, "VitastorFS doesn't support snapshots")
}
inodeCfg, err := invokeList(ctxVars, volName+"@*", false)
if (err != nil)
{
@@ -571,6 +654,16 @@ func (cs *ControllerServer) ControllerExpandVolume(ctx context.Context, req *csi
return nil, err
}
if (ctxVars["vitastorfs"] != "")
{
// Nothing to change
// FIXME: Support quotas and change quota here
return &csi.ControllerExpandVolumeResponse{
CapacityBytes: req.CapacityRange.RequiredBytes,
NodeExpansionRequired: false,
}, nil
}
inodeCfg, err := invokeList(ctxVars, volName, true)
if (err != nil)
{

View File

@@ -5,11 +5,15 @@ package vitastor
import (
"context"
"crypto/sha1"
"encoding/hex"
"encoding/json"
"fmt"
"os"
"os/exec"
"path/filepath"
"regexp"
"strconv"
"strings"
"sync"
"syscall"
@@ -29,13 +33,14 @@ import (
type NodeServer struct
{
*Driver
useVduse bool
stateDir string
mounter mount.Interface
useVduse bool
stateDir string
nfsStageDir string
mounter mount.Interface
restartInterval time.Duration
mu sync.Mutex
cond *sync.Cond
volumeLocks map[string]bool
mu sync.Mutex
cond *sync.Cond
volumeLocks map[string]bool
}
type DeviceState struct
@@ -48,6 +53,15 @@ type DeviceState struct
PidFile string `json:"pidFile"`
}
type NfsState struct
{
ConfigPath string `json:"configPath"`
FsName string `json:"fsName"`
Pool string `json:"pool"`
Path string `json:"path"`
Port int `json:"port"`
}
// NewNodeServer create new instance node
func NewNodeServer(driver *Driver) *NodeServer
{
@@ -60,11 +74,17 @@ func NewNodeServer(driver *Driver) *NodeServer
{
stateDir += "/"
}
nfsStageDir := os.Getenv("NFS_STAGE_DIR")
if (nfsStageDir == "")
{
nfsStageDir = "/var/lib/kubelet/plugins/csi.vitastor.io/nfs"
}
ns := &NodeServer{
Driver: driver,
useVduse: checkVduseSupport(),
stateDir: stateDir,
mounter: mount.New(""),
Driver: driver,
useVduse: checkVduseSupport(),
stateDir: stateDir,
nfsStageDir: nfsStageDir,
mounter: mount.New(""),
volumeLocks: make(map[string]bool),
}
ns.cond = sync.NewCond(&ns.mu)
@@ -123,12 +143,12 @@ func (ns *NodeServer) restarter()
func (ns *NodeServer) restoreVduseDaemons()
{
pattern := ns.stateDir+"vitastor-vduse-*.json"
matches, err := filepath.Glob(pattern)
stateFiles, err := filepath.Glob(pattern)
if (err != nil)
{
klog.Errorf("failed to list %s: %v", pattern, err)
}
if (len(matches) == 0)
if (len(stateFiles) == 0)
{
return
}
@@ -146,59 +166,162 @@ func (ns *NodeServer) restoreVduseDaemons()
klog.Errorf("/sbin/vdpa -j dev list returned bad JSON (error %v): %v", err, string(devListJSON))
return
}
for _, stateFile := range matches
for _, stateFile := range stateFiles
{
vdpaId := filepath.Base(stateFile)
vdpaId = vdpaId[0:len(vdpaId)-5]
// Check if VDPA device is still added to the bus
if (devs[vdpaId] == nil)
{
// Unused, clean it up
unmapVduseById(ns.stateDir, vdpaId)
continue
}
ns.checkVduseState(stateFile, devs)
}
}
stateJSON, err := os.ReadFile(stateFile)
func (ns *NodeServer) checkVduseState(stateFile string, devs map[string]interface{})
{
// Check if VDPA device is still added to the bus
vdpaId := filepath.Base(stateFile)
vdpaId = vdpaId[0:len(vdpaId)-5]
if (devs[vdpaId] == nil)
{
// Unused, clean it up
unmapVduseById(ns.stateDir, vdpaId)
return
}
// Read state file
stateJSON, err := os.ReadFile(stateFile)
if (err != nil)
{
klog.Warningf("error reading state file %v: %v", stateFile, err)
return
}
var state DeviceState
err = json.Unmarshal(stateJSON, &state)
if (err != nil)
{
klog.Warningf("state file %v contains invalid JSON (error %v): %v", stateFile, err, string(stateJSON))
return
}
// Lock volume
ns.lockVolume(state.ConfigPath+":block:"+state.Image)
defer ns.unlockVolume(state.ConfigPath+":block:"+state.Image)
// Recheck state file after locking
_, err = os.ReadFile(stateFile)
if (err != nil)
{
klog.Warningf("state file %v disappeared, skipping volume", stateFile)
return
}
// Check if the storage daemon is still active
pidFile := ns.stateDir + vdpaId + ".pid"
exists := false
proc, err := findByPidFile(pidFile)
if (err == nil)
{
exists = proc.Signal(syscall.Signal(0)) == nil
}
if (!exists)
{
// Restart daemon
klog.Warningf("restarting storage daemon for volume %v (VDPA ID %v)", state.Image, vdpaId)
err = startStorageDaemon(vdpaId, state.Image, pidFile, state.ConfigPath, state.Readonly)
if (err != nil)
{
klog.Warningf("error reading state file %v: %v", stateFile, err)
continue
klog.Warningf("failed to restart storage daemon for volume %v: %v", state.Image, err)
}
var state DeviceState
err = json.Unmarshal(stateJSON, &state)
}
}
func (ns *NodeServer) restoreNfsDaemons()
{
pattern := ns.stateDir+"vitastor-nfs-*.json"
stateFiles, err := filepath.Glob(pattern)
if (err != nil)
{
klog.Errorf("failed to list %s: %v", pattern, err)
}
if (len(stateFiles) == 0)
{
return
}
activeNFS, err := ns.listActiveNFS()
if (err != nil)
{
return
}
// Check all state files and try to restore active mounts
for _, stateFile := range stateFiles
{
ns.checkNfsState(stateFile, activeNFS)
}
}
func (ns *NodeServer) readNfsState(stateFile string, allowNotExists bool) (*NfsState, error)
{
stateJSON, err := os.ReadFile(stateFile)
if (err != nil)
{
if (allowNotExists && os.IsNotExist(err))
{
return nil, nil
}
klog.Warningf("error reading state file %v: %v", stateFile, err)
return nil, err
}
var state NfsState
err = json.Unmarshal(stateJSON, &state)
if (err != nil)
{
klog.Warningf("state file %v contains invalid JSON (error %v): %v", stateFile, err, string(stateJSON))
return nil, err
}
return &state, nil
}
func (ns *NodeServer) checkNfsState(stateFile string, activeNfs map[int][]string)
{
// Read state file
state, err := ns.readNfsState(stateFile, false)
if (err != nil)
{
return
}
// Lock FS
ns.lockVolume(state.ConfigPath+":fs:"+state.FsName)
defer ns.unlockVolume(state.ConfigPath+":fs:"+state.FsName)
// Check if NFS at this port is still mounted
pidFile := ns.stateDir + filepath.Base(stateFile)
pidFile = pidFile[0:len(pidFile)-5] + ".pid"
if (len(activeNfs[state.Port]) == 0)
{
// this is a stale state file, remove it
klog.Warningf("state file %v contains stale mount at port %d, removing it", stateFile, state.Port)
ns.stopNFS(stateFile, pidFile)
return
}
// Check PID file
exists := false
proc, err := findByPidFile(pidFile)
if (err == nil)
{
exists = proc.Signal(syscall.Signal(0)) == nil
}
if (!exists)
{
// Restart vitastor-nfs server
klog.Warningf("restarting NFS server for FS %v at port %v", state.FsName, state.Port)
_, _, err := system(
"/usr/bin/vitastor-nfs", "start",
"--pidfile", pidFile,
"--bind", "127.0.0.1",
"--port", fmt.Sprintf("%d", state.Port),
"--fs", state.FsName,
"--pool", state.Pool,
"--portmap", "0",
)
if (err != nil)
{
klog.Warningf("state file %v contains invalid JSON (error %v): %v", stateFile, err, string(stateJSON))
continue
klog.Warningf("failed to restart NFS server for FS %v: %v", state.FsName, err)
}
ns.lockVolume(state.ConfigPath+":"+state.Image)
// Recheck state file after locking
_, err = os.ReadFile(stateFile)
if (err != nil)
{
klog.Warningf("state file %v disappeared, skipping volume", stateFile)
ns.unlockVolume(state.ConfigPath+":"+state.Image)
continue
}
// Check if the storage daemon is still active
pidFile := ns.stateDir + vdpaId + ".pid"
exists := false
proc, err := findByPidFile(pidFile)
if (err == nil)
{
exists = proc.Signal(syscall.Signal(0)) == nil
}
if (!exists)
{
// Restart daemon
klog.Warningf("restarting storage daemon for volume %v (VDPA ID %v)", state.Image, vdpaId)
_ = startStorageDaemon(vdpaId, state.Image, pidFile, state.ConfigPath, state.Readonly)
}
ns.unlockVolume(state.ConfigPath+":"+state.Image)
}
}
@@ -220,14 +343,44 @@ func (ns *NodeServer) NodeStageVolume(ctx context.Context, req *csi.NodeStageVol
}
volName := ctxVars["name"]
ns.lockVolume(ctxVars["configPath"]+":"+volName)
defer ns.unlockVolume(ctxVars["configPath"]+":"+volName)
if (ctxVars["vitastorfs"] != "")
{
return &csi.NodeStageVolumeResponse{}, nil
}
ns.lockVolume(ctxVars["configPath"]+":block:"+volName)
defer ns.unlockVolume(ctxVars["configPath"]+":block:"+volName)
targetPath := req.GetStagingTargetPath()
isBlock := req.GetVolumeCapability().GetBlock() != nil
// Check that it's not already mounted
_, err = mount.IsNotMountPoint(ns.mounter, targetPath)
notmnt, err := mount.IsNotMountPoint(ns.mounter, targetPath)
if (err == nil)
{
if (!notmnt)
{
klog.Errorf("target path %s is already mounted", targetPath)
return nil, fmt.Errorf("target path %s is already mounted", targetPath)
}
var finfo os.FileInfo
finfo, err = os.Stat(targetPath)
if (err != nil)
{
klog.Errorf("failed to stat %s: %v", targetPath, err)
return nil, err
}
if (finfo.IsDir() != (!isBlock))
{
err = os.Remove(targetPath)
if (err != nil)
{
klog.Errorf("failed to remove %s (to recreate it with correct type): %v", targetPath, err)
return nil, err
}
err = os.ErrNotExist
}
}
if (err != nil)
{
if (os.IsNotExist(err))
@@ -280,6 +433,7 @@ func (ns *NodeServer) NodeStageVolume(ctx context.Context, req *csi.NodeStageVol
diskMounter := &mount.SafeFormatAndMount{Interface: ns.mounter, Exec: utilexec.New()}
if (isBlock)
{
klog.Infof("bind-mounting %s to %s", devicePath, targetPath)
err = diskMounter.Mount(devicePath, targetPath, "", []string{"bind"})
}
else
@@ -309,39 +463,40 @@ func (ns *NodeServer) NodeStageVolume(ctx context.Context, req *csi.NodeStageVol
readOnly := Contains(opt, "ro")
if (existingFormat == "" && !readOnly)
{
var cmdOut []byte
switch fsType
{
case "ext4":
args := []string{"-m0", "-Enodiscard,lazy_itable_init=1,lazy_journal_init=1", devicePath}
cmdOut, err = diskMounter.Exec.Command("mkfs.ext4", args...).CombinedOutput()
_, err = systemCombined("mkfs.ext4", args...)
case "xfs":
cmdOut, err = diskMounter.Exec.Command("mkfs.xfs", "-K", devicePath).CombinedOutput()
_, err = systemCombined("mkfs.xfs", "-K", devicePath)
}
if (err != nil)
{
klog.Errorf("failed to run mkfs error: %v, output: %v", err, string(cmdOut))
goto unmap
}
}
klog.Infof("formatting and mounting %s to %s with FS %s, options: %v", devicePath, targetPath, fsType, opt)
err = diskMounter.FormatAndMount(devicePath, targetPath, fsType, opt)
if (err == nil)
{
klog.Infof("successfully mounted %s to %s", devicePath, targetPath)
}
// Try to run online resize on mount.
// FIXME: Implement online resize. It requires online resize support in vitastor-nbd.
if (err == nil && existingFormat != "" && !readOnly)
{
var cmdOut []byte
switch (fsType)
{
case "ext4":
cmdOut, err = diskMounter.Exec.Command("resize2fs", devicePath).CombinedOutput()
_, err = systemCombined("resize2fs", devicePath)
case "xfs":
cmdOut, err = diskMounter.Exec.Command("xfs_growfs", devicePath).CombinedOutput()
_, err = systemCombined("xfs_growfs", devicePath)
}
if (err != nil)
{
klog.Errorf("failed to run resizefs error: %v, output: %v", err, string(cmdOut))
goto unmap
}
}
@@ -381,11 +536,16 @@ func (ns *NodeServer) NodeUnstageVolume(ctx context.Context, req *csi.NodeUnstag
}
volName := ctxVars["name"]
ns.lockVolume(ctxVars["configPath"]+":"+volName)
defer ns.unlockVolume(ctxVars["configPath"]+":"+volName)
if (ctxVars["vitastorfs"] != "")
{
return &csi.NodeUnstageVolumeResponse{}, nil
}
ns.lockVolume(ctxVars["configPath"]+":block:"+volName)
defer ns.unlockVolume(ctxVars["configPath"]+":block:"+volName)
targetPath := req.GetStagingTargetPath()
devicePath, refCount, err := mount.GetDeviceNameFromMount(ns.mounter, targetPath)
devicePath, _, err := mount.GetDeviceNameFromMount(ns.mounter, targetPath)
if (err != nil)
{
if (os.IsNotExist(err))
@@ -402,6 +562,16 @@ func (ns *NodeServer) NodeUnstageVolume(ctx context.Context, req *csi.NodeUnstag
return &csi.NodeUnstageVolumeResponse{}, nil
}
refList, err := ns.mounter.GetMountRefs(targetPath)
if (err != nil)
{
return nil, err
}
if (len(refList) > 0)
{
klog.Warningf("%s is still referenced: %v", targetPath, refList)
}
// unmount
err = mount.CleanupMountPoint(targetPath, ns.mounter, false)
if (err != nil)
@@ -410,7 +580,7 @@ func (ns *NodeServer) NodeUnstageVolume(ctx context.Context, req *csi.NodeUnstag
}
// unmap device
if (refCount == 1)
if (len(refList) == 0)
{
if (!ns.useVduse)
{
@@ -425,6 +595,153 @@ func (ns *NodeServer) NodeUnstageVolume(ctx context.Context, req *csi.NodeUnstag
return &csi.NodeUnstageVolumeResponse{}, nil
}
// Mount or check if NFS is already mounted
func (ns *NodeServer) mountNFS(ctxVars map[string]string) (string, error)
{
sum := sha1.Sum([]byte(ctxVars["configPath"]+":fs:"+ctxVars["vitastorfs"]))
nfsHash := hex.EncodeToString(sum[:])
stateFile := ns.stateDir+"vitastor-nfs-"+nfsHash+".json"
pidFile := ns.stateDir+"vitastor-nfs-"+nfsHash+".pid"
mountPath := ns.nfsStageDir+"/"+nfsHash
state, err := ns.readNfsState(stateFile, true)
if (state != nil)
{
return state.Path, nil
}
if (err != nil)
{
return "", err
}
err = os.MkdirAll(mountPath, 0777)
if (err != nil)
{
return "", err
}
// Create a new mount
state = &NfsState{
ConfigPath: ctxVars["configPath"],
FsName: ctxVars["vitastorfs"],
Pool: ctxVars["pool"],
Path: mountPath,
}
klog.Infof("starting new NFS server for FS %v", state.FsName)
stdout, _, err := system(
"/usr/bin/vitastor-nfs", "start",
"--pidfile", pidFile,
"--bind", "127.0.0.1",
"--port", "auto",
"--fs", state.FsName,
"--pool", state.Pool,
"--portmap", "0",
)
if (err != nil)
{
return "", err
}
match := regexp.MustCompile("Port: (\\d+)").FindStringSubmatch(string(stdout))
if (match == nil)
{
klog.Errorf("failed to find port in vitastor-nfs output: %v", string(stdout))
ns.stopNFS(stateFile, pidFile)
return "", fmt.Errorf("failed to find port in vitastor-nfs output (bad vitastor-nfs version?)")
}
port, _ := strconv.ParseUint(match[1], 0, 16)
state.Port = int(port)
// Write state file
stateJSON, _ := json.Marshal(state)
err = os.WriteFile(stateFile, stateJSON, 0600)
if (err != nil)
{
klog.Errorf("failed to write state file %v", stateFile)
ns.stopNFS(stateFile, pidFile)
return "", err
}
// Mount NFS
_, _, err = system(
"mount", "-t", "nfs", "127.0.0.1:/", state.Path,
"-o", fmt.Sprintf("port=%d,mountport=%d,nfsvers=3,soft,nolock,tcp", port, port),
)
if (err != nil)
{
ns.stopNFS(stateFile, pidFile)
return "", err
}
return state.Path, nil
}
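A side note on the naming scheme used above: every per-FS artifact (state file, pid file, mount point) is derived from a SHA-1 of `configPath:fs:fsName`, so repeated mounts of the same FS converge on the same NFS server instance. A standalone sketch, with placeholder paths:
```
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

func main() {
	// Hypothetical inputs; the nfsStageDir default comes from
	// NewNodeServer above, the stateDir value is a placeholder.
	configPath := "/etc/vitastor/vitastor.conf"
	fsName := "testfs"
	stateDir := "/run/vitastor-csi/"
	nfsStageDir := "/var/lib/kubelet/plugins/csi.vitastor.io/nfs"

	sum := sha1.Sum([]byte(configPath + ":fs:" + fsName))
	nfsHash := hex.EncodeToString(sum[:])
	fmt.Println("state file:", stateDir+"vitastor-nfs-"+nfsHash+".json")
	fmt.Println("pid file:  ", stateDir+"vitastor-nfs-"+nfsHash+".pid")
	fmt.Println("mount path:", nfsStageDir+"/"+nfsHash)
}
```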
// Stop the NFS server and unmount its root mount once no volume mounts remain

func (ns *NodeServer) checkStopNFS(ctxVars map[string]string)
{
sum := sha1.Sum([]byte(ctxVars["configPath"]+":fs:"+ctxVars["vitastorfs"]))
nfsHash := hex.EncodeToString(sum[:])
stateFile := ns.stateDir+"vitastor-nfs-"+nfsHash+".json"
pidFile := ns.stateDir+"vitastor-nfs-"+nfsHash+".pid"
mountPath := ns.nfsStageDir+"/"+nfsHash
state, err := ns.readNfsState(stateFile, true)
if (state == nil)
{
return
}
activeNFS, err := ns.listActiveNFS()
if (err != nil)
{
return
}
if (len(activeNFS[state.Port]) > 0)
{
return
}
// All volume mounts are detached, unmount the root mount and kill the server
err = mount.CleanupMountPoint(mountPath, ns.mounter, false)
if (err != nil)
{
klog.Errorf("failed to unmount %v: %v", mountPath, err)
return
}
ns.stopNFS(stateFile, pidFile)
}
func (ns *NodeServer) stopNFS(stateFile, pidFile string)
{
err := killByPidFile(pidFile)
if (err != nil)
{
klog.Errorf("failed to kill process with pid from %v: %v", pidFile, err)
}
os.Remove(pidFile)
os.Remove(stateFile)
}
func (ns *NodeServer) listActiveNFS() (map[int][]string, error)
{
mounts, err := mount.ParseMountInfo("/proc/self/mountinfo")
if (err != nil)
{
klog.Errorf("failed to list mounts: %v", err)
return nil, err
}
activeNFS := make(map[int][]string)
for _, mount := range mounts
{
// Volume mounts always refer to subpaths
if (mount.FsType == "nfs" && mount.Root != "/")
{
for _, opt := range mount.MountOptions
{
if (strings.HasPrefix(opt, "port="))
{
port64, err := strconv.ParseUint(opt[5:], 10, 16)
if (err == nil)
{
activeNFS[int(port64)] = append(activeNFS[int(port64)], mount.MountPoint)
}
}
}
}
}
return activeNFS, nil
}
// NodePublishVolume mounts the volume mounted to the staging path to the target path
func (ns *NodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error)
{
@@ -443,23 +760,39 @@ func (ns *NodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublis
}
volName := ctxVars["name"]
ns.lockVolume(ctxVars["configPath"]+":"+volName)
defer ns.unlockVolume(ctxVars["configPath"]+":"+volName)
if (ctxVars["vitastorfs"] != "")
{
ns.lockVolume(ctxVars["configPath"]+":fs:"+ctxVars["vitastorfs"])
defer ns.unlockVolume(ctxVars["configPath"]+":fs:"+ctxVars["vitastorfs"])
}
else
{
ns.lockVolume(ctxVars["configPath"]+":block:"+volName)
defer ns.unlockVolume(ctxVars["configPath"]+":block:"+volName)
}
stagingTargetPath := req.GetStagingTargetPath()
targetPath := req.GetTargetPath()
isBlock := req.GetVolumeCapability().GetBlock() != nil
// Check that stagingTargetPath is mounted
_, err = mount.IsNotMountPoint(ns.mounter, stagingTargetPath)
if (err != nil)
if (ctxVars["vitastorfs"] == "")
{
klog.Errorf("staging path %v is not mounted: %v", stagingTargetPath, err)
return nil, fmt.Errorf("staging path %v is not mounted: %v", stagingTargetPath, err)
// Check that stagingTargetPath is mounted
notmnt, err := mount.IsNotMountPoint(ns.mounter, stagingTargetPath)
if (err != nil)
{
klog.Errorf("staging path %v is not mounted: %w", stagingTargetPath, err)
return nil, fmt.Errorf("staging path %v is not mounted: %w", stagingTargetPath, err)
}
else if (notmnt)
{
klog.Errorf("staging path %v is not mounted", stagingTargetPath)
return nil, fmt.Errorf("staging path %v is not mounted", stagingTargetPath)
}
}
// Check that targetPath is not already mounted
_, err = mount.IsNotMountPoint(ns.mounter, targetPath)
notmnt, err := mount.IsNotMountPoint(ns.mounter, targetPath)
if (err != nil)
{
if (os.IsNotExist(err))
@@ -494,6 +827,29 @@ func (ns *NodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublis
return nil, err
}
}
else if (!notmnt)
{
klog.Errorf("target path %s is already mounted", targetPath)
return nil, fmt.Errorf("target path %s is already mounted", targetPath)
}
if (ctxVars["vitastorfs"] != "")
{
nfspath, err := ns.mountNFS(ctxVars)
if (err != nil)
{
ns.checkStopNFS(ctxVars)
return nil, err
}
// volName should include prefix
stagingTargetPath = nfspath+"/"+volName
err = os.MkdirAll(stagingTargetPath, 0777)
if (err != nil && !os.IsExist(err))
{
ns.checkStopNFS(ctxVars)
return nil, err
}
}
execArgs := []string{"--bind", stagingTargetPath, targetPath}
if (req.GetReadonly())
@@ -506,6 +862,10 @@ func (ns *NodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublis
out, err := cmd.Output()
if (err != nil)
{
if (ctxVars["vitastorfs"] != "")
{
ns.checkStopNFS(ctxVars)
}
return nil, fmt.Errorf("Error running mount %v: %s", strings.Join(execArgs, " "), out)
}
@@ -525,8 +885,16 @@ func (ns *NodeServer) NodeUnpublishVolume(ctx context.Context, req *csi.NodeUnpu
}
volName := ctxVars["name"]
ns.lockVolume(ctxVars["configPath"]+":"+volName)
defer ns.unlockVolume(ctxVars["configPath"]+":"+volName)
if (ctxVars["vitastorfs"] != "")
{
ns.lockVolume(ctxVars["configPath"]+":fs:"+ctxVars["vitastorfs"])
defer ns.unlockVolume(ctxVars["configPath"]+":fs:"+ctxVars["vitastorfs"])
}
else
{
ns.lockVolume(ctxVars["configPath"]+":block:"+volName)
defer ns.unlockVolume(ctxVars["configPath"]+":block:"+volName)
}
targetPath := req.GetTargetPath()
devicePath, _, err := mount.GetDeviceNameFromMount(ns.mounter, targetPath)
@@ -553,6 +921,11 @@ func (ns *NodeServer) NodeUnpublishVolume(ctx context.Context, req *csi.NodeUnpu
return nil, err
}
if (ctxVars["vitastorfs"] != "")
{
ns.checkStopNFS(ctxVars)
}
return &csi.NodeUnpublishVolumeResponse{}, nil
}

View File

@@ -4,6 +4,7 @@
package vitastor
import (
"bytes"
"errors"
"encoding/json"
"fmt"
@@ -15,6 +16,8 @@ import (
"syscall"
"k8s.io/klog"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
func Contains(list []string, s string) bool
@@ -73,6 +76,10 @@ func checkVduseSupport() bool
" For VDUSE you need at least Linux 5.15 and the following kernel modules: vdpa, virtio-vdpa, vduse.",
)
}
else
{
klog.Infof("VDUSE support enabled successfully")
}
return vduse
}
@@ -97,6 +104,7 @@ func mapNbd(volName string, ctxVars map[string]string, readonly bool) (string, e
{
return "", fmt.Errorf("vitastor-nbd did not return the name of NBD device. output: %s", stderr)
}
klog.Infof("Attached volume %s via NBD as %s", volName, dev)
return dev, err
}
@@ -217,6 +225,7 @@ func mapVduse(stateDir string, volName string, ctxVars map[string]string, readon
err = os.WriteFile(stateFile, stateJSON, 0600)
if (err == nil)
{
klog.Infof("Attached volume %s via VDUSE as %s (VDPA ID %s)", volName, blockdev, vdpaId)
return blockdev, vdpaId, nil
}
}
@@ -299,3 +308,35 @@ func unmapVduseById(stateDir, vdpaId string)
os.Remove(pidFile)
}
}
func system(program string, args ...string) ([]byte, []byte, error)
{
klog.Infof("Running "+program+" "+strings.Join(args, " "))
c := exec.Command(program, args...)
var stdout, stderr bytes.Buffer
c.Stdout, c.Stderr = &stdout, &stderr
err := c.Run()
if (err != nil)
{
stdoutStr, stderrStr := string(stdout.Bytes()), string(stderr.Bytes())
klog.Errorf(program+" "+strings.Join(args, " ")+" failed: %s\nOutput:\n%s", err, stdoutStr+stderrStr)
return nil, nil, status.Error(codes.Internal, stdoutStr+stderrStr+" (status "+err.Error()+")")
}
return stdout.Bytes(), stderr.Bytes(), nil
}
func systemCombined(program string, args ...string) ([]byte, error)
{
klog.Infof("Running "+program+" "+strings.Join(args, " "))
c := exec.Command(program, args...)
var out bytes.Buffer
c.Stdout, c.Stderr = &out, &out
err := c.Run()
if (err != nil)
{
outStr := string(out.Bytes())
klog.Errorf(program+" "+strings.Join(args, " ")+" failed: %s, status %s\n", outStr, err)
return nil, status.Error(codes.Internal, outStr+" (status "+err.Error()+")")
}
return out.Bytes(), nil
}

debian/changelog vendored
View File

@@ -1,4 +1,4 @@
vitastor (1.6.1-1) unstable; urgency=medium
vitastor (1.11.0-1) unstable; urgency=medium
* Bugfixes

debian/control vendored
View File

@@ -53,3 +53,9 @@ Architecture: amd64
Depends: ${shlibs:Depends}, ${misc:Depends}, vitastor-client (= ${binary:Version})
Description: Vitastor Proxmox Virtual Environment storage plugin
Vitastor storage plugin for Proxmox Virtual Environment.
Package: vitastor-opennebula
Architecture: amd64
Depends: ${shlibs:Depends}, ${misc:Depends}, vitastor-client, patch, python3, jq
Description: Vitastor OpenNebula storage plugin
Vitastor storage plugin for OpenNebula.

View File

@@ -1,8 +1,10 @@
# Build patched QEMU for Debian inside a container
# cd ..; podman build --build-arg REL=bullseye -v `pwd`/packages:/root/packages -f debian/patched-qemu.Dockerfile .
ARG DISTRO=debian
ARG REL=
FROM debian:$REL
FROM $DISTRO:$REL
ARG DISTRO=debian
ARG REL=
WORKDIR /root
@@ -20,8 +22,8 @@ RUN if [ "$REL" = "buster" -o "$REL" = "bullseye" -o "$REL" = "bookworm" ]; then
echo 'APT::Install-Suggests false;' >> /etc/apt/apt.conf
RUN apt-get update
RUN apt-get -y install fio liburing-dev libgoogle-perftools-dev devscripts
RUN apt-get -y build-dep qemu
RUN DEBIAN_FRONTEND=noninteractive TZ=Europe/Moscow apt-get -y install fio liburing-dev libgoogle-perftools-dev devscripts
RUN DEBIAN_FRONTEND=noninteractive TZ=Europe/Moscow apt-get -y build-dep qemu
# To build a custom version
#RUN cp /root/packages/qemu-orig/* /root
RUN apt-get --download-only source qemu
@@ -38,9 +40,9 @@ ADD src/client/qemu_driver.c /root/qemu_driver.c
# apt-get install -y vitastor-client vitastor-client-dev quilt
RUN set -e; \
dpkg -i /root/packages/vitastor-$REL/vitastor-client_*.deb /root/packages/vitastor-$REL/vitastor-client-dev_*.deb; \
DEBIAN_FRONTEND=noninteractive TZ=Europe/Moscow apt-get -y install /root/packages/vitastor-$REL/vitastor-client_*.deb /root/packages/vitastor-$REL/vitastor-client-dev_*.deb; \
apt-get update; \
apt-get install -y quilt; \
DEBIAN_FRONTEND=noninteractive TZ=Europe/Moscow apt-get -y install quilt; \
mkdir -p /root/packages/qemu-$REL; \
rm -rf /root/packages/qemu-$REL/*; \
cd /root/packages/qemu-$REL; \

View File

@@ -1,3 +1,3 @@
mon usr/lib/vitastor/mon
mon usr/lib/vitastor/
mon/scripts/make-etcd usr/lib/vitastor/mon
mon/scripts/vitastor-mon.service /lib/systemd/system

View File

@@ -6,4 +6,6 @@ if [ "$1" = "configure" ]; then
addgroup --system --quiet vitastor
adduser --system --quiet --ingroup vitastor --no-create-home --home /nonexistent vitastor
mkdir -p /etc/vitastor
mkdir -p /var/lib/vitastor
chown vitastor:vitastor /var/lib/vitastor
fi

debian/vitastor-opennebula.install vendored Normal file
View File

@@ -0,0 +1,3 @@
opennebula/remotes var/lib/one/
opennebula/sudoers.d etc/
opennebula/install.sh var/lib/one/remotes/datastore/vitastor/

debian/vitastor-opennebula.postinst vendored Normal file
View File

@@ -0,0 +1,7 @@
#!/bin/sh
set -e
if [ "$1" = "configure" ]; then
/var/lib/one/remotes/datastore/vitastor/install.sh
fi

debian/vitastor-opennebula.triggers vendored Normal file
View File

@@ -0,0 +1,4 @@
interest /var/lib/one/remotes/datastore/downloader.sh
interest /etc/one/oned.conf
interest /etc/one/vmm_exec/vmm_execrc
interest /etc/apparmor.d/local/abstractions/libvirt-qemu

View File

@@ -9,23 +9,22 @@ ARG REL=
WORKDIR /root
RUN if [ "$REL" = "buster" -o "$REL" = "bullseye" ]; then \
echo "deb http://deb.debian.org/debian $REL-backports main" >> /etc/apt/sources.list; \
echo >> /etc/apt/preferences; \
echo 'Package: *' >> /etc/apt/preferences; \
echo "Pin: release a=$REL-backports" >> /etc/apt/preferences; \
echo 'Pin-Priority: 500' >> /etc/apt/preferences; \
RUN set -e -x; \
if [ "$REL" = "buster" ]; then \
apt-get update; \
apt-get -y install wget; \
wget https://vitastor.io/debian/pubkey.gpg -O /etc/apt/trusted.gpg.d/vitastor.gpg; \
echo "deb https://vitastor.io/debian $REL main" >> /etc/apt/sources.list; \
fi; \
grep '^deb ' /etc/apt/sources.list | perl -pe 's/^deb/deb-src/' >> /etc/apt/sources.list; \
perl -i -pe 's/Types: deb$/Types: deb deb-src/' /etc/apt/sources.list.d/debian.sources || true; \
echo 'APT::Install-Recommends false;' >> /etc/apt/apt.conf; \
echo 'APT::Install-Suggests false;' >> /etc/apt/apt.conf
RUN apt-get update
RUN apt-get -y install fio liburing-dev libgoogle-perftools-dev devscripts
RUN apt-get -y build-dep fio
RUN apt-get --download-only source fio
RUN apt-get update && apt-get -y install libjerasure-dev cmake libibverbs-dev libisal-dev libnl-3-dev libnl-genl-3-dev
RUN apt-get update && \
apt-get -y install fio liburing-dev libgoogle-perftools-dev devscripts libjerasure-dev cmake libibverbs-dev librdmacm-dev libisal-dev libnl-3-dev libnl-genl-3-dev curl && \
apt-get -y build-dep fio && \
apt-get --download-only source fio
ADD . /root/vitastor
RUN set -e -x; \
@@ -37,8 +36,10 @@ RUN set -e -x; \
mkdir -p /root/packages/vitastor-$REL; \
rm -rf /root/packages/vitastor-$REL/*; \
cd /root/packages/vitastor-$REL; \
cp -r /root/vitastor vitastor-1.6.1; \
cd vitastor-1.6.1; \
FULLVER=$(head -n1 /root/vitastor/debian/changelog | perl -pe 's/^.*\((.*?)\).*$/$1/'); \
VER=${FULLVER%%-*}; \
cp -r /root/vitastor vitastor-$VER; \
cd vitastor-$VER; \
ln -s /root/fio-build/fio-*/ ./fio; \
FIO=$(head -n1 fio/debian/changelog | perl -pe 's/^.*\((.*?)\).*$/$1/'); \
ls /usr/include/linux/raw.h || cp ./debian/raw.h /usr/include/linux/raw.h; \
@@ -50,10 +51,14 @@ RUN set -e -x; \
echo fio-headers.patch >> debian/patches/series; \
rm -rf a b; \
echo "dep:fio=$FIO" > debian/fio_version; \
cd /root/packages/vitastor-$REL/vitastor-$VER; \
mkdir mon/node_modules; \
cd mon/node_modules; \
curl -s https://git.yourcmc.ru/vitalif/antietcd/archive/master.tar.gz | tar -zx; \
curl -s https://git.yourcmc.ru/vitalif/tinyraft/archive/master.tar.gz | tar -zx; \
cd /root/packages/vitastor-$REL; \
tar --sort=name --mtime='2020-01-01' --owner=0 --group=0 --exclude=debian -cJf vitastor_1.6.1.orig.tar.xz vitastor-1.6.1; \
cd vitastor-1.6.1; \
V=$(head -n1 debian/changelog | perl -pe 's/^.*\((.*?)\).*$/$1/'); \
DEBFULLNAME="Vitaliy Filippov <vitalif@yourcmc.ru>" dch -D $REL -v "$V""$REL" "Rebuild for $REL"; \
tar --sort=name --mtime='2020-01-01' --owner=0 --group=0 --exclude=debian -cJf vitastor_$VER.orig.tar.xz vitastor-$VER; \
cd vitastor-$VER; \
DEBFULLNAME="Vitaliy Filippov <vitalif@yourcmc.ru>" dch -D $REL -v "$FULLVER""$REL" "Rebuild for $REL"; \
DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage --jobs=auto -sa; \
rm -rf /root/packages/vitastor-$REL/vitastor-*/

View File

@@ -1,9 +1,11 @@
# Build Docker image with Vitastor packages
FROM debian:bullseye
FROM debian:bookworm
ADD vitastor.list /etc/apt/sources.list.d
ADD vitastor.gpg /etc/apt/trusted.gpg.d
ADD vitastor.pref /etc/apt/preferences.d
ADD apt.conf /etc/apt/
RUN apt-get update && apt-get -y install vitastor qemu-system-x86 qemu-system-common && apt-get clean
ADD etc/apt /etc/apt/
RUN apt-get update && apt-get -y install vitastor qemu-system-x86 qemu-system-common qemu-block-extra qemu-utils jq nfs-common && apt-get clean
ADD sleep.sh /usr/bin/
ADD install.sh /usr/bin/
ADD scripts /opt/scripts/
ADD etc /etc/
RUN ln -s /usr/lib/vitastor/mon/make-etcd /usr/bin/make-etcd

docker/Makefile Normal file
View File

@@ -0,0 +1,9 @@
VITASTOR_VERSION ?= v1.11.0
all: build push
build:
@docker build --rm -t vitalif/vitastor:$(VITASTOR_VERSION) .
push:
@docker push vitalif/vitastor:$(VITASTOR_VERSION)

View File

@@ -0,0 +1 @@
deb http://vitastor.io/debian bookworm main

View File

@@ -0,0 +1,27 @@
[Unit]
Description=Containerized etcd for Vitastor
After=network-online.target local-fs.target time-sync.target docker.service vitastor-host.service
Wants=network-online.target local-fs.target time-sync.target docker.service vitastor-host.service
PartOf=vitastor.target
[Service]
Restart=always
Environment=GOGC=50
EnvironmentFile=/etc/vitastor/docker.conf
EnvironmentFile=/etc/vitastor/etcd.conf
SyslogIdentifier=etcd
ExecStart=bash -c 'docker run --rm -i -v /var/lib/vitastor/etcd:/data \
--log-driver none --network host $CONTAINER_OPTIONS --name vitastor-etcd \
$ETCD_IMAGE /usr/local/bin/etcd --name "$ETCD_NAME" --data-dir /data \
--snapshot-count 10000 --advertise-client-urls http://$ETCD_IP:2379 --listen-client-urls http://$ETCD_IP:2379 \
--initial-advertise-peer-urls http://$ETCD_IP:2380 --listen-peer-urls http://$ETCD_IP:2380 \
--initial-cluster-token vitastor-etcd-1 --initial-cluster "$ETCD_INITIAL_CLUSTER" \
--initial-cluster-state new --max-txn-ops=100000 --max-request-bytes=104857600 \
--auto-compaction-retention=10 --auto-compaction-mode=revision'
ExecStop=docker stop vitastor-etcd
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,23 @@
[Unit]
Description=Empty container for running Vitastor commands
After=network-online.target local-fs.target time-sync.target docker.service
Wants=network-online.target local-fs.target time-sync.target docker.service
PartOf=vitastor.target
[Service]
Restart=always
EnvironmentFile=/etc/vitastor/docker.conf
ExecStart=bash -c 'docker run --rm -i -v /etc/vitastor:/etc/vitastor -v /dev:/dev \
--privileged --log-driver none --network host --name vitastor vitastor:$VITASTOR_VERSION \
sleep.sh'
ExecStartPost=udevadm trigger
ExecStop=docker stop vitastor
WorkingDirectory=/
PrivateTmp=false
TasksMax=infinity
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,23 @@
[Unit]
Description=Containerized Vitastor monitor
After=network-online.target local-fs.target time-sync.target docker.service
Wants=network-online.target local-fs.target time-sync.target docker.service
PartOf=vitastor.target
[Service]
Restart=always
EnvironmentFile=/etc/vitastor/docker.conf
SyslogIdentifier=vitastor-mon
ExecStart=bash -c 'docker run --rm -i -v /etc/vitastor:/etc/vitastor -v /var/lib/vitastor:/var/lib/vitastor -v /dev:/dev \
--log-driver none --network host $CONTAINER_OPTIONS --name vitastor-mon vitastor:$VITASTOR_VERSION \
node /usr/lib/vitastor/mon/mon-main.js'
ExecStop=docker stop vitastor-mon
WorkingDirectory=/
PrivateTmp=false
TasksMax=infinity
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,27 @@
[Unit]
Description=Containerized Vitastor object storage daemon osd.%i
After=network-online.target local-fs.target time-sync.target docker.service vitastor-host.service
Wants=network-online.target local-fs.target time-sync.target docker.service vitastor-host.service
PartOf=vitastor.target
[Service]
LimitNOFILE=1048576
LimitNPROC=1048576
LimitMEMLOCK=infinity
EnvironmentFile=/etc/vitastor/docker.conf
SyslogIdentifier=vitastor-osd%i
ExecStart=bash -c 'docker run --rm -i -v /etc/vitastor:/etc/vitastor -v /dev:/dev \
$(for i in $(ls /dev/vitastor/osd%i-*); do echo --device $i:$i; done) \
--log-driver none --network host --ulimit nofile=1048576 --ulimit memlock=-1 $CONTAINER_OPTIONS --name vitastor-osd%i \
vitastor:$VITASTOR_VERSION vitastor-disk exec-osd /dev/vitastor/osd%i-data'
ExecStartPre=+docker exec vitastor vitastor-disk pre-exec /dev/vitastor/osd%i-data
ExecStop=docker stop vitastor-osd%i
WorkingDirectory=/
PrivateTmp=false
TasksMax=infinity
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=vitastor.target

View File

@@ -0,0 +1,4 @@
[Unit]
Description=vitastor target
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,7 @@
SUBSYSTEM=="block", ENV{ID_PART_ENTRY_TYPE}=="e7009fac-a5a1-4d72-af72-53de13059903", \
OWNER="vitastor", GROUP="vitastor", \
IMPORT{program}="/usr/bin/docker exec vitastor vitastor-disk udev $devnode", \
SYMLINK+="vitastor/$env{VITASTOR_ALIAS}"
ENV{VITASTOR_OSD_NUM}!="", ACTION=="add", RUN{program}+="/usr/bin/systemctl enable --now --no-block vitastor-osd@$env{VITASTOR_OSD_NUM}"
ENV{VITASTOR_OSD_NUM}!="", ACTION=="remove", RUN{program}+="/usr/bin/systemctl disable --now --no-block vitastor-osd@$env{VITASTOR_OSD_NUM}"

View File

@@ -0,0 +1,11 @@
#
# Configuration file for containerized Vitastor installation
# (non-Kubernetes, with systemd and udev-based orchestration)
#
# Desired Vitastor version
VITASTOR_VERSION=1.11.0
# Additional arguments for all containers
# For example, you may want to specify a custom logging driver here
CONTAINER_OPTIONS=""

View File

@@ -0,0 +1,4 @@
ETCD_IMAGE=quay.io/coreos/etcd:v3.5.18
ETCD_NAME=""
ETCD_IP=""
ETCD_INITIAL_CLUSTER=""

View File

@@ -0,0 +1,2 @@
{
}

docker/install.sh Executable file
View File

@@ -0,0 +1,9 @@
#!/bin/bash
set -e
cp -urv /etc/default /host-etc/
cp -urv /etc/systemd /host-etc/
cp -urv /etc/udev /host-etc/
cp -urnv /etc/vitastor /host-etc/
cp -urnv /opt/scripts/* /host-bin/

docker/scripts/vitastor-cli Executable file
View File

@@ -0,0 +1,3 @@
#!/bin/bash
docker exec -it vitastor vitastor-cli "$@"

docker/scripts/vitastor-disk Executable file
View File

@@ -0,0 +1,3 @@
#!/bin/bash
docker exec -it vitastor vitastor-disk "$@"

docker/scripts/vitastor-fio Executable file
View File

@@ -0,0 +1,3 @@
#!/bin/bash
docker exec -it vitastor fio "$@"

docker/scripts/vitastor-nbd Executable file
View File

@@ -0,0 +1,3 @@
#!/bin/bash
docker exec -it vitastor vitastor-nbd "$@"

docker/sleep.sh Executable file
View File

@@ -0,0 +1,3 @@
#!/bin/bash
while :; do sleep infinity; done

View File

@@ -1 +0,0 @@
deb http://vitastor.io/debian bullseye main

View File

@@ -13,7 +13,7 @@ Vitastor configuration consists of:
- [Separate OSD settings](config/pool.en.md#osd-settings)
- [Inode configuration](config/inode.en.md) i.e. image metadata like name, size and parent reference
Configuration parameters can be set in 3 places:
Configuration parameters can be set in 4 places:
- Configuration file (`/etc/vitastor/vitastor.conf` or other path)
- etcd key `/vitastor/config/global`. Most variables can be set there, but etcd
connection parameters should obviously be set in the configuration file.
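A minimal sketch of reading the first of these places: `/etc/vitastor/vitastor.conf` is a plain JSON object, read here the same way the CSI driver above reads it (the `etcd_address` key is just an example):
```
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Read the configuration file; it is a single JSON object.
	data, err := os.ReadFile("/etc/vitastor/vitastor.conf")
	if err != nil {
		fmt.Println("no config file:", err)
		return
	}
	config := make(map[string]interface{})
	if err := json.Unmarshal(data, &config); err != nil {
		panic(err)
	}
	// etcd connection parameters must live here rather than in etcd itself.
	fmt.Println("etcd_address:", config["etcd_address"])
}
```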

View File

@@ -14,7 +14,7 @@
- [Inode settings](config/inode.ru.md), i.e. image metadata such as the name, size and references to
the parent image
Configuration parameters can be set in 3 places:
Configuration parameters can be set in 4 places:
- The configuration file (`/etc/vitastor/vitastor.conf` or another path)
- The etcd key `/vitastor/config/global`. Most parameters can be set there,
except, naturally, the etcd connection parameters themselves,

View File

@@ -9,6 +9,7 @@
These parameters apply only to Vitastor clients (QEMU, fio, NBD and so on) and
affect their interaction with the cluster.
- [client_iothread_count](#client_iothread_count)
- [client_retry_interval](#client_retry_interval)
- [client_eio_retry_interval](#client_eio_retry_interval)
- [client_retry_enospc](#client_retry_enospc)
@@ -23,6 +24,23 @@ affect their interaction with the cluster.
- [nbd_max_part](#nbd_max_part)
- [osd_nearfull_ratio](#osd_nearfull_ratio)
## client_iothread_count
- Type: integer
- Default: 0
Number of separate threads for handling TCP network I/O on the client library
side. Enabling 4 threads usually increases the peak performance of each
client from approx. 2-3 to 7-8 GByte/s of linear read/write and from approx.
100-150 to 400 thousand iops, but at the same time it increases latency.
The latency increase depends on the CPU: with CPU power saving disabled, latency
only increases by ~10 us (equivalent to a Q=1 iops decrease from 10500 to 9500);
with CPU power saving enabled it may be as high as 500 us (equivalent to a Q=1
iops decrease from 2000 to 1000). RDMA isn't affected by this option.
It's recommended to enable client I/O threads if you don't use RDMA and want
to increase peak client performance.
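A minimal sketch of what enabling this option looks like: `vitastor.conf` is a JSON object, so the setting is a single key (printed here rather than written to `/etc/vitastor/vitastor.conf`, to keep the example safe):
```
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conf := map[string]interface{}{
		"client_iothread_count": 4, // default 0 = no separate I/O threads
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out)) // merge into /etc/vitastor/vitastor.conf
}
```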
## client_retry_interval
- Type: milliseconds

View File

@@ -9,6 +9,7 @@
These parameters apply only to Vitastor clients (QEMU, fio, NBD, etc.) and
affect the logic of their interaction with the cluster.
- [client_iothread_count](#client_iothread_count)
- [client_retry_interval](#client_retry_interval)
- [client_eio_retry_interval](#client_eio_retry_interval)
- [client_retry_enospc](#client_retry_enospc)
@@ -23,6 +24,24 @@
- [nbd_max_part](#nbd_max_part)
- [osd_nearfull_ratio](#osd_nearfull_ratio)
## client_iothread_count
- Type: integer
- Default value: 0
Number of separate threads for handling TCP network I/O on the client
library side. Enabling 4 threads usually raises the peak performance of each
client from about 2-3 to 7-8 GB/s of linear read/write and from about
100-150 to 400 thousand iops, but it also increases latency. The latency
increase depends on the CPU: with CPU power saving disabled it is only
~10 microseconds (equivalent to a Q=1 iops drop from 10500 to 9500), while
with CPU power saving enabled it may be as high as 500 microseconds
(equivalent to a Q=1 iops drop from 2000 to 1000). This option does not
affect RDMA.
Enabling client I/O threads is recommended if you don't use RDMA and want
to raise peak client performance.
## client_retry_interval
- Type: milliseconds

View File

@@ -56,14 +56,24 @@ Can't be smaller than the OSD data device sector.
## immediate_commit
- Type: string
- Default: false
- Default: all
Another parameter which is really important for performance.
One of "none", "all" or "small". Global value, may be overriden [at pool level](pool.en.md#immediate_commit).
This parameter is also really important for performance.
TLDR: default "all" is optimal for server-grade SSDs with supercapacitor-based
power loss protection (nonvolatile write-through cache) and also for most HDDs.
"none" or "small" should be only selected if you use desktop SSDs without
capacitors or drives with slow write-back cache that can't be disabled. Check
immediate_commit of your OSDs in [ls-osd](../usage/cli.en.md#ls-osd).
Detailed explanation:
Desktop SSDs are very fast (100000+ iops) for simple random writes
without cache flush. However, they are really slow (only around 1000 iops)
if you try to fsync() each write, that is, when you want to guarantee that
each change gets immediately persisted to the physical media.
if you try to fsync() each write, that is, if you want to guarantee that
each change gets actually persisted to the physical media.
Server-grade SSDs with "Advanced/Enhanced Power Loss Protection" or with
"Supercapacitor-based Power Loss Protection", on the other hand, are equally
@@ -75,8 +85,8 @@ really slow when used with desktop SSDs. Vitastor, however, can also
efficiently utilize desktop SSDs by postponing fsync until the client calls
it explicitly.
This is what this parameter regulates. When it's set to "all" the whole
Vitastor cluster commits each change to disks immediately and clients just
This is what this parameter regulates. When it's set to "all" Vitastor
cluster commits each change to disks immediately and clients just
ignore fsyncs because they know for sure that they're unneeded. This reduces
the amount of network roundtrips performed by clients and improves
performance. So it's always better to use server grade SSDs with
@@ -96,12 +106,8 @@ SSD cache or "media-cache" - for example, a lot of Seagate EXOS drives have
it (they have internal SSD cache even though it's not stated in datasheets).
Setting this parameter to "all" or "small" in OSD parameters requires enabling
[disable_journal_fsync](layout-osd.en.yml#disable_journal_fsync) and
[disable_meta_fsync](layout-osd.en.yml#disable_meta_fsync), setting it to
"all" also requires enabling [disable_data_fsync](layout-osd.en.yml#disable_data_fsync).
TLDR: For optimal performance, set immediate_commit to "all" if you only use
SSDs with supercapacitor-based power loss protection (nonvolatile
write-through cache) for both data and journals in the whole Vitastor
cluster. Set it to "small" if you only use such SSDs for journals. Leave
empty if your drives have write-back cache.
[disable_journal_fsync](layout-osd.en.md#disable_journal_fsync) and
[disable_meta_fsync](layout-osd.en.md#disable_meta_fsync), setting it to
"all" also requires enabling [disable_data_fsync](layout-osd.en.md#disable_data_fsync).
vitastor-disk tries to do that by default, first checking/disabling the drive cache.
If it can't disable the drive cache, OSDs get initialized with "none".
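To make the fsync cost described above tangible, here is a rough, self-contained micro-benchmark sketch (not part of the docs; the path and iteration counts are arbitrary): the same 4 KB write stream is timed with and without an fsync after every write. On desktop SSDs without power-loss protection the fsync variant is typically orders of magnitude slower.
```
package main

import (
	"fmt"
	"os"
	"time"
)

// bench writes 4 KB blocks, optionally fsyncing after each write,
// and returns the achieved iops.
func bench(path string, withFsync bool, iters int) float64 {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	buf := make([]byte, 4096)
	start := time.Now()
	for i := 0; i < iters; i++ {
		f.WriteAt(buf, int64((i%1024)*4096))
		if withFsync {
			f.Sync() // force the change to the physical media
		}
	}
	return float64(iters) / time.Since(start).Seconds()
}

func main() {
	fmt.Printf("no fsync: %.0f iops\n", bench("/tmp/bench.dat", false, 10000))
	fmt.Printf("fsync:    %.0f iops\n", bench("/tmp/bench.dat", true, 1000))
}
```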

View File

@@ -57,9 +57,18 @@ amplification) and load distribution efficiency
## immediate_commit
- Type: string
- Default value: false
- Default value: all
Another parameter which is important for performance.
One of the values "none", "small" or "all". A global value, which may be
overridden [at the pool level](pool.ru.md#immediate_commit).
This parameter is also important for performance.
In short: the default value "all" is optimal for all server-grade SSDs with
supercapacitors and also for most HDDs. Setting "none" or "small" only makes
sense with desktop-class SSDs without supercapacitors, or with drives that
have a slow write-back cache which can't be disabled.
Check the immediate_commit setting of your OSDs in the output of the [ls-osd](../usage/cli.ru.md#ls-osd) command.
Desktop SSD models are very fast (100000+ operations per
second) at simple random writes without cache flushes. However, they are very
@@ -80,7 +89,7 @@ Power Loss Protection" - equally fast both with and without cache flushes
utilize desktop SSDs efficiently.
This is exactly what this parameter affects. When it is set to "all",
the whole Vitastor cluster instantly commits every change to the physical
the Vitastor cluster instantly commits every change to the physical
media, and clients can simply ignore fsync requests because they know for
sure that fsyncs are not needed. This reduces the number of required network
requests to OSDs and improves performance. So even with Vitastor it's always better
@@ -103,13 +112,6 @@ HDDs with an internal SSD or "media" cache - for example,
stated in the specifications).
Specifying "all" or "small" in the OSD settings / command line requires
enabling [disable_journal_fsync](layout-osd.ru.yml#disable_journal_fsync) and
[disable_meta_fsync](layout-osd.ru.yml#disable_meta_fsync), and the value "all"
also requires enabling [disable_data_fsync](layout-osd.ru.yml#disable_data_fsync).
To sum up, in short: for optimal performance, set
immediate_commit to "all" if you use only SSDs with supercapacitors
in the cluster, both for data and for journals. If you only use
such SSDs for journals, but not for data, you can set the parameter
to "small". If any of the journal drives have a volatile write
cache, leave the parameter empty.
enabling [disable_journal_fsync](layout-osd.ru.md#disable_journal_fsync) and
[disable_meta_fsync](layout-osd.ru.md#disable_meta_fsync), and the value "all"
also requires enabling [disable_data_fsync](layout-osd.ru.md#disable_data_fsync).

View File

@@ -118,12 +118,13 @@ Physical block size of the journal device. Must be a multiple of
- Type: boolean
- Default: false
Do not issue fsyncs to the data device, i.e. do not flush its cache.
Safe ONLY if your data device has write-through cache. If you disable
the cache yourself using `hdparm` or `scsi_disk/cache_type` then make sure
that the cache disable command is run every time before starting Vitastor
OSD, for example, in the systemd unit. See also `immediate_commit` option
for the instructions to disable cache and how to benefit from it.
Do not issue fsyncs to the data device, i.e. do not force it to flush its cache.
Safe ONLY if your data device has a write-through cache or if its write-back
cache is disabled. If you disable the drive cache manually with `hdparm` or by
writing to `/sys/.../scsi_disk/cache_type`, make sure that you do it
every time before starting Vitastor OSD (vitastor-disk does it automatically).
See also [immediate_commit](layout-cluster.en.md#immediate_commit)
for information about how to benefit from a disabled cache.
## disable_meta_fsync
@@ -171,8 +172,7 @@ size, it actually has to write the whole 4 KB sector.
Because of this it can actually be beneficial to use SSDs which work well
with 512 byte sectors and use 512 byte disk_alignment, journal_block_size
and meta_block_size. But the only SSD that may fit into this category is
Intel Optane (probably, not tested yet).
and meta_block_size. But at the moment, no such SSDs are known...
Clients don't need to be aware of disk_alignment, so it's not required to
put a modified value into etcd key /vitastor/config/global.

View File

@@ -122,13 +122,14 @@ SSD drive, otherwise performance will suffer
- Type: boolean (yes/no)
- Default value: false
Do not send fsyncs to the data device, i.e. do not flush its cache.
Do not send fsyncs to the data device, i.e. do not force it to flush its cache.
Safe ONLY if your data device has a write-through
cache. If you disable the cache via `hdparm` or
`scsi_disk/cache_type`, then make sure that the cache-disabling command
is run before every start of the Vitastor OSD, for example, in the systemd unit.
See also the `immediate_commit` option for instructions on disabling the cache
and on how to benefit from doing so.
cache, or if the write-back cache is disabled.
If you disable the drive cache manually via `hdparm` or by writing to `/sys/.../scsi_disk/cache_type`,
then make sure that you do it every time before starting the Vitastor OSD
(vitastor-disk does this automatically). See also the
[immediate_commit](layout-cluster.ru.md#immediate_commit) option for information
about how to benefit from a disabled cache.
## disable_meta_fsync
@@ -179,9 +180,8 @@ SSD and HDD drives use 4 KB physical sectors
So it may actually be beneficial to find SSDs that work well with
smaller, 512-byte, blocks and to use 512-byte disk_alignment,
journal_block_size and meta_block_size. However, the only SSDs that
could theoretically fall into this category are Intel Optane (and even that
has not been verified by the author yet).
journal_block_size and meta_block_size. However, at the moment no such SSDs
are known...
Clients don't need to know about disk_alignment, so there is no need to put
the value of this parameter into etcd under /vitastor/config/global.


@@ -8,6 +8,14 @@
These parameters only apply to Monitors.
- [use_antietcd](#use_antietcd)
- [enable_prometheus](#enable_prometheus)
- [mon_http_port](#mon_http_port)
- [mon_http_ip](#mon_http_ip)
- [mon_https_cert](#mon_https_cert)
- [mon_https_key](#mon_https_key)
- [mon_https_client_auth](#mon_https_client_auth)
- [mon_https_ca](#mon_https_ca)
- [etcd_mon_ttl](#etcd_mon_ttl)
- [etcd_mon_timeout](#etcd_mon_timeout)
- [etcd_mon_retries](#etcd_mon_retries)
@@ -16,6 +24,88 @@ These parameters only apply to Monitors.
- [osd_out_time](#osd_out_time)
- [placement_levels](#placement_levels)
- [use_old_pg_combinator](#use_old_pg_combinator)
- [osd_backfillfull_ratio](#osd_backfillfull_ratio)
## use_antietcd
- Type: boolean
- Default: false
Enable experimental built-in etcd replacement (clustered key-value database):
[antietcd](https://git.yourcmc.ru/vitalif/antietcd/).
When set to true, the monitor runs internal antietcd automatically if it finds
a network interface with an IP address matching one of the addresses in the
`etcd_address` configuration option (in `/etc/vitastor/vitastor.conf` or in
the monitor command line). If there are multiple matching addresses, it also
checks `antietcd_port` and starts antietcd for the address with the matching port.
By default, antietcd accepts connections on the selected IP address, but the
address can also be overridden manually in the `antietcd_ip` option.
When antietcd is started, the monitor stores cluster metadata itself and exposes
an etcd-compatible REST API. On disk, these metadata are stored in
`/var/lib/vitastor/mon_2379.json.gz` (can be overridden in antietcd_data_file
or antietcd_data_dir options). All other antietcd parameters
(see [here](https://git.yourcmc.ru/vitalif/antietcd/)) except node_id,
cluster, cluster_key, persist_filter, stale_read can also be set in
Vitastor configuration with the `antietcd_` prefix.
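For example, a minimal `/etc/vitastor/vitastor.conf` enabling the built-in antietcd
might look like this (a sketch with hypothetical addresses; `antietcd_port` is only
needed if it differs from the port in `etcd_address`):
```
{
  "etcd_address": ["10.0.0.10:2379", "10.0.0.11:2379", "10.0.0.12:2379"],
  "use_antietcd": true,
  "antietcd_port": 2379
}
```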
You can dump/load data to or from antietcd using the Antietcd `anticli` tool:
```
npm exec anticli -e http://etcd:2379/v3 get --prefix '' --no-temp > dump.json
npm exec anticli -e http://antietcd:2379/v3 load < dump.json
```
## enable_prometheus
- Type: boolean
- Default: true
Enable built-in Prometheus metrics exporter at mon_http_port (8060 by default).
Note that only the active (master) monitor exposes metrics, the others return
HTTP 503, so you should add all monitor URLs to your Prometheus job configuration.
A Grafana dashboard suitable for this exporter is available here: [Vitastor-Grafana-6+.json](../../mon/scripts/Vitastor-Grafana-6+.json).
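For instance, a minimal Prometheus scrape job covering all monitors could look like
this (a sketch; host names are hypothetical and the default `/metrics` path is assumed):
```
scrape_configs:
  - job_name: 'vitastor'
    static_configs:
      - targets: ['mon0:8060', 'mon1:8060', 'mon2:8060']
```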
## mon_http_port
- Type: integer
- Default: 8060
HTTP port for monitors to listen on (including metrics exporter)
## mon_http_ip
- Type: string
IP address for monitors to listen on (all addresses by default)
## mon_https_cert
- Type: string
Path to PEM SSL certificate file for monitor to listen using HTTPS
## mon_https_key
- Type: string
Path to PEM SSL private key file for monitor to listen using HTTPS
## mon_https_client_auth
- Type: boolean
- Default: false
Enable HTTPS client certificate-based authorization for monitor connections
## mon_https_ca
- Type: string
Path to CA certificate for client HTTPS authorization
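As an illustration, an HTTPS-enabled monitor could combine these options in
`/etc/vitastor/vitastor.conf` like this (a sketch; the file paths are hypothetical):
```
{
  "mon_https_cert": "/etc/vitastor/ssl/mon.crt",
  "mon_https_key": "/etc/vitastor/ssl/mon.key",
  "mon_https_client_auth": true,
  "mon_https_ca": "/etc/vitastor/ssl/client-ca.crt"
}
```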
## etcd_mon_ttl
@@ -86,3 +176,18 @@ present in the configuration, then it is defined with the default priority
Use the old PG combination generator which doesn't support [level_placement](pool.en.md#level_placement)
and [raw_placement](pool.en.md#raw_placement) for pools which don't use these features.
## osd_backfillfull_ratio
- Type: number
- Default: 0.99
Monitors try to prevent OSDs from becoming 100% full during rebalance or recovery
by calculating how much space will be occupied on every OSD after all rebalance
and recovery operations finish, and by pausing rebalance and recovery if that
amount of space exceeds the OSD capacity multiplied by the value of this
configuration parameter.
Future used space is calculated by summing space used by all user data blocks
(objects) in all PGs placed on a specific OSD, even if some of these objects
currently reside on a different set of OSDs.
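As a worked example (numbers are illustrative): with osd_backfillfull_ratio = 0.99
and a 4 TB OSD, rebalance and recovery are paused as soon as the projected usage
of that OSD exceeds 0.99 × 4 TB = 3.96 TB.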


@@ -8,6 +8,14 @@
These parameters are only used by Vitastor monitors.
- [use_antietcd](#use_antietcd)
- [enable_prometheus](#enable_prometheus)
- [mon_http_port](#mon_http_port)
- [mon_http_ip](#mon_http_ip)
- [mon_https_cert](#mon_https_cert)
- [mon_https_key](#mon_https_key)
- [mon_https_client_auth](#mon_https_client_auth)
- [mon_https_ca](#mon_https_ca)
- [etcd_mon_ttl](#etcd_mon_ttl)
- [etcd_mon_timeout](#etcd_mon_timeout)
- [etcd_mon_retries](#etcd_mon_retries)
@@ -16,6 +24,90 @@
- [osd_out_time](#osd_out_time)
- [placement_levels](#placement_levels)
- [use_old_pg_combinator](#use_old_pg_combinator)
- [osd_backfillfull_ratio](#osd_backfillfull_ratio)
## use_antietcd
- Type: boolean (yes/no)
- Default: false
Enable the experimental built-in etcd replacement (clustered key-value database):
[antietcd](https://git.yourcmc.ru/vitalif/antietcd/).
When set to true, the monitor starts antietcd automatically if it finds
a network interface with one of the addresses specified in the `etcd_address`
configuration option (in `/etc/vitastor/vitastor.conf` or in the monitor
command-line options). If there are multiple such addresses, the `antietcd_port`
option is also checked and antietcd is started for the address with the matching
port. By default, antietcd accepts connections on the selected matching IP,
but it can also be set manually with the `antietcd_ip` option.
When antietcd is started, the monitor itself stores the central cluster metadata
and exposes an etcd-compatible REST API. On disk, these metadata are stored in
`/var/lib/vitastor/mon_2379.json.gz` (can be overridden with the
antietcd_data_file or antietcd_data_dir parameters). All other antietcd
parameters (see [here](https://git.yourcmc.ru/vitalif/antietcd/)), except
node_id, cluster, cluster_key, persist_filter, stale_read, can also be set
in the Vitastor configuration with the `antietcd_` prefix.
You can dump/load data to or from antietcd using its `anticli` tool:
```
npm exec anticli -e http://etcd:2379/v3 get --prefix '' --no-temp > dump.json
npm exec anticli -e http://antietcd:2379/v3 load < dump.json
```
## enable_prometheus
- Type: boolean (yes/no)
- Default: true
Enable the built-in Prometheus metrics exporter at mon_http_port (8060 by default).
Note that only the active (master) monitor exposes metrics, the others return
HTTP 503, so you should add all monitor addresses to your Prometheus
metrics collection job.
A Grafana dashboard suitable for this exporter is available here: [Vitastor-Grafana-6+.json](../../mon/scripts/Vitastor-Grafana-6+.json).
## mon_http_port
- Type: integer
- Default: 8060
HTTP port for monitors to listen on (including the metrics exporter)
## mon_http_ip
- Type: string
IP address for monitors to listen on (all addresses by default)
## mon_https_cert
- Type: string
Path to a PEM SSL certificate file for the monitor to listen using HTTPS
## mon_https_key
- Type: string
Path to a PEM SSL private key file for the monitor to listen using HTTPS
## mon_https_client_auth
- Type: boolean (yes/no)
- Default: false
Enable HTTPS client certificate-based authorization in the monitor's HTTPS server
## mon_https_ca
- Type: string
Path to a CA certificate for client HTTPS authorization
## etcd_mon_ttl
@@ -87,3 +179,19 @@ OSD before updating the aggregated statistics
Use the old PG combination generator which doesn't support [level_placement](pool.ru.md#level_placement)
and [raw_placement](pool.ru.md#raw_placement) for pools which don't use these features.
## osd_backfillfull_ratio
- Type: number
- Default: 0.99
Monitors try to prevent OSDs from becoming 100% full during rebalance or
recovery by calculating how much space will be occupied on every OSD after
all rebalance and recovery operations finish, and by pausing rebalance and
recovery if the calculated amount exceeds the OSD capacity multiplied by
the value of this parameter.
Future used space is calculated by summing the space used by all user data
blocks (objects) in all PGs placed on a specific OSD, even if some of these
objects currently reside on a different set of OSDs.


@@ -68,11 +68,17 @@ but they are not connected to the cluster.
- Type: string
RDMA device name to use for Vitastor OSD communications (for example,
"rocep5s0f0"). Now Vitastor supports all adapters, even ones without
ODP support, like Mellanox ConnectX-3 and non-Mellanox cards.
"rocep5s0f0"). If not specified, Vitastor will try to find an RoCE
device matching [osd_network](osd.en.md#osd_network), preferring RoCEv2,
or choose the first available RDMA device if no RoCE devices are
found or if `osd_network` is not specified. Auto-selection is also
unsupported with old libibverbs < v32, like in Debian 10 Buster or
CentOS 7.
Versions up to Vitastor 1.2.0 required ODP which is only present in
Mellanox ConnectX >= 4. See also [rdma_odp](#rdma_odp).
Vitastor supports all adapters, even ones without ODP support, like
Mellanox ConnectX-3 and non-Mellanox cards. Versions up to Vitastor
1.2.0 required ODP which is only present in Mellanox ConnectX >= 4.
See also [rdma_odp](#rdma_odp).
Run `ibv_devinfo -v` as root to list available RDMA devices and their
features.
@@ -95,15 +101,17 @@ your device has.
## rdma_gid_index
- Type: integer
- Default: 0
Global address identifier index of the RDMA device to use. Different GID
indexes may correspond to different protocols like RoCEv1, RoCEv2 and iWARP.
Search for "GID" in `ibv_devinfo -v` output to determine which GID index
you need.
If not specified, Vitastor will try to auto-select a RoCEv2 IPv4 GID, then
a RoCEv2 IPv6 GID, then a RoCEv1 IPv4 GID, then a RoCEv1 IPv6 GID, then an IB GID.
GID auto-selection is not supported with libibverbs < v32.
The correct rdma_gid_index for RoCEv2 is usually 1 (IPv6) or 3 (IPv4).
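For example, to pin a specific adapter and GID instead of relying on
auto-selection, you could set the following in `/etc/vitastor/vitastor.conf`
(a sketch; the device name and index are illustrative):
```
{
  "rdma_device": "rocep5s0f0",
  "rdma_gid_index": 3
}
```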
## rdma_mtu


@@ -71,12 +71,17 @@ RDMA may be needed only if the clients have
RDMA device name for communication with Vitastor OSDs (for example, "rocep5s0f0").
If not specified, Vitastor will try to find a RoCE device matching
[osd_network](osd.en.md#osd_network), preferring RoCEv2, or choose the first
available RDMA device if no RoCE devices are found or if `osd_network` is not
set. Auto-selection is also not supported with old versions of the libibverbs
library (< v32), for example in Debian 10 Buster or CentOS 7.
Vitastor supports all adapter models, including ones without ODP support,
i.e. you can use RDMA with ConnectX-3 and non-Mellanox cards. Vitastor versions
up to and including 1.2.0 required ODP, which is only available on Mellanox
ConnectX 4 and newer. See also [rdma_odp](#rdma_odp).
Run `ibv_devinfo -v` as a superuser to see the list of available RDMA devices,
their parameters and features.
@@ -101,15 +106,18 @@ Control) and ECN (Explicit Congestion Notification).
## rdma_gid_index
- Type: integer
- Default: 0
Global address identifier index of the RDMA device to use. Different gid_index
values may correspond to different protocols: RoCEv1, RoCEv2, iWARP. To find out
which one you need, look at the lines containing "GID" in the output of
`ibv_devinfo -v`.
If not specified, Vitastor will try to automatically select first a GID matching
RoCEv2 IPv4, then RoCEv2 IPv6, then RoCEv1 IPv4, then RoCEv1 IPv6, then IB.
GID auto-selection is not supported with old versions of libibverbs (< v32).
The correct rdma_gid_index for RoCEv2 is usually 1 (IPv6) or 3 (IPv4).
## rdma_mtu


@@ -10,6 +10,7 @@ These parameters only apply to OSDs, are not fixed at the moment of OSD drive
initialization and can be changed - either with an OSD restart or, for some of
them, even without restarting by updating configuration in etcd.
- [osd_iothread_count](#osd_iothread_count)
- [etcd_report_interval](#etcd_report_interval)
- [etcd_stats_interval](#etcd_stats_interval)
- [run_primary](#run_primary)
@@ -61,6 +62,18 @@ them, even without restarting by updating configuration in etcd.
- [recovery_tune_sleep_min_us](#recovery_tune_sleep_min_us)
- [recovery_tune_sleep_cutoff_us](#recovery_tune_sleep_cutoff_us)
## osd_iothread_count
- Type: integer
- Default: 0
TCP network I/O thread count for the OSD. When non-zero, a single OSD process
may handle more TCP I/O, but at the cost of increased latency caused by thread
switching overhead. RDMA isn't affected by this option.
Because of the latency cost, instead of enabling OSD I/O threads it's
recommended to simply create multiple OSDs per disk, or to use RDMA.
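For reference, a sketch of enabling it anyway, in `/etc/vitastor/vitastor.conf`
or in the etcd key `/vitastor/config/global` (4 is an arbitrary example value):
```
{
  "osd_iothread_count": 4
}
```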
## etcd_report_interval
- Type: seconds


@@ -11,6 +11,7 @@
time with an OSD restart, and some of them even without a restart, by changing
the configuration in etcd.
- [osd_iothread_count](#osd_iothread_count)
- [etcd_report_interval](#etcd_report_interval)
- [etcd_stats_interval](#etcd_stats_interval)
- [run_primary](#run_primary)
@@ -62,6 +63,19 @@
- [recovery_tune_sleep_min_us](#recovery_tune_sleep_min_us)
- [recovery_tune_sleep_cutoff_us](#recovery_tune_sleep_cutoff_us)
## osd_iothread_count
- Type: integer
- Default: 0
Number of separate threads for handling TCP network I/O on the OSD side.
Enabling this option allows each individual OSD to push more data over the
network, but worsens latency because of the thread switching overhead.
RDMA is not affected by this option.
Because of the latency cost, instead of enabling OSD I/O threads it's
recommended to simply create several OSDs per disk, or to use RDMA.
## etcd_report_interval
- Type: seconds


@@ -55,7 +55,7 @@ Examples:
OSD placement tree is set in a separate etcd key `/vitastor/config/node_placement`
in the following JSON format:
```
{
"<node name or OSD number>": {
"level": "<level>",
@@ -63,7 +63,7 @@ in the following JSON format:
},
...
}
```
Here, if a node name is a number then it is assumed to refer to an OSD.
Level of the OSD is always "osd" and cannot be overriden. You may only
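For example, a concrete tree could look like this (an illustrative sketch: the
"rack" level assumes a matching placement_levels configuration, and OSD 1 is
referenced by its number):
```
{
  "rack1": { "level": "rack" },
  "host1": { "level": "host", "parent": "rack1" },
  "1": { "parent": "host1" }
}
```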


@@ -54,7 +54,7 @@
The OSD placement tree is set in a separate etcd key `/vitastor/config/node_placement`
in the following JSON format:
```
{
"<node name or OSD number>": {
"level": "<level>",
@@ -62,7 +62,7 @@
},
...
}
```
Here, if a node name is a number, it is assumed to refer to an OSD. The level
of an OSD is always "osd" and cannot be overridden. For OSDs you may only


@@ -1,3 +1,32 @@
- name: client_iothread_count
type: int
default: 0
online: false
info: |
Number of separate threads for handling TCP network I/O at the client library
side. Enabling 4 threads usually allows increasing the peak performance of each
client from approx. 2-3 to 7-8 GByte/s of linear read/write and from approx.
100-150 to 400 thousand iops, but at the same time it increases latency.
The latency increase depends on the CPU: with CPU power saving disabled, latency
only increases by ~10 us (equivalent to a Q=1 iops decrease from 10500 to 9500);
with CPU power saving enabled, it may be as high as 500 us (equivalent to a Q=1
iops decrease from 2000 to 1000). RDMA isn't affected by this option.
It's recommended to enable client I/O threads if you don't use RDMA and want
to increase peak client performance.
info_ru: |
Number of separate threads for handling TCP network I/O on the client library
side. Enabling 4 threads usually allows raising the peak performance of each
client from approx. 2-3 to 7-8 GByte/s of linear read/write and from approx.
100-150 to 400 thousand iops, but it worsens latency. The latency increase
depends on the CPU: with CPU power saving disabled it is only ~10 microseconds
(equivalent to a Q=1 iops drop from 10500 to 9500), while with power saving
enabled it may be as high as 500 microseconds (equivalent to a Q=1 iops drop
from 2000 to 1000). RDMA is not affected by this option.
It's recommended to enable client I/O threads if you don't use RDMA and want
to raise peak client performance.
- name: client_retry_interval
type: ms
min: 10
@@ -32,6 +61,24 @@
info_ru: |
Retry write requests that failed with out-of-space errors, i.e. wait
until space is freed on the OSDs.
- name: client_wait_up_timeout
type: sec
default: 16
online: true
info: |
Wait for this number of seconds until PGs are up when doing operations
which require all PGs to be up. Currently only used by object listings
in delete and merge-based commands ([vitastor-cli rm](../usage/cli.en.md#rm), merge and so on).
The default value is calculated as `1 + OSD lease timeout`, which is
`1 + etcd_report_interval + max_etcd_attempts*2*etcd_quick_timeout`.
info_ru: |
Wait this long for PGs to come up when performing operations that require
all PGs to be up. Currently only used by object listings in delete and
merge-based commands ([vitastor-cli rm](../usage/cli.ru.md#rm), merge and so on).
The default value is calculated as `1 + OSD lease timeout`, which equals
`1 + etcd_report_interval + max_etcd_attempts*2*etcd_quick_timeout`.
- name: client_max_dirty_bytes
type: int
default: 33554432


@@ -14,8 +14,12 @@
{{../../installation/packages.en.md}}
{{../../installation/docker.en.md}}
{{../../installation/proxmox.en.md}}
{{../../installation/opennebula.en.md}}
{{../../installation/openstack.en.md}}
{{../../installation/kubernetes.en.md}}


@@ -14,8 +14,12 @@
{{../../installation/packages.ru.md}}
{{../../installation/docker.ru.md}}
{{../../installation/proxmox.ru.md}}
{{../../installation/opennebula.ru.md}}
{{../../installation/openstack.ru.md}}
{{../../installation/kubernetes.ru.md}}


@@ -47,14 +47,24 @@
Cannot be smaller than the sector size of the OSD data drives.
- name: immediate_commit
type: string
default: all
info: |
One of "none", "all" or "small". Global value, may be overridden [at pool level](pool.en.md#immediate_commit).
This parameter is also really important for performance.
TLDR: the default "all" is optimal for server-grade SSDs with supercapacitor-based
power loss protection (nonvolatile write-through cache) and also for most HDDs.
"none" or "small" should only be selected if you use desktop SSDs without
capacitors or drives with a slow write-back cache that can't be disabled. Check
the immediate_commit of your OSDs in [ls-osd](../usage/cli.en.md#ls-osd).
Detailed explanation:
Desktop SSDs are very fast (100000+ iops) for simple random writes
without cache flush. However, they are really slow (only around 1000 iops)
if you try to fsync() each write, that is, if you want to guarantee that
each change gets actually persisted to the physical media.
Server-grade SSDs with "Advanced/Enhanced Power Loss Protection" or with
"Supercapacitor-based Power Loss Protection", on the other hand, are equally
@@ -66,8 +76,8 @@
efficiently utilize desktop SSDs by postponing fsync until the client calls
it explicitly.
This is what this parameter regulates. When it's set to "all", the Vitastor
cluster commits each change to disks immediately and clients just
ignore fsyncs because they know for sure that they're unneeded. This reduces
the amount of network roundtrips performed by clients and improves
performance. So it's always better to use server grade SSDs with
@@ -87,17 +97,22 @@
it (they have internal SSD cache even though it's not stated in datasheets).
Setting this parameter to "all" or "small" in OSD parameters requires enabling
[disable_journal_fsync](layout-osd.en.md#disable_journal_fsync) and
[disable_meta_fsync](layout-osd.en.md#disable_meta_fsync), setting it to
"all" also requires enabling [disable_data_fsync](layout-osd.en.md#disable_data_fsync).
vitastor-disk tries to do that by default, first checking/disabling the drive cache.
If it can't disable the drive cache, the OSD gets initialized with "none".
info_ru: |
One of "none", "small" or "all". Global value, may be
overridden [at the pool level](pool.ru.md#immediate_commit).
This parameter is also really important for performance.
In short: the default "all" is optimal for all server-grade SSDs with
supercapacitors and also for most HDDs. "none" and "small" only make sense
for desktop-class SSDs without supercapacitors or for drives with a slow
write-back cache that can't be disabled.
Check the immediate_commit setting of your OSDs in the output of the [ls-osd](../usage/cli.ru.md#ls-osd) command.
Desktop SSD models are very fast (100000+ operations per second) at simple
random writes without cache flushes. However, they are very
@@ -118,7 +133,7 @@
efficiently utilize desktop SSDs.
This parameter regulates exactly that. When it is set to "all", the
Vitastor cluster immediately commits every change to the physical media and
clients can simply ignore fsync requests, because they know for sure that
fsyncs are not needed. This reduces the number of network requests to OSDs
and improves performance. So even with Vitastor it's always better
@@ -141,13 +156,6 @@
specified in the datasheets).
Specifying "all" or "small" in the OSD settings / command line requires
enabling [disable_journal_fsync](layout-osd.ru.md#disable_journal_fsync) and
[disable_meta_fsync](layout-osd.ru.md#disable_meta_fsync); the "all" value
also requires enabling [disable_data_fsync](layout-osd.ru.md#disable_data_fsync).


@@ -110,20 +110,22 @@
type: bool
default: false
info: |
Do not issue fsyncs to the data device, i.e. do not force it to flush its cache.
Safe ONLY if your data device has write-through cache or if its write-back
cache is disabled. If you disable the drive cache manually with `hdparm` or
by writing to `/sys/.../scsi_disk/cache_type`, then make sure that you do it
every time before starting Vitastor OSD (vitastor-disk does it automatically).
See also [immediate_commit](layout-cluster.en.md#immediate_commit)
for information about how to benefit from disabled cache.
info_ru: |
Do not issue fsyncs to the data device, i.e. do not force it to flush its cache.
Safe ONLY if your data device has write-through cache or if its write-back
cache is disabled. If you disable the drive cache manually with `hdparm` or
by writing to `/sys/.../scsi_disk/cache_type`, then make sure that you do it
every time before starting Vitastor OSD (vitastor-disk does it automatically).
See also [immediate_commit](layout-cluster.ru.md#immediate_commit)
for information about how to benefit from disabled cache.
- name: disable_meta_fsync
type: bool
default: false
@@ -179,8 +181,7 @@
Because of this it can actually be beneficial to use SSDs which work well
with 512 byte sectors and use 512 byte disk_alignment, journal_block_size
and meta_block_size. But at the moment, no such SSDs are known...
Clients don't need to be aware of disk_alignment, so it's not required to
put a modified value into etcd key /vitastor/config/global.
@@ -198,9 +199,8 @@
Because of this it can actually be beneficial to find SSDs which work well
with smaller, 512-byte, blocks and to use 512-byte disk_alignment,
journal_block_size and meta_block_size. However, at the moment no such SSDs
are known...
Clients don't need to be aware of disk_alignment, so it's not required to
put a modified value of this parameter into etcd key /vitastor/config/global.


@@ -1,3 +1,103 @@
- name: use_antietcd
type: bool
default: false
info: |
Enable experimental built-in etcd replacement (clustered key-value database):
[antietcd](https://git.yourcmc.ru/vitalif/antietcd/).
When set to true, the monitor runs internal antietcd automatically if it finds
a network interface with an IP address matching one of the addresses in the
`etcd_address` configuration option (in `/etc/vitastor/vitastor.conf` or in
the monitor command line). If there are multiple matching addresses, it also
checks `antietcd_port` and starts antietcd for the address with the matching port.
By default, antietcd accepts connections on the selected IP address, but the
address can also be overridden manually in the `antietcd_ip` option.
When antietcd is started, the monitor stores cluster metadata itself and exposes
an etcd-compatible REST API. On disk, these metadata are stored in
`/var/lib/vitastor/mon_2379.json.gz` (can be overridden in antietcd_data_file
or antietcd_data_dir options). All other antietcd parameters
(see [here](https://git.yourcmc.ru/vitalif/antietcd/)) except node_id,
cluster, cluster_key, persist_filter, stale_read can also be set in
Vitastor configuration with the `antietcd_` prefix.
You can dump/load data to or from antietcd using the Antietcd `anticli` tool:
```
npm exec anticli -e http://etcd:2379/v3 get --prefix '' --no-temp > dump.json
npm exec anticli -e http://antietcd:2379/v3 load < dump.json
```
info_ru: |
Enable the experimental built-in etcd replacement (clustered key-value database):
[antietcd](https://git.yourcmc.ru/vitalif/antietcd/).
When set to true, the monitor starts antietcd automatically if it finds
a network interface with one of the addresses specified in the `etcd_address`
configuration option (in `/etc/vitastor/vitastor.conf` or in the monitor
command-line options). If there are multiple such addresses, the `antietcd_port`
option is also checked and antietcd is started for the address with the
matching port. By default, antietcd accepts connections on the selected
matching IP, but it can also be set manually with the `antietcd_ip` option.
When antietcd is started, the monitor itself stores the central cluster
metadata and exposes an etcd-compatible REST API. On disk, these metadata are
stored in `/var/lib/vitastor/mon_2379.json.gz` (can be overridden with the
antietcd_data_file or antietcd_data_dir parameters). All other antietcd
parameters (see [the link](https://git.yourcmc.ru/vitalif/antietcd/)), except
node_id, cluster, cluster_key, persist_filter, stale_read, can also be set
in the Vitastor configuration with the `antietcd_` prefix.
You can dump/load data to or from antietcd using its `anticli` tool:
```
npm exec anticli -e http://etcd:2379/v3 get --prefix '' --no-temp > dump.json
npm exec anticli -e http://antietcd:2379/v3 load < dump.json
```
- name: enable_prometheus
type: bool
default: true
info: |
Enable built-in Prometheus metrics exporter at mon_http_port (8060 by default).
Note that only the active (master) monitor exposes metrics, others return
HTTP 503. So you should add all monitor URLs to your Prometheus job configuration.
A Grafana dashboard suitable for this exporter is available here: [Vitastor-Grafana-6+.json](../../mon/scripts/Vitastor-Grafana-6+.json).
info_ru: |
Enable the built-in Prometheus metrics exporter at mon_http_port (8060 by default).
Note that only the active (master) monitor exposes metrics, the others return
HTTP 503, so you should add all monitor addresses to your Prometheus
metrics collection job.
A Grafana dashboard suitable for this exporter: [Vitastor-Grafana-6+.json](../../mon/scripts/Vitastor-Grafana-6+.json).
- name: mon_http_port
type: int
default: 8060
info: HTTP port for monitors to listen on (including metrics exporter)
info_ru: HTTP port for monitors to listen on (including the metrics exporter)
- name: mon_http_ip
type: string
info: IP address for monitors to listen on (all addresses by default)
info_ru: IP address for monitors to listen on (all addresses by default)
- name: mon_https_cert
type: string
info: Path to PEM SSL certificate file for monitor to listen using HTTPS
info_ru: Path to a PEM SSL certificate file for the monitor to listen using HTTPS
- name: mon_https_key
type: string
info: Path to PEM SSL private key file for monitor to listen using HTTPS
info_ru: Path to a PEM SSL private key file for the monitor to listen using HTTPS
- name: mon_https_client_auth
type: bool
default: false
info: Enable HTTPS client certificate-based authorization for monitor connections
info_ru: Enable HTTPS client certificate-based authorization in the monitor's HTTPS server
- name: mon_https_ca
type: string
info: Path to CA certificate for client HTTPS authorization
info_ru: Path to a CA certificate for client HTTPS authorization
- name: etcd_mon_ttl
type: sec
min: 5
@@ -72,3 +172,27 @@
info_ru: |
Use the old PG combination generator which doesn't support [level_placement](pool.ru.md#level_placement)
and [raw_placement](pool.ru.md#raw_placement) for pools which don't use these features.
- name: osd_backfillfull_ratio
type: float
default: 0.99
info: |
Monitors try to prevent OSDs from becoming 100% full during rebalance or recovery
by calculating how much space will be occupied on every OSD after all rebalance
and recovery operations finish, and by pausing rebalance and recovery if that
amount of space exceeds the OSD capacity multiplied by the value of this
configuration parameter.
Future used space is calculated by summing space used by all user data blocks
(objects) in all PGs placed on a specific OSD, even if some of these objects
currently reside on a different set of OSDs.
info_ru: |
Monitors try to prevent OSDs from becoming 100% full during rebalance or
recovery by calculating how much space will be occupied on every OSD after
all rebalance and recovery operations finish, and by pausing rebalance and
recovery if the calculated amount exceeds the OSD capacity multiplied by
the value of this parameter.
Future used space is calculated by summing the space used by all user data
blocks (objects) in all PGs placed on a specific OSD, even if some of these
objects currently reside on a different set of OSDs.


@@ -48,11 +48,17 @@
type: string
info: |
RDMA device name to use for Vitastor OSD communications (for example,
"rocep5s0f0"). Now Vitastor supports all adapters, even ones without
ODP support, like Mellanox ConnectX-3 and non-Mellanox cards.
"rocep5s0f0"). If not specified, Vitastor will try to find an RoCE
device matching [osd_network](osd.en.md#osd_network), preferring RoCEv2,
or choose the first available RDMA device if no RoCE devices are
found or if `osd_network` is not specified. Auto-selection is also
unsupported with old libibverbs < v32, like in Debian 10 Buster or
CentOS 7.
Versions up to Vitastor 1.2.0 required ODP which is only present in
Mellanox ConnectX >= 4. See also [rdma_odp](#rdma_odp).
Vitastor supports all adapters, even ones without ODP support, like
Mellanox ConnectX-3 and non-Mellanox cards. Versions up to Vitastor
1.2.0 required ODP which is only present in Mellanox ConnectX >= 4.
See also [rdma_odp](#rdma_odp).
Run `ibv_devinfo -v` as root to list available RDMA devices and their
features.
@@ -64,12 +70,17 @@
PFC (Priority Flow Control) and ECN (Explicit Congestion Notification).
info_ru: |
RDMA device name for communication with Vitastor OSDs (for example, "rocep5s0f0").
If not specified, Vitastor will try to find a RoCE device matching
[osd_network](osd.en.md#osd_network), preferring RoCEv2, or choose the first
available RDMA device if no RoCE devices are found or if `osd_network` is not
set. Auto-selection is also not supported with old versions of the libibverbs
library (< v32), for example in Debian 10 Buster or CentOS 7.
Vitastor supports all adapter models, including ones without ODP support,
i.e. you can use RDMA with ConnectX-3 and non-Mellanox cards. Vitastor versions
up to and including 1.2.0 required ODP, which is only available on Mellanox
ConnectX 4 and newer. See also [rdma_odp](#rdma_odp).
Run `ibv_devinfo -v` as a superuser to see the list of available RDMA devices,
their parameters and features.
@@ -94,23 +105,29 @@
`ibv_devinfo -v`.
- name: rdma_gid_index
type: int
default: 0
info: |
Global address identifier index of the RDMA device to use. Different GID
indexes may correspond to different protocols like RoCEv1, RoCEv2 and iWARP.
Search for "GID" in `ibv_devinfo -v` output to determine which GID index
you need.
If not specified, Vitastor will try to auto-select a RoCEv2 IPv4 GID, then
a RoCEv2 IPv6 GID, then a RoCEv1 IPv4 GID, then a RoCEv1 IPv6 GID, then an IB GID.
GID auto-selection is not supported with libibverbs < v32.
The correct rdma_gid_index for RoCEv2 is usually 1 (IPv6) or 3 (IPv4).
info_ru: |
Global address identifier index of the RDMA device to use. Different gid_index
values may correspond to different protocols: RoCEv1, RoCEv2, iWARP. To find out
which one you need, look at the lines containing "GID" in the output of
`ibv_devinfo -v`.
If not specified, Vitastor will try to automatically select first a GID matching
RoCEv2 IPv4, then RoCEv2 IPv6, then RoCEv1 IPv4, then RoCEv1 IPv6, then IB.
GID auto-selection is not supported with old versions of libibverbs (< v32).
The correct rdma_gid_index for RoCEv2 is usually 1 (IPv6) or 3 (IPv4).
- name: rdma_mtu
type: int
default: 4096


@@ -1,3 +1,21 @@
- name: osd_iothread_count
type: int
default: 0
info: |
TCP network I/O thread count for the OSD. When non-zero, a single OSD process
may handle more TCP I/O, but at the cost of increased latency caused by thread
switching overhead. RDMA isn't affected by this option.
Because of the latency cost, instead of enabling OSD I/O threads it's
recommended to simply create multiple OSDs per disk, or to use RDMA.
info_ru: |
Number of separate threads for handling TCP network I/O on the OSD side.
Enabling this option allows each individual OSD to push more data over the
network, but worsens latency because of the thread switching overhead.
RDMA is not affected by this option.
Because of the latency cost, instead of enabling OSD I/O threads it's
recommended to simply create several OSDs per disk, or to use RDMA.
- name: etcd_report_interval
type: sec
default: 5


@@ -0,0 +1,60 @@
[Documentation](../../README.md#documentation) → Installation → Dockerized Installation
-----
[Read in Russian](docker.ru.md)
# Dockerized Installation
Vitastor may be installed in Docker/Podman. In such setups etcd, monitors and OSDs
all run in containers, but everything else looks as close as possible to a usual
setup with packages:
- host network is used
- auto-start is implemented through udev and systemd
- logs are written to journald (not docker json log files)
- command-line wrapper scripts are installed to the host system to call vitastor-disk,
vitastor-cli and others through the container
Such installations may be useful when it's impossible or inconvenient to install
Vitastor from packages, for example, in exotic Linux distributions.
If you want something more than a simple containerized installation, you can also take a look
at the Vitastor Kubernetes operator: https://github.com/Antilles7227/vitastor-operator
## Installing Containers
The installation procedure is very simple.
1. Download a Docker image of the desired version: \
`docker pull vitastor:1.10.2`
2. Install scripts to the host system: \
`docker run --rm -it -v /etc:/host-etc -v /usr/bin:/host-bin vitastor:1.10.2 install.sh`
3. Reload udev rules: \
`udevadm control --reload-rules`
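After that, the host-side wrappers should work; as a quick check you can run,
for example (assuming `/etc/vitastor/vitastor.conf` already points at your etcd):
```
vitastor-cli status
```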
And you can return to [Quick Start](../intro/quickstart.en.md).
## Upgrading Containers
First make sure to check the topic [Upgrading Vitastor](../usage/admin.en.md#upgrading-vitastor)
to figure out if you need any additional steps.
Then, to upgrade a containerized installation, you just need to change the `VITASTOR_VERSION`
option in `/etc/vitastor/docker.conf` and restart all Vitastor services:
`systemctl restart vitastor.target`
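For illustration, assuming the env-file format installed by `install.sh`,
`/etc/vitastor/docker.conf` could look like this after the edit:
```
# /etc/vitastor/docker.conf
VITASTOR_VERSION=1.11.0
```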
## QEMU
Vitastor Docker image also contains QEMU, qemu-img and qemu-storage-daemon built with Vitastor support.
However, running QEMU in Docker is harder to set up and depends on the virtualization UI
you use (OpenNebula, Proxmox and so on). Some of them also require a patched libvirt.
That's why the containerized installation of Vitastor doesn't contain a ready-made QEMU setup and it's
recommended to install QEMU from packages or build it manually.
## fio
Vitastor Docker image also contains fio and installs a wrapper called `vitastor-fio` to use it from
the host system.


@@ -0,0 +1,60 @@
[Documentation](../../README-ru.md#документация) → Installation → Dockerized Installation
-----
[Read in English](docker.en.md)
# Dockerized Installation
Vitastor may be installed in Docker/Podman. In such setups etcd, monitors and OSDs
run in containers, but everything else looks as close as possible to an installation from packages:
- the host network is used
- udev and systemd are used for auto-start
- logs are written to journald (not to docker json log files)
- wrappers for calling the command-line tools vitastor-disk, vitastor-cli and
others through the container are installed to the host system
Such an installation is useful when installing from packages is impossible or inconvenient,
for example, in non-standard Linux distributions.
If you want something more than a simple containerized installation, you can also take a look
at the Vitastor Kubernetes operator: https://github.com/Antilles7227/vitastor-operator
## Installing Containers
The installation procedure is very simple.
1. Download a Docker image of the desired version: \
`docker pull vitastor:1.10.2`
2. Install the scripts to the host system with: \
`docker run --rm -it -v /etc:/host-etc -v /usr/bin:/host-bin vitastor:1.10.2 install.sh`
3. Reload the udev rules: \
`udevadm control --reload-rules`
After that you can return to [Quick Start](../intro/quickstart.ru.md).
## Upgrading Containers
First make sure to check the topic [Upgrading Vitastor](../usage/admin.ru.md#обновление-vitastor)
to figure out if you need any additional steps.
Then, to upgrade a Docker installation, you just need to change the `VITASTOR_VERSION`
option in `/etc/vitastor/docker.conf` and restart all Vitastor services with:
`systemctl restart vitastor.target`
## QEMU
The Docker image also contains QEMU, qemu-img and qemu-storage-daemon built with Vitastor support.
However, setting up QEMU in Docker is harder, and the way to run it depends on the virtualization
UI used (OpenNebula, Proxmox and so on). OpenNebula, for example, also requires a patched
libvirt.
That's why the Docker build doesn't currently include a ready-made way to run QEMU,
and it's recommended to install QEMU from packages or build it manually.
## fio
fio is also included in the vitastor Docker container, and a wrapper called `vitastor-fio`
is installed to the host system for running fio through the container.


@@ -6,9 +6,18 @@
# Kubernetes CSI
Vitastor has a CSI plugin for Kubernetes which supports block-based and VitastorFS-based volumes.
Block-based volumes may be formatted and mounted with a normal FS (ext4 or xfs). Such volumes
only support the RWO (ReadWriteOnce) mode.
Block-based volumes may also be left without a FS and attached to the container as a block
device. Such volumes also support the RWX (ReadWriteMany) mode.
VitastorFS-based volumes use a clustered file system and support FS-based RWX (ReadWriteMany)
mode. However, such volumes don't support quotas and snapshots.
To deploy the CSI plugin, take the manifests from the [csi/deploy/](../../csi/deploy/) directory, put your
Vitastor configuration in [001-csi-config-map.yaml](../../csi/deploy/001-csi-config-map.yaml),
configure storage class in [009-storage-class.yaml](../../csi/deploy/009-storage-class.yaml)
and apply all `NNN-*.yaml` manifests to your Kubernetes installation:
@@ -23,16 +32,16 @@ After that you'll be able to create PersistentVolumes.
kernel modules enabled (vdpa, vduse, virtio-vdpa). If your distribution doesn't
have them pre-built - build them yourself ([instructions](../usage/qemu.en.md#vduse)),
I promise it's worth it :-). When VDUSE is unavailable, CSI driver uses [NBD](../usage/nbd.en.md)
to map Vitastor devices. NBD is slower and, with kernels older than 5.19, unmountable
if the cluster becomes unresponsive.
## Features
Vitastor CSI supports:
- Kubernetes starting with 1.20 (or 1.17 for older vitastor-csi <= 1.1.0)
- Block-based FS-formatted RWO (ReadWriteOnce) volumes. Example: [PVC](../../csi/deploy/example-pvc.yaml), [pod](../../csi/deploy/example-test-pod.yaml)
- Raw block RWX (ReadWriteMany) volumes. Example: [PVC](../../csi/deploy/example-pvc-block.yaml), [pod](../../csi/deploy/example-test-pod-block.yaml)
- VitastorFS-based RWX (ReadWriteMany) volumes. Example: [storage class](../../csi/deploy/example-storage-class-fs.yaml)
- Volume expansion
- Volume snapshots. Example: [snapshot class](../../csi/deploy/example-snapshot-class.yaml), [snapshot](../../csi/deploy/example-snapshot.yaml), [clone](../../csi/deploy/example-snapshot-clone.yaml)
- [VDUSE](../usage/qemu.en.md#vduse) (preferred) and [NBD](../usage/nbd.en.md) device mapping methods
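For example, a raw block RWX volume can be requested with a PVC like the following
(a minimal sketch; the storage class name `vitastor` is an assumption matching
whatever you configured in 009-storage-class.yaml):
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vitastor-block
spec:
  storageClassName: vitastor
  volumeMode: Block
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```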


@@ -6,7 +6,17 @@
# Kubernetes CSI
Vitastor has a CSI plugin for Kubernetes which supports block-based volumes and volumes
based on the clustered FS VitastorFS.
Block-based volumes may be formatted and mounted with a standard FS (ext4 or xfs).
Such volumes only support the RWO mode (ReadWriteOnce, simultaneous access from a single node).
Block-based volumes may also be left unformatted and attached to the container as a block device.
In that case they can be attached in the RWX mode (ReadWriteMany, simultaneous access from many nodes).
VitastorFS-based volumes use a clustered FS and therefore also support the RWX
(ReadWriteMany) mode. However, such volumes don't support size limits and snapshots.
To install, take the manifests from the [csi/deploy/](../../csi/deploy/) directory, put
your Vitastor connection configuration in [csi/deploy/001-csi-config-map.yaml](../../csi/deploy/001-csi-config-map.yaml),
@@ -33,6 +43,7 @@ The Vitastor CSI plugin supports:
- Kubernetes versions starting with 1.20 (or 1.17 for older vitastor-csi <= 1.1.0)
- File-based RWO (ReadWriteOnce) volumes. Example: [PVC](../../csi/deploy/example-pvc.yaml), [pod](../../csi/deploy/example-test-pod.yaml)
- Raw block RWX (ReadWriteMany) volumes. Example: [PVC](../../csi/deploy/example-pvc-block.yaml), [pod](../../csi/deploy/example-test-pod-block.yaml)
- VitastorFS-based RWX (ReadWriteMany) volumes. Example: [storage class](../../csi/deploy/example-storage-class-fs.yaml)
- Volume expansion
- Volume snapshots. Example: [snapshot class](../../csi/deploy/example-snapshot-class.yaml), [snapshot](../../csi/deploy/example-snapshot.yaml), [snapshot clone](../../csi/deploy/example-snapshot-clone.yaml)
- [VDUSE](../usage/qemu.ru.md#vduse) (preferred) and [NBD](../usage/nbd.ru.md) device mapping methods


@@ -0,0 +1,186 @@
[Documentation](../../README.md#documentation) → Installation → OpenNebula
-----
[Read in Russian](opennebula.ru.md)
# OpenNebula
## Automatic Installation
The OpenNebula plugin is packaged as the `vitastor-opennebula` Debian and RPM package since Vitastor 1.9.0. So:
- Run `apt-get install vitastor-opennebula` or `yum install vitastor-opennebula` after installing OpenNebula on all nodes
- Check that it prints "OK, Vitastor OpenNebula patches successfully applied" or "OK, Vitastor OpenNebula patches are already applied"
- If it does not, refer to [Manual Installation](#manual-installation) and apply the configuration file changes manually
- Make sure that the Vitastor-patched versions of QEMU and libvirt are installed
  (`dpkg -l qemu-system-x86`, `dpkg -l | grep libvirt`, `rpm -qa | grep qemu`, `rpm -qa | grep libvirt-libs` should show "vitastor" in version names)
- [Block VM access to Vitastor cluster](#block-vm-access-to-vitastor-cluster)
## Manual Installation
Install OpenNebula. Then, on each node:
- Copy [opennebula/remotes](../../opennebula/remotes) into `/var/lib/one` recursively: `cp -r opennebula/remotes /var/lib/one/`
- Copy [opennebula/sudoers.d](../../opennebula/sudoers.d) to `/etc`: `cp -r opennebula/sudoers.d /etc/`
- Apply [downloader-vitastor.sh.diff](../../opennebula/remotes/datastore/vitastor/downloader-vitastor.sh.diff) to `/var/lib/one/remotes/datastore/downloader.sh`:
`patch /var/lib/one/remotes/datastore/downloader.sh < opennebula/remotes/datastore/vitastor/downloader-vitastor.sh.diff` - or read the patch and apply the same change manually
- Add `kvm-vitastor` to `LIVE_DISK_SNAPSHOTS` in `/etc/one/vmm_exec/vmm_execrc`
- If on Debian or Ubuntu (and AppArmor is used), add Vitastor config file path(s) to `/etc/apparmor.d/local/abstractions/libvirt-qemu`: for example,
`echo ' "/etc/vitastor/vitastor.conf" r,' >> /etc/apparmor.d/local/abstractions/libvirt-qemu`
- Apply changes to `/etc/one/oned.conf`
### oned.conf changes
1. Add a deploy script override in the kvm VM_MAD: add `-l deploy=deploy.vitastor` to ARGUMENTS.
```diff
VM_MAD = [
NAME = "kvm",
SUNSTONE_NAME = "KVM",
EXECUTABLE = "one_vmm_exec",
- ARGUMENTS = "-t 15 -r 0 kvm -p",
+ ARGUMENTS = "-t 15 -r 0 kvm -p -l deploy=deploy.vitastor",
DEFAULT = "vmm_exec/vmm_exec_kvm.conf",
TYPE = "kvm",
KEEP_SNAPSHOTS = "yes",
LIVE_RESIZE = "yes",
SUPPORT_SHAREABLE = "yes",
IMPORTED_VMS_ACTIONS = "terminate, terminate-hard, hold, release, suspend,
resume, delete, reboot, reboot-hard, resched, unresched, disk-attach,
disk-detach, nic-attach, nic-detach, snapshot-create, snapshot-delete,
resize, updateconf, update"
]
```
Optional: if you also want to save VM RAM checkpoints to Vitastor, use
`-l deploy=deploy.vitastor,save=save.vitastor,restore=restore.vitastor`
instead of just `-l deploy=deploy.vitastor`.
2. Add `vitastor` to TM_MAD.ARGUMENTS and DATASTORE_MAD.ARGUMENTS:
```diff
TM_MAD = [
EXECUTABLE = "one_tm",
- ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt"
+ ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,vitastor,dev,vcenter,iscsi_libvirt"
]
DATASTORE_MAD = [
EXECUTABLE = "one_datastore",
- ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
+ ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,vitastor,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,vitastor,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
]
```
3. Add INHERIT_DATASTORE_ATTR for two Vitastor attributes:
```
INHERIT_DATASTORE_ATTR = "VITASTOR_CONF"
INHERIT_DATASTORE_ATTR = "IMAGE_PREFIX"
```
4. Add TM_MAD_CONF and DS_MAD_CONF for Vitastor:
```
TM_MAD_CONF = [
NAME = "vitastor", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "YES",
DS_MIGRATE = "NO", DRIVER = "raw", ALLOW_ORPHANS="format",
TM_MAD_SYSTEM = "ssh,shared", LN_TARGET_SSH = "SYSTEM", CLONE_TARGET_SSH = "SYSTEM",
DISK_TYPE_SSH = "FILE", LN_TARGET_SHARED = "NONE",
CLONE_TARGET_SHARED = "SELF", DISK_TYPE_SHARED = "FILE"
]
DS_MAD_CONF = [
NAME = "vitastor",
REQUIRED_ATTRS = "DISK_TYPE,BRIDGE_LIST",
PERSISTENT_ONLY = "NO",
MARKETPLACE_ACTIONS = "export"
]
```
## Create Datastores
Example Image and System Datastore definitions:
[opennebula/vitastor-imageds.conf](../../opennebula/vitastor-imageds.conf) and
[opennebula/vitastor-systemds.conf](../../opennebula/vitastor-systemds.conf).
Change the parameters as you need:
- POOL_NAME is Vitastor pool name to store images.
- IMAGE_PREFIX is a string prepended to all Vitastor image names.
- BRIDGE_LIST is a list of hosts with access to Vitastor cluster, mostly used for image (not system) datastore operations.
- VITASTOR_CONF is the path to cluster configuration. Note that it should be also added to `/etc/apparmor.d/local/abstractions/libvirt-qemu` if you use AppArmor.
- STAGING_DIR is a temporary directory used when importing external images. Should have free space sufficient for downloading external images.
Then create datastores using `onedatastore create vitastor-imageds.conf` and `onedatastore create vitastor-systemds.conf` (or use UI).
## Block VM access to Vitastor cluster
Vitastor doesn't support any authentication yet, so you MUST block VM guest access to the Vitastor cluster at the network level.
If you use VLAN networking for VMs - make sure you use different VLANs for VMs and hypervisor/storage network and
block access between them using your firewall/switch configuration.
If you use something more stupid like bridged networking, you probably have to use manual firewall/iptables setup
to only allow access to Vitastor from hypervisor IPs.
Also, you need to switch the network to "Bridged & Security Groups" and enable IP spoofing filters in OpenNebula.
The problem is that OpenNebula's IP spoofing filter doesn't affect local interfaces of the hypervisor, i.e. when
it's enabled, a VM can't talk to other VMs or to the outer world using a spoofed IP, but it CAN talk to the
hypervisor if it takes an IP from its subnet. To fix that you also need some more iptables rules.
So the complete "stupid" bridged network filter setup could look like the following
(here `10.0.3.0/24` is the VM subnet and `10.0.2.0/24` is the hypervisor subnet):
```
# Allow incoming traffic from physical device
iptables -A INPUT -m physdev --physdev-in eth0 -j ACCEPT
# Drop incoming traffic from the VM bridge if it doesn't come from the VM subnet (i.e. spoofed IPs)
iptables -A INPUT ! -s 10.0.3.0/24 -i onebr0 -j DROP
# Drop traffic from VMs to hypervisor/storage subnet
iptables -I FORWARD 1 -s 10.0.3.0/24 -d 10.0.2.0/24 -j DROP
```
## Testing
The OpenNebula plugin includes quite a few bash scripts, so here's their description to give an idea of what they actually do.
| Script | Action | How to Test |
| ----------------------- | ----------------------------------------- | ------------------------------------------------------------------------------------ |
| vmm/kvm/deploy.vitastor | Start a VM | Create and start a VM with Vitastor disk(s): persistent / non-persistent / volatile. |
| vmm/kvm/save.vitastor | Save VM memory checkpoint | Stop a VM using "Stop" command. |
| vmm/kvm/restore.vitastor| Restore VM memory checkpoint | Start a VM back after stopping it. |
| datastore/clone | Copy an image as persistent | Create a VM template and instantiate it as persistent. |
| datastore/cp | Import an external image | Import a VM template with images from Marketplace. |
| datastore/export | Export an image as URL | Probably: export a VM template with images to Marketplace. |
| datastore/mkfs | Create an image with FS | Storage → Images → Create → Type: Datablock, Location: Empty disk image, Filesystem: Not empty. |
| datastore/monitor | Monitor used space in image datastore | Check reported used/free space in image datastore list. |
| datastore/rm | Remove a persistent image | Storage → Images → Select an image → Delete. |
| datastore/snap_delete | Delete a snapshot of a persistent image | Storage → Images → Select an image → Select a snapshot → Delete; <br> To create an image with snapshot: attach a persistent image to a VM; create a snapshot; detach the image. |
| datastore/snap_flatten | Revert an image to snapshot and delete other snapshots | Storage → Images → Select an image → Select a snapshot → Flatten. |
| datastore/snap_revert | Revert an image to snapshot | Storage → Images → Select an image → Select a snapshot → Revert. |
| datastore/stat | Get virtual size of an image in MB | No idea. Seems to be unused both in Vitastor and Ceph datastores. |
| tm/clone | Clone a non-persistent image to a VM disk | Attach a non-persistent image to a VM. |
| tm/context | Generate a contextualisation VM disk | Create a VM with enabled contextualisation (default). Common host FS-based version is used in Vitastor and Ceph datastores. |
| tm/cpds | Copy a VM disk / its snapshot to an image | Select a VM → Select a disk → Optionally select a snapshot → Save as. |
| tm/delete | Delete a cloned or volatile VM disk | Detach a volatile disk or a non-persistent image from a VM. |
| tm/failmigrate | Handle live migration failure | No action. Script is empty in Vitastor and Ceph. In other datastores, should roll back actions done by tm/premigrate. |
| tm/ln | Attach a persistent image to a VM | No action. Script is empty in Vitastor and Ceph. |
| tm/mkimage | Create a volatile disk, maybe with FS | Attach a volatile disk to a VM, with or without file system. |
| tm/mkswap | Create a volatile swap disk | Attach a volatile disk to a VM, formatted as swap. |
| tm/monitor | Monitor used space in system datastore | Check reported used/free space in system datastore list. |
| tm/mv | Move a migrated VM disk between hosts | Migrate a VM between hosts. In Vitastor and Ceph datastores, doesn't do any storage action. |
| tm/mvds | Detach a persistent image from a VM | No action. The opposite of tm/ln. Script is empty in Vitastor and Ceph. In other datastores, script may copy the image from VM host back to the datastore. |
| tm/postbackup | Executed after backup | Seems that the script just removes temporary files after backup. Perform a VM backup and check that temporary files are cleaned up. |
| tm/postbackup_live | Executed after backup of a running VM | Same as tm/postbackup, but for a running VM. |
| tm/postmigrate | Executed after VM live migration | No action. Only executed for system datastore, so the script tries to call other TMs for other disks. Except that, the script does nothing in Vitastor and Ceph datastores. |
| tm/prebackup | Actual backup script: backup VM disks | Set up "rsync" backup datastore → Backup a VM to it. |
| tm/prebackup_live | Backup VM disks of a running VM | Same as tm/prebackup, but also does fsfreeze/thaw. So perform a live backup, restore it and check that disks are consistent. |
| tm/premigrate | Executed before live migration | No action. Only executed for system datastore, so the script tries to call other TMs for other disks. Except that, the script does nothing in Vitastor and Ceph datastores. |
| tm/resize | Resize a VM disk | Select a VM → Select a non-persistent disk → Resize. |
| tm/restore | Restore VM disks from backup | Set up "rsync" backup datastore → Backup a VM to it → Restore it back. |
| tm/snap_create | Create a VM disk snapshot | Select a VM → Select a disk → Create snapshot. |
| tm/snap_create_live | Create a VM disk snapshot for a live VM | Select a running VM → Select a disk → Create snapshot. |
| tm/snap_delete | Delete a VM disk snapshot | Select a VM → Select a disk → Select a snapshot → Delete. |
| tm/snap_revert | Revert a VM disk to a snapshot | Select a VM → Select a disk → Select a snapshot → Revert. |

View File

@@ -0,0 +1,189 @@
[Documentation](../../README-ru.md#документация) → Installation → OpenNebula
-----
[Read in English](opennebula.en.md)
# OpenNebula
## Automatic installation
The Vitastor OpenNebula plugin is distributed as a Debian and RPM package `vitastor-opennebula`, starting from Vitastor 1.9.0. So:
- Run `apt-get install vitastor-opennebula` or `yum install vitastor-opennebula` after installing OpenNebula, on all servers
- Check that it prints "OK, Vitastor OpenNebula patches successfully applied" or "OK, Vitastor OpenNebula patches are already applied" during installation
- If the message is not printed, follow the steps from [Manual installation](#ручная-установка) and apply the configuration file changes by hand
- Make sure that QEMU and libvirt versions with Vitastor changes are installed
  (`dpkg -l qemu-system-x86`, `dpkg -l | grep libvirt`, `rpm -qa | grep qemu`, `rpm -qa | grep libvirt-libs` should show "vitastor" in the version number)
- [Block VM access to Vitastor](#блокировка-доступа-вм-в-vitastor)
## Manual installation
First install OpenNebula itself. After that, on every server:
- Copy the directory [opennebula/remotes](../../opennebula/remotes) into `/var/lib/one`: `cp -r opennebula/remotes /var/lib/one/`
- Copy the directory [opennebula/sudoers.d](../../opennebula/sudoers.d) into `/etc`: `cp -r opennebula/sudoers.d /etc/`
- Apply the patch [downloader-vitastor.sh.diff](../../opennebula/remotes/datastore/vitastor/downloader-vitastor.sh.diff) to `/var/lib/one/remotes/datastore/downloader.sh`:
  `patch /var/lib/one/remotes/datastore/downloader.sh < opennebula/remotes/datastore/vitastor/downloader-vitastor.sh.diff` - or read the patch and apply the change manually
- Add `kvm-vitastor` to the `LIVE_DISK_SNAPSHOTS` list in `/etc/one/vmm_exec/vmm_execrc`
- If you use Debian or Ubuntu (and AppArmor), add the path(s) to Vitastor configuration file(s) to `/etc/apparmor.d/local/abstractions/libvirt-qemu`: for example,
  `echo ' "/etc/vitastor/vitastor.conf" r,' >> /etc/apparmor.d/local/abstractions/libvirt-qemu`
- Apply the `/etc/one/oned.conf` changes
### oned.conf changes
1. Add the deploy script override to VM_MAD kvm by adding `-l deploy=deploy.vitastor` to its `ARGUMENTS`:
```diff
VM_MAD = [
NAME = "kvm",
SUNSTONE_NAME = "KVM",
EXECUTABLE = "one_vmm_exec",
- ARGUMENTS = "-t 15 -r 0 kvm -p",
+ ARGUMENTS = "-t 15 -r 0 kvm -p -l deploy=deploy.vitastor",
DEFAULT = "vmm_exec/vmm_exec_kvm.conf",
TYPE = "kvm",
KEEP_SNAPSHOTS = "yes",
LIVE_RESIZE = "yes",
SUPPORT_SHAREABLE = "yes",
IMPORTED_VMS_ACTIONS = "terminate, terminate-hard, hold, release, suspend,
resume, delete, reboot, reboot-hard, resched, unresched, disk-attach,
disk-detach, nic-attach, nic-detach, snapshot-create, snapshot-delete,
resize, updateconf, update"
]
```
Optionally: if you want to also save VM memory snapshots to Vitastor, use
`-l deploy=deploy.vitastor,save=save.vitastor,restore=restore.vitastor`
instead of just `-l deploy=deploy.vitastor`.
2. Add `vitastor` to the TM_MAD.ARGUMENTS and DATASTORE_MAD.ARGUMENTS values:
```diff
TM_MAD = [
EXECUTABLE = "one_tm",
- ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt"
+ ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,vitastor,dev,vcenter,iscsi_libvirt"
]
DATASTORE_MAD = [
EXECUTABLE = "one_datastore",
- ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
+ ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,vitastor,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,vitastor,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
]
```
3. Add INHERIT_DATASTORE_ATTR lines for two Vitastor datastore attributes:
```
INHERIT_DATASTORE_ATTR = "VITASTOR_CONF"
INHERIT_DATASTORE_ATTR = "IMAGE_PREFIX"
```
4. Add TM_MAD_CONF and DS_MAD_CONF for Vitastor:
```
TM_MAD_CONF = [
NAME = "vitastor", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "YES",
DS_MIGRATE = "NO", DRIVER = "raw", ALLOW_ORPHANS="format",
TM_MAD_SYSTEM = "ssh,shared", LN_TARGET_SSH = "SYSTEM", CLONE_TARGET_SSH = "SYSTEM",
DISK_TYPE_SSH = "FILE", LN_TARGET_SHARED = "NONE",
CLONE_TARGET_SHARED = "SELF", DISK_TYPE_SHARED = "FILE"
]
DS_MAD_CONF = [
NAME = "vitastor",
REQUIRED_ATTRS = "DISK_TYPE,BRIDGE_LIST",
PERSISTENT_ONLY = "NO",
MARKETPLACE_ACTIONS = "export"
]
```
## Create datastores
Example configurations for the image and VM disk (system) datastores:
[opennebula/vitastor-imageds.conf](../../opennebula/vitastor-imageds.conf) and
[opennebula/vitastor-systemds.conf](../../opennebula/vitastor-systemds.conf).
Copy the settings and adjust the following parameters as you need:
- POOL_NAME - name of the Vitastor pool to store disk images in.
- IMAGE_PREFIX - a string prepended to disk image names.
- BRIDGE_LIST - list of servers with access to the Vitastor cluster, used for image (not system) datastore operations.
- VITASTOR_CONF - path to the Vitastor configuration file. Note that this path also has to be added to `/etc/apparmor.d/local/abstractions/libvirt-qemu` if you use AppArmor.
- STAGING_DIR - path to a temporary directory used when importing external images. It must have enough free space to hold the downloaded images.
After that, create the datastores with the commands `onedatastore create vitastor-imageds.conf` and `onedatastore create vitastor-systemds.conf` (or via the UI).
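For illustration, a minimal image datastore configuration might look like the following sketch (all values here are placeholders; the two files linked above are the authoritative examples):
```
NAME = "vitastor-images"
DS_MAD = "vitastor"
TM_MAD = "vitastor"
TYPE = "IMAGE_DS"
DISK_TYPE = "BLOCK"
POOL_NAME = "testpool"
IMAGE_PREFIX = "one"
BRIDGE_LIST = "hv1 hv2"
VITASTOR_CONF = "/etc/vitastor/vitastor.conf"
STAGING_DIR = "/var/tmp"
```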
## Blocking VM access to Vitastor
Vitastor doesn't support any authentication yet, so you MUST block guest VM access
to the Vitastor cluster at the network level.
If you use VLAN networks for VMs, make sure that the VMs and the hypervisor/storage network
are placed in separate, mutually isolated VLANs.
If you use something more primitive, like bridges, you will most likely have to manually
configure iptables / a firewall to allow access to Vitastor only from hypervisor IPs.
In this case you will also need to switch the regular bridges to "Bridged & Security Groups"
and enable the IP spoofing filter in OpenNebula. However, the implementation of this filter
is not yet complete, and it doesn't block access to the hypervisor's local interfaces. That
is, with the IP spoofing filter enabled, a VM can't send traffic with spoofed IPs to other VMs
or to the outside world, but it can still send it directly to the hypervisor. To fix that,
additional iptables rules are needed, too.
So, a more or less complete lockdown for a simple bridged network could look
like this (here `10.0.3.0/24` is the VM subnet and `10.0.2.0/24` is the hypervisor subnet):
```
# Allow incoming traffic from the physical device
iptables -A INPUT -m physdev --physdev-in eth0 -j ACCEPT
# Drop incoming traffic from VMs with source IPs not from the VM subnet
iptables -A INPUT ! -s 10.0.3.0/24 -i onebr0 -j DROP
# Drop traffic from VMs to the hypervisor subnet
iptables -I FORWARD 1 -s 10.0.3.0/24 -d 10.0.2.0/24 -j DROP
```
## Testing
The OpenNebula plugin consists mostly of bash scripts, and to make it clearer what they
actually do, below are descriptions of procedures that can be used to test each of them.
| Script | Description | How to test |
| ----------------------- | --------------------------------------------- | ------------------------------------------------------------------------------------ |
| vmm/kvm/deploy.vitastor | Start a virtual machine | Create and start a virtual machine with Vitastor disks: persistent / non-persistent / volatile (temporary). |
| vmm/kvm/save.vitastor | Save a VM memory snapshot | Stop a virtual machine using the "Stop" command. |
| vmm/kvm/restore.vitastor| Restore a VM memory snapshot | Start the VM back after stopping it. |
| datastore/clone | Copy an image as "persistent" | Create a VM template and instantiate a persistent VM from it. |
| datastore/cp | Import an external image | Import a VM template with disk images from the OpenNebula Marketplace. |
| datastore/export | Export an image as a URL | Probably: export a VM template with images to the Marketplace. |
| datastore/mkfs | Create an image with a file system | Storage → Images → Create → Type: Datablock, Location: Empty disk image, Filesystem: any non-empty. |
| datastore/monitor | Report used space in the image datastore | Check the reported used/free space in the image datastore list. |
| datastore/rm | Remove a "persistent" image | Storage → Images → Select an image → Delete. |
| datastore/snap_delete | Delete a snapshot of a "persistent" image | Storage → Images → Select an image → Select a snapshot → Delete; <br> To create an image with a snapshot: attach a persistent image to a VM, create a snapshot, detach the image. |
| datastore/snap_flatten | Revert an image to a snapshot, deleting other snapshots | Storage → Images → Select an image → Select a snapshot → Flatten. |
| datastore/snap_revert | Revert an image to a snapshot | Storage → Images → Select an image → Select a snapshot → Revert. |
| datastore/stat | Show the virtual size of an image in MB | Unclear; apparently unused in the Vitastor and Ceph plugins. |
| tm/clone | Clone a "non-persistent" image into a VM disk | Attach a "non-persistent" image to a VM. |
| tm/context | Create a VM contextualisation disk | Create a VM with contextualisation, as usual. There isn't much to test, though: in the Vitastor and Ceph plugins the context image is stored in the hypervisor's local FS. |
| tm/cpds | Copy a VM disk / its snapshot into a new image | Select a VM → Select a disk → Optionally select a snapshot → Save as. |
| tm/delete | Delete a cloned or volatile VM disk | Detach a volatile or non-persistent disk from a VM. |
| tm/failmigrate | Handle a failed migration | Nothing to test. The script is empty in the Vitastor and Ceph plugins. In other plugins, the script should roll back the actions of tm/premigrate. |
| tm/ln | Attach a "persistent" image to a VM | Nothing to test. The script is empty in the Vitastor and Ceph plugins. |
| tm/mkimage | Create a volatile disk, with or without a FS | Attach a volatile disk to a VM, with or without a file system. |
| tm/mkswap | Create a volatile swap disk | Attach a volatile disk to a VM, formatted as swap. |
| tm/monitor | Report used space in the system datastore | Check the reported used/free space in the system datastore list. |
| tm/mv | Migrate a VM disk between hosts | Migrate a VM between servers. From the storage point of view, though, this script does nothing in the Vitastor and Ceph plugins. |
| tm/mvds | Detach a "persistent" image from a VM | Nothing to test. The script is empty in the Vitastor and Ceph plugins. In general it is the opposite of tm/ln, and in other datastores it may, for example, copy the VM image from the hypervisor disk back into the datastore. |
| tm/postbackup | Executed after a backup | Apparently the script just removes temporary files after a backup. So you can perform one and check that no temporary files are left on the servers. |
| tm/postbackup_live | Executed after a backup of a running VM | Same as tm/postbackup, but for a running VM. |
| tm/postmigrate | Executed after a VM migration | Nothing to test. OpenNebula only runs the script for the system datastore, so it calls the analogous scripts for the other disks of the same VM. Apart from that, the script does nothing in the Vitastor and Ceph plugins. |
| tm/prebackup | Perform the actual backup of VM disks | Set up an "rsync" backup datastore → Back up a VM to it. |
| tm/prebackup_live | The same for a running VM | Same as tm/prebackup, but also runs fsfreeze/thaw (pausing disk access). So the point of the test is: perform a live backup, restore it, and check that the data is consistent. |
| tm/premigrate | Executed before a VM migration | Nothing to test. Like tm/postmigrate, it is only run for the system datastore. |
| tm/resize | Resize a VM disk | Select a VM → Select a non-persistent disk → Resize it. |
| tm/restore | Restore VM disks from a backup | Set up a backup datastore → Back up a VM to it → Restore it back. |
| tm/snap_create | Create a VM disk snapshot | Select a VM → Select a disk → Create a snapshot. |
| tm/snap_create_live | Create a disk snapshot of a running VM | Select a running VM → Select a disk → Create a snapshot. |
| tm/snap_delete | Delete a VM disk snapshot | Select a VM → Select a disk → Select a snapshot → Delete. |
| tm/snap_revert | Revert a VM disk to a snapshot | Select a VM → Select a disk → Select a snapshot → Revert. |

View File

@@ -14,10 +14,9 @@
- Debian 12 (Bookworm/Sid): `deb https://vitastor.io/debian bookworm main`
- Debian 11 (Bullseye): `deb https://vitastor.io/debian bullseye main`
- Debian 10 (Buster): `deb https://vitastor.io/debian buster main`
- Ubuntu 22.04 (Jammy): `deb https://vitastor.io/debian jammy main`
- Add `-oldstable` to bookworm/bullseye/buster in this line to install the latest
  stable version from the 0.9.x branch instead of 1.x
- For Debian 10 (Buster) also enable backports repository:
`deb http://deb.debian.org/debian buster-backports main`
- Install packages: `apt update; apt install vitastor lp-solve etcd linux-image-amd64 qemu-system-x86`
## CentOS

View File

@@ -14,10 +14,9 @@
- Debian 12 (Bookworm/Sid): `deb https://vitastor.io/debian bookworm main`
- Debian 11 (Bullseye): `deb https://vitastor.io/debian bullseye main`
- Debian 10 (Buster): `deb https://vitastor.io/debian buster main`
- Ubuntu 22.04 (Jammy): `deb https://vitastor.io/debian jammy main`
- Add `-oldstable` to bookworm/bullseye/buster in this line to install the latest
  stable version from the 0.9.x branch instead of 1.x
- For Debian 10 (Buster) also enable the backports repository:
  `deb http://deb.debian.org/debian buster-backports main`
- Install packages: `apt update; apt install vitastor lp-solve etcd linux-image-amd64 qemu-system-x86`
## CentOS

View File

@@ -17,10 +17,10 @@ To enable Vitastor support in Proxmox Virtual Environment (6.4-8.1 are supported
- Restart pvedaemon: `systemctl restart pvedaemon`
`/etc/pve/storage.cfg` example (the only required option is vitastor_pool, all others
are listed below with their default values):
are listed below with their default values; `vitastor_ssd` is the Proxmox storage pool id):
```
vitastor: vitastor
vitastor: vitastor_ssd
# pool to put new images into
vitastor_pool testpool
# path to the configuration file

View File

@@ -16,10 +16,10 @@
- Restart the Proxmox daemon: `systemctl restart pvedaemon`
`/etc/pve/storage.cfg` example (the only required option is vitastor_pool, all the others
are listed below with their default values):
are listed below with their default values; `vitastor_ssd` is the Proxmox storage pool id):
```
vitastor: vitastor
vitastor: vitastor_ssd
# pool to put new images into
vitastor_pool testpool
# path to the configuration file

View File

@@ -6,19 +6,151 @@
# Architecture
- [Server-side components](#server-side-components)
- [Basic concepts](#basic-concepts)
- [Client-side components](#client-side-components)
- [Additional utilities](#additional-utilities)
- [Overall read/write process](#overall-read-write-process)
- [Nuances of request handling](#nuances-of-request-handling)
- [Similarities to Ceph](#similarities-to-ceph)
- [Differences from Ceph](#differences-from-ceph)
- [Implementation Principles](#implementation-principles)
## Server-side components
- **OSD** (Object Storage Daemon) is a process that directly works with the disk, stores data
and serves read/write requests. One OSD serves one disk (or one partition). OSDs talk to etcd
and to each other — they receive cluster state from etcd, and send read/write requests for
secondary copies of data to other OSDs.
- **etcd** — clustered key/value database, used as a reliable storage for configuration
and high-level cluster state. Etcd is the component that prevents splitbrain in the cluster.
Data blocks are not stored in etcd, etcd doesn't participate in data write or read path.
- **Monitor** — a separate node.js based daemon which monitors the cluster, calculates
required configuration changes and saves them to etcd, thus commanding OSDs to apply these
changes. The monitor also aggregates cluster statistics. OSDs don't talk to the monitor; the
monitor only sends and receives data from etcd.
## Basic concepts
- OSD (Object Storage Daemon) is a process that stores data and serves read/write requests.
- PG (Placement Group) is a "shard" of the cluster, group of data stored on one set of replicas.
- Pool is a container for data that has equal redundancy scheme and placement rules.
- Monitor is a separate daemon that watches cluster state and handles failures.
- Failure Domain is a group of OSDs that you allow to fail. It's "host" by default.
- Placement Tree groups OSDs in a hierarchy to later split them into Failure Domains.
- **Pool** is a container for data that has equal redundancy scheme and disk placement rules.
- **PG (Placement Group)** is a "shard" of the cluster, subdivision unit that has its own
set of OSDs for data storage.
- **Failure Domain** is a group of OSDs, from the simultaneous failure of which you are
protected by Vitastor. Default failure domain is "host" (server), but you can choose a
larger (for example, a rack of servers) or smaller (a single drive) failure domain
for every pool.
- **Placement Tree** (similar to Ceph CRUSH Tree) groups OSDs in a hierarchy to later
split them into Failure Domains.
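For example, failure domains larger than "host" rely on the placement tree. A sketch of how hosts could be grouped into racks through the `/vitastor/config/node_placement` etcd key (host and rack names here are placeholders):
```
etcdctl --endpoints=... put /vitastor/config/node_placement \
  '{"rack1":{"level":"rack"},"host1":{"parent":"rack1"},"host2":{"parent":"rack1"}}'
```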
## Client-side components
- **Client library** encapsulates client I/O logic. Client library connects to etcd and to all OSDs,
receives cluster state from etcd, sends read and write requests directly to all OSDs. Due
to the symmetric distributed architecture, all data blocks (each 128 KB by default) are placed
on different OSDs, but clients always know where each data block is stored and connect directly
to the right OSD.
All other client-side components are based on the client library:
- **[vitastor-cli](../usage/cli.en.md)** — command-line utility for cluster management.
Lets you view the cluster state and manage pools and images, i.e. create, modify and remove
virtual disks, their snapshots and clones.
- **[QEMU driver](../usage/qemu.en.md)** — pluggable QEMU module allowing QEMU/KVM virtual
machines to work with virtual Vitastor disks directly from userspace through the client library,
without the need to attach disks as kernel block devices. However, if you want to attach
disks, you can also do that with the same driver and [VDUSE](../usage/qemu.en.md#vduse).
- **[vitastor-nbd](../usage/nbd.en.md)** — utility that allows you to attach Vitastor disks as
kernel block devices using NBD (Network Block Device), which works more like "BUSE"
(Block Device In Userspace). Vitastor doesn't have Linux kernel modules for the same task
(at least by now). NBD is an older, non-recommended way to attach disks — you should use
VDUSE whenever you can.
- **[CSI driver](../installation/kubernetes.en.md)** — driver for attaching Vitastor images
and VitastorFS subdirectories as Kubernetes persistent volumes. Block-based CSI uses
VDUSE (when available) or NBD — images are attached as kernel block devices and mounted
into containers. FS-based CSI uses **[vitastor-nfs](../usage/nfs.en.md)**.
- **Drivers for Proxmox, OpenStack and so on** — pluggable modules for corresponding systems,
allowing to use Vitastor as storage in them.
- **[vitastor-nfs](../usage/nfs.en.md)** — NFS 3.0 server allowing export of two file system variants:
the first is a simplified pseudo-FS for file-based access to Vitastor block images (for non-QEMU
hypervisors with NFS support), the second is **VitastorFS**, full-featured clustered POSIX FS.
Both variants support parallel access from multiple vitastor-nfs servers. In fact, you are
not required to set up separate NFS servers at all; you can just use the vitastor-nfs mount
command on every client node, which starts the NFS server and mounts the FS locally.
- **[fio driver](../usage/fio.en.md)** — pluggable module for fio disk benchmarking tool for
running performance tests on your Vitastor cluster.
- **vitastor-kv** — client for a key-value DB working over shared block volumes (usual
vitastor images). VitastorFS metadata is stored in vitastor-kv.
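As a quick illustration of two of these clients (the image name `testimg` is a placeholder and must already exist in the cluster):
```
# Attach an image as a kernel block device through NBD
vitastor-nbd map --image testimg
# Copy a local qcow2 image into Vitastor through the QEMU driver
qemu-img convert -p disk.qcow2 -O raw 'vitastor:image=testimg'
```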
## Additional utilities
- **vitastor-disk** — a Vitastor OSD disk management tool. You can create, remove,
resize and move OSD partitions with it.
## Overall read/write process
- Vitastor stores virtual disks, also named "images" or "inodes".
- Each image is stored in some pool. Pool specifies storage parameters such as redundancy
scheme (replication or EC — erasure codes, i.e. error correction codes), failure domain
and restrictions on OSD selection for image data placement. See [Pool configuration](../config/pool.en.md) for details.
- Each image is split into objects/blocks of fixed size, equal to [block_size](../config/layout-cluster.en.md#block_size)
(128 KB by default), multiplied by data part count for EC or 1 for replicas. That is,
if a pool uses EC 4+2 coding scheme (4 data parts + 2 parity parts), then, with the
default block_size, images are split into 512 KB objects.
- Client read/write requests are split into parts at object boundaries.
- Each object is mapped to the PG it belongs to by simply taking the remainder of dividing
its offset by the PG count of the image's pool.
- Client reads primary OSD for all PGs from etcd. Primary OSD for each PG is assigned
by the monitor during cluster operation, along with the full PG OSD set.
- If not already connected, client connects to primary OSDs of all PGs involved in a
read/write request and sends parts of the request to them.
- If a primary OSD is unavailable, client retries connection attempts indefinitely
either until it becomes available or until the monitor assigns another OSD as primary
for that PG.
- Client also retries requests if the primary OSD replies with error code EPIPE, meaning
that the PG is inactive at this OSD at the moment - for example, when the primary OSD
is switched, or if the primary OSD itself loses connection to replicas during request
handling.
- Primary OSD determines where the parts of the object are stored. By default, all objects
are assumed to be stored at the target OSD set of a PG, but some of them may be present
at a different OSD set if they are degraded or moved, or if the data rebalancing process
is active. The OSD doesn't send any network requests for this; it calculates the locations
of all objects during PG activation and keeps them in memory.
- Primary OSD handles the request locally when it can - for example, when it's a read
from a replicated pool or when it's a read from an EC pool involving only one data part
stored on the OSD's local disk.
- When a request requires reads or writes to additional OSDs, primary OSD uses already
established connections to secondary OSDs of the PG to execute these requests. This happens
in parallel to local disk operations. All such connections are guaranteed to be already
established when the PG is active, and if any of them is dropped, PG is restarted and
all current read/write operations to it fail with EPIPE error and are retried by clients.
- After completing all secondary read/write requests, primary OSD sends the response to
the client.
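To make the object-to-PG mapping concrete, here is a toy sketch (not Vitastor's actual code; one plausible reading of the mapping above, assuming EC 4+2, the default 128 KB block_size and a placeholder PG count of 256):
```
# Python sketch: splitting a request at object boundaries and mapping objects to PGs
block_size = 128 * 1024        # default block_size
object_size = block_size * 4   # EC 4+2: 4 data parts -> 512 KB objects
pg_count = 256                 # placeholder PG count of the pool

def pg_for_offset(offset):
    # Object index = offset // object_size; PG number = remainder modulo PG count (1-based)
    return (offset // object_size) % pg_count + 1

# A 1 MB write at offset 0 spans objects 0 and 1, which map to PGs 1 and 2:
print(pg_for_offset(0), pg_for_offset(object_size))
```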
### Nuances of request handling
- If a pool uses erasure codes and some of the OSDs are unavailable, primary OSDs recover
data from the remaining parts during read.
- Each object has a version number. During write, primary OSD first determines the current
version of the object. As primary OSD usually stores the object or its part itself, most
of the time version is read from the memory of the OSD itself. However, if primary OSD
doesn't contain parts of the object, it requests the version number from a secondary OSD
which has that part. Such request still doesn't involve reading from the disk though,
because object metadata, including version number, is always stored in OSD memory.
- If a pool uses erasure codes, partial writes of an object require reading other parts of
it from secondary OSDs or from the local disk of the primary OSD itself. This is called
"read-modify-write" process.
- If a pool uses erasure codes, a two-phase write process is used to get rid of the Write Hole
problem: first a new version of object parts is written to all secondary OSDs without
removing the previous version, and then, after receiving successful write confirmations
from all OSDs, new version is committed and the old one is allowed to be removed.
- If a pool doesn't use the immediate_commit mode, then write requests sent by clients aren't
treated as committed to physical media instantly. Clients have to send a separate type of
request (SYNC) to commit changes, and until it is sent, new versions of data are
allowed to be lost if some OSDs die. Thus, when immediate_commit is disabled, clients
store copies of all write requests in memory and replay them from there when the
connection to the primary OSD is lost. This in-memory copy is removed after a successful
SYNC, and to prevent excessive memory usage, clients also do an automatic SYNC
every [client_dirty_limit](../config/network.en.md#client_dirty_limit) written bytes.
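As a toy illustration of the last point (a sketch with hypothetical names, not Vitastor's real client API):
```
# Python sketch: the client keeps in-memory copies of unsynced writes
CLIENT_DIRTY_LIMIT = 32 * 1024 * 1024   # placeholder value for client_dirty_limit

class ToyClient:
    def __init__(self):
        self.dirty = []                  # unsynced writes, replayed on reconnect
        self.dirty_bytes = 0

    def write(self, offset, data):
        self.dirty.append((offset, data))    # keep a copy until SYNC succeeds
        self.dirty_bytes += len(data)
        # ... the write itself is sent to the primary OSD here ...
        if self.dirty_bytes >= CLIENT_DIRTY_LIMIT:
            self.sync()                      # automatic SYNC bounds memory usage

    def sync(self):
        # ... a SYNC request is sent and confirmed here ...
        self.dirty.clear()                   # now it's safe to drop the copies
        self.dirty_bytes = 0
```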
## Similarities to Ceph
@@ -87,5 +219,5 @@
- Deleting images in a degraded cluster may currently lead to objects reappearing
after dead OSDs come back, and in case of erasure-coded pools, they may even
reappear as incomplete. Just repeat the removal request again in this case.
This problem will be fixed in the nearest future, the fix is already implemented
in the "epoch-deletions" branch.
This problem will be fixed in the future, along with the metadata disk storage
format update.

View File

@@ -11,6 +11,7 @@
- [Server-side components](#серверные-компоненты)
- [Basic concepts](#базовые-понятия)
- [Client-side components](#клиентские-компоненты)
- [Additional utilities](#дополнительные-утилиты)
- [Overall read and write process](#общий-процесс-записи-и-чтения)
- [Nuances of request handling](#особенности-обработки-запросов)
- [Similarities to Ceph](#схожесть-с-ceph)
@@ -23,8 +24,8 @@
One OSD manages one disk (or partition). OSDs talk to etcd and to each other: from etcd they
receive the cluster state, and to each other they send read and write requests for secondary copies of data.
- **etcd** — a clustered key/value database, used for storing settings and the high-level
state of the cluster, and also for preventing split-brain. Data blocks are not stored in etcd,
and etcd doesn't participate in handling client read and write requests.
state of the cluster, and also for preventing split-brain (splitbrain). Data blocks are not
stored in etcd, and etcd doesn't participate in handling client read and write requests.
- **Monitor** — a separate node.js daemon that calculates the necessary changes to the cluster
configuration, saves this information to etcd and thereby commands the OSDs to apply these changes.
It also aggregates statistics. It only contacts etcd; OSDs don't talk to the monitor.
@@ -34,40 +35,56 @@
- **Pool** — a container for data having the same redundancy scheme and rules of distribution across OSDs.
- **PG (Placement Group)** — a "shard", a unit of division of pools in the cluster, which is assigned its own
  set of OSDs for data storage (copies or parts of objects).
- **Failure Domain** — a group of OSDs whose simultaneous failure is considered
  probable. By default it is "host" (server).
- **Failure Domain** — a group of OSDs against whose simultaneous failure Vitastor must protect
  you. By default the failure domain is "host" (server), but for a pool you can set both a larger
  failure domain (for example, a rack of servers) and a smaller one (for example, a single disk).
- **Placement Tree** (in Ceph, CRUSH Tree) — a hierarchical grouping of OSDs
  into nodes, which can then be used as failure domains.
## Client-side components
- **Client library** — encapsulates the client-side logic. Connect to etcd and to all OSDs,
  receive the cluster state from etcd, send read and write commands directly to all OSDs.
- **Client library** — encapsulates the client-side logic. Connects to etcd and to all OSDs,
  receives the cluster state from etcd, sends read and write commands directly to all OSDs.
Because of the architecture, all individual data blocks (128 KB each by default) are located on
different OSDs, but the client is designed so that it always knows exactly which OSD to contact,
and connects to it directly.
All other clients are implemented on top of the client library:
- **vitastor-cli** — a command-line utility for cluster management. Currently allows you to
  view the overall state of the cluster and manage images — that is, create, modify and delete
  virtual disks, their snapshots and clones.
- **QEMU driver** — a pluggable QEMU module that allows QEMU/KVM virtual machines to work
  with Vitastor virtual disks directly from userspace through the client
  library, without the need to map the disks as block devices. The same driver
  allows attaching disks to the system through [VDUSE](../usage/qemu.ru.md#vduse).
- **vitastor-nbd** — a utility that allows mounting Vitastor images as block devices
  using NBD (Network Block Device), which actually works more like "BUSE"
  (Block Device In Userspace). Vitastor has no Linux kernel module for the same task
  (at least for now).
- **CSI driver** — a driver for attaching Vitastor images as Kubernetes persistent volumes (PV).
  Works through vitastor-nbd: images are mapped as block devices and mounted
  into containers.
- **[vitastor-cli](../usage/cli.ru.md)** — a command-line utility for cluster management.
  Allows you to view the overall state of the cluster and manage pools and images — that is,
  create, modify and delete virtual disks, their snapshots and clones.
- **[QEMU driver](../usage/qemu.ru.md)** — a pluggable QEMU module that allows QEMU/KVM
  virtual machines to work with Vitastor virtual disks directly from userspace
  through the client library, without the need to attach the disks as Linux block
  devices. If, however, you want to attach disks as block devices, you can also
  do that with the very same driver and [VDUSE](../usage/qemu.ru.md#vduse).
- **[vitastor-nbd](../usage/nbd.ru.md)** — a utility that allows mounting Vitastor images
  as block devices using NBD (Network Block Device), which actually works more like
  "BUSE" (Block Device In Userspace). Vitastor has no Linux kernel module for the same
  task (at least for now). NBD is an older and non-recommended way to attach
  disks; you should use VDUSE whenever possible.
- **[CSI driver](../installation/kubernetes.ru.md)** — a driver for attaching Vitastor images
  and VitastorFS subdirectories as Kubernetes persistent volumes (PV). Block-based CSI works through
  VDUSE (when available) or NBD: images are mapped as block devices and mounted
  into containers. File-based CSI uses **[vitastor-nfs](../usage/nfs.ru.md)**.
- **Drivers for Proxmox, OpenStack and so on** — pluggable modules for the corresponding systems,
  allowing Vitastor to be used as storage in them.
- **vitastor-nfs** — a utility providing file-based access to images in a Vitastor cluster
  over the NFS 3.0 protocol. Intended for hypervisors that are not based on QEMU and Linux,
  but still support NFS.
- **[vitastor-nfs](../usage/nfs.ru.md)** — an NFS 3.0 server providing two file system variants:
  the first is a simplified one for file-based access to block images (for non-QEMU hypervisors supporting NFS),
  the second is VitastorFS, a full-featured clustered POSIX FS. Both variants support parallel
  access from multiple vitastor-nfs servers. In fact, you don't have to set up
  separate NFS servers at all; instead you can use the vitastor-nfs mount command, which starts
  an NFS server right on the client machine and mounts the FS locally.
- **[fio driver](../usage/fio.ru.md)** — a pluggable module for the fio disk benchmarking
  utility, allowing Vitastor clusters to be tested.
- **vitastor-kv** — a client for a key-value database working on top of a shared block
  image (a regular vitastor block image). VitastorFS metadata is stored in vitastor-kv.
## Additional utilities
- **vitastor-disk** — a utility for partitioning disks for Vitastor OSDs. With it you can
  create, remove, resize and move OSD partitions.
## Overall read and write process
@@ -98,16 +115,22 @@
be located on other OSDs if those objects are degraded or moved, or if a rebalance
process is running. No requests are sent over the network to check this; the locations of all
objects are calculated by the primary OSD during PG activation and kept in memory.
- The primary OSD connects (if not already connected) to the secondary OSDs storing
  the parts of the object, sends them read/write requests, and also reads/writes from/to its own
  local storage if it is part of the set itself.
- When possible, the primary OSD handles the request locally. For example, this happens
  for reads of objects from replicated pools, or for reads from an EC pool that involve
  only the part stored on the primary OSD's own disk.
- When a request requires writes or reads involving secondary OSDs, the primary OSD uses
  pre-established connections to them to execute these requests. This happens in parallel
  with local read/write operations on the OSD's own disk. Since the connections to the
  secondary OSDs of a PG are established at its start, they are guaranteed to be established
  while the PG is active, and if any of these connections is dropped, the PG is restarted,
  and all current read and write requests to it fail with an EPIPE error, after which they are retried by clients.
- After completing all secondary read/write operations, the primary OSD sends the response to the client.
### Nuances of request handling
- If a pool uses erasure codes and some of the OSDs are unavailable, the primary
  OSD recovers data from the remaining parts during reads.
- Each object has a version number. When writing an object, the primary OSD first reads the
- Each object has a version number. When writing an object, the primary OSD first obtains the
  object's version number. Since the primary OSD usually stores a copy or a part of the object itself,
  the version number is usually read from the memory of the OSD itself. However, if no part of the
  object being updated is located on the primary OSD, it asks one of the secondary
@@ -115,20 +138,20 @@
since all OSDs keep object metadata, including version numbers, in memory.
- If a pool uses erasure codes, a partial write of an object often requires, for parity
  calculation, reading parts of the object from secondary OSDs or from the local disk of
the primary OSD itself.
- Also, if a pool uses erasure codes, to close the Write Hole, a
the primary OSD itself. This is called the "read-modify-write" process.
- If a pool uses erasure codes, to close the Write Hole, a
two-phase write algorithm is applied: first, the new version of the object's parts is written
to all secondary OSDs without deleting the old version, and then, after receiving confirmations
of successful writes from all secondary OSDs, the new version is committed and the old one is allowed to be deleted.
- If the immediate_commit mode is not enabled in the cluster, write requests sent by clients
- If the immediate_commit mode is not enabled in the pool, write requests sent by clients
are not considered committed to the physical media immediately. To commit data, clients
must separately send SYNC requests (a kind of request separate from reads and writes),
and until such a request is sent, the written data is considered liable to disappear
if the corresponding OSD dies. Therefore, when the immediate_commit mode is disabled, clients
copy all write requests in memory and, after losing and re-establishing the connection
to an OSD, replay them from memory. The copies in memory are removed after a successful fsync,
to an OSD, replay them from memory. The copies in memory are removed after a successful SYNC,
and so that storing this data doesn't lead to excessive memory consumption, clients
automatically perform an fsync every [client_dirty_limit](../config/network.ru.md#client_dirty_limit)
automatically perform a SYNC every [client_dirty_limit](../config/network.ru.md#client_dirty_limit)
written bytes.
## Similarities to Ceph
@@ -205,5 +228,5 @@
- Deleting images in a degraded cluster may currently lead to the deleted objects
"reappearing" after the disconnected OSDs are brought back up, and in the case of EC pools,
the objects may reappear as "incomplete" ones. If you run into such a situation, simply
repeat the delete request. A fix for this problem is already implemented in the "epoch-deletions" branch
and will soon be included in a release.
repeat the delete request. This problem will be fixed in the future, together with an update
of the on-disk metadata storage format.

View File

@@ -34,9 +34,16 @@
- [Client write-back cache](../config/client.en.md#client_enable_writeback)
- [Intelligent recovery auto-tuning](../config/osd.en.md#recovery_tune_interval)
- [Clustered file system](../usage/nfs.en.md#vitastorfs)
- [Experimental internal etcd replacement - antietcd](../config/monitor.en.md#use_antietcd)
- [Built-in Prometheus metric exporter](../config/monitor.en.md#enable_prometheus)
- [NFS RDMA support](../usage/nfs.en.md#rdma) (probably also usable for GPUDirect)
## Plugins and tools
- [Proxmox storage plugin and packages](../installation/proxmox.en.md)
- [OpenNebula storage plugin](../installation/opennebula.en.md)
- [CSI plugin for Kubernetes](../installation/kubernetes.en.md)
- [OpenStack support: Cinder driver, Nova and libvirt patches](../installation/openstack.en.md)
- [Debian and CentOS packages](../installation/packages.en.md)
- [Image management CLI (vitastor-cli)](../usage/cli.en.md)
- [Disk management CLI (vitastor-disk)](../usage/disk.en.md)
@@ -44,9 +51,6 @@
- [Native QEMU driver](../usage/qemu.en.md)
- [Loadable fio engine for benchmarks](../usage/fio.en.md)
- [NBD proxy for kernel mounts](../usage/nbd.en.md)
- [CSI plugin for Kubernetes](../installation/kubernetes.en.md)
- [OpenStack support: Cinder driver, Nova and libvirt patches](../installation/openstack.en.md)
- [Proxmox storage plugin and packages](../installation/proxmox.en.md)
- [Simplified NFS proxy for file-based image access emulation (suitable for VMWare)](../usage/nfs.en.md#pseudo-fs)
## Roadmap
@@ -56,7 +60,6 @@ The following features are planned for the future:
- Control plane optimisation
- Other administrative tools
- Web GUI
- OpenNebula plugin
- iSCSI and NVMeoF gateways
- Multi-threaded client
- Faster failover

View File

@@ -36,9 +36,16 @@
- [Client-side write-back cache](../config/client.ru.md#client_enable_writeback)
- [Intelligent recovery speed auto-tuning](../config/osd.ru.md#recovery_tune_interval)
- [Clustered file system](../usage/nfs.ru.md#vitastorfs)
- [Experimental built-in etcd replacement - antietcd](../config/monitor.ru.md#use_antietcd)
- [Built-in Prometheus metric exporter](../config/monitor.ru.md#enable_prometheus)
- [NFS RDMA support](../usage/nfs.ru.md#rdma) (probably also suitable for GPUDirect)
## Drivers and tools
- [Proxmox plugin](../installation/proxmox.ru.md)
- [OpenNebula plugin](../installation/opennebula.ru.md)
- [CSI plugin for Kubernetes](../installation/kubernetes.ru.md)
- [Basic OpenStack support: Cinder driver, patches for Nova and libvirt](../installation/openstack.ru.md)
- [Debian and CentOS packages](../installation/packages.ru.md)
- [Image management CLI (vitastor-cli)](../usage/cli.ru.md)
- [Disk management tool (vitastor-disk)](../usage/disk.ru.md)
@@ -46,9 +53,6 @@
- [QEMU disk driver](../usage/qemu.ru.md)
- [Disk driver for the fio performance testing tool](../usage/fio.ru.md)
- [NBD proxy for mounting images via the kernel](../usage/nbd.ru.md) ("block device in userspace")
- [CSI plugin for Kubernetes](../installation/kubernetes.ru.md)
- [Basic OpenStack support: Cinder driver, patches for Nova and libvirt](../installation/openstack.ru.md)
- [Proxmox plugin](../installation/proxmox.ru.md)
- [Simplified NFS proxy for emulating file-based access to images (suitable for VMWare)](../usage/nfs.ru.md#псевдо-фс)
## Roadmap
@@ -56,7 +60,6 @@
- Control plane optimisation
- Other administrative tools
- Web GUI
- OpenNebula plugin
- iSCSI and NVMeoF gateways
- Multi-threaded client
- Faster failover

View File

@@ -26,13 +26,13 @@
you also need small SSDs for journal and metadata (even 2 GB per 1 TB of HDD space is enough).
- Get a fast network (at least 10 Gbit/s). Something like Mellanox ConnectX-4 with RoCEv2 is ideal.
- Disable CPU powersaving: `cpupower idle-set -D 0 && cpupower frequency-set -g performance`.
- [Install Vitastor packages](../installation/packages.en.md).
- Either [install Vitastor packages](../installation/packages.en.md) or [install Vitastor in Docker](../installation/docker.en.md).
## Recommended drives
- SATA SSD: Micron 5100/5200/5300/5400, Samsung PM863/PM883/PM893, Intel D3-S4510/4520/4610/4620, Kingston DC500M
- NVMe: Micron 9100/9200/9300/9400, Micron 7300/7450, Samsung PM983/PM9A3, Samsung PM1723/1735/1743,
Intel DC-P3700/P4500/P4600, Intel D7-P5500/P5600, Intel Optane, Kingston DC1000B/DC1500M
Intel DC-P3700/P4500/P4600, Intel D5-P4320/P5530, Intel D7-P5500/P5600, Intel Optane, Kingston DC1000B/DC1500M
- HDD: HGST Ultrastar, Toshiba MG, Seagate EXOS
## Configure monitors
@@ -45,7 +45,8 @@ On the monitor hosts:
}
```
- Create systemd units for etcd by running: `/usr/lib/vitastor/mon/make-etcd`
- Start etcd and monitors: `systemctl enable --now etcd vitastor-mon`
Or, if you installed Vitastor in Docker, run `systemctl start vitastor-host; docker exec vitastor make-etcd`.
- Start etcd and monitors: `systemctl enable --now vitastor-etcd vitastor-mon`
## Configure OSDs
@@ -68,10 +69,6 @@ On the monitor hosts:
but some free unpartitioned space must be available because the script creates new partitions for journals.
- You can change OSD configuration in units or in `vitastor.conf`.
Check [Configuration Reference](../config.en.md) for parameter descriptions.
- If all your drives have capacitors, and even if not, but if you ran `vitastor-disk`
without `--disable_data_fsync off` at the first step, then put the following
setting into etcd: \
`etcdctl --endpoints=... put /vitastor/config/global '{"immediate_commit":"all"}'`
- Start all OSDs: `systemctl start vitastor.target`
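For reference, the OSD initialization step mentioned in the list above is normally done with `vitastor-disk prepare`; a minimal sketch (the device name is a placeholder):
```
vitastor-disk prepare /dev/nvme0n1
```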
## Create a pool
@@ -88,6 +85,10 @@ For EC pools the configuration should look like the following:
vitastor-cli create-pool testpool --ec 2+2 --pg_count 256
```
Add `--immediate_commit none` if you added `--disable_data_fsync off` at the OSD
initialization step, or if `vitastor-disk` complained about being unable to disable
the drive cache.
After you do this, one of the monitors will configure PGs and OSDs will start them.
If you use HDDs you should also add `"block_size": 1048576` to pool configuration.
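For example, an HDD pool could then be created like this (a sketch: passing `--block_size` as a create-pool flag is an assumption here, while the block_size parameter itself is documented in pool configuration):
```
vitastor-cli create-pool hddpool --pg_size 2 --pg_count 256 --block_size 1048576
```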

View File

@@ -22,18 +22,18 @@
use desktop SSDs as well by enabling the lazy fsync mode, but performance will be worse.
Read about capacitors [here](../config/layout-cluster.ru.md#immediate_commit).
- If you want to use HDDs, take modern models with Media or SSD cache - HGST Ultrastar,
Toshiba MG08, Seagate EXOS or something similar. If your disks have no such cache,
Toshiba MG, Seagate EXOS or something similar. If your disks have no such cache,
be sure to get SSDs for metadata and journals (small ones, literally 2 GB per 1 TB of HDD space).
- Get a fast network, at least 10 Gbit/s. The ideal is something like Mellanox ConnectX-4 with RoCEv2.
- For the best performance, disable CPU power saving: `cpupower idle-set -D 0 && cpupower frequency-set -g performance`.
- [Install Vitastor packages](../installation/packages.ru.md).
- Either [install Vitastor packages](../installation/packages.ru.md) or [install Vitastor in Docker](../installation/docker.ru.md).
## Recommended drives
- SATA SSD: Micron 5100/5200/5300/5400, Samsung PM863/PM883/PM893, Intel D3-S4510/4520/4610/4620, Kingston DC500M
- NVMe: Micron 9100/9200/9300/9400, Micron 7300/7450, Samsung PM983/PM9A3, Samsung PM1723/1735/1743,
Intel DC-P3700/P4500/P4600, Intel D7-P5500/P5600, Intel Optane, Kingston DC1000B/DC1500M
- HDD: HGST Ultrastar, Toshiba MG06/MG07/MG08, Seagate EXOS
Intel DC-P3700/P4500/P4600, Intel D5-P4320/P5530, Intel D7-P5500/P5600, Intel Optane, Kingston DC1000B/DC1500M
- HDD: HGST Ultrastar, Toshiba MG, Seagate EXOS
## Configure monitors
@@ -44,8 +44,9 @@
"etcd_address": ["10.200.1.10:2379","10.200.1.11:2379","10.200.1.12:2379"]
}
```
- Initialize the etcd services by running `/usr/lib/vitastor/mon/make-etcd`
- Start etcd and the monitors: `systemctl enable --now etcd vitastor-mon`
- Initialize the etcd services by running `/usr/lib/vitastor/mon/make-etcd`.\
  Or, if you installed Vitastor in Docker, run `systemctl start vitastor-host; docker exec vitastor make-etcd`.
- Start etcd and the monitors: `systemctl enable --now vitastor-etcd vitastor-mon`
## Configure OSDs
@@ -69,11 +70,6 @@
for journals, so free unpartitioned space must be available on the SSDs.
- You can change OSD parameters in the systemd units or in `vitastor.conf`. Parameter
  descriptions can be found in the [configuration reference](../config.ru.md).
- If all your disks are server-grade ones with capacitors (and even if not, but you
  didn't add the `--disable_data_fsync off` option at the first step and `vitastor-disk`
  didn't complain about being unable to disable the disk cache), put the following setting
  into the global configuration in etcd: \
  `etcdctl --endpoints=... put /vitastor/config/global '{"immediate_commit":"all"}'`.
- Start all OSDs: `systemctl start vitastor.target`
## Create a pool
@@ -90,6 +86,10 @@ vitastor-cli create-pool testpool --pg_size 2 --pg_count 256
vitastor-cli create-pool testpool --ec 2+2 --pg_count 256
```
Also add the `--immediate_commit none` option if you added `--disable_data_fsync off`
at the OSD initialization step, or if `vitastor-disk` complained about being unable
to disable the disk cache.
After that, one of the monitors should configure the PGs, and the OSDs should start them.
If you use HDD disks, add the `"block_size": 1048576` option to the pool configuration.

View File

@@ -42,7 +42,7 @@ PG state always includes exactly 1 of the following base states:
- **offline** — PG isn't activated by any OSD at all. Either primary OSD isn't set for
this PG at all (if the pool is just created), or an unavailable OSD is set as primary,
or the primary OSD refuses to start this PG (for example, because of wrong block_size),
or the PG is stopped by the monitor using `pause: true` flag in `/vitastor/config/pgs` in etcd.
or the PG is stopped by the monitor using `pause: true` flag in `/vitastor/pg/config` in etcd.
- **starting** — primary OSD has acquired PG lock in etcd, PG is starting.
- **peering** — primary OSD requests PG object listings from secondary OSDs and calculates
the PG state.
@@ -58,8 +58,9 @@ and during switching primary OSD of PGs.
**starting**, **repeering**, **stopping** states normally almost aren't visible at all.
If you notice them for any noticeable time — chances are some operations on some OSDs hung.
Search for "slow op" in OSD logs to find them — operations hung for more than
[slow_log_interval](../config/osd.en.md#slow_log_interval) are logged as "slow ops".
Check `vitastor-cli status` and search for "slow op" in OSD logs to find them — operations
hung for more than [slow_log_interval](../config/osd.en.md#slow_log_interval) are logged as
"slow ops" and displayed in `status`.
State transition diagram:
@@ -107,16 +108,17 @@ If a PG is active it can also have any number of the following additional states
## Removing a healthy disk
Before removing a healthy disk from the cluster set its OSD weight(s) to 0 to
move data away. To do that, add `"reweight":0` to etcd key `/vitastor/config/osd/<OSD_NUMBER>`.
For example:
Before removing a healthy disk from the cluster set its OSD weight(s) to 0 to
move data away. To do that, run `vitastor-cli modify-osd --reweight 0 <OSD_NUMBER>`.
Then wait until rebalance finishes and remove OSD by running `vitastor-disk purge /dev/vitastor/osdN-data`.
Zero weight can also be put manually into the etcd key `/vitastor/config/osd/<OSD_NUMBER>`, for example:
```
etcdctl --endpoints=http://1.1.1.1:2379/v3 put /vitastor/config/osd/1 '{"reweight":0}'
```
Then wait until rebalance finishes and remove OSD by running `vitastor-disk purge /dev/vitastor/osdN-data`.
## Removing a failed disk
If a disk is already dead, its OSD(s) are likely already stopped.
@@ -149,7 +151,7 @@ POOL_ID=1
ALL_OSDS=$(etcdctl --endpoints=your_etcd_address:2379 get --keys-only --prefix /vitastor/osd/stats/ | \
perl -e '$/ = undef; $a = <>; $a =~ s/\s*$//; $a =~ s!/vitastor/osd/stats/!!g; $a =~ s/\s+/,/g; print $a')
for i in $(seq 1 $PG_COUNT); do
etcdctl --endpoints=your_etcd_address:2379 put /vitastor/pg/history/$POOL_ID/$i '{"all_peers":['$ALL_OSDS']}'; done
etcdctl --endpoints=your_etcd_address:2379 put /vitastor/pg/history/$POOL_ID/$i '{"all_peers":['$ALL_OSDS']}'
done
```
@@ -168,21 +170,70 @@ Upgrading is performed without stopping clients (VMs/containers), you just need
upgrade and restart servers one by one. However, ideally you should restart VMs too
to make them use the new version of the client library.
Exceptions (specific upgrade instructions):
- Upgrading <= 1.1.x to 1.2.0 or later, if you use EC n+k with k>=2, is recommended
to be performed with full downtime: first you should stop all clients, then all OSDs,
then upgrade and start everything back — because versions before 1.2.0 have several
bugs leading to invalid data being read in EC n+k, k>=2 configurations in degraded pools.
- Versions <= 0.8.7 are incompatible with versions >= 0.9.0, so you should first
upgrade from <= 0.8.7 to 0.8.8 or 0.8.9, and only then to >= 0.9.x. If you upgrade
without this intermediate step, client I/O will hang until the end of upgrade process.
- Upgrading from <= 0.5.x to >= 0.6.x is not supported.
### 1.7.x to 1.8.0
Rollback:
- Version 1.0.0 has a new disk format, so OSDs initialized on 1.0.0 can't be rolled
back to 0.9.x or previous versions.
- Versions before 0.8.0 don't have vitastor-disk, so OSDs, initialized by it, won't
start with 0.7.x or 0.6.x. :-)
It's recommended to upgrade from version <= 1.7.x to version >= 1.8.0 with full downtime,
i.e. you should first stop clients and then the cluster (OSDs and monitor), because 1.8.0
includes a fix for etcd event stream inconsistency which could lead to "incomplete" objects
appearing in EC pools, and in rare cases, probably, even to data corruption during mass OSD
restarts. It doesn't mean that you WILL hit this problem if you upgrade without full downtime,
but it's better to secure yourself against it.
Also, if you upgrade from a version <= 1.7.x to a version >= 1.8.0 but <= 1.9.0: restart all
clients (VMs and so on), otherwise they will hang when the monitor clears the old PG
configuration key, which happens 24 hours after the upgrade.
This is fixed in 1.9.1. So, after upgrading from a version <= 1.7.x directly to a version >= 1.9.1,
you DO NOT have to restart all old clients immediately: they will work like before until
you decide to upgrade them too. The downside is that you'll have to remove the old PG
configuration key (`/vitastor/config/pgs`) from etcd by hand once you have made sure that all
your clients are restarted.
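For example, removing the old key could look like this:
```
etcdctl --endpoints=http://... del /vitastor/config/pgs
```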
### 1.1.x to 1.2.0
Upgrading version <= 1.1.x to version >= 1.2.0, if you use EC n+k with k>=2, is recommended
to be performed with full downtime: first you should stop all clients, then all OSDs,
then upgrade and start everything back — because versions before 1.2.0 have several
bugs leading to invalid data being read in EC n+k, k>=2 configurations in degraded pools.
### 0.8.7 to 0.9.0
Versions <= 0.8.7 are incompatible with versions >= 0.9.0, so you should first
upgrade from <= 0.8.7 to 0.8.8 or 0.8.9, and only then to >= 0.9.x. If you upgrade
without this intermediate step, client I/O will hang until the end of upgrade process.
### 0.5.x to 0.6.x
Upgrading from <= 0.5.x to >= 0.6.x is not supported.
## Downgrade
Downgrades are also allowed freely, except for the following specific instructions:
### 1.8.0 to 1.7.1
Before downgrading from a version >= 1.8.0 to a version <= 1.7.1,
you have to copy the /vitastor/pg/config etcd key to /vitastor/config/pgs:
```
etcdctl --endpoints=http://... get --print-value-only /vitastor/pg/config | \
etcdctl --endpoints=http://... put /vitastor/config/pgs
```
Then you can just install older packages and restart all services.
If you performed the downgrade without first copying that key, run the "add all OSDs into the
history records of all PGs" procedure from [Restoring from lost pool configuration](#restoring-from-lost-pool-configuration).
### 1.0.0 to 0.9.x
Version 1.0.0 has a new disk format, so OSDs initialized on 1.0.0 or later can't
be rolled back to 0.9.x or previous versions.
### 0.8.0 to 0.7.x
Versions before 0.8.0 don't have vitastor-disk, so OSDs initialized by it won't
start with older versions (0.4.x - 0.7.x). :-)
## OSD memory usage

View File

@@ -42,7 +42,7 @@
- **offline** — the PG is not activated by any OSD at all. Either no primary OSD is assigned to
it at all (if the pool has just been created), or an unavailable OSD is assigned as primary, or
the assigned OSD refuses to start this PG (for example, due to a block_size mismatch),
or the PG is stopped by the monitor via the `pause: true` flag in `/vitastor/config/pgs` in etcd.
or the PG is stopped by the monitor via the `pause: true` flag in `/vitastor/pg/config` in etcd.
- **starting** — the primary OSD has acquired the PG lock in etcd, the PG is starting.
- **peering** — the primary OSD is querying the secondary OSDs for the object listings of this PG and calculating its state.
- **repeering** — the PG is waiting for the current I/O operations to complete, after which it will switch to the **peering** state.
@@ -56,9 +56,9 @@ OSDs, for a short period of time
Состояния **starting**, **repeering**, **stopping** в норме практически не заметны вообще,
PG должны очень быстро переходить из них в другие. Если эти состояния заметны
хоть сколько-то значительное время — вероятно, какие-то операции на каких-то OSD зависли.
Чтобы найти их, ищите "slow op" в журналах OSD — операции, зависшие дольше,
чем на [slow_log_interval](../config/osd.ru.md#slow_log_interval), записываются в
журналы OSD как "slow op".
Чтобы найти их, посморите `vitastor-cli status` и поищите слова "slow op" в журналах OSD —
операции, зависшие дольше, чем на [slow_log_interval](../config/osd.ru.md#slow_log_interval),
записываются в журналы OSD как "slow op" и отображаются в `status`.
Transition diagram:
@@ -105,14 +105,16 @@ PGs should transition out of them very quickly
## Removing a healthy disk
Before removing a healthy disk from the cluster, set its OSD weight to 0 to move the data off it.
To do that, run `vitastor-cli modify-osd --reweight 0 <OSD_NUMBER>`.
Weight 0 can also be written manually, directly into etcd, in the `/vitastor/config/osd/<OSD_NUMBER>` key, for example:
```
etcdctl --endpoints=http://1.1.1.1:2379/v3 put /vitastor/config/osd/1 '{"reweight":0}'
```
Wait for the data rebalance to finish, then remove the OSD with `vitastor-disk purge /dev/vitastor/osdN-data`.
## Removing a failed disk
If the disk has already died, its OSD(s) will most likely already be stopped.
@@ -145,7 +147,7 @@ POOL_ID=1
ALL_OSDS=$(etcdctl --endpoints=your_etcd_address:2379 get --keys-only --prefix /vitastor/osd/stats/ | \
perl -e '$/ = undef; $a = <>; $a =~ s/\s*$//; $a =~ s!/vitastor/osd/stats/!!g; $a =~ s/\s+/,/g; print $a')
for i in $(seq 1 $PG_COUNT); do
etcdctl --endpoints=your_etcd_address:2379 put /vitastor/pg/history/$POOL_ID/$i '{"all_peers":['$ALL_OSDS']}'
done
```
@@ -164,21 +166,70 @@ done
it is enough to update the servers one at a time. However, for running virtual machines to
start using the new version of the client library, they, of course, also need to be restarted.
Exceptions (special upgrade instructions):
### 1.7.x -> 1.8.0
Upgrading from versions <= 1.7.x to versions >= 1.8.0 is recommended with a full stop,
first of the clients and then of the cluster, because 1.8.0 fixes a problem (inconsistency
of etcd event streams) which could lead to "incomplete" objects appearing in EC pools and,
though rarely, even to data corruption during mass OSD restarts. Upgrading without a full
stop doesn't mean that you will necessarily hit this problem, but it's better to play it safe.
Also, if you upgrade from a version <= 1.7.x to a version >= 1.8.0 but <= 1.9.0: restart all
clients (virtual machine processes can be restarted by migrating them to another server),
otherwise they will hang when the monitor removes the old PG configuration key, which happens
24 hours after the upgrade.
However, this is fixed in 1.9.1. So, if you upgrade from <= 1.7.x directly to >= 1.9.1, you do
NOT have to restart all clients immediately - they will keep working as before. The downside
is that you will have to remove the old PG configuration key (`/vitastor/config/pgs`) from
etcd by hand, after you make sure that all clients are restarted.
### 1.1.x -> 1.2.0
If you use EC n+k with k>=2, upgrading from versions <= 1.1.x to versions >= 1.2.0 is
recommended with a temporary cluster stop - first stop all clients, then all OSDs, then
upgrade and start everything back - because of several bugs which could lead to incorrect
data reads in degraded EC pools.
### 0.8.7 -> 0.9.0
Versions <= 0.8.7 are incompatible with versions >= 0.9.0, so when upgrading from <= 0.8.7
you should first upgrade to 0.8.8 or 0.8.9, and only then to any version >= 0.9.x.
Otherwise client I/O will hang until the upgrade finishes.
### 0.5.x -> 0.6.x
Upgrading from versions 0.5.x and older to versions 0.6.x and newer is not supported.
## Downgrade
Downgrading is also freely allowed, except for the cases listed below:
### 1.8.0 -> 1.7.1
Before downgrading from a version >= 1.8.0 to a version <= 1.7.1, you have to copy the
etcd key `/vitastor/pg/config` to `/vitastor/config/pgs`:
```
etcdctl --endpoints=http://... get --print-value-only /vitastor/pg/config | \
etcdctl --endpoints=http://... put /vitastor/config/pgs
```
After that you can simply install the older packages and restart all services.
If you downgraded without copying this key first, perform the "adding of all OSDs into the
history records of all PGs" from the section [Restoring from lost pool configuration](#восстановление-потерянной-конфигурации-пулов).
### 1.0.0 -> 0.9.x
Version 1.0.0 has a new disk format, so OSDs created on versions >= 1.0.0 can't be rolled
back to 0.9.x or older versions.
### 0.8.0 -> 0.7.x
Versions before 0.8.0 don't have vitastor-disk, which means OSDs created by it won't start
on older versions (0.4.x - 0.7.x). :-)
## OSD memory usage


@@ -16,6 +16,7 @@ It supports the following commands:
- [create](#create)
- [snap-create](#create)
- [modify](#modify)
- [dd](#dd)
- [rm](#rm)
- [flatten](#flatten)
- [rm-data](#rm-data)
@@ -24,6 +25,10 @@ It supports the following commands:
- [fix](#fix)
- [alloc-osd](#alloc-osd)
- [rm-osd](#rm-osd)
- [osd-tree](#osd-tree)
- [ls-osd](#ls-osd)
- [modify-osd](#modify-osd)
- [pg-list](#pg-list)
- [create-pool](#create-pool)
- [modify-pool](#modify-pool)
- [ls-pools](#ls-pools)
@@ -32,7 +37,7 @@ It supports the following commands:
Global options:
```
--config_path FILE Path to Vitastor configuration file
--etcd_address URL Etcd connection address
--iodepth N Send N operations in parallel to each OSD when possible (default 32)
--parallel_osds M Work with M osds in parallel when possible (default 4)
@@ -141,22 +146,64 @@ Rename, resize image or change its readonly status. Images with children can't b
If the new size is smaller than the old size, extra data will be purged.
You should resize the file system in the image, if present, before shrinking it.
* `--deleted 1|0` - Set/clear 'deleted image' flag (set automatically during unfinished deletes).
* `-f|--force` - Proceed with shrinking or setting readwrite flag even if the image has children.
* `--down-ok` - Proceed with shrinking even if some data will be left on unavailable OSDs.
## dd
```
vitastor-cli dd [iimg=<image> | if=<file>] [oimg=<image> | of=<file>] [bs=1M] \
[count=N] [seek/oseek=N] [skip/iseek=M] [iodepth=N] [status=progress] \
[conv=nocreat,noerror,nofsync,trunc,nosparse] [iflag=direct] [oflag=direct,append]
```
Copy data between Vitastor images, files and pipes.
Options can be specified in classic dd style (`key=value`) or like usual (`--key value`).
| <!-- --> | <!-- --> |
|-----------------|-------------------------------------------------------------------------|
| `iimg=<image>` | Copy from Vitastor image `<image>` |
| `if=<file>` | Copy from file `<file>` |
| `oimg=<image>` | Copy to Vitastor image `<image>` |
| `of=<file>` | Copy to file `<file>` |
| `bs=1M` | Set copy block size |
| `count=N` | Copy only N input blocks. If N ends in B it counts bytes, not blocks |
| `seek/oseek=N` | Skip N output blocks. If N ends in B it counts bytes, not blocks |
| `skip/iseek=N` | Skip N input blocks. If N ends in B it counts bytes, not blocks |
| `iodepth=N` | Send N reads or writes in parallel (default 4) |
| `status=LEVEL` | The LEVEL of information to print to stderr: none/noxfer/progress |
| `size=N` | Specify size for the created output file/image (defaults to input size) |
| `iflag=direct` | For input files only: use direct I/O |
| `oflag=direct` | For output files only: use direct I/O |
| `oflag=append` | For files only: append to output file |
| `conv=nocreat` | Do not create output file/image |
| `conv=trunc` | Truncate output file/image |
| `conv=noerror` | Continue copying after errors |
| `conv=nofsync` | Do not call fsync before finishing (default behaviour is fsync) |
| `conv=nosparse` | Write all output blocks including all-zero blocks |
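For example, a hypothetical export of an image to a raw file (the image name `testimg` is
illustrative) could look like:
```
vitastor-cli dd iimg=testimg of=./testimg.raw bs=1M status=progress
```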
## rm
`vitastor-cli rm <from> [<to>] [--writers-stopped] [--down-ok]`
`vitastor-cli rm (--exact|--matching) <glob> ...`
Remove layer(s) and rebase all their children accordingly.
In the first form, remove `<from>` or layers between `<from>` and its child `<to>`.
In the second form, remove all images with exact or pattern-matched names.
Options:
* `--writers-stopped` allows optimised removal in case of a single 'slim' read-write
child and 'fat' removed parent: the child is merged into the parent and the parent is
renamed to the child in that case. In other cases parent layers are always merged into children.
* `--exact` - remove multiple images with exactly specified names.
* `--matching` - remove multiple images with names matching the given glob patterns.
* `--down-ok` - continue deletion/merging even if some data will be left on unavailable OSDs.
## flatten
@@ -174,6 +221,8 @@ Remove inode data without changing metadata.
--wait-list Retrieve full object listings before starting to remove objects.
Requires more memory, but allows showing correct removal progress.
--min-offset Purge only data starting with specified offset.
--max-offset Purge only data before specified offset.
--client_wait_up_timeout 16 Timeout for waiting until PGs are up in seconds.
```
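As an illustration, assuming rm-data's usual `--pool`/`--inode` arguments (the values here
are hypothetical), removing the data of inode 1 in pool 2 might look like:
```
vitastor-cli rm-data --pool 2 --inode 1
```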
## merge-data
@@ -246,6 +295,82 @@ Refuses to remove OSDs with data without `--force` and `--allow-data-loss`.
With `--dry-run` only checks if deletion is possible without data loss and
redundancy degradation.
## osd-tree
`vitastor-cli osd-tree [-l|--long]`
Show current OSD tree, optionally with I/O statistics if -l is specified.
Example output:
```
TYPE NAME UP SIZE USED% TAGS WEIGHT BLOCK BITMAP IMM NOOUT
host kaveri
disk nvme0n1p1
osd 3 down 100G 0 % abc,kaveri 1 128k 4k none -
osd 4 down 100G 0 % 1 128k 4k none -
disk nvme1n1p1
osd 5 down 100G 0 % abc,kaveri 1 128k 4k none -
osd 6 down 100G 0 % 1 128k 4k none -
host stump
osd 1 up 100G 37.29 % osdone 1 128k 4k all -
osd 2 up 100G 26.8 % abc 1 128k 4k all -
osd 7 up 100G 21.84 % 1 128k 4k all -
osd 8 up 100G 21.63 % 1 128k 4k all -
osd 9 up 100G 20.69 % 1 128k 4k all -
osd 10 up 100G 21.61 % 1 128k 4k all -
osd 11 up 100G 21.53 % 1 128k 4k all -
osd 12 up 100G 22.4 % 1 128k 4k all -
```
## ls-osd
`vitastor-cli osds|ls-osd|osd-ls [-l|--long]`
Show current OSDs as list, optionally with I/O statistics if -l is specified.
Example output:
```
OSD PARENT UP SIZE USED% TAGS WEIGHT BLOCK BITMAP IMM NOOUT
3 kaveri/nvme0n1p1 down 100G 0 % globl,kaveri 1 128k 4k none -
4 kaveri/nvme0n1p1 down 100G 0 % 1 128k 4k none -
5 kaveri/nvme1n1p1 down 100G 0 % globl,kaveri 1 128k 4k none -
6 kaveri/nvme1n1p1 down 100G 0 % 1 128k 4k none -
1 stump up 100G 37.29 % osdone 1 128k 4k all -
2 stump up 100G 26.8 % globl 1 128k 4k all -
7 stump up 100G 21.84 % 1 128k 4k all -
8 stump up 100G 21.63 % 1 128k 4k all -
9 stump up 100G 20.69 % 1 128k 4k all -
10 stump up 100G 21.61 % 1 128k 4k all -
11 stump up 100G 21.53 % 1 128k 4k all -
12 stump up 100G 22.4 % 1 128k 4k all -
```
## modify-osd
`vitastor-cli modify-osd [--tags tag1,tag2,...] [--reweight <number>] [--noout true/false] <osd_number>`
Set OSD reweight, tags or the noout flag. See the detailed description in the [OSD config documentation](../config/pool.en.md#osd-settings).
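For instance, draining a hypothetical OSD 3 before removal (as also described in the
administration guide) could be done with:
```
vitastor-cli modify-osd --reweight 0 3
```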
## pg-list
`vitastor-cli pg-list|pg-ls|list-pg|ls-pg|ls-pgs [OPTIONS] [state1+state2] [^state3] [...]`
List PGs with any of the listed state filters (`^` or `!` at the beginning means negation). Options:
```
--pool <pool name or number> Only list PGs of the given pool.
--min <min pg number> Only list PGs with number >= min.
--max <max pg number> Only list PGs with number <= max.
```
Examples:
`vitastor-cli pg-list active+degraded`
`vitastor-cli pg-list ^active`
## create-pool
`vitastor-cli create-pool|pool-create <name> (-s <pg_size>|--ec <N>+<K>) -n <pg_count> [OPTIONS]`


@@ -17,12 +17,17 @@ vitastor-cli - command-line interface for administering Vitastor
- [create](#create)
- [snap-create](#create)
- [modify](#modify)
- [dd](#dd)
- [rm](#rm)
- [flatten](#flatten)
- [rm-data](#rm-data)
- [merge-data](#merge-data)
- [alloc-osd](#alloc-osd)
- [rm-osd](#rm-osd)
- [osd-tree](#osd-tree)
- [ls-osd](#ls-osd)
- [modify-osd](#modify-osd)
- [pg-list](#pg-list)
- [create-pool](#create-pool)
- [modify-pool](#modify-pool)
- [ls-pools](#ls-pools)
@@ -31,7 +36,7 @@ vitastor-cli - command-line interface for administering Vitastor
Global options:
```
--config_path FILE Path to Vitastor configuration file
--etcd_address URL Etcd connection address
--iodepth N Send N operations in parallel to each OSD (default 32)
--parallel_osds M Work with M OSDs in parallel (default 4)
@@ -144,26 +149,65 @@ vitastor-cli snap-create [-p|--pool <id|name>] <image>@<snapshot>
If the new size is smaller than the old one, the "extra" data will be purged, so shrink
the file system in the image before shrinking the image itself.
* `--deleted 1|0` - Set/clear the "image deleted" flag (set automatically during an unfinished deletion).
* `-f|--force` - Allow shrinking an image that has clones, or switching such an image to read-write.
* `--down-ok` - Allow shrinking even if some data will be left undeleted on unavailable OSDs.
## dd
```
vitastor-cli dd [iimg=<image> | if=<file>] [oimg=<image> | of=<file>] [bs=1M] \
[count=N] [seek/oseek=N] [skip/iseek=M] [iodepth=N] [status=progress] \
[conv=nocreat,noerror,nofsync,trunc,nosparse] [iflag=direct] [oflag=direct,append]
```
Copy data between Vitastor images, files and pipes.
Options can be passed in classic dd style (`key=value`) or the usual way (`--key value`).
| <!-- --> | <!-- --> |
|-----------------|--------------------------------------------------------------------------|
| `iimg=<image>`  | Copy from the Vitastor image `<image>`                                   |
| `if=<file>`     | Copy from the file `<file>`                                              |
| `oimg=<image>`  | Copy to the Vitastor image `<image>`                                     |
| `of=<file>`     | Copy to the file `<file>`                                                |
| `bs=1M`         | Set the copy block size                                                  |
| `count=N`       | Copy at most N blocks. If N ends in B, count bytes instead of blocks.    |
| `seek/oseek=N`  | Skip N output blocks. If N ends in B, count bytes instead of blocks.     |
| `skip/iseek=N`  | Skip N input blocks. If N ends in B, count bytes instead of blocks.      |
| `iodepth=N`     | Send N reads/writes in parallel (default 4).                             |
| `status=LEVEL`  | Console output level: none/noxfer/progress                               |
| `size=N`        | Set the size of the output file/image (defaults to the input size).      |
| `iflag=direct`  | For the input file only: use direct I/O                                  |
| `oflag=direct`  | For the output file only: use direct I/O                                 |
| `oflag=append`  | For files only: append to the output file                                |
| `conv=nocreat`  | Do not create the output file/image                                      |
| `conv=trunc`    | Truncate the output file/image to the input size                         |
| `conv=noerror`  | Continue copying after errors                                            |
| `conv=nofsync`  | Do not call fsync before finishing                                       |
| `conv=nosparse` | Write all output blocks, including empty ones                            |
## rm
`vitastor-cli rm <from> [<to>] [--writers-stopped] [--down-ok]`
`vitastor-cli rm (--exact|--matching) <glob> ...`
Remove image(s), correctly rebasing their child images.
In the first form, removes one image `<from>` or all layers between `<from>` and its child `<to>`.
In the second form, removes all images with exactly specified names, or with names matching
the given glob pattern(s).
Options:
* `--writers-stopped` allows slightly more efficient removal in the frequent case when the
chain being removed has only one child image containing a small amount of data. In this case
the child image is merged into the parent and removed, and the parent is renamed to the child.
In other cases parent layers are merged into children.
* `--exact` - remove all images with exactly specified names.
* `--matching` - remove all images with names matching the given glob patterns.
* `--down-ok` - continue deletion/merging even if some data will be left undeleted on unavailable OSDs.
## flatten
@@ -182,6 +226,8 @@ vitastor-cli snap-create [-p|--pool <id|name>] <image>@<snapshot>
--wait-list Request a full object listing first, then start removing.
Requires more memory, but allows printing removal progress correctly.
--min-offset Remove only data starting at the given offset.
--max-offset Remove only data before (exclusive of) the given offset.
--client_wait_up_timeout 16 Timeout for waiting until PGs are up, in seconds.
```
## merge-data
@@ -263,6 +309,83 @@ vitastor-cli snap-create [-p|--pool <id|name>] <image>@<snapshot>
With `--dry-run`, only checks whether removal is possible without data loss and degradation
of redundancy.
## osd-tree
`vitastor-cli osd-tree [-l|--long]`
Show the OSD tree, with I/O statistics if -l is specified.
Example output:
```
TYPE NAME UP SIZE USED% TAGS WEIGHT BLOCK BITMAP IMM NOOUT
host kaveri
disk nvme0n1p1
osd 3 down 100G 0 % globl,kaveri 1 128k 4k none -
osd 4 down 100G 0 % 1 128k 4k none -
disk nvme1n1p1
osd 5 down 100G 0 % globl,kaveri 1 128k 4k none -
osd 6 down 100G 0 % 1 128k 4k none -
host stump
osd 1 up 100G 37.29 % osdone 1 128k 4k all -
osd 2 up 100G 26.8 % globl 1 128k 4k all -
osd 7 up 100G 21.84 % 1 128k 4k all -
osd 8 up 100G 21.63 % 1 128k 4k all -
osd 9 up 100G 20.69 % 1 128k 4k all -
osd 10 up 100G 21.61 % 1 128k 4k all -
osd 11 up 100G 21.53 % 1 128k 4k all -
osd 12 up 100G 22.4 % 1 128k 4k all -
```
## ls-osd
`vitastor-cli osds|ls-osd|osd-ls [-l|--long]`
Show the OSDs as a list, with I/O statistics if -l is specified.
Example output:
```
OSD PARENT UP SIZE USED% TAGS WEIGHT BLOCK BITMAP IMM NOOUT
3 kaveri/nvme0n1p1 down 100G 0 % globl,kaveri 1 128k 4k none -
4 kaveri/nvme0n1p1 down 100G 0 % 1 128k 4k none -
5 kaveri/nvme1n1p1 down 100G 0 % globl,kaveri 1 128k 4k none -
6 kaveri/nvme1n1p1 down 100G 0 % 1 128k 4k none -
1 stump up 100G 37.29 % osdone 1 128k 4k all -
2 stump up 100G 26.8 % globl 1 128k 4k all -
7 stump up 100G 21.84 % 1 128k 4k all -
8 stump up 100G 21.63 % 1 128k 4k all -
9 stump up 100G 20.69 % 1 128k 4k all -
10 stump up 100G 21.61 % 1 128k 4k all -
11 stump up 100G 21.53 % 1 128k 4k all -
12 stump up 100G 22.4 % 1 128k 4k all -
```
## modify-osd
`vitastor-cli modify-osd [--tags tag1,tag2,...] [--reweight <number>] [--noout true/false] <osd_number>`
Set OSD reweight, tags or the noout flag. See the detailed description in the [OSD settings documentation](../config/pool.ru.md#настройки-osd).
## pg-list
`vitastor-cli pg-list|pg-ls|list-pg|ls-pg|ls-pgs [OPTIONS] [state1+state2] [^state3] [...]`
List PGs whose states match any of the given filters (`^` or `!` at the beginning of a filter
means negation). Options:
```
--pool <pool name or number> Only list PGs of the given pool.
--min <min pg number> Only list PGs with number >= min.
--max <max pg number> Only list PGs with number <= max.
```
Examples:
`vitastor-cli pg-list active+degraded`
`vitastor-cli pg-list ^active`
## create-pool
`vitastor-cli create-pool|pool-create <name> (-s <pg_size>|--ec <N>+<K>) -n <pg_count> [OPTIONS]`


@@ -13,6 +13,7 @@ It supports the following commands:
- [prepare](#prepare)
- [upgrade-simple](#upgrade-simple)
- [resize](#resize)
- [raw-resize](#raw-resize)
- [start/stop/restart/enable/disable](#start/stop/restart/enable/disable)
- [purge](#purge)
- [read-sb](#read-sb)
@@ -50,12 +51,16 @@ Options (automatic mode):
--osd_per_disk <N>
Create <N> OSDs on each disk (default 1)
--hybrid
Prepare hybrid (HDD+SSD, NVMe+SATA and so on) OSDs using the provided devices. By default,
any passed SSDs will be used for journals and metadata, HDDs will be used for data,
but you can override this behaviour with the --fast-devices option. Journal and metadata
partitions will be created automatically. In the default mode, SSD and HDD disks
are distinguished by the `/sys/block/.../queue/rotational` flag. When HDDs are used
for data in hybrid mode, default block_size is 1 MB instead of 128 KB, default journal
size is 1 GB instead of 32 MB, and throttle_small_writes is enabled by default.
--fast-devices /dev/nvmeX,/dev/nvmeY
In --hybrid mode, use these devices for journal and metadata instead of auto-detecting
and extracting them from the main [devices...] list.
--disable_data_fsync auto
Disable data device cache and fsync (1/yes/true = on, default auto)
--disable_meta_fsync auto
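As a sketch of how the --hybrid and --fast-devices options above combine (device names are
purely illustrative):
```
vitastor-disk prepare --hybrid --fast-devices /dev/nvme0n1,/dev/nvme1n1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```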
@@ -127,25 +132,49 @@ Requires the `sfdisk` utility.
## resize
`vitastor-disk resize <osd_num>|<osd_device> [OPTIONS]`
Resize data area and/or move journal and metadata:
| <!-- --> | <!-- --> |
|---------------------------|----------------------------------------|
| `--move-journal TARGET` | move journal to `TARGET` |
| `--move-meta TARGET` | move metadata to `TARGET` |
| `--journal-size NEW_SIZE` | resize journal to `NEW_SIZE` |
| `--data-size NEW_SIZE` | resize data device to `NEW_SIZE` |
| `--dry-run` | only show new layout, do not apply it |
`NEW_SIZE` may include k/m/g/t suffixes.
`TARGET` may be one of:
| <!-- --> | <!-- --> |
|----------------|--------------------------------------------------------------------------|
| `<partition>` | move journal/metadata to an existing GPT partition |
| `<raw_device>` | create a GPT partition on `<raw_device>` and move journal/metadata to it |
| `""` | (empty string) move journal/metadata back to the data device |
## raw-resize
`vitastor-disk raw-resize <ALL_OSD_PARAMETERS> <NEW_LAYOUT> [--iodepth 32]`
Resize data area and/or rewrite/move journal and metadata (manual format).
`ALL_OSD_PARAMETERS` must include all (at least all disk-related)
parameters from OSD command line (i.e. from systemd unit or superblock).
`NEW_LAYOUT` may include new disk layout parameters:
| <!-- --> | <!-- --> |
|-----------------------------|-------------------------------------------|
| `--new_data_offset SIZE` | resize data area so it starts at `SIZE` |
| `--new_data_len SIZE` | resize data area to `SIZE` bytes |
| `--new_meta_device PATH` | use `PATH` for new metadata |
| `--new_meta_offset SIZE` | make new metadata area start at `SIZE` |
| `--new_meta_len SIZE` | make new metadata area `SIZE` bytes long |
| `--new_journal_device PATH` | use `PATH` for new journal |
| `--new_journal_offset SIZE` | make new journal area start at `SIZE` |
| `--new_journal_len SIZE` | make new journal area `SIZE` bytes long |
`SIZE` may include k/m/g/t suffixes. If any of the new layout parameter
options are not specified, the old values will be used.
@@ -217,10 +246,14 @@ Intended for use from startup scripts (i.e. from systemd units).
## dump-journal
`vitastor-disk dump-journal [OPTIONS] <osd_device>`
`vitastor-disk dump-journal [OPTIONS] <journal_file> <journal_block_size> <offset> <size>`
Dump journal in human-readable or JSON (if `--json` is specified) format.
You can specify any OSD device (data, metadata or journal), or the layout manually.
Options:
```
@@ -233,23 +266,35 @@ Options:
## write-journal
`vitastor-disk write-journal <osd_device>`
`vitastor-disk write-journal <journal_file> <journal_block_size> <bitmap_size> <offset> <size>`
Write journal from JSON taken from standard input in the same format as produced by
`dump-journal --json --format data`.
You can specify any OSD device (data, metadata or journal), or the layout manually.
## dump-meta
`vitastor-disk dump-meta <osd_device>`
`vitastor-disk dump-meta <meta_file> <meta_block_size> <offset> <size>`
Dump metadata in JSON format.
You can specify any OSD device (data, metadata or journal), or the layout manually.
## write-meta
`vitastor-disk write-meta <osd_device>`
`vitastor-disk write-meta <meta_file> <offset> <size>`
Write metadata from JSON taken from standard input in the same format as produced by `dump-meta`.
You can specify any OSD device (data, metadata or journal), or the layout manually.
## simple-offsets
`vitastor-disk simple-offsets <device>`


@@ -13,6 +13,7 @@ vitastor-disk - command-line tool for managing Vitastor disks
- [prepare](#prepare)
- [upgrade-simple](#upgrade-simple)
- [resize](#resize)
- [raw-resize](#raw-resize)
- [start/stop/restart/enable/disable](#start/stop/restart/enable/disable)
- [purge](#purge)
- [read-sb](#read-sb)
@@ -50,12 +51,17 @@ vitastor-disk - command-line tool for managing Vitastor disks
--osd_per_disk <N>
Create several (<N>) OSDs on each disk (default 1)
--hybrid
Initialize hybrid (HDD+SSD, NVMe+SATA and so on) OSDs on the specified disks.
By default, SSDs will be used for journals and metadata and HDDs for data, but
you can change this behaviour with the --fast-devices option. Journal and metadata
partitions will be created automatically. In the default mode, SSD and HDD disks
are distinguished by the `/sys/block/.../queue/rotational` flag. When HDDs are
used for data in hybrid mode, the default block size is 1 MB instead of 128 KB,
the default journal size is 1 GB instead of 32 MB, and throttle_small_writes is
enabled by default.
--fast-devices /dev/nvmeX,/dev/nvmeY
Use these disks for journals and metadata in hybrid mode instead of auto-detecting
them and extracting them from the main [devices...] list.
--disable_data_fsync auto
Disable cache and fsync for data devices (1/yes/true = yes, default is auto-detection)
--disable_meta_fsync auto
@@ -129,27 +135,51 @@ throttle_target_mbs, throttle_target_parallelism, throttle_threshold_us.
## resize
`vitastor-disk resize <osd_num>|<osd_device> [OPTIONS]`
Resize the data area and/or move the journal and metadata:
| <!-- --> | <!-- --> |
|---------------------------|------------------------------------------|
| `--move-journal TARGET`   | move the journal to `TARGET`             |
| `--move-meta TARGET`      | move the metadata to `TARGET`            |
| `--journal-size NEW_SIZE` | resize the journal to `NEW_SIZE`         |
| `--data-size NEW_SIZE`    | resize the data device to `NEW_SIZE`     |
| `--dry-run`               | show the new layout, but do not apply it |
`NEW_SIZE` may include k/m/g/t suffixes (kilo/mega/giga/terabytes).
`TARGET` may be one of:
| <!-- --> | <!-- --> |
|----------------|-------------------------------------------------------------------------------|
| `<partition>`  | move the journal/metadata to an existing GPT partition                         |
| `<raw_device>` | create a GPT partition on `<raw_device>` and move the journal/metadata to it   |
| `""`           | (empty string) move the journal/metadata back to the data device               |
## raw-resize
`vitastor-disk raw-resize <ALL_OSD_PARAMETERS> <NEW_LAYOUT> [--iodepth 32]`
Resize the data area and/or rewrite/move the journal and metadata (manual format).
`ALL_OSD_PARAMETERS` must include all disk-related OSD parameters, taken from the OSD
superblock or from the systemd service file (in older versions).
`NEW_LAYOUT` must specify the new data layout parameters:
| <!-- --> | <!-- --> |
|-------------------------------|------------------------------------------------------|
| `--new_data_offset SIZE`      | shift the start of the data area to `SIZE` bytes     |
| `--new_data_len SIZE`         | resize the data area to `SIZE` bytes                 |
| `--new_meta_device PATH`      | use `PATH` as the new metadata device                |
| `--new_meta_offset SIZE`      | place the new metadata at offset `SIZE` bytes        |
| `--new_meta_len SIZE`         | make the new metadata `SIZE` bytes long              |
| `--new_journal_device PATH`   | use `PATH` as the new journal device                 |
| `--new_journal_offset SIZE`   | place the new journal at offset `SIZE` bytes         |
| `--new_journal_len SIZE`      | make the new journal `SIZE` bytes long               |
`SIZE` may include k/m/g/t suffixes. If any of the new layout parameters are not
specified, the old values are used.
## start/stop/restart/enable/disable
@@ -224,10 +254,15 @@ fsyncs are disabled on the OSD.
## dump-journal
`vitastor-disk dump-journal [OPTIONS] <osd_device>`
`vitastor-disk dump-journal [OPTIONS] <journal_file> <journal_block_size> <offset> <size>`
Dump the journal in human-readable or JSON (with the `--json` option) format.
You can specify any OSD partition - data, journal or metadata - or specify all layout
parameters manually.
Options:
```
@@ -240,22 +275,37 @@ fsyncs are disabled on the OSD.
## write-journal
`vitastor-disk write-journal <osd_device>`
`vitastor-disk write-journal <journal_file> <journal_block_size> <bitmap_size> <offset> <size>`
Write the journal from JSON on standard input, in the same format as produced by
`dump-journal --json --format data`.
You can specify any OSD partition - data, journal or metadata - or specify all layout
parameters manually.
## dump-meta
`vitastor-disk dump-meta <osd_device>`
`vitastor-disk dump-meta <meta_file> <meta_block_size> <offset> <size>`
Dump the metadata in JSON format.
You can specify any OSD partition - data, journal or metadata - or specify all layout
parameters manually.
## write-meta
`vitastor-disk write-meta <osd_device>`
`vitastor-disk write-meta <meta_file> <offset> <size>`
Write metadata from JSON on standard input, in the same format as produced by `dump-meta`.
You can specify any OSD partition - data, journal or metadata - or specify all layout
parameters manually.
## simple-offsets
`vitastor-disk simple-offsets <device>`


@@ -36,7 +36,7 @@ It will output a block device name like /dev/nbd0 which you can then use as a no
You can also use `--pool <POOL> --inode <INODE> --size <SIZE>` instead of `--image <IMAGE>` if you want.
vitastor-nbd supports all usual Vitastor configuration options like `--config_path <path_to_config>` plus NBD-specific:
* `--nbd_timeout 0` \
Timeout for I/O operations in seconds; when exceeded, the kernel stops the device.
@@ -54,16 +54,18 @@ vitastor-nbd supports all usual Vitastor configuration options like `--config_fi
Stay in foreground, do not daemonize.
Note that the `nbd_timeout`, `nbd_max_devices` and `nbd_max_part` options may also be specified
in `/etc/vitastor/vitastor.conf` or in another configuration file specified with `--config_path`.
## unmap
To unmap the device, run:
```
vitastor-nbd unmap [--force] /dev/nbd0
```
If `--force` is specified, `vitastor-nbd` doesn't check if the device is actually mapped.
## ls
```
@@ -96,7 +98,7 @@ Example output (JSON format):
vitastor-nbd netlink-map [/dev/nbdN] (--image <image> | --pool <pool> --inode <inode> --size <size in bytes>)
```
On recent kernel versions it's also possible to map NBD devices using the netlink interface.
This is an experimental feature because it doesn't solve all issues of NBD. Differences from regular ioctl-based 'map':


@@ -41,7 +41,7 @@ vitastor-nbd map [/dev/nbdN] --image testimg
To address an image by inode number, as with other commands, you can use the
`--pool <POOL> --inode <INODE> --size <SIZE>` options instead of `--image testimg`.
vitastor-nbd supports all the usual Vitastor options, for example `--config_path <path_to_config>`,
plus NBD-specific ones:
* `--nbd_timeout 0` \
@@ -62,16 +62,19 @@ vitastor-nbd supports all the usual Vitastor options,
Note that the `nbd_timeout`, `nbd_max_devices` and `nbd_max_part` options can also be set
in `/etc/vitastor/vitastor.conf` or in another configuration file specified with the
`--config_path` option.
## unmap
To unmap the device, run:
```
vitastor-nbd unmap [--force] /dev/nbd0
```
If the `--force` option is given, `vitastor-nbd` does not check whether the device is
actually mapped before trying to unmap it.
## ls
```


@@ -11,6 +11,8 @@ Vitastor has two file system implementations. Both can be used via `vitastor-nfs
Commands:
- [mount](#mount)
- [start](#start)
- [upgrade](#upgrade)
- [defrag](#defrag)
## Pseudo-FS
@@ -86,10 +88,6 @@ POSIX features currently not implemented in VitastorFS:
- Modification time (`mtime`) is updated lazily every second (like `-o lazytime`)
Other notable missing features which should be addressed in the future:
- Inode ID reuse. Currently inode IDs always grow, the limit is 2^48 inodes, so
in theory you may hit it if you create and delete a very large number of files
- Compaction of the key-value B-Tree. Current implementation never merges or deletes
@@ -113,6 +111,21 @@ settings, because Vitastor NFS proxy doesn't keep uncommitted data in memory
with these settings. But it may even work without `immediate_commit=all` because
the Linux NFS client repeats all uncommitted writes if it loses the connection.
## RDMA
vitastor-nfs supports NFS over RDMA which, in theory, should also allow using
VitastorFS from GPUDirect.
You can test NFS-RDMA even if you don't have an RDMA NIC, using SoftROCE:
1. First, add a SoftROCE device on both servers: `rdma link add rxe0 type rxe netdev eth0`.
Here, the `rdma` utility is part of the iproute2 package, and `eth0` should be replaced with
the name of your Ethernet NIC.
2. Start vitastor-nfs with RDMA: `vitastor-nfs start (--fs <NAME> | --block) --pool <POOL> --port 20049 --nfs_rdma 20049 --portmap 0`
3. Mount the FS: `mount 192.168.0.10:/mnt/test/ /mnt/vita/ -o port=20049,mountport=20049,nfsvers=3,soft,nolock,rdma`
## Commands
### mount
@@ -133,11 +146,47 @@ The server will be automatically stopped when the FS is unmounted.
Start network NFS server. Options:
| <!-- --> | <!-- --> |
|------------------------|-----------------------------------------------------------------------------------------------------------------------------|
| `--bind <IP>` | bind service to \<IP> address (default 0.0.0.0) |
| `--port <PORT>` | use port \<PORT> for NFS services (default is 2049). Specify "auto" to auto-select and print port |
| `--portmap 0` | do not listen on port 111 (portmap/rpcbind, requires root) |
| `--nfs_rdma <PORT>` | enable NFS-RDMA at RDMA-CM port \<PORT> (you can try 20049). If RDMA is enabled and --port is set to 0, TCP will be disabled |
| `--nfs_rdma_credit 16` | maximum operation credit for RDMA clients (max iodepth) |
| `--nfs_rdma_send 1024` | maximum RDMA send operation count (should be larger than iodepth) |
| `--nfs_rdma_alloc 1M` | RDMA memory allocation rounding |
| `--nfs_rdma_gc 64M` | maximum unused RDMA buffers |
### upgrade
`vitastor-nfs --fs <NAME> upgrade`
Upgrade FS metadata. Can be run online, but server(s) should be restarted after upgrade.
### defrag
`vitastor-nfs --fs <NAME> defrag [OPTIONS] [--dry-run]`
Defragment volumes used for small-file storage in which more than \<defrag_percent> %
of the data has been removed. Can be run online.
In VitastorFS, small files are stored in large "volumes" / "shared inodes" one
after another. When you delete or extend such files, they are moved and garbage is left
behind. Defragmentation removes garbage and moves data still in use to new volumes.
Options:
| <!-- --> | <!-- --> |
|----------------------------|------------------------------------------------------------------------ |
| `--volume_untouched 86400` | Defragment volumes last appended to at least this number of seconds ago |
| `--defrag_percent 50` | Defragment volumes with at least this % of removed data |
| `--defrag_block_count 16` | Read this number of pool blocks at once during defrag |
| `--defrag_iodepth 16` | Move up to this number of files in parallel during defrag |
| `--trace` | Print verbose defragmentation status |
| `--dry-run` | Skip modifications, only print status |
| `--recalc-stats` | Recalculate all volume statistics |
| `--include-empty` | Include old and empty volumes; make sure to restart NFS servers before using it |
| `--no-rm` | Move, but do not delete data |
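For example, a harmless trial run on a hypothetical FS named `testfs`, only printing what
would be moved:
```
vitastor-nfs --fs testfs defrag --dry-run --trace
```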
## Common options


@@ -11,6 +11,8 @@
Commands:
- [mount](#mount)
- [start](#start)
- [upgrade](#upgrade)
- [defrag](#defrag)
## Pseudo-FS
@@ -88,11 +90,6 @@ JSON format :-). To inspect the DB contents
- Modification times (`mtime`) are tracked asynchronously (as if the FS were mounted with `-o lazytime`)
Other missing features which should be added in the future:
- Inode number reuse. In the current implementation inode numbers always grow, so in theory
you can hit the limit if you create and delete more than 2^48 files.
@@ -119,6 +116,21 @@ JSON format :-). To inspect the DB contents
even without `immediate_commit=all`, because the Linux kernel NFS client repeats all
uncommitted requests when the connection is lost.
## RDMA
vitastor-nfs supports NFS over RDMA. In theory, this should also allow using VitastorFS
from GPUDirect.
You can test NFS-RDMA even if you don't have an RDMA NIC, using SoftROCE:
1. First, create SoftROCE devices on both test servers: `rdma link add rxe0 type rxe netdev eth0`.
The `rdma` utility is part of the iproute2 package, and `eth0` should be replaced with the
name of your NIC.
2. Start vitastor-nfs with RDMA: `vitastor-nfs start (--fs <NAME> | --block) --pool <POOL> --port 20049 --nfs_rdma 20049 --portmap 0`
3. Mount the FS: `mount 192.168.0.10:/mnt/test/ /mnt/vita/ -o port=20049,mountport=20049,nfsvers=3,soft,nolock,rdma`
## Commands
### mount
@@ -139,11 +151,50 @@ JSON format :-). To inspect the DB contents
Start a network NFS server. Options:
| <!-- --> | <!-- --> |
|------------------------|-------------------------------------------------------------------------------------------------------------------|
| `--bind <IP>`          | accept connections on address \<IP> (default 0.0.0.0 - all addresses)                                              |
| `--port <PORT>`        | use port \<PORT> for NFS services (default 2049). Specify "auto" to select and print a random port                 |
| `--portmap 0`          | disable the portmap/rpcbind service on port 111 (enabled by default; requires root privileges)                     |
| `--nfs_rdma <PORT>`    | enable NFS-RDMA on RDMA-CM port \<PORT> (try 20049). If RDMA is enabled and `--port 0` is given, TCP is disabled   |
| `--nfs_rdma_credit 16` | maximum "credit" (queue depth) for NFS clients                                                                     |
| `--nfs_rdma_send 1024` | maximum number of RDMA send operations (should be larger than nfs_rdma_credit)                                     |
| `--nfs_rdma_alloc 1M`  | RDMA memory allocation rounding for clients                                                                        |
| `--nfs_rdma_gc 64M`    | maximum amount of unused RDMA memory per client before it is freed                                                 |
### upgrade
`vitastor-nfs --fs <NAME> upgrade`
Upgrade the FS metadata. Can be run online (with NFS servers running), but it is still
advisable to restart them after the upgrade.
### defrag
`vitastor-nfs --fs <NAME> defrag [OPTIONS] [--dry-run]`
Defragment volumes used for small-file storage in which more than <defrag_percent>
percent of the data has been removed. Can be run online.
At the FS implementation level, files smaller than the pool object size (block_size
multiplied by the number of data parts if the pool is EC) are packed one after another
into large "volumes" / "shared inodes". When such files are deleted or extended, they
are moved and leave "garbage" behind. Defragmentation removes the garbage and moves
the data still in use into new volumes.
Options:
| <!-- --> | <!-- --> |
|----------------------------|--------------------------------------------------------------------------------------------------|
| `--volume_untouched 86400` | Defragment only volumes that have not been appended to for this many seconds                      |
| `--defrag_percent 50`      | Defragment only volumes in which this % of data has been removed                                  |
| `--defrag_block_count 16`  | Read this many pool blocks at a time                                                              |
| `--defrag_iodepth 16`      | Move up to this many files in parallel                                                            |
| `--trace`                  | Print detailed defragmentation statistics                                                         |
| `--dry-run`                | Do not make any changes, only describe the actions that would be performed                        |
| `--recalc-stats`           | Recalculate and save statistics for all volumes                                                   |
| `--include-empty`          | Defragment old and empty volumes; make sure to restart the NFS servers after using this option    |
| `--no-rm`                  | Move, but do not delete data                                                                      |
## Common options


@@ -151,9 +151,9 @@ Example performance comparison:
To try VDUSE you need at least Linux 5.15, built with VDUSE support
(CONFIG_VDPA=m, CONFIG_VDPA_USER=m, CONFIG_VIRTIO_VDPA=m).
Debian Linux kernels had these options disabled until 6.6, so make sure you install a newer kernel
(from bookworm-backports, trixie or a newer Debian version) if you want to try VDUSE. You can also
build modules for an existing kernel manually:
```
mkdir build

Some files were not shown because too many files have changed in this diff.