Compare commits

..

33 Commits

Author SHA1 Message Date
Vitaliy Filippov f285cfc483 Fix eviction when random_pos selects the end
Test / test_move_reappear (push) Successful in 18s
Test / test_rm (push) Successful in 13s
Test / test_snapshot_chain (push) Successful in 1m1s
Test / test_snapshot_down (push) Successful in 20s
Test / test_snapshot_ec (push) Failing after 3m5s
Test / test_splitbrain (push) Successful in 12s
Test / test_snapshot_chain_ec (push) Failing after 3m6s
Test / test_snapshot_down_ec (push) Failing after 3m6s
Test / test_rebalance_verify_ec (push) Failing after 45s
Test / test_rebalance_verify (push) Successful in 2m34s
Test / test_rebalance_verify_imm (push) Successful in 2m5s
Test / test_write (push) Successful in 54s
Test / test_write_no_same (push) Successful in 12s
Test / test_rebalance_verify_ec_imm (push) Successful in 3m7s
Test / test_write_xor (push) Failing after 3m6s
Test / test_interrupted_rebalance_ec (push) Failing after 10m6s
Test / test_heal_pg_size_2 (push) Failing after 10m10s
Test / test_heal_ec (push) Failing after 10m7s
Test / test_heal_csum_32k_dmj (push) Failing after 10m10s
Test / test_heal_csum_32k_dj (push) Failing after 10m15s
Test / test_heal_csum_32k (push) Failing after 3m29s
Test / test_scrub (push) Successful in 1m9s
Test / test_scrub_zero_osd_2 (push) Successful in 1m18s
Test / test_heal_csum_4k_dmj (push) Failing after 5m56s
Test / test_scrub_pg_size_3 (push) Successful in 31s
Test / test_scrub_xor (push) Failing after 3m12s
Test / test_scrub_pg_size_6_pg_minsize_4_osd_count_6_ec (push) Failing after 3m6s
Test / test_heal_csum_4k_dj (push) Failing after 10m10s
Test / test_scrub_ec (push) Failing after 3m6s
Test / test_heal_csum_4k (push) Failing after 10m21s
2023-12-01 01:43:03 +03:00
Vitaliy Filippov 12b50b421d Implement min/max list_count to make listings during performance test reasonable 2023-12-01 01:17:04 +03:00
Vitaliy Filippov 9f6d09428d Fix and improve parallel allocation
- Do not try to allocate more DB blocks in an inode block until it's "confirmed" and "locked" by the first write
- Do not recheck for new zero DB blocks on first write into an inode block - a CAS failure means someone else is already writing into it
- Throw new allocation blocks away regardless of whether the known_version is 0 on a CAS failure
2023-12-01 01:17:04 +03:00
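
The CAS discipline described in the commit above can be sketched as follows. This is a hypothetical Go model (Vitastor itself is C++), with all names invented for illustration:

```go
// A minimal sketch of the rule the commit message describes: an inode block
// must be "confirmed" by a successful first CAS write before more DB blocks
// are handed out from it, and a CAS failure is treated as proof that another
// writer already owns the block, so no recheck for new zero blocks is needed.
package main

import (
	"fmt"
	"sync/atomic"
)

type inodeBlock struct {
	confirmed atomic.Bool   // set by the first successful write
	version   atomic.Uint64 // CAS target: 0 means "known empty"
}

// tryFirstWrite emulates the first CAS write into a freshly chosen block.
func (b *inodeBlock) tryFirstWrite() bool {
	if b.version.CompareAndSwap(0, 1) {
		b.confirmed.Store(true) // block is now locked by this writer
		return true
	}
	// CAS failure: someone else is already writing into it - pick another block.
	return false
}

func (b *inodeBlock) canAllocateMore() bool {
	// Do not allocate further DB blocks from an unconfirmed inode block.
	return b.confirmed.Load()
}

func main() {
	var b inodeBlock
	fmt.Println(b.canAllocateMore()) // false: not confirmed yet
	fmt.Println(b.tryFirstWrite())   // true: we won the CAS
	fmt.Println(b.canAllocateMore()) // true
	fmt.Println(b.tryFirstWrite())   // false: CAS fails for a second writer
}
```
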
Vitaliy Filippov 580025cfc9 Implement key_prefix for K/V stress test 2023-12-01 01:17:04 +03:00
Vitaliy Filippov 13e2d3ce7c More fixes
- do not overwrite a block with older version if known version is newer
  (read may start before update and end after update)
- invalidated block versions can't be remembered and trusted
- right boundary for split blocks is right_half when diving down, not key_lt
- restart update also when block is "invalidated", not just on version mismatch
- copy callback in listings to avoid closure destruction bugs too
2023-12-01 01:17:04 +03:00
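
A toy Go model of the first two rules in the commit above (versioned blocks and stale reads; invented names, not the actual Vitastor code):

```go
// A read that started before a concurrent update may finish after it, so a
// cached block may only be replaced when the incoming version is newer.
package main

import "fmt"

type block struct {
	version uint64
	data    string
}

type cache map[uint64]block

// put ignores stale read results instead of clobbering fresher data.
func (c cache) put(offset uint64, b block) bool {
	if old, ok := c[offset]; ok && old.version >= b.version {
		return false // known version is newer - drop the older copy
	}
	c[offset] = b
	return true
}

func main() {
	c := cache{}
	fmt.Println(c.put(0, block{version: 2, data: "updated"})) // true
	// A slow read that started before the update arrives late:
	fmt.Println(c.put(0, block{version: 1, data: "stale"})) // false
	fmt.Println(c[0].data)                                  // still "updated"
}
```
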
Vitaliy Filippov c5b00f897a Add logging and one more assert 2023-12-01 01:17:04 +03:00
Vitaliy Filippov e847e26912 Make get_block() wait for updating when unrelated block is found along the path 2023-12-01 01:17:04 +03:00
Vitaliy Filippov 3393463466 Fix a race condition where changed blocks were parsed over existing cached blocks and getting a mix of data 2023-12-01 01:17:04 +03:00
Vitaliy Filippov bd96a6194a Simplify code by removing an unneeded "optimisation" 2023-12-01 01:17:04 +03:00
Vitaliy Filippov 601fe10c28 Add kv_log_level, print warnings on level 1, trace ops on level 10 2023-12-01 01:17:04 +03:00
Vitaliy Filippov 63dbc9ca85 Fix duplicate keys in listings on parallel updates -- do not rewind key "iterator position" 2023-12-01 01:17:04 +03:00
Vitaliy Filippov aa0c363c39 Implement key suffix to avoid collisions of multiple test workers 2023-12-01 01:17:04 +03:00
Vitaliy Filippov ce52c5589e Do not complain on empty first block 2023-12-01 01:17:04 +03:00
Vitaliy Filippov aee20ab1ee Add JSON output for stress-tester 2023-12-01 01:17:04 +03:00
Vitaliy Filippov bb81992fac Print total stats 2023-12-01 01:17:04 +03:00
Vitaliy Filippov a28f401aff Do not send more than op_count operations (fix segfault on finish) 2023-12-01 01:17:04 +03:00
Vitaliy Filippov 4ac7e096fd Add some more resiliency to serialize() 2023-12-01 01:17:04 +03:00
Vitaliy Filippov b6171a4599 Invalidate blocks being updated too 2023-12-01 01:17:03 +03:00
Vitaliy Filippov 28045f230c Change new block allocation method: make each writer choose multiple empty PG blocks and place blocks in them 2023-12-01 01:17:03 +03:00
Vitaliy Filippov 10e867880f Remove blocks from cache on unsuccessful updates 2023-12-01 01:17:03 +03:00
Vitaliy Filippov 012462171a Allow to track multiple updates per block (it should never happen though) 2023-12-01 01:17:03 +03:00
Vitaliy Filippov 904793cdab Do not call stop_updating after failed write_new_block and after clear_block (both delete the item) 2023-12-01 01:17:03 +03:00
Vitaliy Filippov 45c01db2de Track versions of parent blocks and recheck if changed during update 2023-12-01 01:17:03 +03:00
Vitaliy Filippov 8c9206cecd Fix resume_split condition (key_lt can also be "") 2023-12-01 01:17:03 +03:00
Vitaliy Filippov e8c46ededa Experiment: transform offsets for better sharding 2023-12-01 01:17:03 +03:00
Vitaliy Filippov e9b321a0e0 More post-stress-test fixes
- Prevent _split types of new blocks
- Stop updating new blocks only after the whole update, otherwise pointers
  may become invalid
- Use recheck_none for updates initially
- Use UINT64_MAX as initial block version when postponing ops, otherwise the
  check fails when the block is initially empty. This for example leads to
  writing both leaf items & block pointers (which is incorrect) into the root
  block when starting stress-test with --parallelism 32
- Fix -EINTR comparison
2023-12-01 01:17:03 +03:00
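
The UINT64_MAX point above can be illustrated with a minimal, hypothetical sketch: if "version unknown" were encoded as 0, it would collide with the version of a genuinely empty block, and a postponed operation would pass a version check it was supposed to fail:

```go
// Hypothetical names; a sentinel that can never be a real block version
// keeps "unknown" distinguishable from "known to be empty" (version 0).
package main

import (
	"fmt"
	"math"
)

const versionUnknown = uint64(math.MaxUint64)

// needsRecheck reports whether a postponed op must re-read the block before
// applying: true while we only hold the sentinel, never for a real version.
func needsRecheck(knownVersion uint64) bool {
	return knownVersion == versionUnknown
}

func main() {
	fmt.Println(needsRecheck(versionUnknown)) // true: re-read first
	fmt.Println(needsRecheck(0))              // false: genuinely empty block
}
```
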
Vitaliy Filippov 09a77991ae Print operation statistics 2023-12-01 01:17:03 +03:00
Vitaliy Filippov 29d8c9b6f3 K/V fixes after stress-test :-)
- track block versions correctly - per inode block (128kb) instead of tree block (4kb)
- prevent multiple parallel CAS writes of the same inode block
- add logging for EILSEQ which means invalid data in the tree
- fix get_block updated flag which was true for blocks already in cache and was leading to infinite loops on "unrelated block" errors
- apply changes to blocks in cache only after successful writes (using "virtual changes")
- do not replace cached block with an older version from disk
- recheck "unrelated blocks" (read/update collisions) until data stops changing
- track tree path correctly - do not treat split block as parent of its right half
- correctly move blocks when finding new empty place on disk
- restart updates from the beginning when one of blocks is changed by a parallel update
- fix delete using SET opcode and setting key to the empty value instead
- prevent changing the same key more than 1 time in parallel
- fix listing verification
- resume continue_updates in update_find (required because it uses continue_update itself)
- add allow_old_cached parameter to get()
2023-12-01 01:17:03 +03:00
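
A sketch of the first fix in that list, using the sizes named in the commit (128 KB inode blocks, 4 KB tree blocks): all tree blocks inside one inode block must share a single CAS version counter. Hypothetical Go, not the actual C++ code:

```go
package main

import "fmt"

const (
	inodeBlockSize = 128 * 1024 // CAS granularity per the commit message
	treeBlockSize  = 4 * 1024   // tree node size per the commit message
)

// versionKey maps a tree block offset to the inode block whose version
// actually guards it, so parallel CAS writes to one inode block collide.
func versionKey(treeBlockOffset uint64) uint64 {
	return treeBlockOffset / inodeBlockSize
}

func main() {
	// Two different tree blocks in the same 128 KB region share one version:
	fmt.Println(versionKey(0 * treeBlockSize))  // 0
	fmt.Println(versionKey(31 * treeBlockSize)) // 0
	fmt.Println(versionKey(32 * treeBlockSize)) // 1
}
```
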
Vitaliy Filippov 20321aaaef Implement K/V DB stress tester 2023-12-01 01:17:03 +03:00
Vitaliy Filippov 987b005356 Evict blocks based on memory limit & block usage 2023-12-01 01:17:03 +03:00
Vitaliy Filippov 41754b748b Track blocks per level 2023-12-01 01:17:03 +03:00
Vitaliy Filippov 31913256f3 Track block level 2023-12-01 01:17:03 +03:00
Vitaliy Filippov 0ee36baed7 Experimental B-Tree Vitastor embedded K/V database implementation! 2023-12-01 01:17:03 +03:00
99 changed files with 523 additions and 2572 deletions

CLA-en.md

@@ -1,115 +0,0 @@
## Contributor License Agreement
> This Agreement is made in the Russian and English languages. **The English
text of Agreement is for informational purposes only** and is not binding
for the Parties.
>
> In the event of a conflict between the provisions of the Russian and
English versions of this Agreement, the **Russian version shall prevail**.
>
> Russian version is published at https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/CLA-ru.md
This document represents the offer of Filippov Vitaliy Vladimirovich
("Author"), author and copyright holder of Vitastor software ("Program"),
acknowledged by a certificate of Federal Service for Intellectual
Property of Russian Federation (Rospatent) # 2021617829 dated 20 May 2021,
to "Contributors" to conclude this license agreement as follows
("Agreement" or "Offer").
In accordance with Art. 435, Art. 438 of the Civil Code of the Russian
Federation, this Agreement is an offer and in case of acceptance of the
offer, an agreement is considered concluded on the conditions specified
in the offer.
1. Applicable Terms. \
1.1. "Official Repository" shall mean the computer storage, operated by
the Author, containing all prior and future versions of the Source
Code of the Program, at Internet addresses https://git.yourcmc.ru/vitalif/vitastor/
or https://github.com/vitalif/vitastor/. \
1.2. "Contributions" shall mean results of intellectual activity
(including, but not limited to, source code, libraries, components,
texts, documentation) which can be software or elements of the software
and which are provided by Contributors to the Author for inclusion
in the Program. \
1.3. "Contributor" shall mean a person who provides Contributions to
the Author and agrees with all provisions of this Agreement.
A Contributor can be: 1) an individual; or 2) a legal entity or an
individual entrepreneur in case when an individual provides Contributions
on behalf of third parties, including on behalf of his employer.
2. Subject of the Agreement. \
2.1. Subject of the Agreement shall be the Contributions sent to the Author by Contributors.
2.2. The Contributor grants to the Author the right to use Contributions at his own
discretion and without any necessity to get a prior approval from Contributor or
any other third party in any way, under a simple (non-exclusive), royalty-free,
irrevocable license throughout the world by all means not contrary to law, in whole
or as a part of the Program, or other open-source or closed-source computer programs,
products or services (hereinafter -- the "License"), including, but not limited to: \
2.2.1. to execute Contributions and use them for any tasks; \
2.2.2. to publish and distribute Contributions in modified or unmodified form and/or to rent them; \
2.2.3. to modify Contributions, add comments, illustrations or any explanations to Contributions while using them; \
2.2.4. to create other results of intellectual activity based on Contributions, including derivative works and composite works; \
2.2.5. to translate Contributions into other languages, including other programming languages; \
2.2.6. to carry out rental and public display of Contributions; \
2.2.7. to use Contributions under the trade name and/or any trademark or any other label, or without it, as the Author thinks fit; \
2.3. The Contributor grants to the Author the right to sublicense any of the aforementioned
rights to third parties on any terms at the Author's discretion. \
2.4. The License is provided for the entire duration of Contributor's
exclusive intellectual property rights to the Contributions. \
2.5. The Contributor grants to the Author the right to decide how and where to mention,
or to not mention at all, the fact of his authorship, name, nickname and/or company
details when including Contributions into the Program or in any other computer
programs, products or services.
3. Acceptance of the Offer \
3.1. The Contributor may provide Contributions to the Author in the form of
a "Pull Request" in an Official Repository of the Program or by any
other electronic means of communication, including, but not limited to,
E-mail or messenger applications. \
3.2. The acceptance of the Offer shall be the fact of provision of Contributions
to the Author by the Contributor by any means with the following remark:
“I accept Vitastor CLA agreement: https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/CLA-en.md”
or “Я принимаю соглашение Vitastor CLA: https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/CLA-ru.md”. \
3.3. Date of acceptance of the Offer shall be the date of such provision.
4. Rights and obligations of the parties. \
4.1. The Contributor reserves the right to use Contributions by any lawful means
not contrary to this Agreement. \
4.2. The Author has the right to refuse to include Contributions into the Program
at any moment with no explanation to the Contributor.
5. Representations and Warranties. \
5.1. The person providing Contributions for the purpose of their inclusion
in the Program represents and warrants that he is the Contributor
or legally acts on the Contributor's behalf. Name or company details
of the Contributor shall be provided with the Contribution at the moment
of their provision to the Author. \
5.2. The Contributor represents and warrants that he legally owns exclusive
intellectual property rights to the Contributions. \
5.3. The Contributor represents and warrants that any further use of \
Contributions by the Author as provided by Contributor under the terms
of the Agreement does not infringe on intellectual and other rights and
legitimate interests of third parties. \
5.4. The Contributor represents and warrants that he has all rights and legal
capacity needed to accept this Offer; \
5.5. The Contributor represents and warrants that Contributions don't
contain malware or any information considered illegal under the law
of Russian Federation.
6. Termination of the Agreement \
6.1. The Agreement may be terminated at will of both Author and Contributor,
formalised in the written form or if the Agreement is terminated on
reasons prescribed by the law of Russian Federation.
7. Final Clauses \
7.1. The Contributor may optionally sign the Agreement in the written form. \
7.2. The Agreement is deemed to become effective from the Date of signing of
the Agreement and until the expiration of Contributor's exclusive
intellectual property rights to the Contributions. \
7.3. The Author may unilaterally alter the Agreement without informing Contributors.
The new version of the document shall come into effect 3 (three) days after
being published in the Official Repository of the Program at Internet address
[https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/CLA-en.md](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/CLA-en.md).
Contributors should keep informed about the actual version of the Agreement themselves. \
7.4. If the Author and the Contributor fail to agree on disputable issues,
disputes shall be referred to the Moscow Arbitration court.

CLA-ru.md

@@ -1,108 +0,0 @@
## Contributor License Agreement

> This Offer is made in Russian and English versions. **The English-language
version is provided for information purposes only** and does not bind the
parties to the agreement.
>
> In case of discrepancies between the provisions of the Russian and English
versions of the Agreement, **the Russian version prevails**.
>
> The English version is published at https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/CLA-en.md

This offer agreement (hereinafter the Offer, or the Agreement) is addressed to
natural and legal persons (hereinafter Contributors) and is the official public
offer of Filippov Vitaliy Vladimirovich (hereinafter the Author) of the Vitastor
software, certificate of the Federal Service for Intellectual Property (Rospatent)
# 2021617829 dated 20 May 2021 (hereinafter the Program), as follows:

1. Terms and definitions \
1.1. Repository: an electronic storage containing the source code of the Program. \
1.2. Contribution: a result of the Contributor's intellectual activity comprising
changes or additions to the source code of the Program which the Contributor
wishes to include in the Program for further use and distribution by the Author,
and for that purpose sends to the Author. \
1.3. Contributor: a natural or legal person contributing Contributions to the code of the Program. \
1.4. CC RF: the Civil Code of the Russian Federation.
2. Subject of the Offer \
2.1. The subject of this Offer is the Contributions sent by the Contributor to the Author. \
2.2. The Contributor grants the Author the right to use Contributions at his own
discretion and without any need for prior approval by the Contributor or any other
third party, under a simple (non-exclusive), royalty-free, irrevocable license,
in whole or in part, as part of the Program or of other programs, products or
services with either open or closed source code, by any means not contrary to
law, including, but not limited to, the following: \
2.2.1. To run and use Contributions for any tasks; \
2.2.2. To distribute, import and communicate Contributions to the public; \
2.2.3. To modify, abridge and amend Contributions, and to furnish Contributions
with comments, illustrations or explanations when using them; \
2.2.4. To create other results of intellectual activity based on Contributions,
including derivative and composite works; \
2.2.5. To translate Contributions into other languages, including other programming languages; \
2.2.6. To carry out rental and public display of Contributions; \
2.2.7. To use Contributions under any trade name, trademark (service mark)
or other designation, or without one. \
2.3. The Contributor grants the Author the right to sublicense the received
rights to Contributions to third parties on any terms at the Author's discretion. \
2.4. The Contributor grants the Author the rights to Contributions for the
territory of the whole world. \
2.5. The Contributor grants the Author the rights for the entire duration of
the Contributor's exclusive rights to the Contributions. \
2.6. The Contributor grants the Author the rights to Contributions free of charge. \
2.7. The Contributor allows the Author to independently determine the order,
manner and place of indication of his name, details and/or pseudonym when
including Contributions into the Program or other programs, products or services.
3. Acceptance of the Offer \
3.1. The Contributor may send Contributions to the Author via mirrors of the
official Repository of the Program at https://git.yourcmc.ru/vitalif/vitastor/ or
https://github.com/vitalif/vitastor/ in the form of a pull request,
or in written form, or by means of any other electronic communication channels,
for example, e-mail or messengers. \
3.2. The fact of the Contributor sending Contributions to the Author by any means with one of the remarks
“I accept Vitastor CLA agreement: https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/CLA-en.md”
or “Я принимаю соглашение Vitastor CLA: https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/CLA-ru.md”
constitutes full and unconditional acceptance of the terms of this Offer by the
Contributor, i.e. the Contributor is deemed to have read this public agreement
and, in accordance with the CC RF, is recognized as a person who has entered
into contractual relations with the Author on the basis of this Offer. \
3.3. The date of acceptance of this Offer is the date of such sending.
4. Rights and obligations of the Parties \
4.1. The Contributor reserves the right to use Contributions by any lawful
means not contrary to this Agreement. \
4.2. The Author has the right to refuse to include Contributions into the
Program without explanation at any moment at his own discretion.
5. Representations and warranties \
5.1. The person sending Contributions for the purpose of their inclusion in
the Program warrants that he is the Contributor or a representative of the
Contributor. The name or details of the Contributor must be indicated when
sending them to the Author of the Program. \
5.2. The Contributor warrants that he is the legal owner of the exclusive
rights to the Contributions. \
5.3. The Contributor warrants that at the moment of acceptance of this Offer
he knows nothing (and could not have known anything) about any third-party
rights to the Contributions or any part thereof sent to the Author which could
be infringed by the transfer of Contributions under this Agreement. \
5.4. The Contributor warrants that he is a legally capable person and has all
the rights necessary to conclude the Agreement. \
5.5. The Contributor warrants that the Contributions do not contain malware
or any other information prohibited from distribution under the laws of the
Russian Federation.
6. Termination of the Offer \
6.1. This agreement may be terminated by agreement of the parties, formalised
in written form, and also as a result of its rescission on the grounds
provided by law.
7. Final provisions \
7.1. The Contributor may, at his wish, sign this Agreement in written form. \
7.2. This agreement is effective from the moment of its conclusion and until
the expiration of the Contributor's exclusive rights to the Contributions. \
7.3. The Author has the right to unilaterally make changes and additions to
the agreement without specially notifying Contributors about it. The new
version of the document comes into force 3 (three) calendar days after being
published in the official Repository of the Program at the Internet address
[https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/CLA-ru.md](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/CLA-ru.md).
Contributors track the current terms of the Offer themselves. \
7.4. All disputes arising between the parties in the course of their
interaction under this agreement shall be resolved by negotiation. If the
parties fail to settle disputes by negotiation, they shall be resolved in the
Moscow Arbitration court.

[CMakeLists.txt]

@@ -2,6 +2,6 @@ cmake_minimum_required(VERSION 2.8.12)

 project(vitastor)

-set(VERSION "1.3.1")
+set(VERSION "1.2.0")

 add_subdirectory(src)

[CSI driver Dockerfile]

@@ -1,15 +1,14 @@
 # Compile stage
-FROM golang:bookworm AS build
+FROM golang:buster AS build
 ADD go.sum go.mod /app/
 RUN cd /app; CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go mod download -x
 ADD . /app
-RUN perl -i -e '$/ = undef; while(<>) { s/\n\s*(\{\s*\n)/$1\n/g; s/\}(\s*\n\s*)else\b/$1} else/g; print; }' `find /app -name '*.go'` && \
-    cd /app && \
-    CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -o vitastor-csi
+RUN perl -i -e '$/ = undef; while(<>) { s/\n\s*(\{\s*\n)/$1\n/g; s/\}(\s*\n\s*)else\b/$1} else/g; print; }' `find /app -name '*.go'`
+RUN cd /app; CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -o vitastor-csi

 # Final stage
-FROM debian:bookworm
+FROM debian:buster
 LABEL maintainers="Vitaliy Filippov <vitalif@yourcmc.ru>"
 LABEL description="Vitastor CSI Driver"
@@ -19,30 +18,19 @@ ENV CSI_ENDPOINT=""

 RUN apt-get update && \
     apt-get install -y wget && \
+    (echo deb http://deb.debian.org/debian buster-backports main > /etc/apt/sources.list.d/backports.list) && \
     (echo "APT::Install-Recommends false;" > /etc/apt/apt.conf) && \
     apt-get update && \
-    apt-get install -y e2fsprogs xfsprogs kmod iproute2 \
-    # dependencies of qemu-storage-daemon
-    libnuma1 liburing2 libglib2.0-0 libfuse3-3 libaio1 libzstd1 libnettle8 \
-    libgmp10 libhogweed6 libp11-kit0 libidn2-0 libunistring2 libtasn1-6 libpcre2-8-0 libffi8 && \
+    apt-get install -y e2fsprogs xfsprogs kmod && \
     apt-get clean && \
     (echo options nbd nbds_max=128 > /etc/modprobe.d/nbd.conf)

 COPY --from=build /app/vitastor-csi /bin/

-RUN (echo deb http://vitastor.io/debian bookworm main > /etc/apt/sources.list.d/vitastor.list) && \
-    ((echo 'Package: *'; echo 'Pin: origin "vitastor.io"'; echo 'Pin-Priority: 1000') > /etc/apt/preferences.d/vitastor.pref) && \
+RUN (echo deb http://vitastor.io/debian buster main > /etc/apt/sources.list.d/vitastor.list) && \
     wget -q -O /etc/apt/trusted.gpg.d/vitastor.gpg https://vitastor.io/debian/pubkey.gpg && \
     apt-get update && \
     apt-get install -y vitastor-client && \
-    wget https://vitastor.io/archive/qemu/qemu-bookworm-8.1.2%2Bds-1%2Bvitastor1/qemu-utils_8.1.2%2Bds-1%2Bvitastor1_amd64.deb && \
-    wget https://vitastor.io/archive/qemu/qemu-bookworm-8.1.2%2Bds-1%2Bvitastor1/qemu-block-extra_8.1.2%2Bds-1%2Bvitastor1_amd64.deb && \
-    dpkg -x qemu-utils*.deb tmp1 && \
-    dpkg -x qemu-block-extra*.deb tmp1 && \
-    cp -a tmp1/usr/bin/qemu-storage-daemon /usr/bin/ && \
-    mkdir -p /usr/lib/x86_64-linux-gnu/qemu && \
-    cp -a tmp1/usr/lib/x86_64-linux-gnu/qemu/block-vitastor.so /usr/lib/x86_64-linux-gnu/qemu/ && \
-    rm -rf tmp1 *.deb && \
     apt-get clean

 ENTRYPOINT ["/bin/vitastor-csi"]

[CSI driver Makefile]

@@ -1,4 +1,4 @@
-VERSION ?= v1.3.1
+VERSION ?= v1.2.0

 all: build push

[CSI ConfigMap manifest]

@@ -2,7 +2,6 @@
 apiVersion: v1
 kind: ConfigMap
 data:
-  # You can add multiple configuration files here to use a multi-cluster setup
   vitastor.conf: |-
     {"etcd_address":"http://192.168.7.2:2379","etcd_prefix":"/vitastor"}
 metadata:

View File

@@ -49,7 +49,7 @@ spec:
           capabilities:
             add: ["SYS_ADMIN"]
           allowPrivilegeEscalation: true
-          image: vitalif/vitastor-csi:v1.3.1
+          image: vitalif/vitastor-csi:v1.2.0
           args:
             - "--node=$(NODE_ID)"
             - "--endpoint=$(CSI_ENDPOINT)"
@@ -82,8 +82,6 @@ spec:
               name: host-sys
             - mountPath: /run/mount
               name: host-mount
-            - mountPath: /run/vitastor-csi
-              name: run-vitastor-csi
             - mountPath: /lib/modules
               name: lib-modules
               readOnly: true
@@ -134,9 +132,6 @@ spec:
         - name: host-mount
          hostPath:
            path: /run/mount
-        - name: run-vitastor-csi
-          hostPath:
-            path: /run/vitastor-csi
        - name: lib-modules
          hostPath:
            path: /lib/modules

[CSI provisioner manifest]

@@ -121,7 +121,7 @@ spec:
           privileged: true
           capabilities:
             add: ["SYS_ADMIN"]
-          image: vitalif/vitastor-csi:v1.3.1
+          image: vitalif/vitastor-csi:v1.2.0
           args:
             - "--node=$(NODE_ID)"
             - "--endpoint=$(CSI_ENDPOINT)"

[CSI StorageClass manifest]

@@ -12,6 +12,9 @@ parameters:
   etcdVolumePrefix: ""
   poolId: "1"
   # you can choose other configuration file if you have it in the config map
-  # different etcd URLs and prefixes should also be put in the config
   #configPath: "/etc/vitastor/vitastor.conf"
+  # you can also specify etcdUrl here, maybe to connect to another Vitastor cluster
+  # multiple etcdUrls may be specified, delimited by comma
+  #etcdUrl: "http://192.168.7.2:2379"
+  #etcdPrefix: "/vitastor"
 allowVolumeExpansion: true
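
For illustration, a StorageClass using the etcdUrl/etcdPrefix parameters added on the right-hand side might look like this; all addresses and the class name are hypothetical, and the provisioner name comes from the driver constants shown below:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vitastor-other-cluster   # hypothetical name
provisioner: csi.vitastor.io
parameters:
  poolId: "1"
  # connect to another Vitastor cluster instead of the one in vitastor.conf
  etcdUrl: "http://10.0.0.1:2379,http://10.0.0.2:2379"
  etcdPrefix: "/vitastor"
allowVolumeExpansion: true
```
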

[CSI driver constants (Go)]

@@ -5,7 +5,7 @@ package vitastor

 const (
 	vitastorCSIDriverName    = "csi.vitastor.io"
-	vitastorCSIDriverVersion = "1.3.1"
+	vitastorCSIDriverVersion = "1.2.0"
 )

 // Config struct fills the parameters of request or user input

[CSI ControllerServer (Go)]

@@ -62,7 +62,7 @@ func NewControllerServer(driver *Driver) *ControllerServer
 	}
 }

-func GetConnectionParams(params map[string]string) (map[string]string, error)
+func GetConnectionParams(params map[string]string) (map[string]string, []string, string)
 {
 	ctxVars := make(map[string]string)
 	configPath := params["configPath"]
@@ -75,69 +75,71 @@ func GetConnectionParams(params map[string]string) (map[string]string, error)
 		ctxVars["configPath"] = configPath
 	}
 	config := make(map[string]interface{})
-	configFD, err := os.Open(configPath)
-	if (err != nil)
-	{
-		return nil, err
-	}
-	defer configFD.Close()
-	data, _ := ioutil.ReadAll(configFD)
-	json.Unmarshal(data, &config)
-	// Check etcd URL in the config, but do not use the explicit etcdUrl
-	// parameter for CLI calls, otherwise users won't be able to later
-	// change them - storage class parameters are saved in volume IDs
+	if configFD, err := os.Open(configPath); err == nil
+	{
+		defer configFD.Close()
+		data, _ := ioutil.ReadAll(configFD)
+		json.Unmarshal(data, &config)
+	}
+	// Try to load prefix & etcd URL from the config
 	var etcdUrl []string
-	switch config["etcd_address"].(type)
+	if (params["etcdUrl"] != "")
 	{
-	case string:
-		url := strings.TrimSpace(config["etcd_address"].(string))
-		if (url != "")
-		{
-			etcdUrl = strings.Split(url, ",")
-		}
-	case []string:
-		etcdUrl = config["etcd_address"].([]string)
-	case []interface{}:
-		for _, url := range config["etcd_address"].([]interface{})
-		{
-			s, ok := url.(string)
-			if (ok)
-			{
-				etcdUrl = append(etcdUrl, s)
-			}
-		}
+		ctxVars["etcdUrl"] = params["etcdUrl"]
+		etcdUrl = strings.Split(params["etcdUrl"], ",")
 	}
 	if (len(etcdUrl) == 0)
 	{
-		return nil, status.Error(codes.InvalidArgument, "etcd_address is missing in "+configPath)
-	}
-	return ctxVars, nil
-}
-
-func system(program string, args ...string) ([]byte, []byte, error)
-{
-	klog.Infof("Running "+program+" "+strings.Join(args, " "))
-	c := exec.Command(program, args...)
-	var stdout, stderr bytes.Buffer
-	c.Stdout, c.Stderr = &stdout, &stderr
-	err := c.Run()
-	if (err != nil)
-	{
-		stdoutStr, stderrStr := string(stdout.Bytes()), string(stderr.Bytes())
-		klog.Errorf(program+" "+strings.Join(args, " ")+" failed: %s, status %s\n", stdoutStr+stderrStr, err)
-		return nil, nil, status.Error(codes.Internal, stdoutStr+stderrStr+" (status "+err.Error()+")")
-	}
-	return stdout.Bytes(), stderr.Bytes(), nil
-}
+		switch config["etcd_address"].(type)
+		{
+		case string:
+			etcdUrl = strings.Split(config["etcd_address"].(string), ",")
+		case []string:
+			etcdUrl = config["etcd_address"].([]string)
+		}
+	}
+	etcdPrefix := params["etcdPrefix"]
+	if (etcdPrefix == "")
+	{
+		etcdPrefix, _ = config["etcd_prefix"].(string)
+		if (etcdPrefix == "")
+		{
+			etcdPrefix = "/vitastor"
+		}
+	}
+	else
+	{
+		ctxVars["etcdPrefix"] = etcdPrefix
+	}
+	return ctxVars, etcdUrl, etcdPrefix
+}

 func invokeCLI(ctxVars map[string]string, args []string) ([]byte, error)
 {
+	if (ctxVars["etcdUrl"] != "")
+	{
+		args = append(args, "--etcd_address", ctxVars["etcdUrl"])
+	}
+	if (ctxVars["etcdPrefix"] != "")
+	{
+		args = append(args, "--etcd_prefix", ctxVars["etcdPrefix"])
+	}
 	if (ctxVars["configPath"] != "")
 	{
 		args = append(args, "--config_path", ctxVars["configPath"])
 	}
-	stdout, _, err := system("/usr/bin/vitastor-cli", args...)
-	return stdout, err
+	c := exec.Command("/usr/bin/vitastor-cli", args...)
+	var stdout, stderr bytes.Buffer
+	c.Stdout = &stdout
+	c.Stderr = &stderr
+	err := c.Run()
+	stderrStr := string(stderr.Bytes())
+	if (err != nil)
+	{
+		klog.Errorf("vitastor-cli %s failed: %s, status %s\n", strings.Join(args, " "), stderrStr, err)
+		return nil, status.Error(codes.Internal, stderrStr+" (status "+err.Error()+")")
+	}
+	return stdout.Bytes(), nil
 }

 // Create the volume
@@ -172,10 +174,10 @@ func (cs *ControllerServer) CreateVolume(ctx context.Context, req *csi.CreateVol
 		volSize = ((capRange.GetRequiredBytes() + MB - 1) / MB) * MB
 	}

-	ctxVars, err := GetConnectionParams(req.Parameters)
-	if (err != nil)
+	ctxVars, etcdUrl, _ := GetConnectionParams(req.Parameters)
+	if (len(etcdUrl) == 0)
 	{
-		return nil, err
+		return nil, status.Error(codes.InvalidArgument, "no etcdUrl in storage class configuration and no etcd_address in vitastor.conf")
 	}

 	args := []string{ "create", volName, "-s", fmt.Sprintf("%v", volSize), "--pool", fmt.Sprintf("%v", poolId) }
@@ -205,7 +207,7 @@ func (cs *ControllerServer) CreateVolume(ctx context.Context, req *csi.CreateVol
 	}

 	// Create image using vitastor-cli
-	_, err = invokeCLI(ctxVars, args)
+	_, err := invokeCLI(ctxVars, args)
 	if (err != nil)
 	{
 		if (strings.Index(err.Error(), "already exists") > 0)
@@ -255,11 +257,7 @@ func (cs *ControllerServer) DeleteVolume(ctx context.Context, req *csi.DeleteVol
 	}
 	volName := volVars["name"]

-	ctxVars, err := GetConnectionParams(volVars)
-	if (err != nil)
-	{
-		return nil, err
-	}
+	ctxVars, _, _ := GetConnectionParams(volVars)

 	_, err = invokeCLI(ctxVars, []string{ "rm", volName })
 	if (err != nil)
@@ -471,11 +469,7 @@ func (cs *ControllerServer) DeleteSnapshot(ctx context.Context, req *csi.DeleteS
 	volName := volVars["name"]
 	snapName := volVars["snapshot"]

-	ctxVars, err := GetConnectionParams(volVars)
-	if (err != nil)
-	{
-		return nil, err
-	}
+	ctxVars, _, _ := GetConnectionParams(volVars)

 	_, err = invokeCLI(ctxVars, []string{ "rm", volName+"@"+snapName })
 	if (err != nil)
@@ -502,11 +496,7 @@ func (cs *ControllerServer) ListSnapshots(ctx context.Context, req *csi.ListSnap
 		return nil, status.Error(codes.Internal, "volume ID not in JSON format")
 	}
 	volName := volVars["name"]

-	ctxVars, err := GetConnectionParams(volVars)
-	if (err != nil)
-	{
-		return nil, err
-	}
+	ctxVars, _, _ := GetConnectionParams(volVars)

 	inodeCfg, err := invokeList(ctxVars, volName+"@*", false)
 	if (err != nil)
@@ -565,11 +555,7 @@ func (cs *ControllerServer) ControllerExpandVolume(ctx context.Context, req *csi
 		return nil, status.Error(codes.Internal, "volume ID not in JSON format")
 	}
 	volName := volVars["name"]

-	ctxVars, err := GetConnectionParams(volVars)
-	if (err != nil)
-	{
-		return nil, err
-	}
+	ctxVars, _, _ := GetConnectionParams(volVars)

 	inodeCfg, err := invokeList(ctxVars, volName, true)
 	if (err != nil)
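
The invokeCLI() pattern on the right-hand side - append resolved connection parameters as flags, run the binary, return stdout and surface stderr on failure - can be reproduced as a small self-contained Go sketch. The helper name is hypothetical, and `echo` stands in for vitastor-cli so the example runs on any Unix-like system:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runCLI mirrors invokeCLI from this diff: build an argument list from
// resolved connection parameters, run the binary, and return stdout,
// turning a non-zero exit into an error that carries stderr.
func runCLI(binary string, ctxVars map[string]string, args []string) ([]byte, error) {
	if ctxVars["etcdUrl"] != "" {
		args = append(args, "--etcd_address", ctxVars["etcdUrl"])
	}
	if ctxVars["configPath"] != "" {
		args = append(args, "--config_path", ctxVars["configPath"])
	}
	c := exec.Command(binary, args...)
	var stdout, stderr bytes.Buffer
	c.Stdout, c.Stderr = &stdout, &stderr
	if err := c.Run(); err != nil {
		return nil, fmt.Errorf("%s failed: %s (%w)", binary, stderr.String(), err)
	}
	return stdout.Bytes(), nil
}

func main() {
	out, err := runCLI("echo", map[string]string{"etcdUrl": "http://127.0.0.1:2379"}, []string{"ls"})
	fmt.Println(string(out), err)
}
```
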

[CSI NodeServer (Go)]

@@ -5,16 +5,11 @@ package vitastor

 import (
 	"context"
-	"errors"
-	"encoding/json"
-	"fmt"
 	"os"
 	"os/exec"
-	"path/filepath"
-	"strconv"
+	"encoding/json"
 	"strings"
-	"syscall"
-	"time"
+	"bytes"

 	"google.golang.org/grpc/codes"
 	"google.golang.org/grpc/status"
@@ -30,102 +25,16 @@ import (
 type NodeServer struct
 {
 	*Driver
-	useVduse        bool
-	stateDir        string
 	mounter         mount.Interface
-	restartInterval time.Duration
 }

-type DeviceState struct
-{
-	ConfigPath string `json:"configPath"`
-	VdpaId     string `json:"vdpaId"`
-	Image      string `json:"image"`
-	Blockdev   string `json:"blockdev"`
-	Readonly   bool   `json:"readonly"`
-	PidFile    string `json:"pidFile"`
-}

 // NewNodeServer create new instance node
 func NewNodeServer(driver *Driver) *NodeServer
 {
-	stateDir := os.Getenv("STATE_DIR")
-	if (stateDir == "")
-	{
-		stateDir = "/run/vitastor-csi"
-	}
-	if (stateDir[len(stateDir)-1] != '/')
-	{
-		stateDir += "/"
-	}
-	ns := &NodeServer{
+	return &NodeServer{
 		Driver:  driver,
-		useVduse: checkVduseSupport(),
-		stateDir: stateDir,
 		mounter: mount.New(""),
 	}
-	if (ns.useVduse)
-	{
-		ns.restoreVduseDaemons()
-		dur, err := time.ParseDuration(os.Getenv("RESTART_INTERVAL"))
-		if (err != nil)
-		{
-			dur = 10 * time.Second
-		}
-		ns.restartInterval = dur
-		if (ns.restartInterval != time.Duration(0))
-		{
-			go ns.restarter()
-		}
-	}
-	return ns
 }

-func checkVduseSupport() bool
-{
-	// Check VDUSE support (vdpa, vduse, virtio-vdpa kernel modules)
-	vduse := true
-	for _, mod := range []string{"vdpa", "vduse", "virtio-vdpa"}
-	{
-		_, err := os.Stat("/sys/module/"+mod)
-		if (err != nil)
-		{
-			if (!errors.Is(err, os.ErrNotExist))
-			{
-				klog.Errorf("failed to check /sys/module/%s: %v", mod, err)
-			}
-			c := exec.Command("/sbin/modprobe", mod)
-			c.Stdout = os.Stderr
-			c.Stderr = os.Stderr
-			err := c.Run()
-			if (err != nil)
-			{
-				klog.Errorf("/sbin/modprobe %s failed: %v", mod, err)
-				vduse = false
-				break
-			}
-		}
-	}
-	// Check that vdpa tool functions
-	if (vduse)
-	{
-		c := exec.Command("/sbin/vdpa", "-j", "dev")
-		c.Stderr = os.Stderr
-		err := c.Run()
-		if (err != nil)
-		{
-			klog.Errorf("/sbin/vdpa -j dev failed: %v", err)
-			vduse = false
-		}
-	}
-	if (!vduse)
-	{
-		klog.Errorf(
-			"Your host apparently has no VDUSE support. VDUSE support disabled, NBD will be used to map devices."+
-			" For VDUSE you need at least Linux 5.15 and the following kernel modules: vdpa, virtio-vdpa, vduse.",
-		)
-	}
-	return vduse
-}

 // NodeStageVolume mounts the volume to a staging path on the node.
@@ -152,318 +61,6 @@ func Contains(list []string, s string) bool
 	return false
 }
func (ns *NodeServer) mapNbd(volName string, ctxVars map[string]string, readonly bool) (string, error)
{
// Map NBD device
// FIXME: Check if already mapped
args := []string{
"map", "--image", volName,
}
if (ctxVars["configPath"] != "")
{
args = append(args, "--config_path", ctxVars["configPath"])
}
if (readonly)
{
args = append(args, "--readonly", "1")
}
stdout, stderr, err := system("/usr/bin/vitastor-nbd", args...)
dev := strings.TrimSpace(string(stdout))
if (dev == "")
{
return "", fmt.Errorf("vitastor-nbd did not return the name of NBD device. output: %s", stderr)
}
return dev, err
}
func (ns *NodeServer) unmapNbd(devicePath string)
{
// unmap NBD device
unmapOut, unmapErr := exec.Command("/usr/bin/vitastor-nbd", "unmap", devicePath).CombinedOutput()
if (unmapErr != nil)
{
klog.Errorf("failed to unmap NBD device %s: %s, error: %v", devicePath, unmapOut, unmapErr)
}
}
func findByPidFile(pidFile string) (*os.Process, error)
{
klog.Infof("killing process with PID from file %s", pidFile)
pidBuf, err := os.ReadFile(pidFile)
if (err != nil)
{
return nil, err
}
pid, err := strconv.ParseInt(strings.TrimSpace(string(pidBuf)), 0, 64)
if (err != nil)
{
return nil, err
}
proc, err := os.FindProcess(int(pid))
if (err != nil)
{
return nil, err
}
return proc, nil
}
func killByPidFile(pidFile string) error
{
proc, err := findByPidFile(pidFile)
if (err != nil)
{
return err
}
return proc.Signal(syscall.SIGTERM)
}
func startStorageDaemon(vdpaId, volName, pidFile, configPath string, readonly bool) error
{
// Start qemu-storage-daemon
blockSpec := map[string]interface{}{
"node-name": "disk1",
"driver": "vitastor",
"image": volName,
"cache": map[string]bool{
"direct": true,
"no-flush": false,
},
"discard": "unmap",
}
if (configPath != "")
{
blockSpec["config-path"] = configPath
}
blockSpecJson, _ := json.Marshal(blockSpec)
writable := "true"
if (readonly)
{
writable = "false"
}
_, _, err := system(
"/usr/bin/qemu-storage-daemon", "--daemonize", "--pidfile", pidFile, "--blockdev", string(blockSpecJson),
"--export", "vduse-blk,id="+vdpaId+",node-name=disk1,name="+vdpaId+",num-queues=16,queue-size=128,writable="+writable,
)
return err
}
func (ns *NodeServer) mapVduse(volName string, ctxVars map[string]string, readonly bool) (string, string, error)
{
// Generate state file
stateFd, err := os.CreateTemp(ns.stateDir, "vitastor-vduse-*.json")
if (err != nil)
{
return "", "", err
}
stateFile := stateFd.Name()
stateFd.Close()
vdpaId := filepath.Base(stateFile)
vdpaId = vdpaId[0:len(vdpaId)-5] // remove ".json"
pidFile := ns.stateDir + vdpaId + ".pid"
// Map VDUSE device via qemu-storage-daemon
err = startStorageDaemon(vdpaId, volName, pidFile, ctxVars["configPath"], readonly)
if (err == nil)
{
// Add device to VDPA bus
_, _, err = system("/sbin/vdpa", "-j", "dev", "add", "name", vdpaId, "mgmtdev", "vduse")
if (err == nil)
{
// Find block device name
var matches []string
matches, err = filepath.Glob("/sys/bus/vdpa/devices/"+vdpaId+"/virtio*/block/*")
if (err == nil && len(matches) == 0)
{
err = errors.New("/sys/bus/vdpa/devices/"+vdpaId+"/virtio*/block/* is not found")
}
if (err == nil)
{
blockdev := "/dev/"+filepath.Base(matches[0])
_, err = os.Stat(blockdev)
if (err == nil)
{
// Generate state file
stateJSON, _ := json.Marshal(&DeviceState{
ConfigPath: ctxVars["configPath"],
VdpaId: vdpaId,
Image: volName,
Blockdev: blockdev,
Readonly: readonly,
PidFile: pidFile,
})
err = os.WriteFile(stateFile, stateJSON, 0600)
if (err == nil)
{
return blockdev, vdpaId, nil
}
}
}
}
killErr := killByPidFile(pidFile)
if (killErr != nil)
{
klog.Errorf("Failed to kill started qemu-storage-daemon: %v", killErr)
}
os.Remove(stateFile)
os.Remove(pidFile)
}
return "", "", err
}
func (ns *NodeServer) unmapVduse(devicePath string)
{
if (len(devicePath) < 6 || devicePath[0:6] != "/dev/v")
{
klog.Errorf("%s does not start with /dev/v", devicePath)
return
}
vduseDev, err := os.Readlink("/sys/block/"+devicePath[5:])
if (err != nil)
{
klog.Errorf("%s is not a symbolic link to VDUSE device (../devices/virtual/vduse/xxx): %v", devicePath, err)
return
}
vdpaId := ""
p := strings.Index(vduseDev, "/vduse/")
if (p >= 0)
{
vduseDev = vduseDev[p+7:]
p = strings.Index(vduseDev, "/")
if (p >= 0)
{
vdpaId = vduseDev[0:p]
}
}
if (vdpaId == "")
{
klog.Errorf("%s is not a symbolic link to VDUSE device (../devices/virtual/vduse/xxx), but is %v", devicePath, vduseDev)
return
}
ns.unmapVduseById(vdpaId)
}
func (ns *NodeServer) unmapVduseById(vdpaId string)
{
_, err := os.Stat("/sys/bus/vdpa/devices/"+vdpaId)
if (err != nil)
{
klog.Errorf("failed to stat /sys/bus/vdpa/devices/"+vdpaId+": %v", err)
}
else
{
_, _, _ = system("/sbin/vdpa", "-j", "dev", "del", vdpaId)
}
stateFile := ns.stateDir + vdpaId + ".json"
os.Remove(stateFile)
pidFile := ns.stateDir + vdpaId + ".pid"
_, err = os.Stat(pidFile)
if (os.IsNotExist(err))
{
// ok, already killed
}
else if (err != nil)
{
klog.Errorf("Failed to stat %v: %v", pidFile, err)
return
}
else
{
err = killByPidFile(pidFile)
if (err != nil)
{
klog.Errorf("Failed to kill started qemu-storage-daemon: %v", err)
}
os.Remove(pidFile)
}
}
func (ns *NodeServer) restarter()
{
// Restart dead VDUSE daemons at regular intervals
// Otherwise volume I/O may hang in case of a qemu-storage-daemon crash
// Moreover, it may lead to a kernel panic of the kernel is configured to
// panic on hung tasks
ticker := time.NewTicker(ns.restartInterval)
defer ticker.Stop()
for
{
<-ticker.C
ns.restoreVduseDaemons()
}
}
func (ns *NodeServer) restoreVduseDaemons()
{
pattern := ns.stateDir+"vitastor-vduse-*.json"
matches, err := filepath.Glob(pattern)
if (err != nil)
{
klog.Errorf("failed to list %s: %v", pattern, err)
}
if (len(matches) == 0)
{
return
}
devList := make(map[string]interface{})
// example output: {"dev":{"test1":{"type":"block","mgmtdev":"vduse","vendor_id":0,"max_vqs":16,"max_vq_size":128}}}
devListJSON, _, err := system("/sbin/vdpa", "-j", "dev", "list")
if (err != nil)
{
return
}
err = json.Unmarshal(devListJSON, &devList)
devs, ok := devList["dev"].(map[string]interface{})
if (err != nil || !ok)
{
klog.Errorf("/sbin/vdpa -j dev list returned bad JSON (error %v): %v", err, string(devListJSON))
return
}
for _, stateFile := range matches
{
vdpaId := filepath.Base(stateFile)
vdpaId = vdpaId[0:len(vdpaId)-5]
// Check if VDPA device is still added to the bus
if (devs[vdpaId] != nil)
{
// Check if the storage daemon is still active
pidFile := ns.stateDir + vdpaId + ".pid"
exists := false
proc, err := findByPidFile(pidFile)
if (err == nil)
{
exists = proc.Signal(syscall.Signal(0)) == nil
}
if (!exists)
{
// Restart daemon
stateJSON, err := os.ReadFile(stateFile)
if (err != nil)
{
klog.Warningf("error reading state file %v: %v", stateFile, err)
}
else
{
var state DeviceState
err := json.Unmarshal(stateJSON, &state)
if (err != nil)
{
klog.Warningf("state file %v contains invalid JSON (error %v): %v", stateFile, err, string(stateJSON))
}
else
{
klog.Warningf("restarting storage daemon for volume %v (VDPA ID %v)", state.Image, vdpaId)
_ = startStorageDaemon(vdpaId, state.Image, pidFile, state.ConfigPath, state.Readonly)
}
}
}
}
else
{
// Unused, clean it up
ns.unmapVduseById(vdpaId)
}
}
}
 // NodePublishVolume mounts the volume mounted to the staging path to the target path
 func (ns *NodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error)
 {
@@ -484,13 +81,13 @@ func (ns *NodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublis
 			if (err != nil)
 			{
 				klog.Errorf("failed to create block device mount target %s with error: %v", targetPath, err)
-				return nil, err
+				return nil, status.Error(codes.Internal, err.Error())
 			}
 			err = pathFile.Close()
 			if (err != nil)
 			{
 				klog.Errorf("failed to close %s with error: %v", targetPath, err)
-				return nil, err
+				return nil, status.Error(codes.Internal, err.Error())
 			}
 		}
 		else
@@ -499,13 +96,13 @@ func (ns *NodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublis
 			if (err != nil)
 			{
 				klog.Errorf("failed to create fs mount target %s with error: %v", targetPath, err)
-				return nil, err
+				return nil, status.Error(codes.Internal, err.Error())
 			}
 		}
 	}
 	else
 	{
-		return nil, err
+		return nil, status.Error(codes.Internal, err.Error())
 	}
 }
@@ -517,25 +114,38 @@ func (ns *NodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublis
 	}
 	volName := ctxVars["name"]

-	_, err = GetConnectionParams(ctxVars)
-	if (err != nil)
-	{
-		return nil, err
-	}
-	var devicePath, vdpaId string
-	if (!ns.useVduse)
-	{
-		devicePath, err = ns.mapNbd(volName, ctxVars, req.GetReadonly())
-	}
-	else
-	{
-		devicePath, vdpaId, err = ns.mapVduse(volName, ctxVars, req.GetReadonly())
-	}
-	if (err != nil)
-	{
-		return nil, err
-	}
+	_, etcdUrl, etcdPrefix := GetConnectionParams(ctxVars)
+	if (len(etcdUrl) == 0)
+	{
+		return nil, status.Error(codes.InvalidArgument, "no etcdUrl in storage class configuration and no etcd_address in vitastor.conf")
+	}
+	// Map NBD device
+	// FIXME: Check if already mapped
+	args := []string{
+		"map", "--etcd_address", strings.Join(etcdUrl, ","),
+		"--etcd_prefix", etcdPrefix,
+		"--image", volName,
+	};
+	if (ctxVars["configPath"] != "")
+	{
+		args = append(args, "--config_path", ctxVars["configPath"])
+	}
+	if (req.GetReadonly())
+	{
+		args = append(args, "--readonly", "1")
+	}
+	c := exec.Command("/usr/bin/vitastor-nbd", args...)
+	var stdout, stderr bytes.Buffer
+	c.Stdout, c.Stderr = &stdout, &stderr
+	err = c.Run()
+	stdoutStr, stderrStr := string(stdout.Bytes()), string(stderr.Bytes())
+	if (err != nil)
+	{
+		klog.Errorf("vitastor-nbd map failed: %s, status %s\n", stdoutStr+stderrStr, err)
+		return nil, status.Error(codes.Internal, stdoutStr+stderrStr+" (status "+err.Error()+")")
+	}
+	devicePath := strings.TrimSpace(stdoutStr)

 	diskMounter := &mount.SafeFormatAndMount{Interface: ns.mounter, Exec: utilexec.New()}
 	if (isBlock)
@@ -617,15 +227,13 @@ func (ns *NodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublis
 	return &csi.NodePublishVolumeResponse{}, nil

 unmap:
-	if (!ns.useVduse || len(devicePath) >= 8 && devicePath[0:8] == "/dev/nbd")
-	{
-		ns.unmapNbd(devicePath)
-	}
-	else
-	{
-		ns.unmapVduseById(vdpaId)
-	}
-	return nil, err
+	// unmap NBD device
+	unmapOut, unmapErr := exec.Command("/usr/bin/vitastor-nbd", "unmap", devicePath).CombinedOutput()
+	if (unmapErr != nil)
+	{
+		klog.Errorf("failed to unmap NBD device %s: %s, error: %v", devicePath, unmapOut, unmapErr)
+	}
+	return nil, status.Error(codes.Internal, err.Error())
 }
// NodeUnpublishVolume unmounts the volume from the target path // NodeUnpublishVolume unmounts the volume from the target path
@ -640,31 +248,25 @@ func (ns *NodeServer) NodeUnpublishVolume(ctx context.Context, req *csi.NodeUnpu
{ {
return nil, status.Error(codes.NotFound, "Target path not found") return nil, status.Error(codes.NotFound, "Target path not found")
} }
return nil, err return nil, status.Error(codes.Internal, err.Error())
} }
if (devicePath == "") if (devicePath == "")
{ {
// volume not mounted return nil, status.Error(codes.NotFound, "Volume not mounted")
klog.Warningf("%s is not a mountpoint, deleting", targetPath)
os.Remove(targetPath)
return &csi.NodeUnpublishVolumeResponse{}, nil
} }
// unmount // unmount
err = mount.CleanupMountPoint(targetPath, ns.mounter, false) err = mount.CleanupMountPoint(targetPath, ns.mounter, false)
if (err != nil) if (err != nil)
{ {
return nil, err return nil, status.Error(codes.Internal, err.Error())
} }
// unmap NBD device // unmap NBD device
if (refCount == 1) if (refCount == 1)
{ {
if (!ns.useVduse) unmapOut, unmapErr := exec.Command("/usr/bin/vitastor-nbd", "unmap", devicePath).CombinedOutput()
if (unmapErr != nil)
{ {
ns.unmapNbd(devicePath) klog.Errorf("failed to unmap NBD device %s: %s, error: %v", devicePath, unmapOut, unmapErr)
}
else
{
ns.unmapVduse(devicePath)
} }
} }
return &csi.NodeUnpublishVolumeResponse{}, nil return &csi.NodeUnpublishVolumeResponse{}, nil
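
The removed VDUSE code above keeps qemu-storage-daemon instances alive by probing PID files; the liveness check it relies on can be sketched as a self-contained Go program (hypothetical paths, Linux-only semantics):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pidFileAlive mirrors the probe used by restoreVduseDaemons in the removed
// code: read a PID from a file and send signal 0, which delivers nothing but
// fails if the process no longer exists.
func pidFileAlive(pidFile string) bool {
	buf, err := os.ReadFile(pidFile)
	if err != nil {
		return false
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(buf)))
	if err != nil {
		return false
	}
	proc, err := os.FindProcess(pid)
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	// Hypothetical state dir; on Linux, os.FindProcess always succeeds and
	// Signal(0) performs the real existence check.
	fmt.Println(pidFileAlive("/run/vitastor-csi/example.pid"))
}
```
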

debian/changelog

@@ -1,10 +1,10 @@
-vitastor (1.3.1-1) unstable; urgency=medium
+vitastor (1.2.0-1) unstable; urgency=medium

   * Bugfixes

  -- Vitaliy Filippov <vitalif@yourcmc.ru>  Fri, 03 Jun 2022 02:09:44 +0300

-vitastor (0.7.0-1) unstable; urgency=medium
+vitastor (1.2.0-1) unstable; urgency=medium

   * Implement NFS proxy
   * Add documentation

[Debian QEMU package build Dockerfile]

@@ -7,7 +7,7 @@ ARG REL=

 WORKDIR /root

-RUN if [ "$REL" = "buster" -o "$REL" = "bullseye" -o "$REL" = "bookworm" ]; then \
+RUN if [ "$REL" = "buster" -o "$REL" = "bullseye" ]; then \
       echo "deb http://deb.debian.org/debian $REL-backports main" >> /etc/apt/sources.list; \
       echo >> /etc/apt/preferences; \
       echo 'Package: *' >> /etc/apt/preferences; \
@@ -45,7 +45,7 @@ RUN set -e; \
     rm -rf /root/packages/qemu-$REL/*; \
     cd /root/packages/qemu-$REL; \
     dpkg-source -x /root/qemu*.dsc; \
-    QEMU_VER=$(ls -d qemu*/ | perl -pe 's!^.*?(\d+\.\d+).*!$1!'); \
+    QEMU_VER=$(ls -d qemu*/ | perl -pe 's!^.*(\d+\.\d+).*!$1!'); \
     D=$(ls -d qemu*/); \
     cp /root/vitastor/patches/qemu-$QEMU_VER-vitastor.patch ./qemu-*/debian/patches; \
     echo qemu-$QEMU_VER-vitastor.patch >> $D/debian/patches/series; \

[Debian Vitastor package build Dockerfile]

@@ -35,8 +35,8 @@ RUN set -e -x; \
     mkdir -p /root/packages/vitastor-$REL; \
     rm -rf /root/packages/vitastor-$REL/*; \
     cd /root/packages/vitastor-$REL; \
-    cp -r /root/vitastor vitastor-1.3.1; \
-    cd vitastor-1.3.1; \
+    cp -r /root/vitastor vitastor-1.2.0; \
+    cd vitastor-1.2.0; \
     ln -s /root/fio-build/fio-*/ ./fio; \
     FIO=$(head -n1 fio/debian/changelog | perl -pe 's/^.*\((.*?)\).*$/$1/'); \
     ls /usr/include/linux/raw.h || cp ./debian/raw.h /usr/include/linux/raw.h; \
@@ -49,8 +49,8 @@ RUN set -e -x; \
     rm -rf a b; \
     echo "dep:fio=$FIO" > debian/fio_version; \
     cd /root/packages/vitastor-$REL; \
-    tar --sort=name --mtime='2020-01-01' --owner=0 --group=0 --exclude=debian -cJf vitastor_1.3.1.orig.tar.xz vitastor-1.3.1; \
-    cd vitastor-1.3.1; \
+    tar --sort=name --mtime='2020-01-01' --owner=0 --group=0 --exclude=debian -cJf vitastor_1.2.0.orig.tar.xz vitastor-1.2.0; \
+    cd vitastor-1.2.0; \
     V=$(head -n1 debian/changelog | perl -pe 's/^.*\((.*?)\).*$/$1/'); \
     DEBFULLNAME="Vitaliy Filippov <vitalif@yourcmc.ru>" dch -D $REL -v "$V""$REL" "Rebuild for $REL"; \
     DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage --jobs=auto -sa; \

[Client parameter docs (English)]

@@ -6,8 +6,8 @@

 # Client Parameters

-These parameters apply only to Vitastor clients (QEMU, fio, NBD and so on) and
-affect their interaction with the cluster.
+These parameters apply only to clients and affect their interaction with
+the cluster.

 - [client_max_dirty_bytes](#client_max_dirty_bytes)
 - [client_max_dirty_ops](#client_max_dirty_ops)
@@ -15,9 +15,6 @@ affect their interaction with the cluster.
 - [client_max_buffered_bytes](#client_max_buffered_bytes)
 - [client_max_buffered_ops](#client_max_buffered_ops)
 - [client_max_writeback_iodepth](#client_max_writeback_iodepth)
-- [nbd_timeout](#nbd_timeout)
-- [nbd_max_devices](#nbd_max_devices)
-- [nbd_max_part](#nbd_max_part)

 ## client_max_dirty_bytes
@@ -104,34 +101,3 @@ Multiple consecutive modified data regions are counted as 1 write here.
 - Can be changed online: yes

 Maximum number of parallel writes when flushing buffered data to the server.
## nbd_timeout
- Type: seconds
- Default: 300
Timeout for I/O operations for [NBD](../usage/nbd.en.md). If an operation
executes for longer than this timeout, including when your cluster is just
temporarily down for more than timeout, the NBD device will detach by itself
(and possibly break the mounted file system).
You can set timeout to 0 to never detach, but in that case you won't be
able to remove the kernel device at all if the NBD process dies - you'll have
to reboot the host.
## nbd_max_devices
- Type: integer
- Default: 64
Maximum number of NBD devices in the system. This value is passed as
`nbds_max` parameter for the nbd kernel module when vitastor-nbd autoloads it.
## nbd_max_part
- Type: integer
- Default: 3
Maximum number of partitions per NBD device. This value is passed as
`max_part` parameter for the nbd kernel module when vitastor-nbd autoloads it.
Note that (nbds_max)*(1+max_part) usually can't exceed 256.
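
The removed nbd_* options above are ordinary client configuration parameters; for reference, a minimal vitastor.conf carrying them could look like this (addresses illustrative, values are the documented defaults):

```json
{
  "etcd_address": "http://192.168.7.2:2379",
  "etcd_prefix": "/vitastor",
  "nbd_timeout": 300,
  "nbd_max_devices": 64,
  "nbd_max_part": 3
}
```
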

[Client parameter docs (Russian)]

@@ -6,7 +6,7 @@

 # Client Code Parameters

-These parameters apply only to Vitastor clients (QEMU, fio, NBD, etc.) and
+These parameters apply only to Vitastor clients (QEMU, fio, NBD) and
 affect the logic of their work with the cluster.

 - [client_max_dirty_bytes](#client_max_dirty_bytes)
@@ -15,9 +15,6 @@
 - [client_max_buffered_bytes](#client_max_buffered_bytes)
 - [client_max_buffered_ops](#client_max_buffered_ops)
 - [client_max_writeback_iodepth](#client_max_writeback_iodepth)
-- [nbd_timeout](#nbd_timeout)
-- [nbd_max_devices](#nbd_max_devices)
-- [nbd_max_part](#nbd_max_part)

 ## client_max_dirty_bytes
@@ -104,34 +101,3 @@
 - Can be changed online: yes

 Maximum number of parallel write operations when flushing buffers to the server.

## nbd_timeout

- Type: seconds
- Default: 300

Timeout for read/write operations over [NBD](../usage/nbd.ru.md). If an
operation runs longer than the timeout, including temporary unavailability
of the cluster for longer than the timeout, the NBD device will disconnect
by itself (and possibly break the mounted file system).

You can set the timeout to 0 so the device is never disconnected on timeout,
but in that case you won't be able to remove the device at all if the NBD
process dies - you will have to reboot the server.

## nbd_max_devices

- Type: integer
- Default: 64

Maximum number of NBD devices in the system. This value is passed to the
nbd kernel module as the `nbds_max` parameter when vitastor-nbd loads it.

## nbd_max_part

- Type: integer
- Default: 3

Maximum number of partitions per NBD device. This value is passed to the
nbd kernel module as the `max_part` parameter when vitastor-nbd loads it.

Keep in mind that (nbds_max)*(1+max_part) usually cannot exceed 256.


@@ -19,7 +19,6 @@ them, even without restarting by updating configuration in etcd.
 - [autosync_interval](#autosync_interval)
 - [autosync_writes](#autosync_writes)
 - [recovery_queue_depth](#recovery_queue_depth)
-- [recovery_sleep_us](#recovery_sleep_us)
 - [recovery_pg_switch](#recovery_pg_switch)
 - [recovery_sync_batch](#recovery_sync_batch)
 - [readonly](#readonly)
@@ -52,13 +51,6 @@ them, even without restarting by updating configuration in etcd.
 - [scrub_list_limit](#scrub_list_limit)
 - [scrub_find_best](#scrub_find_best)
 - [scrub_ec_max_bruteforce](#scrub_ec_max_bruteforce)
-- [recovery_tune_interval](#recovery_tune_interval)
-- [recovery_tune_util_low](#recovery_tune_util_low)
-- [recovery_tune_util_high](#recovery_tune_util_high)
-- [recovery_tune_client_util_low](#recovery_tune_client_util_low)
-- [recovery_tune_client_util_high](#recovery_tune_client_util_high)
-- [recovery_tune_agg_interval](#recovery_tune_agg_interval)
-- [recovery_tune_sleep_min_us](#recovery_tune_sleep_min_us)
 ## etcd_report_interval
@@ -143,24 +135,12 @@ operations before issuing an fsync operation internally.
 ## recovery_queue_depth
 - Type: integer
-- Default: 1
+- Default: 4
 - Can be changed online: yes
-Maximum recovery and rebalance operations initiated by each OSD in parallel.
-Note that each OSD talks to a lot of other OSDs so actual number of parallel
-recovery operations per each OSD is greater than just recovery_queue_depth.
-Increasing this parameter can speedup recovery if [auto-tuning](#recovery_tune_interval)
-allows it or if it is disabled.
+Maximum recovery operations per one primary OSD at any given moment of time.
+Currently it's the only parameter available to tune the speed of recovery
+and rebalancing, but it's planned to implement more.
-## recovery_sleep_us
-- Type: microseconds
-- Default: 0
-- Can be changed online: yes
-Delay for all recovery- and rebalance- related operations. If non-zero,
-such operations are artificially slowed down to reduce the impact on
-client I/O.
 ## recovery_pg_switch
@@ -528,81 +508,3 @@ the variant with most available equal copies is correct. For example, if
 you have 3 replicas and 1 of them differs, this one is considered to be
 corrupted. But if there is no "best" version with more copies than all
 others have then the object is also marked as inconsistent.
-## recovery_tune_interval
-- Type: seconds
-- Default: 1
-- Can be changed online: yes
-Interval at which OSD re-considers client and recovery load and automatically
-adjusts [recovery_sleep_us](#recovery_sleep_us). Recovery auto-tuning is
-disabled if recovery_tune_interval is set to 0.
-Auto-tuning targets utilization. Utilization is a measure of load and is
-equal to the product of iops and average latency (so it may be greater
-than 1). You set "low" and "high" client utilization thresholds and two
-corresponding target recovery utilization levels. OSD calculates desired
-recovery utilization from client utilization using linear interpolation
-and auto-tunes recovery operation delay to make actual recovery utilization
-match desired.
-This allows to reduce recovery/rebalance impact on client operations. It is
-of course impossible to remove it completely, but it should become adequate.
-In some tests rebalance could earlier drop client write speed from 1.5 GB/s
-to 50-100 MB/s, with default auto-tuning settings it now only reduces
-to ~1 GB/s.
-## recovery_tune_util_low
-- Type: number
-- Default: 0.1
-- Can be changed online: yes
-Desired recovery/rebalance utilization when client load is high, i.e. when
-it is at or above recovery_tune_client_util_high.
-## recovery_tune_util_high
-- Type: number
-- Default: 1
-- Can be changed online: yes
-Desired recovery/rebalance utilization when client load is low, i.e. when
-it is at or below recovery_tune_client_util_low.
-## recovery_tune_client_util_low
-- Type: number
-- Default: 0
-- Can be changed online: yes
-Client utilization considered "low".
-## recovery_tune_client_util_high
-- Type: number
-- Default: 0.5
-- Can be changed online: yes
-Client utilization considered "high".
-## recovery_tune_agg_interval
-- Type: integer
-- Default: 10
-- Can be changed online: yes
-The number of last auto-tuning iterations to use for calculating the
-delay as average. Lower values result in quicker response to client
-load change, higher values result in more stable delay. Default value of 10
-is usually fine.
-## recovery_tune_sleep_min_us
-- Type: microseconds
-- Default: 10
-- Can be changed online: yes
-Minimum possible value for auto-tuned recovery_sleep_us. Values lower
-than this value are changed to 0.

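As an aside, the linear interpolation described in the `recovery_tune_interval` text above is easy to state precisely. A minimal JavaScript sketch, assuming the documented defaults and simple clamping at the thresholds; the real logic lives in the OSD's C++ code and this is only an illustration:

```
// Illustration of the auto-tuning target described above; not the OSD's code.
// Defaults follow the documented parameter values.
function desired_recovery_util(client_util, cfg)
{
    const util_low = cfg.recovery_tune_util_low != null ? cfg.recovery_tune_util_low : 0.1;
    const util_high = cfg.recovery_tune_util_high != null ? cfg.recovery_tune_util_high : 1;
    const client_low = cfg.recovery_tune_client_util_low != null ? cfg.recovery_tune_client_util_low : 0;
    const client_high = cfg.recovery_tune_client_util_high != null ? cfg.recovery_tune_client_util_high : 0.5;
    if (client_util <= client_low)
        return util_high;
    if (client_util >= client_high)
        return util_low;
    // Linear interpolation between (client_low, util_high) and (client_high, util_low)
    return util_high + (client_util - client_low) * (util_low - util_high) / (client_high - client_low);
}

// With the defaults, client utilization 0.25 lies halfway between the
// thresholds, so the desired recovery utilization is (1 + 0.1) / 2 = 0.55:
console.log(desired_recovery_util(0.25, {}));
```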

@@ -20,7 +20,6 @@
 - [autosync_interval](#autosync_interval)
 - [autosync_writes](#autosync_writes)
 - [recovery_queue_depth](#recovery_queue_depth)
-- [recovery_sleep_us](#recovery_sleep_us)
 - [recovery_pg_switch](#recovery_pg_switch)
 - [recovery_sync_batch](#recovery_sync_batch)
 - [readonly](#readonly)
@@ -53,13 +52,6 @@
 - [scrub_list_limit](#scrub_list_limit)
 - [scrub_find_best](#scrub_find_best)
 - [scrub_ec_max_bruteforce](#scrub_ec_max_bruteforce)
-- [recovery_tune_interval](#recovery_tune_interval)
-- [recovery_tune_util_low](#recovery_tune_util_low)
-- [recovery_tune_util_high](#recovery_tune_util_high)
-- [recovery_tune_client_util_low](#recovery_tune_client_util_low)
-- [recovery_tune_client_util_high](#recovery_tune_client_util_high)
-- [recovery_tune_agg_interval](#recovery_tune_agg_interval)
-- [recovery_tune_sleep_min_us](#recovery_tune_sleep_min_us)
 ## etcd_report_interval
@@ -146,25 +138,13 @@ OSD, чтобы успевать очищать журнал - без них OSD
 ## recovery_queue_depth
 - Тип: целое число
-- Значение по умолчанию: 1
+- Значение по умолчанию: 4
 - Можно менять на лету: да
-Максимальное число параллельных операций восстановления, инициируемых одним
-OSD в любой момент времени. Имейте в виду, что каждый OSD обычно работает с
-многими другими OSD, так что на практике параллелизм восстановления больше,
-чем просто recovery_queue_depth. Увеличение значения этого параметра может
-ускорить восстановление если [автотюнинг скорости](#recovery_tune_interval)
-разрешает это или если он отключён.
+Максимальное число операций восстановления на одном первичном OSD в любой
+момент времени. На данный момент единственный параметр, который можно менять
+для ускорения или замедления восстановления и перебалансировки данных, но
+в планах реализация других параметров.
-## recovery_sleep_us
-- Тип: микросекунды
-- Значение по умолчанию: 0
-- Можно менять на лету: да
-Delay for all recovery- and rebalance- related operations. If non-zero,
-such operations are artificially slowed down to reduce the impact on
-client I/O.
 ## recovery_pg_switch
@@ -555,83 +535,3 @@ EC (кодов коррекции ошибок) с более, чем 1 диск
 считается некорректной. Однако, если "лучшую" версию с числом доступных
 копий большим, чем у всех других версий, найти невозможно, то объект тоже
 маркируется неконсистентным.
-## recovery_tune_interval
-- Тип: секунды
-- Значение по умолчанию: 1
-- Можно менять на лету: да
-Интервал, с которым OSD пересматривает клиентскую нагрузку и нагрузку
-восстановления и автоматически подстраивает [recovery_sleep_us](#recovery_sleep_us).
-Автотюнинг (автоподстройка) отключается, если recovery_tune_interval
-устанавливается в значение 0.
-Автотюнинг регулирует утилизацию. Утилизация является мерой нагрузки
-и равна произведению числа операций в секунду и средней задержки
-(то есть, она может быть выше 1). Вы задаёте два уровня клиентской
-утилизации - "низкий" и "высокий" (low и high) и два соответствующих
-целевых уровня утилизации операциями восстановления. OSD рассчитывает
-желаемый уровень утилизации восстановления линейной интерполяцией от
-клиентской утилизации и подстраивает задержку операций восстановления
-так, чтобы фактическая утилизация восстановления совпадала с желаемой.
-Это позволяет снизить влияние восстановления и ребаланса на клиентские
-операции. Конечно, невозможно исключить такое влияние полностью, но оно
-должно становиться адекватнее. В некоторых тестах перебалансировка могла
-снижать клиентскую скорость записи с 1.5 ГБ/с до 50-100 МБ/с, а теперь, с
-настройками автотюнинга по умолчанию, она снижается только до ~1 ГБ/с.
-## recovery_tune_util_low
-- Тип: число
-- Значение по умолчанию: 0.1
-- Можно менять на лету: да
-Желаемая утилизация восстановления в моменты, когда клиентская нагрузка
-высокая, то есть, находится на уровне или выше recovery_tune_client_util_high.
-## recovery_tune_util_high
-- Тип: число
-- Значение по умолчанию: 1
-- Можно менять на лету: да
-Желаемая утилизация восстановления в моменты, когда клиентская нагрузка
-низкая, то есть, находится на уровне или ниже recovery_tune_client_util_low.
-## recovery_tune_client_util_low
-- Тип: число
-- Значение по умолчанию: 0
-- Можно менять на лету: да
-Клиентская утилизация, которая считается "низкой".
-## recovery_tune_client_util_high
-- Тип: число
-- Значение по умолчанию: 0.5
-- Можно менять на лету: да
-Клиентская утилизация, которая считается "высокой".
-## recovery_tune_agg_interval
-- Тип: целое число
-- Значение по умолчанию: 10
-- Можно менять на лету: да
-Число последних итераций автоподстройки для расчёта задержки как среднего
-значения. Меньшие значения параметра ускоряют отклик на изменение нагрузки,
-большие значения делают задержку стабильнее. Значение по умолчанию 10
-обычно нормальное и не требует изменений.
-## recovery_tune_sleep_min_us
-- Тип: микросекунды
-- Значение по умолчанию: 10
-- Можно менять на лету: да
-Минимальное возможное значение авто-подстроенного recovery_sleep_us.
-Значения ниже данного заменяются на 0.


@@ -1,4 +1,4 @@
 # Client Parameters
-These parameters apply only to Vitastor clients (QEMU, fio, NBD and so on) and
-affect their interaction with the cluster.
+These parameters apply only to clients and affect their interaction with
+the cluster.


@@ -1,4 +1,4 @@
 # Параметры клиентского кода
-Данные параметры применяются только к клиентам Vitastor (QEMU, fio, NBD и т.п.) и
+Данные параметры применяются только к клиентам Vitastor (QEMU, fio, NBD) и
 затрагивают логику их работы с кластером.


@@ -122,47 +122,3 @@
     Maximum number of parallel writes when flushing buffered data to the server.
   info_ru: |
     Максимальное число параллельных операций записи при сбросе буферов на сервер.
-- name: nbd_timeout
-  type: sec
-  default: 300
-  online: false
-  info: |
-    Timeout for I/O operations for [NBD](../usage/nbd.en.md). If an operation
-    executes for longer than this timeout, including when your cluster is just
-    temporarily down for more than timeout, the NBD device will detach by itself
-    (and possibly break the mounted file system).
-    You can set timeout to 0 to never detach, but in that case you won't be
-    able to remove the kernel device at all if the NBD process dies - you'll have
-    to reboot the host.
-  info_ru: |
-    Таймаут для операций чтения/записи через [NBD](../usage/nbd.ru.md). Если
-    операция выполняется дольше таймаута, включая временную недоступность
-    кластера на время, большее таймаута, NBD-устройство отключится само собой
-    (и, возможно, сломает примонтированную ФС).
-    Вы можете установить таймаут в 0, чтобы никогда не отключать устройство по
-    таймауту, но в этом случае вы вообще не сможете удалить устройство, если
-    процесс NBD умрёт - вам придётся перезагружать сервер.
-- name: nbd_max_devices
-  type: int
-  default: 64
-  online: false
-  info: |
-    Maximum number of NBD devices in the system. This value is passed as
-    `nbds_max` parameter for the nbd kernel module when vitastor-nbd autoloads it.
-  info_ru: |
-    Максимальное число NBD-устройств в системе. Данное значение передаётся
-    модулю ядра nbd как параметр `nbds_max`, когда его загружает vitastor-nbd.
-- name: nbd_max_part
-  type: int
-  default: 3
-  online: false
-  info: |
-    Maximum number of partitions per NBD device. This value is passed as
-    `max_part` parameter for the nbd kernel module when vitastor-nbd autoloads it.
-    Note that (nbds_max)*(1+max_part) usually can't exceed 256.
-  info_ru: |
-    Максимальное число разделов на одном NBD-устройстве. Данное значение передаётся
-    модулю ядра nbd как параметр `max_part`, когда его загружает vitastor-nbd.
-    Имейте в виду, что (nbds_max)*(1+max_part) обычно не может превышать 256.


@@ -38,7 +38,6 @@ const types = {
         bool: 'boolean',
         int: 'integer',
         sec: 'seconds',
-        float: 'number',
         ms: 'milliseconds',
         us: 'microseconds',
     },
@@ -47,7 +46,6 @@ const types = {
         bool: 'булево (да/нет)',
         int: 'целое число',
         sec: 'секунды',
-        float: 'число',
         ms: 'миллисекунды',
         us: 'микросекунды',
     },

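For context on the hunk above: the `types` map translates each parameter's machine-readable `type` field into the human-readable strings seen in the generated reference. A hypothetical lookup sketch (the actual generator script is not part of this diff, and `format_type_line` is an invented name):

```
// Hypothetical use of the types map when rendering one parameter header;
// only the type keys visible in the diff above are included here.
const types = {
    en: { bool: 'boolean', int: 'integer', sec: 'seconds', float: 'number', ms: 'milliseconds', us: 'microseconds' },
    ru: { bool: 'булево (да/нет)', int: 'целое число', sec: 'секунды', float: 'число', ms: 'миллисекунды', us: 'микросекунды' },
};

function format_type_line(param, lang)
{
    // Without the 'float' entry, a float-typed parameter would fall back
    // to its raw type name here
    const t = types[lang][param.type] || param.type;
    return (lang == 'ru' ? '- Тип: ' : '- Type: ') + t;
}

console.log(format_type_line({ name: 'recovery_tune_util_low', type: 'float' }, 'en')); // - Type: number
```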

@@ -107,29 +107,17 @@
     принудительной отправкой fsync-а.
 - name: recovery_queue_depth
   type: int
-  default: 1
+  default: 4
   online: true
   info: |
-    Maximum recovery and rebalance operations initiated by each OSD in parallel.
-    Note that each OSD talks to a lot of other OSDs so actual number of parallel
-    recovery operations per each OSD is greater than just recovery_queue_depth.
-    Increasing this parameter can speedup recovery if [auto-tuning](#recovery_tune_interval)
-    allows it or if it is disabled.
+    Maximum recovery operations per one primary OSD at any given moment of time.
+    Currently it's the only parameter available to tune the speed of recovery
+    and rebalancing, but it's planned to implement more.
   info_ru: |
-    Максимальное число параллельных операций восстановления, инициируемых одним
-    OSD в любой момент времени. Имейте в виду, что каждый OSD обычно работает с
-    многими другими OSD, так что на практике параллелизм восстановления больше,
-    чем просто recovery_queue_depth. Увеличение значения этого параметра может
-    ускорить восстановление если [автотюнинг скорости](#recovery_tune_interval)
-    разрешает это или если он отключён.
+    Максимальное число операций восстановления на одном первичном OSD в любой
+    момент времени. На данный момент единственный параметр, который можно менять
+    для ускорения или замедления восстановления и перебалансировки данных, но
+    в планах реализация других параметров.
-- name: recovery_sleep_us
-  type: us
-  default: 0
-  online: true
-  info: |
-    Delay for all recovery- and rebalance- related operations. If non-zero,
-    such operations are artificially slowed down to reduce the impact on
-    client I/O.
 - name: recovery_pg_switch
   type: int
   default: 128
@@ -638,101 +626,3 @@
     считается некорректной. Однако, если "лучшую" версию с числом доступных
     копий большим, чем у всех других версий, найти невозможно, то объект тоже
     маркируется неконсистентным.
-- name: recovery_tune_interval
-  type: sec
-  default: 1
-  online: true
-  info: |
-    Interval at which OSD re-considers client and recovery load and automatically
-    adjusts [recovery_sleep_us](#recovery_sleep_us). Recovery auto-tuning is
-    disabled if recovery_tune_interval is set to 0.
-    Auto-tuning targets utilization. Utilization is a measure of load and is
-    equal to the product of iops and average latency (so it may be greater
-    than 1). You set "low" and "high" client utilization thresholds and two
-    corresponding target recovery utilization levels. OSD calculates desired
-    recovery utilization from client utilization using linear interpolation
-    and auto-tunes recovery operation delay to make actual recovery utilization
-    match desired.
-    This allows to reduce recovery/rebalance impact on client operations. It is
-    of course impossible to remove it completely, but it should become adequate.
-    In some tests rebalance could earlier drop client write speed from 1.5 GB/s
-    to 50-100 MB/s, with default auto-tuning settings it now only reduces
-    to ~1 GB/s.
-  info_ru: |
-    Интервал, с которым OSD пересматривает клиентскую нагрузку и нагрузку
-    восстановления и автоматически подстраивает [recovery_sleep_us](#recovery_sleep_us).
-    Автотюнинг (автоподстройка) отключается, если recovery_tune_interval
-    устанавливается в значение 0.
-    Автотюнинг регулирует утилизацию. Утилизация является мерой нагрузки
-    и равна произведению числа операций в секунду и средней задержки
-    (то есть, она может быть выше 1). Вы задаёте два уровня клиентской
-    утилизации - "низкий" и "высокий" (low и high) и два соответствующих
-    целевых уровня утилизации операциями восстановления. OSD рассчитывает
-    желаемый уровень утилизации восстановления линейной интерполяцией от
-    клиентской утилизации и подстраивает задержку операций восстановления
-    так, чтобы фактическая утилизация восстановления совпадала с желаемой.
-    Это позволяет снизить влияние восстановления и ребаланса на клиентские
-    операции. Конечно, невозможно исключить такое влияние полностью, но оно
-    должно становиться адекватнее. В некоторых тестах перебалансировка могла
-    снижать клиентскую скорость записи с 1.5 ГБ/с до 50-100 МБ/с, а теперь, с
-    настройками автотюнинга по умолчанию, она снижается только до ~1 ГБ/с.
-- name: recovery_tune_util_low
-  type: float
-  default: 0.1
-  online: true
-  info: |
-    Desired recovery/rebalance utilization when client load is high, i.e. when
-    it is at or above recovery_tune_client_util_high.
-  info_ru: |
-    Желаемая утилизация восстановления в моменты, когда клиентская нагрузка
-    высокая, то есть, находится на уровне или выше recovery_tune_client_util_high.
-- name: recovery_tune_util_high
-  type: float
-  default: 1
-  online: true
-  info: |
-    Desired recovery/rebalance utilization when client load is low, i.e. when
-    it is at or below recovery_tune_client_util_low.
-  info_ru: |
-    Желаемая утилизация восстановления в моменты, когда клиентская нагрузка
-    низкая, то есть, находится на уровне или ниже recovery_tune_client_util_low.
-- name: recovery_tune_client_util_low
-  type: float
-  default: 0
-  online: true
-  info: Client utilization considered "low".
-  info_ru: Клиентская утилизация, которая считается "низкой".
-- name: recovery_tune_client_util_high
-  type: float
-  default: 0.5
-  online: true
-  info: Client utilization considered "high".
-  info_ru: Клиентская утилизация, которая считается "высокой".
-- name: recovery_tune_agg_interval
-  type: int
-  default: 10
-  online: true
-  info: |
-    The number of last auto-tuning iterations to use for calculating the
-    delay as average. Lower values result in quicker response to client
-    load change, higher values result in more stable delay. Default value of 10
-    is usually fine.
-  info_ru: |
-    Число последних итераций автоподстройки для расчёта задержки как среднего
-    значения. Меньшие значения параметра ускоряют отклик на изменение нагрузки,
-    большие значения делают задержку стабильнее. Значение по умолчанию 10
-    обычно нормальное и не требует изменений.
-- name: recovery_tune_sleep_min_us
-  type: us
-  default: 10
-  online: true
-  info: |
-    Minimum possible value for auto-tuned recovery_sleep_us. Values lower
-    than this value are changed to 0.
-  info_ru: |
-    Минимальное возможное значение авто-подстроенного recovery_sleep_us.
-    Значения ниже данного заменяются на 0.


@@ -19,14 +19,6 @@ for i in ./???-*.yaml; do kubectl apply -f $i; done
 After that you'll be able to create PersistentVolumes.
-**Important:** For best experience, use Linux kernel at least 5.15 with [VDUSE](../usage/qemu.en.md#vduse)
-kernel modules enabled (vdpa, vduse, virtio-vdpa). If your distribution doesn't
-have them pre-built - build them yourself ([instructions](../usage/qemu.en.md#vduse)),
-I promise it's worth it :-). When VDUSE is unavailable, CSI driver uses [NBD](../usage/nbd.en.md)
-to map Vitastor devices. NBD is slower and prone to timeout issues: if Vitastor
-cluster becomes unresponsive for more than [nbd_timeout](../config/client.en.md#nbd_timeout),
-the NBD device detaches and breaks pods using it.
 ## Features
 Vitastor CSI supports:
@@ -35,8 +27,5 @@ Vitastor CSI supports:
 - Raw block RWX (ReadWriteMany) volumes. Example: [PVC](../../csi/deploy/example-pvc-block.yaml), [pod](../../csi/deploy/example-test-pod-block.yaml)
 - Volume expansion
 - Volume snapshots. Example: [snapshot class](../../csi/deploy/example-snapshot-class.yaml), [snapshot](../../csi/deploy/example-snapshot.yaml), [clone](../../csi/deploy/example-snapshot-clone.yaml)
-- [VDUSE](../usage/qemu.en.md#vduse) (preferred) and [NBD](../usage/nbd.en.md) device mapping methods
-- Upgrades with VDUSE - new handler processes are restarted when CSI pods are restarted themselves
-- Multiple clusters by using multiple configuration files in ConfigMap.
 Remember that to use snapshots with CSI you also have to install [Snapshot Controller and CRDs](https://kubernetes-csi.github.io/docs/snapshot-controller.html#deployment).


@@ -19,14 +19,6 @@ for i in ./???-*.yaml; do kubectl apply -f $i; done
 После этого вы сможете создавать PersistentVolume.
-**Важно:** Лучше всего использовать ядро Linux версии не менее 5.15 с включёнными модулями
-[VDUSE](../usage/qemu.ru.md#vduse) (vdpa, vduse, virtio-vdpa). Если в вашем дистрибутиве
-они не собраны из коробки - соберите их сами, обещаю, что это стоит того ([инструкция](../usage/qemu.ru.md#vduse)) :-).
-Когда VDUSE недоступно, CSI-плагин использует [NBD](../usage/nbd.ru.md) для подключения
-дисков, а NBD медленнее и имеет проблему таймаута - если кластер остаётся недоступным
-дольше, чем [nbd_timeout](../config/client.ru.md#nbd_timeout), NBD-устройство отключается
-и ломает поды, использующие его.
 ## Возможности
 CSI-плагин Vitastor поддерживает:
@@ -35,8 +27,5 @@ CSI-плагин Vitastor поддерживает:
 - Сырые блочные RWX (ReadWriteMany) тома. Пример: [PVC](../../csi/deploy/example-pvc-block.yaml), [под](../../csi/deploy/example-test-pod-block.yaml)
 - Расширение размера томов
 - Снимки томов. Пример: [класс снимков](../../csi/deploy/example-snapshot-class.yaml), [снимок](../../csi/deploy/example-snapshot.yaml), [клон снимка](../../csi/deploy/example-snapshot-clone.yaml)
-- Способы подключения устройств [VDUSE](../usage/qemu.ru.md#vduse) (предпочитаемый) и [NBD](../usage/nbd.ru.md)
-- Обновление при использовании VDUSE - новые процессы-обработчики устройств успешно перезапускаются вместе с самими подами CSI
-- Несколько кластеров через задание нескольких файлов конфигурации в ConfigMap.
 Не забывайте, что для использования снимков нужно сначала установить [контроллер снимков и CRD](https://kubernetes-csi.github.io/docs/snapshot-controller.html#deployment).


@@ -18,7 +18,7 @@
 stable version from 0.9.x branch instead of 1.x
 - For Debian 10 (Buster) also enable backports repository:
   `deb http://deb.debian.org/debian buster-backports main`
-- Install packages: `apt update; apt install vitastor lp-solve etcd linux-image-amd64 qemu-system-x86`
+- Install packages: `apt update; apt install vitastor lp-solve etcd linux-image-amd64 qemu`
 ## CentOS


@@ -18,7 +18,7 @@
 установить последнюю стабильную версию из ветки 0.9.x вместо 1.x
 - Для Debian 10 (Buster) также включите репозиторий backports:
   `deb http://deb.debian.org/debian buster-backports main`
-- Установите пакеты: `apt update; apt install vitastor lp-solve etcd linux-image-amd64 qemu-system-x86`
+- Установите пакеты: `apt update; apt install vitastor lp-solve etcd linux-image-amd64 qemu`
 ## CentOS


@@ -6,10 +6,10 @@
 # Proxmox VE
-To enable Vitastor support in Proxmox Virtual Environment (6.4-8.1 are supported):
+To enable Vitastor support in Proxmox Virtual Environment (6.4-8.0 are supported):
 - Add the corresponding Vitastor Debian repository into sources.list on Proxmox hosts:
-  bookworm for 8.1, pve8.0 for 8.0, bullseye for 7.4, pve7.3 for 7.3, pve7.2 for 7.2, pve7.1 for 7.1, buster for 6.4
+  bookworm for 8.0, bullseye for 7.4, pve7.3 for 7.3, pve7.2 for 7.2, pve7.1 for 7.1, buster for 6.4
 - Install vitastor-client, pve-qemu-kvm, pve-storage-vitastor (* or see note) packages from Vitastor repository
 - Define storage in `/etc/pve/storage.cfg` (see below)
 - Block network access from VMs to Vitastor network (to OSDs and etcd),
@@ -25,7 +25,7 @@ vitastor: vitastor
     vitastor_pool testpool
     # path to the configuration file
     vitastor_config_path /etc/vitastor/vitastor.conf
-    # etcd address(es), OPTIONAL, required only if missing in the configuration file
+    # etcd address(es), required only if missing in the configuration file
     vitastor_etcd_address 192.168.7.2:2379/v3
     # prefix for keys in etcd
     vitastor_etcd_prefix /vitastor


@@ -6,10 +6,10 @@
 # Proxmox VE
-Чтобы подключить Vitastor к Proxmox Virtual Environment (поддерживаются версии 6.4-8.1):
+Чтобы подключить Vitastor к Proxmox Virtual Environment (поддерживаются версии 6.4-8.0):
 - Добавьте соответствующий Debian-репозиторий Vitastor в sources.list на хостах Proxmox:
-  bookworm для 8.1, pve8.0 для 8.0, bullseye для 7.4, pve7.3 для 7.3, pve7.2 для 7.2, pve7.1 для 7.1, buster для 6.4
+  bookworm для 8.0, bullseye для 7.4, pve7.3 для 7.3, pve7.2 для 7.2, pve7.1 для 7.1, buster для 6.4
 - Установите пакеты vitastor-client, pve-qemu-kvm, pve-storage-vitastor (* или см. сноску) из репозитория Vitastor
 - Определите тип хранилища в `/etc/pve/storage.cfg` (см. ниже)
 - Обязательно заблокируйте доступ от виртуальных машин к сети Vitastor (OSD и etcd), т.к. Vitastor (пока) не поддерживает аутентификацию
@@ -24,7 +24,7 @@ vitastor: vitastor
     vitastor_pool testpool
     # Путь к файлу конфигурации
     vitastor_config_path /etc/vitastor/vitastor.conf
-    # Адрес(а) etcd, ОПЦИОНАЛЬНЫ, нужны, только если не указаны в vitastor.conf
+    # Адрес(а) etcd, нужны, только если не указаны в vitastor.conf
     vitastor_etcd_address 192.168.7.2:2379/v3
     # Префикс ключей метаданных в etcd
     vitastor_etcd_prefix /vitastor


@@ -54,8 +54,7 @@
 виртуальные диски, их снимки и клоны.
 - **Драйвер QEMU** — подключаемый модуль QEMU, позволяющий QEMU/KVM виртуальным машинам работать
   с виртуальными дисками Vitastor напрямую из пространства пользователя с помощью клиентской
-  библиотеки, без необходимости отображения дисков в виде блочных устройств. Тот же драйвер
-  позволяет подключать диски в систему через [VDUSE](../usage/qemu.ru.md#vduse).
+  библиотеки, без необходимости отображения дисков в виде блочных устройств.
 - **vitastor-nbd** — утилита, позволяющая монтировать образы Vitastor в виде блочных устройств
   с помощью NBD (Network Block Device), на самом деле скорее работающего как "BUSE"
   (Block Device In Userspace). Модуля ядра Linux для выполнения той же задачи в Vitastor нет


@@ -32,7 +32,6 @@
 - [Scrubbing](../config/osd.en.md#auto_scrub) (verification of copies)
 - [Checksums](../config/layout-osd.en.md#data_csum_type)
 - [Client write-back cache](../config/client.en.md#client_enable_writeback)
-- [Intelligent recovery auto-tuning](../config/osd.en.md#recovery_tune_interval)
 ## Plugins and tools


@@ -34,7 +34,6 @@
 - [Фоновая проверка целостности](../config/osd.ru.md#auto_scrub) (сверка копий)
 - [Контрольные суммы](../config/layout-osd.ru.md#data_csum_type)
 - [Буферизация записи на стороне клиента](../config/client.ru.md#client_enable_writeback)
-- [Интеллектуальная автоподстройка скорости восстановления](../config/osd.ru.md#recovery_tune_interval)
 ## Драйверы и инструменты


@@ -28,8 +28,7 @@ It supports the following commands:
 Global options:
 ```
---config_file FILE Path to Vitastor configuration file
---etcd_address URL Etcd connection address
+--etcd_address ADDR Etcd connection address
 --iodepth N Send N operations in parallel to each OSD when possible (default 32)
 --parallel_osds M Work with M osds in parallel when possible (default 4)
 --progress 1|0 Report progress (default 1)


@@ -27,8 +27,7 @@ vitastor-cli - интерфейс командной строки для адм
 Глобальные опции:
 ```
---config_file FILE Путь к файлу конфигурации Vitastor
---etcd_address URL Адрес соединения с etcd
+--etcd_address ADDR Адрес соединения с etcd
 --iodepth N Отправлять параллельно N операций на каждый OSD (по умолчанию 32)
 --parallel_osds M Работать параллельно с M OSD (по умолчанию 4)
 --progress 1|0 Печатать прогресс выполнения (по умолчанию 1)


@@ -17,7 +17,6 @@ It supports the following commands:
 - [purge](#purge)
 - [read-sb](#read-sb)
 - [write-sb](#write-sb)
-- [update-sb](#update-sb)
 - [udev](#udev)
 - [exec-osd](#exec-osd)
 - [pre-exec](#pre-exec)
@@ -183,14 +182,6 @@ Try to read Vitastor OSD superblock from `<device>` and print it in JSON format.
 Read JSON from STDIN and write it into Vitastor OSD superblock on `<device>`.
-## update-sb
-`vitastor-disk update-sb <device> [--force] [--<parameter> <value>] [...]`
-Read Vitastor OSD superblock from <device>, update parameters in it and write it back.
-`--force` allows to ignore validation errors.
 ## udev
 `vitastor-disk udev <device>`


@@ -17,7 +17,6 @@ vitastor-disk - инструмент командной строки для уп
 - [purge](#purge)
 - [read-sb](#read-sb)
 - [write-sb](#write-sb)
-- [update-sb](#update-sb)
 - [udev](#udev)
 - [exec-osd](#exec-osd)
 - [pre-exec](#pre-exec)
@@ -188,15 +187,6 @@ throttle_target_mbs, throttle_target_parallelism, throttle_threshold_us.
 Прочитать JSON со стандартного ввода и записать его в суперблок OSD на диск `<device>`.
-## update-sb
-`vitastor-disk update-sb <device> [--force] [--<параметр> <значение>] [...]`
-Прочитать суперблок OSD с диска `<device>`, изменить в нём заданные параметры и записать обратно.
-Опция `--force` позволяет читать суперблок, даже если он считается некорректным
-из-за ошибок валидации.
 ## udev
 `vitastor-disk udev <device>`


@@ -14,13 +14,10 @@ Vitastor has a fio driver which can be installed from the package vitastor-fio.
 Use the following command as an example to run tests with fio against a Vitastor cluster:
 ```
-fio -thread -ioengine=libfio_vitastor.so -name=test -bs=4M -direct=1 -iodepth=16 -rw=write -image=testimg
+fio -thread -ioengine=libfio_vitastor.so -name=test -bs=4M -direct=1 -iodepth=16 -rw=write -etcd=10.115.0.10:2379/v3 -image=testimg
 ```
 If you don't want to access your image by name, you can specify pool number, inode number and size
 (`-pool=1 -inode=1 -size=400G`) instead of the image name (`-image=testimg`).
-You can also specify etcd address(es) explicitly by adding `-etcd=10.115.0.10:2379/v3`, or you
-can override configuration file path by adding `-conf=/etc/vitastor/vitastor.conf`.
-See exact fio commands to use for benchmarking [here](../performance/understanding.en.md#fio-commands).
+See exact fio commands to use for benchmarking [here](../performance/understanding.en.md#команды-fio).


@@ -14,13 +14,10 @@
 Используйте следующую команду как пример для запуска тестов кластера Vitastor через fio:
 ```
-fio -thread -ioengine=libfio_vitastor.so -name=test -bs=4M -direct=1 -iodepth=16 -rw=write -image=testimg
+fio -thread -ioengine=libfio_vitastor.so -name=test -bs=4M -direct=1 -iodepth=16 -rw=write -etcd=10.115.0.10:2379/v3 -image=testimg
 ```
 Вместо обращения к образу по имени (`-image=testimg`) можно указать номер пула, номер инода и размер:
 `-pool=1 -inode=1 -size=400G`.
-Вы также можете задать адрес(а) подключения к etcd явно, добавив `-etcd=10.115.0.10:2379/v3`,
-или переопределить путь к файлу конфигурации, добавив `-conf=/etc/vitastor/vitastor.conf`.
 Конкретные команды fio для тестирования производительности можно посмотреть [здесь](../performance/understanding.ru.md#команды-fio).


@@ -11,25 +11,25 @@ NBD stands for "Network Block Device", but in fact it also functions as "BUSE"
 NBD slightly lowers the performance due to additional overhead, but performance still
 remains decent (see an example [here](../performance/comparison1.en.md#vitastor-0-4-0-nbd)).
-See also [VDUSE](qemu.en.md#vduse) as a better alternative to NBD.
-Vitastor Kubernetes CSI driver uses NBD when VDUSE is unavailable.
+Vitastor Kubernetes CSI driver is based on NBD.
+See also [VDUSE](qemu.en.md#vduse).
 ## Map image
 To create a local block device for a Vitastor image run:
 ```
-vitastor-nbd map --image testimg
+vitastor-nbd map --etcd_address 10.115.0.10:2379/v3 --image testimg
 ```
 It will output a block device name like /dev/nbd0 which you can then use as a normal disk.
 You can also use `--pool <POOL> --inode <INODE> --size <SIZE>` instead of `--image <IMAGE>` if you want.
-vitastor-nbd supports all usual Vitastor configuration options like `--config_file <path_to_config>` plus NBD-specific:
+Additional options for map command:
-* `--nbd_timeout 300` \
+* `--nbd_timeout 30` \
   Timeout for I/O operations in seconds after exceeding which the kernel stops
   the device. You can set it to 0 to disable the timeout, but beware that you
   won't be able to stop the device at all if vitastor-nbd process dies.
@@ -44,9 +44,6 @@ vitastor-nbd supports all usual Vitastor configuration options like `--config_fi
 * `--foreground 1` \
   Stay in foreground, do not daemonize.
-Note that `nbd_timeout`, `nbd_max_devices` and `nbd_max_part` options may also be specified
-in `/etc/vitastor/vitastor.conf` or in other configuration file specified with `--config_file`.
 ## Unmap image
 To unmap the device run:

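Because the documented `vitastor-nbd map` command simply prints the created device path on stdout, it is easy to drive from scripts. A hedged Node.js sketch (error handling is elided and the image name is only an example):

```
// Sketch of scripting the documented map command from Node.js.
const { execFileSync } = require('child_process');

function map_image(image)
{
    // vitastor-nbd prints the created device name, e.g. /dev/nbd0
    return execFileSync('vitastor-nbd', [ 'map', '--image', image ], { encoding: 'utf-8' }).trim();
}

const dev = map_image('testimg');
console.log('mapped at '+dev); // use it as a normal disk, then detach it as described in the unmap section
```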

@@ -14,16 +14,16 @@ NBD на данный момент необходимо, чтобы монтир
 NBD немного снижает производительность из-за дополнительных копирований памяти,
 но она всё равно остаётся на неплохом уровне (см. для примера [тест](../performance/comparison1.ru.md#vitastor-0-4-0-nbd)).
-Смотрите также [VDUSE](qemu.ru.md#vduse), как лучшую альтернативу NBD.
-CSI-драйвер Kubernetes Vitastor использует NBD, когда VDUSE недоступен.
+CSI-драйвер Kubernetes Vitastor основан на NBD.
+Смотрите также [VDUSE](qemu.ru.md#vduse).
 ## Подключить устройство
 Чтобы создать локальное блочное устройство для образа, выполните команду:
 ```
-vitastor-nbd map --image testimg
+vitastor-nbd map --etcd_address 10.115.0.10:2379/v3 --image testimg
 ```
 Команда напечатает название блочного устройства вида /dev/nbd0, которое потом можно
@@ -32,8 +32,7 @@ vitastor-nbd map --image testimg
 Для обращения по номеру инода, аналогично другим командам, можно использовать опции
 `--pool <POOL> --inode <INODE> --size <SIZE>` вместо `--image testimg`.
-vitastor-nbd поддерживает все обычные опции Vitastor, например, `--config_file <path_to_config>`,
-плюс специфичные для NBD:
+Дополнительные опции для команды подключения NBD-устройства:
 * `--nbd_timeout 30` \
   Максимальное время выполнения любой операции чтения/записи в секундах, при
@@ -54,10 +53,6 @@ vitastor-nbd поддерживает все обычные опции Vitastor,
 * `--foreground 1` \
   Не уводить процесс в фоновый режим.
-Обратите внимание, что опции `nbd_timeout`, `nbd_max_devices` и `nbd_max_part` можно
-также задавать в `/etc/vitastor/vitastor.conf` или в другом файле конфигурации,
-заданном опцией `--config_file`.
 ## Отключить устройство
 Для отключения устройства выполните:


@@ -23,7 +23,7 @@ balancer or any failover method you want to in that case.
 vitastor-nfs usage:
 ```
-vitastor-nfs [STANDARD OPTIONS] [OTHER OPTIONS]
+vitastor-nfs [--etcd_address ADDR] [OTHER OPTIONS]
 --subdir <DIR> export images prefixed <DIR>/ (default empty - export all images)
 --portmap 0 do not listen on port 111 (portmap/rpcbind, requires root)
@@ -34,7 +34,7 @@ vitastor-nfs [STANDARD OPTIONS] [OTHER OPTIONS]
 --foreground 1 stay in foreground, do not daemonize
 ```
-Example start and mount commands (etcd_address is optional):
+Example start and mount commands:
 ```
 vitastor-nfs --etcd_address 192.168.5.10:2379 --portmap 0 --port 2050 --pool testpool


@@ -22,7 +22,7 @@
 Использование vitastor-nfs:
 ```
-vitastor-nfs [СТАНДАРТНЫЕ ОПЦИИ] [ДРУГИЕ ОПЦИИ]
+vitastor-nfs [--etcd_address ADDR] [ДРУГИЕ ОПЦИИ]
 --subdir <DIR> экспортировать "поддиректорию" - образы с префиксом имени <DIR>/ (по умолчанию пусто - экспортировать все образы)
 --portmap 0 отключить сервис portmap/rpcbind на порту 111 (по умолчанию включён и требует root привилегий)
@@ -33,7 +33,7 @@ vitastor-nfs [СТАНДАРТНЫЕ ОПЦИИ] [ДРУГИЕ ОПЦИИ]
 --foreground 1 не уходить в фон после запуска
 ```
-Пример монтирования Vitastor через NFS (etcd_address необязателен):
+Пример монтирования Vitastor через NFS:
 ```
 vitastor-nfs --etcd_address 192.168.5.10:2379 --portmap 0 --port 2050 --pool testpool


@@ -16,16 +16,13 @@ Old syntax (-drive):
 ```
 qemu-system-x86_64 -enable-kvm -m 1024 \
-    -drive 'file=vitastor:image=debian9',
+    -drive 'file=vitastor:etcd_host=192.168.7.2\:2379/v3:image=debian9',
     format=raw,if=none,id=drive-virtio-disk0,cache=none \
     -device 'virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,
     id=virtio-disk0,bootindex=1,write-cache=off' \
     -vnc 0.0.0.0:0
 ```
-Etcd address may be specified explicitly by adding `:etcd_host=192.168.7.2\:2379/v3` to `file=`.
-Configuration file path may be overridden by adding `:config_path=/etc/vitastor/vitastor.conf`.
 New syntax (-blockdev):
 ```
@@ -53,12 +50,12 @@ You can also specify inode ID, pool and size manually instead of `:image=<IMAGE>
 ## qemu-img
-For qemu-img, you should use `vitastor:image=<IMAGE>[:etcd_host=<HOST>]` as filename.
+For qemu-img, you should use `vitastor:etcd_host=<HOST>:image=<IMAGE>` as filename.
 For example, to upload a VM image into Vitastor, run:
 ```
-qemu-img convert -f qcow2 debian10.qcow2 -p -O raw 'vitastor:image=debian10'
+qemu-img convert -f qcow2 debian10.qcow2 -p -O raw 'vitastor:etcd_host=192.168.7.2\:2379/v3:image=debian10'
 ```
 You can also specify `:pool=<POOL>:inode=<INODE>:size=<SIZE>` instead of `:image=<IMAGE>`
@@ -75,10 +72,10 @@ the snapshot separately using the following commands (key points are using `skip
 `-B backing_file` option):
 ```
-qemu-img convert -f raw 'vitastor:image=testimg@0' \
+qemu-img convert -f raw 'vitastor:etcd_host=192.168.7.2\:2379/v3:image=testimg@0' \
     -O qcow2 testimg_0.qcow2
-qemu-img convert -f raw 'vitastor:image=testimg:skip-parents=1' \
+qemu-img convert -f raw 'vitastor:etcd_host=192.168.7.2\:2379/v3:image=testimg:skip-parents=1' \
     -O qcow2 -o 'cluster_size=4k' -B testimg_0.qcow2 testimg.qcow2
 ```
@@ -149,7 +146,7 @@ Example performance comparison:
 | 4k random read Q1 | 9600 iops | 7640 iops | 7780 iops |
 To try VDUSE you need at least Linux 5.15, built with VDUSE support
-(CONFIG_VDPA=m, CONFIG_VDPA_USER=m, CONFIG_VIRTIO_VDPA=m).
+(CONFIG_VIRTIO_VDPA=m, CONFIG_VDPA_USER=m, CONFIG_VIRTIO_VDPA=m).
 Debian Linux kernels have these options disabled by now, so if you want to try it on Debian,
 use a kernel from Ubuntu [kernel-ppa/mainline](https://kernel.ubuntu.com/~kernel-ppa/mainline/), Proxmox,


@@ -18,16 +18,13 @@
 ```
 qemu-system-x86_64 -enable-kvm -m 1024 \
-    -drive 'file=vitastor:image=debian9',
+    -drive 'file=vitastor:etcd_host=192.168.7.2\:2379/v3:image=debian9',
     format=raw,if=none,id=drive-virtio-disk0,cache=none \
     -device 'virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,
     id=virtio-disk0,bootindex=1,write-cache=off' \
     -vnc 0.0.0.0:0
 ```
-Адрес подключения etcd можно задать явно, если добавить `:etcd_host=192.168.7.2\:2379/v3` к `file=`.
-Путь к файлу конфигурации можно переопределить, добавив `:config_path=/etc/vitastor/vitastor.conf`.
 Новый синтаксис (-blockdev):
 ```
@@ -55,12 +52,12 @@ qemu-system-x86_64 -enable-kvm -m 1024 \
 ## qemu-img
-Для qemu-img используйте строку `vitastor:image=<IMAGE>[:etcd_host=<HOST>]` в качестве имени файла диска.
+Для qemu-img используйте строку `vitastor:etcd_host=<HOST>:image=<IMAGE>` в качестве имени файла диска.
 Например, чтобы загрузить образ диска в Vitastor:
 ```
-qemu-img convert -f qcow2 debian10.qcow2 -p -O raw 'vitastor:image=testimg'
+qemu-img convert -f qcow2 debian10.qcow2 -p -O raw 'vitastor:etcd_host=10.115.0.10\:2379/v3:image=testimg'
 ```
 Если вы не хотите обращаться к образу по имени, вместо `:image=<IMAGE>` можно указать номер пула, номер инода и размер:
@@ -76,10 +73,10 @@ qemu-img convert -f qcow2 debian10.qcow2 -p -O raw 'vitastor:image=testimg'
 с помощью следующих команд (ключевые моменты - использование `skip-parents=1` и опции `-B backing_file.qcow2`):
 ```
-qemu-img convert -f raw 'vitastor:image=testimg@0' \
+qemu-img convert -f raw 'vitastor:etcd_host=192.168.7.2\:2379/v3:image=testimg@0' \
     -O qcow2 testimg_0.qcow2
-qemu-img convert -f raw 'vitastor:image=testimg:skip-parents=1' \
+qemu-img convert -f raw 'vitastor:etcd_host=192.168.7.2\:2379/v3:image=testimg:skip-parents=1' \
     -O qcow2 -o 'cluster_size=4k' -B testimg_0.qcow2 testimg.qcow2
 ```
@@ -152,7 +149,7 @@ VDUSE - на данный момент лучший интерфейс для п
 | 4k случайное чтение Q1 | 9600 iops | 7640 iops | 7780 iops |
 Чтобы попробовать VDUSE, вам нужно ядро Linux как минимум версии 5.15, собранное с поддержкой
-VDUSE (CONFIG_VDPA=m, CONFIG_VDPA_USER=m, CONFIG_VIRTIO_VDPA=m).
+VDUSE (CONFIG_VIRTIO_VDPA=m, CONFIG_VDPA_USER=m, CONFIG_VIRTIO_VDPA=m).
 В ядрах в Debian Linux поддержка пока отключена по умолчанию, так что чтобы попробовать VDUSE
 на Debian, поставьте ядро из Ubuntu [kernel-ppa/mainline](https://kernel.ubuntu.com/~kernel-ppa/mainline/),


@@ -3,7 +3,6 @@
 module.exports = {
     scale_pg_count,
-    scale_pg_history,
 };
 function add_pg_history(new_pg_history, new_pg, prev_pgs, prev_pg_history, old_pg)
@@ -44,18 +43,16 @@ function finish_pg_history(merged_history)
     merged_history.all_peers = Object.values(merged_history.all_peers);
 }
-function scale_pg_history(prev_pg_history, prev_pgs, new_pgs)
+function scale_pg_count(prev_pgs, real_prev_pgs, prev_pg_history, new_pg_history, new_pg_count)
 {
-    const new_pg_history = [];
-    const old_pg_count = prev_pgs.length;
-    const new_pg_count = new_pgs.length;
+    const old_pg_count = real_prev_pgs.length;
     // Add all possibly intersecting PGs to the history of new PGs
     if (!(new_pg_count % old_pg_count))
     {
         // New PG count is a multiple of old PG count
         for (let i = 0; i < new_pg_count; i++)
         {
-            add_pg_history(new_pg_history, i, prev_pgs, prev_pg_history, i % old_pg_count);
+            add_pg_history(new_pg_history, i, real_prev_pgs, prev_pg_history, i % old_pg_count);
             finish_pg_history(new_pg_history[i]);
         }
     }
@@ -67,7 +64,7 @@ function scale_pg_history(prev_pg_history, prev_pgs, new_pgs)
         {
             for (let j = 0; j < mul; j++)
             {
-                add_pg_history(new_pg_history, i, prev_pgs, prev_pg_history, i+j*new_pg_count);
+                add_pg_history(new_pg_history, i, real_prev_pgs, prev_pg_history, i+j*new_pg_count);
             }
             finish_pg_history(new_pg_history[i]);
         }
@@ -79,7 +76,7 @@ function scale_pg_history(prev_pg_history, prev_pgs, new_pgs)
         let merged_history = {};
         for (let i = 0; i < old_pg_count; i++)
         {
-            add_pg_history(merged_history, 1, prev_pgs, prev_pg_history, i);
+            add_pg_history(merged_history, 1, real_prev_pgs, prev_pg_history, i);
         }
         finish_pg_history(merged_history[1]);
         for (let i = 0; i < new_pg_count; i++)
@@ -92,12 +89,6 @@ function scale_pg_history(prev_pg_history, prev_pgs, new_pgs)
     {
         new_pg_history[i] = null;
    }
-    return new_pg_history;
-}
-function scale_pg_count(prev_pgs, new_pg_count)
-{
-    const old_pg_count = prev_pgs.length;
     // Just for the lp_solve optimizer - pick a "previous" PG for each "new" one
     if (prev_pgs.length < new_pg_count)
     {

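A note on the refactor above: in the newer variant, history scaling is a separate `scale_pg_history()` that returns the new history array, while `scale_pg_count()` only stretches or shrinks the PG list for the optimizer. A hypothetical call sequence under assumed input shapes (the monitor's real call site differs):

```
// Sketch only: PG lists are assumed to be arrays of OSD-number arrays,
// and history entries may be null, as suggested by the code above.
const { scale_pg_count, scale_pg_history } = require('./scale_pg_count.js');

const prev_pgs = [ [ 1, 2, 3 ], [ 4, 5, 6 ] ]; // 2 old PGs
const prev_pg_history = [ null, null ];        // no recorded history
const new_pgs = new Array(4).fill(null);       // scaling 2 -> 4 PGs

// History is scaled separately and returned, instead of being filled
// into an out-parameter as in the older single-function variant:
const new_pg_history = scale_pg_history(prev_pg_history, prev_pgs, new_pgs);

// scale_pg_count() now only adjusts the PG list itself (in place):
scale_pg_count(prev_pgs, new_pgs.length);

console.log(prev_pgs.length, new_pg_history.length); // 4 4
```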

@@ -59,7 +59,6 @@ const etcd_tree = {
        etcd_mon_timeout: 1000, // ms. min: 0
        etcd_mon_retries: 5, // min: 0
        mon_change_timeout: 1000, // ms. min: 100
-       mon_retry_change_timeout: 50, // ms. min: 10
        mon_stats_timeout: 1000, // ms. min: 100
        osd_out_time: 600, // seconds. min: 0
        placement_levels: { datacenter: 1, rack: 2, host: 3, osd: 4, ... },
@@ -111,15 +110,7 @@ const etcd_tree = {
        autosync_interval: 5,
        autosync_writes: 128,
        client_queue_depth: 128, // unused
-       recovery_queue_depth: 1,
-       recovery_sleep_us: 0,
-       recovery_tune_util_low: 0.1,
-       recovery_tune_client_util_low: 0,
-       recovery_tune_util_high: 1.0,
-       recovery_tune_client_util_high: 0.5,
-       recovery_tune_interval: 1,
-       recovery_tune_agg_interval: 10, // 10 times recovery_tune_interval
-       recovery_tune_sleep_min_us: 10, // 10 microseconds
+       recovery_queue_depth: 4,
        recovery_pg_switch: 128,
        recovery_sync_batch: 16,
        no_recovery: false,
@@ -401,7 +392,7 @@ class Mon
        this.parse_etcd_addresses(config.etcd_address||config.etcd_url);
        this.verbose = config.verbose || 0;
        this.initConfig = config;
-       this.config = { ...config };
+       this.config = {};
        this.etcd_prefix = config.etcd_prefix || '/vitastor';
        this.etcd_prefix = this.etcd_prefix.replace(/\/\/+/g, '/').replace(/^\/?(.*[^\/])\/?$/, '/$1');
        this.etcd_start_timeout = (config.etcd_start_timeout || 5) * 1000;
@@ -499,11 +490,6 @@ class Mon
        {
            this.config.mon_change_timeout = 100;
        }
-       this.config.mon_retry_change_timeout = Number(this.config.mon_retry_change_timeout) || 50;
-       if (this.config.mon_retry_change_timeout < 50)
-       {
-           this.config.mon_retry_change_timeout = 50;
-       }
        this.config.mon_stats_timeout = Number(this.config.mon_stats_timeout) || 1000;
        if (this.config.mon_stats_timeout < 100)
        {
@@ -620,7 +606,7 @@ class Mon
                console.log('etcd websocket timed out, restarting it');
                this.restart_watcher(cur_addr);
            }
-       }, (Number(this.config.etcd_ws_keepalive_interval) || 30)*1000);
+       }, (Number(this.config.etcd_keepalive_interval) || 30)*1000);
        this.ws.on('error', () => this.restart_watcher(cur_addr));
        this.ws.send(JSON.stringify({
            create_request: {
@@ -1236,89 +1222,6 @@ class Mon
        return aff_osds;
    }
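The hunk above only renames the keepalive interval option, but the surrounding pattern is worth spelling out: a periodic timer checks whether the etcd watch websocket has gone quiet and restarts it if so. A minimal sketch of that pattern, with hypothetical names (ws_last_activity, restart_watcher, cur_addr are stand-ins, not the exact fields):

    // Keepalive sketch: restart the etcd watch socket when it stops responding.
    const interval = (Number(config.etcd_ws_keepalive_interval) || 30)*1000;
    const timer = setInterval(() =>
    {
        if (Date.now() - ws_last_activity > 2*interval)
        {
            console.log('etcd websocket timed out, restarting it');
            clearInterval(timer);
            restart_watcher(cur_addr);
        }
    }, interval);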
async generate_pool_pgs(pool_id, osd_tree, levels)
{
const pool_cfg = this.state.config.pools[pool_id];
if (!this.validate_pool_cfg(pool_id, pool_cfg, false))
{
return null;
}
let pool_tree = osd_tree[pool_cfg.root_node || ''];
pool_tree = pool_tree ? pool_tree.children : [];
pool_tree = LPOptimizer.flatten_tree(pool_tree, levels, pool_cfg.failure_domain, 'osd');
this.filter_osds_by_tags(osd_tree, pool_tree, pool_cfg.osd_tags);
this.filter_osds_by_block_layout(
pool_tree,
pool_cfg.block_size || this.config.block_size || 131072,
pool_cfg.bitmap_granularity || this.config.bitmap_granularity || 4096,
pool_cfg.immediate_commit || this.config.immediate_commit || 'none'
);
// First try last_clean_pgs to minimize data movement
let prev_pgs = [];
for (const pg in ((this.state.history.last_clean_pgs.items||{})[pool_id]||{}))
{
prev_pgs[pg-1] = [ ...this.state.history.last_clean_pgs.items[pool_id][pg].osd_set ];
}
if (!prev_pgs.length)
{
// Fall back to config/pgs if it's empty
for (const pg in ((this.state.config.pgs.items||{})[pool_id]||{}))
{
prev_pgs[pg-1] = [ ...this.state.config.pgs.items[pool_id][pg].osd_set ];
}
}
const old_pg_count = prev_pgs.length;
const optimize_cfg = {
osd_tree: pool_tree,
pg_count: pool_cfg.pg_count,
pg_size: pool_cfg.pg_size,
pg_minsize: pool_cfg.pg_minsize,
max_combinations: pool_cfg.max_osd_combinations,
ordered: pool_cfg.scheme != 'replicated',
};
let optimize_result;
// Re-shuffle PGs if config/pgs.hash is empty
if (old_pg_count > 0 && this.state.config.pgs.hash)
{
if (prev_pgs.length != pool_cfg.pg_count)
{
// Scale PG count
// Do it even if old_pg_count is already equal to pool_cfg.pg_count,
// because last_clean_pgs may still contain the old number of PGs
PGUtil.scale_pg_count(prev_pgs, pool_cfg.pg_count);
}
for (const pg of prev_pgs)
{
while (pg.length < pool_cfg.pg_size)
{
pg.push(0);
}
}
optimize_result = await LPOptimizer.optimize_change({
prev_pgs,
...optimize_cfg,
});
}
else
{
optimize_result = await LPOptimizer.optimize_initial(optimize_cfg);
}
console.log(`Pool ${pool_id} (${pool_cfg.name || 'unnamed'}):`);
LPOptimizer.print_change_stats(optimize_result);
const pg_effsize = Math.min(pool_cfg.pg_size, Object.keys(pool_tree).length);
return {
pool_id,
pgs: optimize_result.int_pgs,
stats: {
total_raw_tb: optimize_result.space,
pg_real_size: pg_effsize || pool_cfg.pg_size,
raw_to_usable: (pg_effsize || pool_cfg.pg_size) / (pool_cfg.scheme === 'replicated'
? 1 : (pool_cfg.pg_size - (pool_cfg.parity_chunks||0))),
space_efficiency: optimize_result.space/(optimize_result.total_space||1),
},
};
}
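For orientation: generate_pool_pgs, shown above and present only on one side of this comparison, prefers history.last_clean_pgs over config/pgs when picking the "previous" PG layout, so optimization resumes from the last clean state and data movement stays minimal. That selection logic condenses to roughly this (paraphrased sketch, not the verbatim method):

    // Prefer last_clean_pgs, fall back to the currently configured PGs.
    function get_prev_pgs(state, pool_id)
    {
        const prev_pgs = [];
        const clean = (state.history.last_clean_pgs.items||{})[pool_id] || {};
        for (const pg in clean)
            prev_pgs[pg-1] = [ ...clean[pg].osd_set ];
        if (!prev_pgs.length)
        {
            const cur = (state.config.pgs.items||{})[pool_id] || {};
            for (const pg in cur)
                prev_pgs[pg-1] = [ ...cur[pg].osd_set ];
        }
        return prev_pgs;
    }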
    async recheck_pgs()
    {
        if (this.recheck_pgs_active)
@@ -1333,47 +1236,158 @@ class Mon
        const { up_osds, levels, osd_tree } = this.get_osd_tree();
        const tree_cfg = {
            osd_tree,
-           levels,
            pools: this.state.config.pools,
        };
        const tree_hash = sha1hex(stableStringify(tree_cfg));
        if (this.state.config.pgs.hash != tree_hash)
        {
            // Something has changed
-           console.log('Pool configuration or OSD tree changed, re-optimizing');
-           // First re-optimize PGs, but don't look at history yet
-           const optimize_results = await Promise.all(Object.keys(this.state.config.pools)
-               .map(pool_id => this.generate_pool_pgs(pool_id, osd_tree, levels)));
-           // Then apply the modification in the form of an optimistic transaction,
-           // each time considering new pg/history modifications (OSDs modify it during rebalance)
-           while (!await this.apply_pool_pgs(optimize_results, up_osds, osd_tree, tree_hash))
+           const new_config_pgs = JSON.parse(JSON.stringify(this.state.config.pgs));
+           const etcd_request = { compare: [], success: [] };
+           for (const pool_id in (this.state.config.pgs||{}).items||{})
            {
-               console.log(
-                   'Someone changed PG configuration while we also tried to change it.'+
-                   ' Retrying in '+this.config.mon_retry_change_timeout+' ms'
-               );
-               // Failed to apply - parallel change detected. Wait a bit and retry
-               const old_rev = this.etcd_watch_revision;
-               while (this.etcd_watch_revision === old_rev)
+               if (!this.state.config.pools[pool_id])
                {
-                   await new Promise(ok => setTimeout(ok, this.config.mon_retry_change_timeout));
-               }
-               const new_ot = this.get_osd_tree();
-               const new_tcfg = {
-                   osd_tree: new_ot.osd_tree,
-                   levels: new_ot.levels,
-                   pools: this.state.config.pools,
-               };
-               if (sha1hex(stableStringify(new_tcfg)) !== tree_hash)
+                   // Pool deleted. Delete all PGs, but first stop them.
+                   if (!await this.stop_all_pgs(pool_id))
                    {
-                       // Configuration actually changed, restart from the beginning
                        this.recheck_pgs_active = false;
-                       setImmediate(() => this.recheck_pgs().catch(this.die));
+                       this.schedule_recheck();
                        return;
                    }
-                   // Configuration didn't change, PG history probably changed, so just retry
+                   const prev_pgs = [];
+                   for (const pg in this.state.config.pgs.items[pool_id]||{})
+                   {
+                       prev_pgs[pg-1] = this.state.config.pgs.items[pool_id][pg].osd_set;
                    }
-           console.log('PG configuration successfully changed');
+                   // Also delete pool statistics
+                   etcd_request.success.push({ requestDeleteRange: {
+                       key: b64(this.etcd_prefix+'/pool/stats/'+pool_id),
+                   } });
+                   this.save_new_pgs_txn(new_config_pgs, etcd_request, pool_id, up_osds, osd_tree, prev_pgs, [], []);
+               }
+           }
for (const pool_id in this.state.config.pools)
{
const pool_cfg = this.state.config.pools[pool_id];
if (!this.validate_pool_cfg(pool_id, pool_cfg, false))
{
continue;
}
let pool_tree = osd_tree[pool_cfg.root_node || ''];
pool_tree = pool_tree ? pool_tree.children : [];
pool_tree = LPOptimizer.flatten_tree(pool_tree, levels, pool_cfg.failure_domain, 'osd');
this.filter_osds_by_tags(osd_tree, pool_tree, pool_cfg.osd_tags);
this.filter_osds_by_block_layout(
pool_tree,
pool_cfg.block_size || this.config.block_size || 131072,
pool_cfg.bitmap_granularity || this.config.bitmap_granularity || 4096,
pool_cfg.immediate_commit || this.config.immediate_commit || 'none'
);
// These are for the purpose of building history.osd_sets
const real_prev_pgs = [];
let pg_history = [];
for (const pg in ((this.state.config.pgs.items||{})[pool_id]||{}))
{
real_prev_pgs[pg-1] = this.state.config.pgs.items[pool_id][pg].osd_set;
if (this.state.pg.history[pool_id] &&
this.state.pg.history[pool_id][pg])
{
pg_history[pg-1] = this.state.pg.history[pool_id][pg];
}
}
// And these are for the purpose of minimizing data movement
let prev_pgs = [];
for (const pg in ((this.state.history.last_clean_pgs.items||{})[pool_id]||{}))
{
prev_pgs[pg-1] = this.state.history.last_clean_pgs.items[pool_id][pg].osd_set;
}
prev_pgs = JSON.parse(JSON.stringify(prev_pgs.length ? prev_pgs : real_prev_pgs));
const old_pg_count = real_prev_pgs.length;
const optimize_cfg = {
osd_tree: pool_tree,
pg_count: pool_cfg.pg_count,
pg_size: pool_cfg.pg_size,
pg_minsize: pool_cfg.pg_minsize,
max_combinations: pool_cfg.max_osd_combinations,
ordered: pool_cfg.scheme != 'replicated',
};
let optimize_result;
if (old_pg_count > 0)
{
if (old_pg_count != pool_cfg.pg_count)
{
// PG count changed. Need to bring all PGs down.
if (!await this.stop_all_pgs(pool_id))
{
this.recheck_pgs_active = false;
this.schedule_recheck();
return;
}
}
if (prev_pgs.length != pool_cfg.pg_count)
{
// Scale PG count
// Do it even if old_pg_count is already equal to pool_cfg.pg_count,
// because last_clean_pgs may still contain the old number of PGs
const new_pg_history = [];
PGUtil.scale_pg_count(prev_pgs, real_prev_pgs, pg_history, new_pg_history, pool_cfg.pg_count);
pg_history = new_pg_history;
}
for (const pg of prev_pgs)
{
while (pg.length < pool_cfg.pg_size)
{
pg.push(0);
}
}
if (!this.state.config.pgs.hash)
{
// Re-shuffle PGs
optimize_result = await LPOptimizer.optimize_initial(optimize_cfg);
}
else
{
optimize_result = await LPOptimizer.optimize_change({
prev_pgs,
...optimize_cfg,
});
}
}
else
{
optimize_result = await LPOptimizer.optimize_initial(optimize_cfg);
}
if (old_pg_count != optimize_result.int_pgs.length)
{
console.log(
`PG count for pool ${pool_id} (${pool_cfg.name || 'unnamed'})`+
` changed from: ${old_pg_count} to ${optimize_result.int_pgs.length}`
);
// Drop stats
etcd_request.success.push({ requestDeleteRange: {
key: b64(this.etcd_prefix+'/pg/stats/'+pool_id+'/'),
range_end: b64(this.etcd_prefix+'/pg/stats/'+pool_id+'0'),
} });
}
LPOptimizer.print_change_stats(optimize_result);
const pg_effsize = Math.min(pool_cfg.pg_size, Object.keys(pool_tree).length);
this.state.pool.stats[pool_id] = {
used_raw_tb: (this.state.pool.stats[pool_id]||{}).used_raw_tb || 0,
total_raw_tb: optimize_result.space,
pg_real_size: pg_effsize || pool_cfg.pg_size,
raw_to_usable: (pg_effsize || pool_cfg.pg_size) / (pool_cfg.scheme === 'replicated'
? 1 : (pool_cfg.pg_size - (pool_cfg.parity_chunks||0))),
space_efficiency: optimize_result.space/(optimize_result.total_space||1),
};
etcd_request.success.push({ requestPut: {
key: b64(this.etcd_prefix+'/pool/stats/'+pool_id),
value: b64(JSON.stringify(this.state.pool.stats[pool_id])),
} });
this.save_new_pgs_txn(new_config_pgs, etcd_request, pool_id, up_osds, osd_tree, real_prev_pgs, optimize_result.int_pgs, pg_history);
}
new_config_pgs.hash = tree_hash;
await this.save_pg_config(new_config_pgs, etcd_request);
        }
        else
        {
@@ -1420,81 +1434,8 @@ class Mon
        this.recheck_pgs_active = false;
    }
-   async apply_pool_pgs(results, up_osds, osd_tree, tree_hash)
+   async save_pg_config(new_config_pgs, etcd_request = { compare: [], success: [] })
    {
for (const pool_id in (this.state.config.pgs||{}).items||{})
{
// We should stop all PGs when deleting a pool or changing its PG count
if (!this.state.config.pools[pool_id] ||
this.state.config.pgs.items[pool_id] && this.state.config.pools[pool_id].pg_count !=
Object.keys(this.state.config.pgs.items[pool_id]).reduce((a, c) => (a < (0|c) ? (0|c) : a), 0))
{
if (!await this.stop_all_pgs(pool_id))
{
return false;
}
}
}
const new_config_pgs = JSON.parse(JSON.stringify(this.state.config.pgs));
const etcd_request = { compare: [], success: [] };
for (const pool_id in (new_config_pgs||{}).items||{})
{
if (!this.state.config.pools[pool_id])
{
const prev_pgs = [];
for (const pg in new_config_pgs.items[pool_id]||{})
{
prev_pgs[pg-1] = new_config_pgs.items[pool_id][pg].osd_set;
}
// Also delete pool statistics
etcd_request.success.push({ requestDeleteRange: {
key: b64(this.etcd_prefix+'/pool/stats/'+pool_id),
} });
this.save_new_pgs_txn(new_config_pgs, etcd_request, pool_id, up_osds, osd_tree, prev_pgs, [], []);
}
}
for (const pool_res of results)
{
const pool_id = pool_res.pool_id;
const pool_cfg = this.state.config.pools[pool_id];
let pg_history = [];
for (const pg in ((this.state.config.pgs.items||{})[pool_id]||{}))
{
if (this.state.pg.history[pool_id] &&
this.state.pg.history[pool_id][pg])
{
pg_history[pg-1] = this.state.pg.history[pool_id][pg];
}
}
const real_prev_pgs = [];
for (const pg in ((this.state.config.pgs.items||{})[pool_id]||{}))
{
real_prev_pgs[pg-1] = [ ...this.state.config.pgs.items[pool_id][pg].osd_set ];
}
if (real_prev_pgs.length > 0 && real_prev_pgs.length != pool_res.pgs.length)
{
console.log(
`Changing PG count for pool ${pool_id} (${pool_cfg.name || 'unnamed'})`+
` from: ${real_prev_pgs.length} to ${pool_res.pgs.length}`
);
pg_history = PGUtil.scale_pg_history(pg_history, real_prev_pgs, pool_res.pgs);
// Drop stats
etcd_request.success.push({ requestDeleteRange: {
key: b64(this.etcd_prefix+'/pg/stats/'+pool_id+'/'),
range_end: b64(this.etcd_prefix+'/pg/stats/'+pool_id+'0'),
} });
}
const stats = {
used_raw_tb: (this.state.pool.stats[pool_id]||{}).used_raw_tb || 0,
...pool_res.stats,
};
etcd_request.success.push({ requestPut: {
key: b64(this.etcd_prefix+'/pool/stats/'+pool_id),
value: b64(JSON.stringify(stats)),
} });
this.save_new_pgs_txn(new_config_pgs, etcd_request, pool_id, up_osds, osd_tree, real_prev_pgs, pool_res.pgs, pg_history);
}
new_config_pgs.hash = tree_hash;
        etcd_request.compare.push(
            { key: b64(this.etcd_prefix+'/mon/master'), target: 'LEASE', lease: ''+this.etcd_lease_id },
            { key: b64(this.etcd_prefix+'/config/pgs'), target: 'MOD', mod_revision: ''+this.etcd_watch_revision, result: 'LESS' },
@@ -1502,8 +1443,14 @@ class Mon
        etcd_request.success.push(
            { requestPut: { key: b64(this.etcd_prefix+'/config/pgs'), value: b64(JSON.stringify(new_config_pgs)) } },
        );
-       const txn_res = await this.etcd_call('/kv/txn', etcd_request, this.config.etcd_mon_timeout, 0);
-       return txn_res.succeeded;
+       const res = await this.etcd_call('/kv/txn', etcd_request, this.config.etcd_mon_timeout, 0);
+       if (!res.succeeded)
+       {
+           console.log('Someone changed PG configuration while we also tried to change it. Retrying in '+this.config.mon_change_timeout+' ms');
+           this.schedule_recheck();
+           return;
+       }
+       console.log('PG configuration successfully changed');
    }
    // Schedule next recheck at least at <unixtime>
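Both variants commit through the same guarded transaction: it only applies if this monitor still holds the master lease and /config/pgs was not modified behind its back; otherwise nothing is written and the recheck is redone. The request shape, condensed from the code above (prefix, etcd_lease_id and etcd_watch_revision as in the surrounding code):

    // Optimistic etcd transaction: both compares must hold for the put to run.
    const txn = {
        compare: [
            { key: b64(prefix+'/mon/master'), target: 'LEASE', lease: ''+etcd_lease_id },
            { key: b64(prefix+'/config/pgs'), target: 'MOD',
                mod_revision: ''+etcd_watch_revision, result: 'LESS' },
        ],
        success: [
            { requestPut: { key: b64(prefix+'/config/pgs'),
                value: b64(JSON.stringify(new_config_pgs)) } },
        ],
    };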
@@ -1,6 +1,6 @@
{
    "name": "vitastor-mon",
-   "version": "1.3.1",
+   "version": "1.2.0",
    "description": "Vitastor SDS monitor service",
    "main": "mon-main.js",
    "scripts": {
@@ -110,6 +110,7 @@ sub properties
    vitastor_etcd_address => {
        description => 'IP address(es) of etcd.',
        type => 'string',
+       format => 'pve-storage-portal-dns-list',
    },
    vitastor_etcd_prefix => {
        description => 'Prefix for Vitastor etcd metadata',
@@ -50,7 +50,7 @@ from cinder.volume import configuration
from cinder.volume import driver
from cinder.volume import volume_utils
-VERSION = '1.3.1'
+VERSION = '1.2.0'
LOG = logging.getLogger(__name__)
@@ -1,190 +0,0 @@
Index: pve-qemu-kvm-8.1.2/block/meson.build
===================================================================
--- pve-qemu-kvm-8.1.2.orig/block/meson.build
+++ pve-qemu-kvm-8.1.2/block/meson.build
@@ -123,6 +123,7 @@ foreach m : [
[libnfs, 'nfs', files('nfs.c')],
[libssh, 'ssh', files('ssh.c')],
[rbd, 'rbd', files('rbd.c')],
+ [vitastor, 'vitastor', files('vitastor.c')],
]
if m[0].found()
module_ss = ss.source_set()
Index: pve-qemu-kvm-8.1.2/meson.build
===================================================================
--- pve-qemu-kvm-8.1.2.orig/meson.build
+++ pve-qemu-kvm-8.1.2/meson.build
@@ -1303,6 +1303,26 @@ if not get_option('rbd').auto() or have_
endif
endif
+vitastor = not_found
+if not get_option('vitastor').auto() or have_block
+ libvitastor_client = cc.find_library('vitastor_client', has_headers: ['vitastor_c.h'],
+ required: get_option('vitastor'))
+ if libvitastor_client.found()
+ if cc.links('''
+ #include <vitastor_c.h>
+ int main(void) {
+ vitastor_c_create_qemu(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
+ return 0;
+ }''', dependencies: libvitastor_client)
+ vitastor = declare_dependency(dependencies: libvitastor_client)
+ elif get_option('vitastor').enabled()
+ error('could not link libvitastor_client')
+ else
+ warning('could not link libvitastor_client, disabling')
+ endif
+ endif
+endif
+
glusterfs = not_found
glusterfs_ftruncate_has_stat = false
glusterfs_iocb_has_stat = false
@@ -2123,6 +2143,7 @@ if numa.found()
endif
config_host_data.set('CONFIG_OPENGL', opengl.found())
config_host_data.set('CONFIG_RBD', rbd.found())
+config_host_data.set('CONFIG_VITASTOR', vitastor.found())
config_host_data.set('CONFIG_RDMA', rdma.found())
config_host_data.set('CONFIG_SAFESTACK', get_option('safe_stack'))
config_host_data.set('CONFIG_SDL', sdl.found())
@@ -4298,6 +4319,7 @@ summary_info += {'fdt support': fd
summary_info += {'libcap-ng support': libcap_ng}
summary_info += {'bpf support': libbpf}
summary_info += {'rbd support': rbd}
+summary_info += {'vitastor support': vitastor}
summary_info += {'smartcard support': cacard}
summary_info += {'U2F support': u2f}
summary_info += {'libusb': libusb}
Index: pve-qemu-kvm-8.1.2/meson_options.txt
===================================================================
--- pve-qemu-kvm-8.1.2.orig/meson_options.txt
+++ pve-qemu-kvm-8.1.2/meson_options.txt
@@ -186,6 +186,8 @@ option('lzo', type : 'feature', value :
description: 'lzo compression support')
option('rbd', type : 'feature', value : 'auto',
description: 'Ceph block device driver')
+option('vitastor', type : 'feature', value : 'auto',
+ description: 'Vitastor block device driver')
option('opengl', type : 'feature', value : 'auto',
description: 'OpenGL support')
option('rdma', type : 'feature', value : 'auto',
Index: pve-qemu-kvm-8.1.2/qapi/block-core.json
===================================================================
--- pve-qemu-kvm-8.1.2.orig/qapi/block-core.json
+++ pve-qemu-kvm-8.1.2/qapi/block-core.json
@@ -3403,7 +3403,7 @@
'raw', 'rbd',
{ 'name': 'replication', 'if': 'CONFIG_REPLICATION' },
'pbs',
- 'ssh', 'throttle', 'vdi', 'vhdx',
+ 'ssh', 'throttle', 'vdi', 'vhdx', 'vitastor',
{ 'name': 'virtio-blk-vfio-pci', 'if': 'CONFIG_BLKIO' },
{ 'name': 'virtio-blk-vhost-user', 'if': 'CONFIG_BLKIO' },
{ 'name': 'virtio-blk-vhost-vdpa', 'if': 'CONFIG_BLKIO' },
@@ -4465,6 +4465,28 @@
'*server': ['InetSocketAddressBase'] } }
##
+# @BlockdevOptionsVitastor:
+#
+# Driver specific block device options for vitastor
+#
+# @image: Image name
+# @inode: Inode number
+# @pool: Pool ID
+# @size: Desired image size in bytes
+# @config-path: Path to Vitastor configuration
+# @etcd-host: etcd connection address(es)
+# @etcd-prefix: etcd key/value prefix
+##
+{ 'struct': 'BlockdevOptionsVitastor',
+ 'data': { '*inode': 'uint64',
+ '*pool': 'uint64',
+ '*size': 'uint64',
+ '*image': 'str',
+ '*config-path': 'str',
+ '*etcd-host': 'str',
+ '*etcd-prefix': 'str' } }
+
+##
# @ReplicationMode:
#
# An enumeration of replication modes.
@@ -4923,6 +4945,7 @@
'throttle': 'BlockdevOptionsThrottle',
'vdi': 'BlockdevOptionsGenericFormat',
'vhdx': 'BlockdevOptionsGenericFormat',
+ 'vitastor': 'BlockdevOptionsVitastor',
'virtio-blk-vfio-pci':
{ 'type': 'BlockdevOptionsVirtioBlkVfioPci',
'if': 'CONFIG_BLKIO' },
@@ -5360,6 +5383,17 @@
'*encrypt' : 'RbdEncryptionCreateOptions' } }
##
+# @BlockdevCreateOptionsVitastor:
+#
+# Driver specific image creation options for Vitastor.
+#
+# @size: Size of the virtual disk in bytes
+##
+{ 'struct': 'BlockdevCreateOptionsVitastor',
+ 'data': { 'location': 'BlockdevOptionsVitastor',
+ 'size': 'size' } }
+
+##
# @BlockdevVmdkSubformat:
#
# Subformat options for VMDK images
@@ -5581,6 +5615,7 @@
'ssh': 'BlockdevCreateOptionsSsh',
'vdi': 'BlockdevCreateOptionsVdi',
'vhdx': 'BlockdevCreateOptionsVhdx',
+ 'vitastor': 'BlockdevCreateOptionsVitastor',
'vmdk': 'BlockdevCreateOptionsVmdk',
'vpc': 'BlockdevCreateOptionsVpc'
} }
Index: pve-qemu-kvm-8.1.2/scripts/ci/org.centos/stream/8/x86_64/configure
===================================================================
--- pve-qemu-kvm-8.1.2.orig/scripts/ci/org.centos/stream/8/x86_64/configure
+++ pve-qemu-kvm-8.1.2/scripts/ci/org.centos/stream/8/x86_64/configure
@@ -30,7 +30,7 @@
--with-suffix="qemu-kvm" \
--firmwarepath=/usr/share/qemu-firmware \
--target-list="x86_64-softmmu" \
---block-drv-rw-whitelist="qcow2,raw,file,host_device,nbd,iscsi,rbd,blkdebug,luks,null-co,nvme,copy-on-read,throttle,gluster" \
+--block-drv-rw-whitelist="qcow2,raw,file,host_device,nbd,iscsi,rbd,vitastor,blkdebug,luks,null-co,nvme,copy-on-read,throttle,gluster" \
--audio-drv-list="" \
--block-drv-ro-whitelist="vmdk,vhdx,vpc,https,ssh" \
--with-coroutine=ucontext \
@@ -176,6 +176,7 @@
--enable-opengl \
--enable-pie \
--enable-rbd \
+--enable-vitastor \
--enable-rdma \
--enable-seccomp \
--enable-snappy \
Index: pve-qemu-kvm-8.1.2/scripts/meson-buildoptions.sh
===================================================================
--- pve-qemu-kvm-8.1.2.orig/scripts/meson-buildoptions.sh
+++ pve-qemu-kvm-8.1.2/scripts/meson-buildoptions.sh
@@ -153,6 +153,7 @@ meson_options_help() {
printf "%s\n" ' qed qed image format support'
printf "%s\n" ' qga-vss build QGA VSS support (broken with MinGW)'
printf "%s\n" ' rbd Ceph block device driver'
+ printf "%s\n" ' vitastor Vitastor block device driver'
printf "%s\n" ' rdma Enable RDMA-based migration'
printf "%s\n" ' replication replication support'
printf "%s\n" ' sdl SDL user interface'
@@ -416,6 +417,8 @@ _meson_option_parse() {
--disable-qom-cast-debug) printf "%s" -Dqom_cast_debug=false ;;
--enable-rbd) printf "%s" -Drbd=enabled ;;
--disable-rbd) printf "%s" -Drbd=disabled ;;
+ --enable-vitastor) printf "%s" -Dvitastor=enabled ;;
+ --disable-vitastor) printf "%s" -Dvitastor=disabled ;;
--enable-rdma) printf "%s" -Drdma=enabled ;;
--disable-rdma) printf "%s" -Drdma=disabled ;;
--enable-replication) printf "%s" -Dreplication=enabled ;;
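For context, a disk described by the BlockdevOptionsVitastor struct in the patch above would be attached through QEMU's blockdev-add. A hypothetical argument object (placeholder values; node-name is a generic blockdev option, not part of this patch) might look like:

    // Illustrative blockdev-add arguments matching BlockdevOptionsVitastor.
    const blockdev = {
        'driver': 'vitastor',
        'node-name': 'disk0',              // generic option, assumed
        'image': 'testimg',                // or pool/inode/size instead
        'etcd-host': '192.168.0.10:2379',  // placeholder address
        'etcd-prefix': '/vitastor',
    };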
@@ -1,190 +0,0 @@
diff --git a/block/meson.build b/block/meson.build
index 529fc172c6..d542dc0609 100644
--- a/block/meson.build
+++ b/block/meson.build
@@ -110,6 +110,7 @@ foreach m : [
[libnfs, 'nfs', files('nfs.c')],
[libssh, 'ssh', files('ssh.c')],
[rbd, 'rbd', files('rbd.c')],
+ [vitastor, 'vitastor', files('vitastor.c')],
]
if m[0].found()
module_ss = ss.source_set()
diff --git a/meson.build b/meson.build
index a9c4f28247..8496cf13f1 100644
--- a/meson.build
+++ b/meson.build
@@ -1303,6 +1303,26 @@ if not get_option('rbd').auto() or have_block
endif
endif
+vitastor = not_found
+if not get_option('vitastor').auto() or have_block
+ libvitastor_client = cc.find_library('vitastor_client', has_headers: ['vitastor_c.h'],
+ required: get_option('vitastor'))
+ if libvitastor_client.found()
+ if cc.links('''
+ #include <vitastor_c.h>
+ int main(void) {
+ vitastor_c_create_qemu(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
+ return 0;
+ }''', dependencies: libvitastor_client)
+ vitastor = declare_dependency(dependencies: libvitastor_client)
+ elif get_option('vitastor').enabled()
+ error('could not link libvitastor_client')
+ else
+ warning('could not link libvitastor_client, disabling')
+ endif
+ endif
+endif
+
glusterfs = not_found
glusterfs_ftruncate_has_stat = false
glusterfs_iocb_has_stat = false
@@ -2119,6 +2139,7 @@ if numa.found()
endif
config_host_data.set('CONFIG_OPENGL', opengl.found())
config_host_data.set('CONFIG_RBD', rbd.found())
+config_host_data.set('CONFIG_VITASTOR', vitastor.found())
config_host_data.set('CONFIG_RDMA', rdma.found())
config_host_data.set('CONFIG_SAFESTACK', get_option('safe_stack'))
config_host_data.set('CONFIG_SDL', sdl.found())
@@ -4286,6 +4307,7 @@ summary_info += {'fdt support': fdt_opt == 'disabled' ? false : fdt_opt}
summary_info += {'libcap-ng support': libcap_ng}
summary_info += {'bpf support': libbpf}
summary_info += {'rbd support': rbd}
+summary_info += {'vitastor support': vitastor}
summary_info += {'smartcard support': cacard}
summary_info += {'U2F support': u2f}
summary_info += {'libusb': libusb}
diff --git a/meson_options.txt b/meson_options.txt
index ae6d8f469d..e3d9f8404d 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -186,6 +186,8 @@ option('lzo', type : 'feature', value : 'auto',
description: 'lzo compression support')
option('rbd', type : 'feature', value : 'auto',
description: 'Ceph block device driver')
+option('vitastor', type : 'feature', value : 'auto',
+ description: 'Vitastor block device driver')
option('opengl', type : 'feature', value : 'auto',
description: 'OpenGL support')
option('rdma', type : 'feature', value : 'auto',
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 2b1d493d6e..90673fdbdc 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -3146,7 +3146,7 @@
'parallels', 'preallocate', 'qcow', 'qcow2', 'qed', 'quorum',
'raw', 'rbd',
{ 'name': 'replication', 'if': 'CONFIG_REPLICATION' },
- 'ssh', 'throttle', 'vdi', 'vhdx',
+ 'ssh', 'throttle', 'vdi', 'vhdx', 'vitastor',
{ 'name': 'virtio-blk-vfio-pci', 'if': 'CONFIG_BLKIO' },
{ 'name': 'virtio-blk-vhost-user', 'if': 'CONFIG_BLKIO' },
{ 'name': 'virtio-blk-vhost-vdpa', 'if': 'CONFIG_BLKIO' },
@@ -4196,6 +4196,28 @@
'*key-secret': 'str',
'*server': ['InetSocketAddressBase'] } }
+##
+# @BlockdevOptionsVitastor:
+#
+# Driver specific block device options for vitastor
+#
+# @image: Image name
+# @inode: Inode number
+# @pool: Pool ID
+# @size: Desired image size in bytes
+# @config-path: Path to Vitastor configuration
+# @etcd-host: etcd connection address(es)
+# @etcd-prefix: etcd key/value prefix
+##
+{ 'struct': 'BlockdevOptionsVitastor',
+ 'data': { '*inode': 'uint64',
+ '*pool': 'uint64',
+ '*size': 'uint64',
+ '*image': 'str',
+ '*config-path': 'str',
+ '*etcd-host': 'str',
+ '*etcd-prefix': 'str' } }
+
##
# @ReplicationMode:
#
@@ -4654,6 +4676,7 @@
'throttle': 'BlockdevOptionsThrottle',
'vdi': 'BlockdevOptionsGenericFormat',
'vhdx': 'BlockdevOptionsGenericFormat',
+ 'vitastor': 'BlockdevOptionsVitastor',
'virtio-blk-vfio-pci':
{ 'type': 'BlockdevOptionsVirtioBlkVfioPci',
'if': 'CONFIG_BLKIO' },
@@ -5089,6 +5112,17 @@
'*cluster-size' : 'size',
'*encrypt' : 'RbdEncryptionCreateOptions' } }
+##
+# @BlockdevCreateOptionsVitastor:
+#
+# Driver specific image creation options for Vitastor.
+#
+# @size: Size of the virtual disk in bytes
+##
+{ 'struct': 'BlockdevCreateOptionsVitastor',
+ 'data': { 'location': 'BlockdevOptionsVitastor',
+ 'size': 'size' } }
+
##
# @BlockdevVmdkSubformat:
#
@@ -5311,6 +5345,7 @@
'ssh': 'BlockdevCreateOptionsSsh',
'vdi': 'BlockdevCreateOptionsVdi',
'vhdx': 'BlockdevCreateOptionsVhdx',
+ 'vitastor': 'BlockdevCreateOptionsVitastor',
'vmdk': 'BlockdevCreateOptionsVmdk',
'vpc': 'BlockdevCreateOptionsVpc'
} }
diff --git a/scripts/ci/org.centos/stream/8/x86_64/configure b/scripts/ci/org.centos/stream/8/x86_64/configure
index d02b09a4b9..f0b5fbfef3 100755
--- a/scripts/ci/org.centos/stream/8/x86_64/configure
+++ b/scripts/ci/org.centos/stream/8/x86_64/configure
@@ -30,7 +30,7 @@
--with-suffix="qemu-kvm" \
--firmwarepath=/usr/share/qemu-firmware \
--target-list="x86_64-softmmu" \
---block-drv-rw-whitelist="qcow2,raw,file,host_device,nbd,iscsi,rbd,blkdebug,luks,null-co,nvme,copy-on-read,throttle,gluster" \
+--block-drv-rw-whitelist="qcow2,raw,file,host_device,nbd,iscsi,rbd,vitastor,blkdebug,luks,null-co,nvme,copy-on-read,throttle,gluster" \
--audio-drv-list="" \
--block-drv-ro-whitelist="vmdk,vhdx,vpc,https,ssh" \
--with-coroutine=ucontext \
@@ -176,6 +176,7 @@
--enable-opengl \
--enable-pie \
--enable-rbd \
+--enable-vitastor \
--enable-rdma \
--enable-seccomp \
--enable-snappy \
diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index d7020af175..94958eb6fa 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -153,6 +153,7 @@ meson_options_help() {
printf "%s\n" ' qed qed image format support'
printf "%s\n" ' qga-vss build QGA VSS support (broken with MinGW)'
printf "%s\n" ' rbd Ceph block device driver'
+ printf "%s\n" ' vitastor Vitastor block device driver'
printf "%s\n" ' rdma Enable RDMA-based migration'
printf "%s\n" ' replication replication support'
printf "%s\n" ' sdl SDL user interface'
@@ -416,6 +417,8 @@ _meson_option_parse() {
--disable-qom-cast-debug) printf "%s" -Dqom_cast_debug=false ;;
--enable-rbd) printf "%s" -Drbd=enabled ;;
--disable-rbd) printf "%s" -Drbd=disabled ;;
+ --enable-vitastor) printf "%s" -Dvitastor=enabled ;;
+ --disable-vitastor) printf "%s" -Dvitastor=disabled ;;
--enable-rdma) printf "%s" -Drdma=enabled ;;
--disable-rdma) printf "%s" -Drdma=disabled ;;
--enable-replication) printf "%s" -Dreplication=enabled ;;
@@ -24,4 +24,4 @@ rm fio
mv fio-copy fio
FIO=`rpm -qi fio | perl -e 'while(<>) { /^Epoch[\s:]+(\S+)/ && print "$1:"; /^Version[\s:]+(\S+)/ && print $1; /^Release[\s:]+(\S+)/ && print "-$1"; }'`
perl -i -pe 's/(Requires:\s*fio)([^\n]+)?/$1 = '$FIO'/' $VITASTOR/rpm/vitastor-el$EL.spec
-tar --transform 's#^#vitastor-1.3.1/#' --exclude 'rpm/*.rpm' -czf $VITASTOR/../vitastor-1.3.1$(rpm --eval '%dist').tar.gz *
+tar --transform 's#^#vitastor-1.2.0/#' --exclude 'rpm/*.rpm' -czf $VITASTOR/../vitastor-1.2.0$(rpm --eval '%dist').tar.gz *
@@ -15,7 +15,6 @@ RUN yumdownloader --disablerepo=centos-sclo-rh --source fio
RUN rpm --nomd5 -i fio*.src.rpm
RUN rm -f /etc/yum.repos.d/CentOS-Media.repo
RUN cd ~/rpmbuild/SPECS && yum-builddep -y fio.spec
-RUN yum -y install cmake3
ADD https://vitastor.io/rpms/liburing-el7/liburing-0.7-2.el7.src.rpm /root
@@ -36,7 +35,7 @@ ADD . /root/vitastor
RUN set -e; \
    cd /root/vitastor/rpm; \
    sh build-tarball.sh; \
-   cp /root/vitastor-1.3.1.el7.tar.gz ~/rpmbuild/SOURCES; \
+   cp /root/vitastor-1.2.0.el7.tar.gz ~/rpmbuild/SOURCES; \
    cp vitastor-el7.spec ~/rpmbuild/SPECS/vitastor.spec; \
    cd ~/rpmbuild/SPECS/; \
    rpmbuild -ba vitastor.spec; \
@@ -1,11 +1,11 @@
Name: vitastor
-Version: 1.3.1
+Version: 1.2.0
Release: 1%{?dist}
Summary: Vitastor, a fast software-defined clustered block storage
License: Vitastor Network Public License 1.1
URL: https://vitastor.io/
-Source0: vitastor-1.3.1.el7.tar.gz
+Source0: vitastor-1.2.0.el7.tar.gz
BuildRequires: liburing-devel >= 0.6
BuildRequires: gperftools-devel
@@ -16,7 +16,7 @@ BuildRequires: jerasure-devel
BuildRequires: libisa-l-devel
BuildRequires: gf-complete-devel
BuildRequires: libibverbs-devel
-BuildRequires: cmake3
+BuildRequires: cmake
Requires: vitastor-osd = %{version}-%{release}
Requires: vitastor-mon = %{version}-%{release}
Requires: vitastor-client = %{version}-%{release}
@@ -94,7 +94,7 @@ Vitastor fio drivers for benchmarking.
%build
. /opt/rh/devtoolset-9/enable
-%cmake3 .
+%cmake .
%make_build
@@ -35,7 +35,7 @@ ADD . /root/vitastor
RUN set -e; \
    cd /root/vitastor/rpm; \
    sh build-tarball.sh; \
-   cp /root/vitastor-1.3.1.el8.tar.gz ~/rpmbuild/SOURCES; \
+   cp /root/vitastor-1.2.0.el8.tar.gz ~/rpmbuild/SOURCES; \
    cp vitastor-el8.spec ~/rpmbuild/SPECS/vitastor.spec; \
    cd ~/rpmbuild/SPECS/; \
    rpmbuild -ba vitastor.spec; \
@@ -1,11 +1,11 @@
Name: vitastor
-Version: 1.3.1
+Version: 1.2.0
Release: 1%{?dist}
Summary: Vitastor, a fast software-defined clustered block storage
License: Vitastor Network Public License 1.1
URL: https://vitastor.io/
-Source0: vitastor-1.3.1.el8.tar.gz
+Source0: vitastor-1.2.0.el8.tar.gz
BuildRequires: liburing-devel >= 0.6
BuildRequires: gperftools-devel
@@ -18,7 +18,7 @@ ADD . /root/vitastor
RUN set -e; \
    cd /root/vitastor/rpm; \
    sh build-tarball.sh; \
-   cp /root/vitastor-1.3.1.el9.tar.gz ~/rpmbuild/SOURCES; \
+   cp /root/vitastor-1.2.0.el9.tar.gz ~/rpmbuild/SOURCES; \
    cp vitastor-el9.spec ~/rpmbuild/SPECS/vitastor.spec; \
    cd ~/rpmbuild/SPECS/; \
    rpmbuild -ba vitastor.spec; \
@@ -1,11 +1,11 @@
Name: vitastor
-Version: 1.3.1
+Version: 1.2.0
Release: 1%{?dist}
Summary: Vitastor, a fast software-defined clustered block storage
License: Vitastor Network Public License 1.1
URL: https://vitastor.io/
-Source0: vitastor-1.3.1.el9.tar.gz
+Source0: vitastor-1.2.0.el9.tar.gz
BuildRequires: liburing-devel >= 0.6
BuildRequires: gperftools-devel
@@ -16,7 +16,7 @@ if("${CMAKE_INSTALL_PREFIX}" MATCHES "^/usr/local/?$")
    set(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_LIBDIR}")
endif()
-add_definitions(-DVERSION="1.3.1")
+add_definitions(-DVERSION="1.2.0")
add_definitions(-Wall -Wno-sign-compare -Wno-comment -Wno-parentheses -Wno-pointer-arith -fdiagnostics-color=always -fno-omit-frame-pointer -I ${CMAKE_SOURCE_DIR}/src)
add_link_options(-fno-omit-frame-pointer)
if (${WITH_ASAN})
@@ -8,7 +8,6 @@
#include <stdio.h>
#include <stdexcept>
-#include <set>
#include "addr_util.h"
@@ -136,7 +135,7 @@ std::vector<std::string> getifaddr_list(std::vector<std::string> mask_cfg, bool
            throw std::runtime_error((include_v6 ? "Invalid IPv4 address mask: " : "Invalid IP address mask: ") + mask);
        }
    }
-   std::set<std::string> addresses;
+   std::vector<std::string> addresses;
    ifaddrs *list, *ifa;
    if (getifaddrs(&list) == -1)
    {
@@ -150,8 +149,7 @@ std::vector<std::string> getifaddr_list(std::vector<std::string> mask_cfg, bool
        }
        int family = ifa->ifa_addr->sa_family;
        if ((family == AF_INET || family == AF_INET6 && include_v6) &&
-           // Do not skip loopback addresses if the address filter is specified
-           (ifa->ifa_flags & (IFF_UP | IFF_RUNNING | (masks.size() ? 0 : IFF_LOOPBACK))) == (IFF_UP | IFF_RUNNING))
+           (ifa->ifa_flags & (IFF_UP | IFF_RUNNING | IFF_LOOPBACK)) == (IFF_UP | IFF_RUNNING))
        {
            void *addr_ptr;
            if (family == AF_INET)
@@ -184,11 +182,11 @@ std::vector<std::string> getifaddr_list(std::vector<std::string> mask_cfg, bool
            {
                throw std::runtime_error(std::string("inet_ntop: ") + strerror(errno));
            }
-           addresses.insert(std::string(addr));
+           addresses.push_back(std::string(addr));
        }
    }
    freeifaddrs(list);
-   return std::vector<std::string>(addresses.begin(), addresses.end());
+   return addresses;
}
int create_and_bind_socket(std::string bind_address, int bind_port, int listen_backlog, int *listening_port)
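One side of this hunk collects addresses into a std::set, so an address reported by several interfaces is returned once and in sorted order; the other side keeps a plain vector with duplicates. The behavioural difference in miniature (JS standing in for the C++; JS Set keeps insertion order, hence the explicit sort):

    // Set-based collection both deduplicates and, once sorted, orders results.
    const raw = ['10.0.0.1', '10.0.0.2', '10.0.0.1'];
    const unique = [...new Set(raw)].sort(); // ['10.0.0.1', '10.0.0.2']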
@@ -277,7 +277,6 @@ class blockstore_impl_t
    int unsynced_big_write_count = 0, unstable_unsynced = 0;
    int unsynced_queued_ops = 0;
    allocator *data_alloc = NULL;
-   uint64_t used_blocks = 0;
    uint8_t *zero_object;
    void *metadata_buffer = NULL;
@@ -431,7 +430,7 @@ public:
    inline uint32_t get_block_size() { return dsk.data_block_size; }
    inline uint64_t get_block_count() { return dsk.block_count; }
-   inline uint64_t get_free_block_count() { return dsk.block_count - used_blocks; }
+   inline uint64_t get_free_block_count() { return data_alloc->get_free_count(); }
    inline uint32_t get_bitmap_granularity() { return dsk.disk_alignment; }
    inline uint64_t get_journal_size() { return dsk.journal_len; }
};
@@ -376,7 +376,6 @@ bool blockstore_init_meta::handle_meta_block(uint8_t *buf, uint64_t entries_per_
    else
    {
        bs->inode_space_stats[entry->oid.inode] += bs->dsk.data_block_size;
-       bs->used_blocks++;
    }
    entries_loaded++;
#ifdef BLOCKSTORE_DEBUG
@@ -733,9 +732,8 @@ int blockstore_init_journal::handle_journal_part(void *buf, uint64_t done_pos, u
resume:
    while (pos < bs->journal.block_size)
    {
-       auto buf_pos = proc_pos - done_pos + pos;
-       journal_entry *je = (journal_entry*)((uint8_t*)buf + buf_pos);
-       if (je->magic != JOURNAL_MAGIC || buf_pos+je->size > len || je_crc32(je) != je->crc32 ||
+       journal_entry *je = (journal_entry*)((uint8_t*)buf + proc_pos - done_pos + pos);
+       if (je->magic != JOURNAL_MAGIC || je_crc32(je) != je->crc32 ||
            je->type < JE_MIN || je->type > JE_MAX || started && je->crc32_prev != crc32_last)
        {
            if (pos == 0)
@@ -1182,7 +1180,6 @@ void blockstore_init_journal::erase_dirty_object(blockstore_dirty_db_t::iterator
        sp -= bs->dsk.data_block_size;
    else
        bs->inode_space_stats.erase(oid.inode);
-   bs->used_blocks--;
    }
    bs->erase_dirty(dirty_it, dirty_end, clean_loc);
    // Remove it from the flusher's queue, too
@@ -144,10 +144,8 @@ journal_entry* prefill_single_journal_entry(journal_t & journal, uint16_t type,
    journal.sector_info[journal.cur_sector].written = false;
    journal.sector_info[journal.cur_sector].offset = journal.next_free;
    journal.in_sector_pos = 0;
-   auto next_next_free = (journal.next_free+journal.block_size) < journal.len ? journal.next_free + journal.block_size : journal.block_size;
-   // double check that next_free doesn't cross used_start from the left
-   assert(journal.next_free >= journal.used_start || next_next_free < journal.used_start);
-   journal.next_free = next_next_free;
+   journal.next_free = (journal.next_free+journal.block_size) < journal.len ? journal.next_free + journal.block_size : journal.block_size;
+   assert(journal.next_free != journal.used_start);
    memset(journal.inmemory
        ? (uint8_t*)journal.buffer + journal.sector_info[journal.cur_sector].offset
        : (uint8_t*)journal.sector_buf + journal.block_size*journal.cur_sector, 0, journal.block_size);
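Both sides advance next_free through a circular journal, wrapping to block_size (the first block is presumably reserved for the journal header) when the end is reached; they differ in how they assert that the write pointer never overtakes used_start, the tail of still-live entries. The stricter variant also catches the pointer crossing the tail "from the left" after a wrap. A sketch of that invariant, with JS standing in for the C++:

    // Ring-journal advance sketch: wrap past the end and verify we never
    // cross used_start (the tail of entries that are not yet freed).
    function advance(journal)
    {
        const next = journal.next_free + journal.block_size < journal.len
            ? journal.next_free + journal.block_size
            : journal.block_size; // wrap; block 0 assumed reserved
        // either we are ahead of the tail, or, having wrapped,
        // we must still be strictly behind it
        if (!(journal.next_free >= journal.used_start || next < journal.used_start))
            throw new Error('journal overflow');
        journal.next_free = next;
    }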
@@ -445,7 +445,6 @@ void blockstore_impl_t::mark_stable(const obj_ver_id & v, bool forget_dirty)
    if (!exists)
    {
        inode_space_stats[dirty_it->first.oid.inode] += dsk.data_block_size;
-       used_blocks++;
    }
    big_to_flush++;
}
@@ -456,7 +455,6 @@ void blockstore_impl_t::mark_stable(const obj_ver_id & v, bool forget_dirty)
        sp -= dsk.data_block_size;
    else
        inode_space_stats.erase(dirty_it->first.oid.inode);
-   used_blocks--;
    big_to_flush++;
}
}
@@ -386,7 +386,7 @@ int blockstore_impl_t::dequeue_write(blockstore_op_t *op)
        sqe, dsk.data_fd, PRIV(op)->iov_zerofill, vcnt, dsk.data_offset + (loc << dsk.block_order) + op->offset - stripe_offset
    );
    PRIV(op)->pending_ops = 1;
-   if (!(dirty_it->second.state & BS_ST_INSTANT))
+   if (immediate_commit != IMMEDIATE_ALL && !(dirty_it->second.state & BS_ST_INSTANT))
    {
        unstable_unsynced++;
    }
@@ -412,7 +412,7 @@ int blockstore_impl_t::dequeue_write(blockstore_op_t *op)
            sizeof(journal_entry_big_write) + dsk.clean_dyn_size, 0)
        || !space_check.check_available(op, 1,
            sizeof(journal_entry_small_write) + dyn_size,
-           op->len + (unstable_writes.size()+unstable_unsynced)*journal.block_size))
+           (unstable_writes.size()+unstable_unsynced)*journal.block_size))
    {
        return 0;
    }
@@ -462,8 +462,6 @@ int blockstore_impl_t::dequeue_write(blockstore_op_t *op)
            exit(1);
        }
    }
-   // double check that next_free doesn't cross used_start from the left
-   assert(journal.next_free >= journal.used_start || next_next_free < journal.used_start);
    journal.next_free = next_next_free;
    je->oid = op->oid;
    je->version = op->version;
@@ -501,13 +499,13 @@ int blockstore_impl_t::dequeue_write(blockstore_op_t *op)
    }
    dirty_it->second.location = journal.next_free;
    dirty_it->second.state = (dirty_it->second.state & ~BS_ST_WORKFLOW_MASK) | BS_ST_SUBMITTED;
-   next_next_free = journal.next_free + op->len;
-   if (next_next_free >= journal.len)
-       next_next_free = dsk.journal_block_size;
-   // double check that next_free doesn't cross used_start from the left
-   assert(journal.next_free >= journal.used_start || next_next_free < journal.used_start);
-   journal.next_free = next_next_free;
-   if (!(dirty_it->second.state & BS_ST_INSTANT))
+   journal.next_free += op->len;
+   if (journal.next_free >= journal.len)
+   {
+       journal.next_free = dsk.journal_block_size;
+       assert(journal.next_free != journal.used_start);
+   }
+   if (immediate_commit == IMMEDIATE_NONE && !(dirty_it->second.state & BS_ST_INSTANT))
    {
        unstable_unsynced++;
    }
@@ -598,12 +596,12 @@ resume_4:
    {
        auto & unstab = unstable_writes[op->oid];
        unstab = unstab < op->version ? op->version : unstab;
-       if (!is_instant)
+   }
+   else if (!is_instant)
    {
        unstable_unsynced--;
        assert(unstable_unsynced >= 0);
    }
-   }
    dirty_it->second.state = (dirty_it->second.state & ~BS_ST_WORKFLOW_MASK)
        | (imm ? BS_ST_SYNCED : BS_ST_WRITTEN);
    if (imm && is_instant)
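The flow control above refuses to start a write unless the journal has room for the entry plus one block for every write that may still need to be made stable (acknowledged unstable_writes plus in-flight unstable_unsynced); the two sides differ only in whether op->len is also reserved. One plausible way to sketch that accounting, with JS standing in for the C++ and the free-space formula approximated:

    // Approximate journal space check: entry + data + one block per
    // potentially-unstable write; first block assumed reserved.
    function has_journal_space(j, entry_size, data_len, unstable_count)
    {
        const needed = entry_size + data_len + unstable_count*j.block_size;
        const free = j.used_start > j.next_free
            ? j.used_start - j.next_free
            : j.len - j.next_free + (j.used_start - j.block_size);
        return free >= needed;
    }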
@@ -116,8 +116,7 @@ static const char* help_text =
    "Use vitastor-cli --help <command> for command details or vitastor-cli --help --all for all details.\n"
    "\n"
    "GLOBAL OPTIONS:\n"
-   "  --config_file FILE  Path to Vitastor configuration file\n"
-   "  --etcd_address URL  Etcd connection address\n"
+   "  --etcd_address <etcd_address>\n"
    "  --iodepth N         Send N operations in parallel to each OSD when possible (default 32)\n"
    "  --parallel_osds M   Work with M osds in parallel when possible (default 4)\n"
    "  --progress 1|0      Report progress (default 1)\n"
@@ -359,8 +359,6 @@ void cluster_client_t::on_load_config_hook(json11::Json::object & etcd_global_co
    {
        up_wait_retry_interval = 50;
    }
-   // log_level
-   log_level = config["log_level"].uint64_value();
    msgr.parse_config(config);
    st_cli.parse_config(config);
    st_cli.load_pgs();
@@ -705,8 +703,6 @@ resume_1:
        }
        goto resume_2;
    }
-   // Protect from try_send completing the operation immediately
-   op->inflight_count++;
    for (int i = 0; i < op->parts.size(); i++)
    {
        if (!(op->parts[i].flags & PART_SENT))
@@ -730,10 +726,8 @@ resume_1:
            }
        }
    }
-   op->inflight_count--;
    if (op->state == 1)
    {
-       // Some suboperations have to be resent
        return 0;
    }
resume_2:
@@ -1153,14 +1147,11 @@ void cluster_client_t::handle_op_part(cluster_op_part_t *part)
        if (op->retval != -EINTR && op->retval != -EIO && op->retval != -ENOSPC)
        {
            stop_fd = part->op.peer_fd;
-           if (op->retval != -EPIPE || log_level > 0)
-           {
                fprintf(
                    stderr, "%s operation failed on OSD %lu: retval=%ld (expected %d), dropping connection\n",
                    osd_op_names[part->op.req.hdr.opcode], part->osd_num, part->op.reply.hdr.retval, expected
                );
            }
-       }
        else if (log_level > 0)
        {
            fprintf(
@@ -91,7 +91,7 @@ class cluster_client_t
    uint64_t client_max_buffered_ops = 0;
    uint64_t client_max_writeback_iodepth = 0;
-   int log_level = 0;
+   int log_level;
    int up_wait_retry_interval = 500; // ms
    int retry_timeout_id = 0;
@@ -127,10 +127,6 @@ static const char *help_text =
    "vitastor-disk write-sb <device>\n"
    "  Read JSON from STDIN and write it into Vitastor OSD superblock on <device>.\n"
    "\n"
-   "vitastor-disk update-sb <device> [--force] [--<parameter> <value>] [...]\n"
-   "  Read Vitastor OSD superblock from <device>, update parameters in it and write it back.\n"
-   "  --force allows to ignore validation errors.\n"
-   "\n"
    "vitastor-disk udev <device>\n"
    "  Try to read Vitastor OSD superblock from <device> and print variables for udev.\n"
    "\n"
@@ -367,15 +363,6 @@ int main(int argc, char *argv[])
    }
    return self.write_sb(cmd[1]);
}
-else if (!strcmp(cmd[0], "update-sb"))
-{
-   if (cmd.size() != 2)
-   {
-       fprintf(stderr, "Exactly 1 device path argument is required\n");
-       return 1;
-   }
-   return self.update_sb(cmd[1]);
-}
else if (!strcmp(cmd[0], "start") || !strcmp(cmd[0], "stop") ||
    !strcmp(cmd[0], "restart") || !strcmp(cmd[0], "enable") || !strcmp(cmd[0], "disable"))
{
@@ -109,7 +109,6 @@ struct disk_tool_t
    int udev_import(std::string device);
    int read_sb(std::string device);
    int write_sb(std::string device);
-   int update_sb(std::string device);
    int exec_osd(std::string device);
    int systemd_start_stop_osds(const std::vector<std::string> & cmd, const std::vector<std::string> & devices);
    int pre_exec_osd(std::string device);
@@ -440,27 +440,18 @@ std::vector<std::string> disk_tool_t::get_new_data_parts(vitastor_dev_info_t & d
            {
                // Use this partition
                use_parts.push_back(part["uuid"].string_value());
-               osds_exist++;
            }
            else
            {
-               std::string part_path = "/dev/disk/by-partuuid/"+strtolower(part["uuid"].string_value());
-               bool is_meta = sb["params"]["meta_device"].string_value() == part_path;
-               bool is_journal = sb["params"]["journal_device"].string_value() == part_path;
-               bool is_data = sb["params"]["data_device"].string_value() == part_path;
                fprintf(
-                   stderr, "%s is already initialized for OSD %lu%s, skipping\n",
-                   part["node"].string_value().c_str(), sb["params"]["osd_num"].uint64_value(),
-                   (is_data ? " data" : (is_meta ? " meta" : (is_journal ? " journal" : "")))
+                   stderr, "%s is already initialized for OSD %lu, skipping\n",
+                   part["node"].string_value().c_str(), sb["params"]["osd_num"].uint64_value()
                );
-               if (is_data || sb["params"]["data_device"].string_value().substr(0, 22) != "/dev/disk/by-partuuid/")
-               {
                    osds_size += part["size"].uint64_value()*dev.pt["sectorsize"].uint64_value();
-               }
                osds_exist++;
            }
        }
    }
    }
    // Still create OSD(s) if a disk has no more than (max_other_percent) other data
    if (osds_exist >= osd_per_disk || (dev.free+osds_size) < dev.size*(100-max_other_percent)/100)
        fprintf(stderr, "%s is already partitioned, skipping\n", dev.path.c_str());
@@ -86,24 +86,6 @@ int disk_tool_t::write_sb(std::string device)
    return !write_osd_superblock(device, params);
}
-int disk_tool_t::update_sb(std::string device)
-{
-   json11::Json sb = read_osd_superblock(device, true, options.find("force") != options.end());
-   if (sb.is_null())
-   {
-       return 1;
-   }
-   auto sb_obj = sb["params"].object_items();
-   for (auto & kv: options)
-   {
-       if (kv.first != "force")
-       {
-           sb_obj[kv.first] = kv.second;
-       }
-   }
-   return !write_osd_superblock(device, sb_obj);
-}
uint32_t disk_tool_t::write_osd_superblock(std::string device, json11::Json params)
{
    std::string json_data = params.dump();
@@ -135,8 +135,8 @@ void etcd_state_client_t::etcd_call(std::string api, json11::Json payload, int t
    {
        if (this->log_level > 0)
        {
-           fprintf(
-               stderr, "Warning: etcd request failed: %s, retrying %d more times\n",
+           printf(
+               "Warning: etcd request failed: %s, retrying %d more times\n",
                err.c_str(), retries
            );
        }
@@ -333,7 +333,7 @@ void etcd_state_client_t::start_etcd_watcher()
        etcd_watch_ws = NULL;
    }
    if (this->log_level > 1)
-       fprintf(stderr, "Trying to connect to etcd websocket at %s, watch from revision %lu\n", etcd_address.c_str(), etcd_watch_revision);
+       printf("Trying to connect to etcd websocket at %s\n", etcd_address.c_str());
    etcd_watch_ws = open_websocket(tfd, etcd_address, etcd_api_path+"/watch", etcd_slow_timeout,
        [this, cur_addr = selected_etcd_address](const http_response_t *msg)
        {
@@ -356,8 +356,8 @@ void etcd_state_client_t::start_etcd_watcher()
                    watch_id == ETCD_PG_HISTORY_WATCH_ID ||
                    watch_id == ETCD_OSD_STATE_WATCH_ID)
                    etcd_watches_initialised++;
-               if (etcd_watches_initialised == ETCD_TOTAL_WATCHES && this->log_level > 0)
-                   fprintf(stderr, "Successfully subscribed to etcd at %s, revision %lu\n", cur_addr.c_str(), etcd_watch_revision);
+               if (etcd_watches_initialised == 4 && this->log_level > 0)
+                   fprintf(stderr, "Successfully subscribed to etcd at %s\n", cur_addr.c_str());
            }
            if (data["result"]["canceled"].bool_value())
            {
@@ -393,13 +393,9 @@ void etcd_state_client_t::start_etcd_watcher()
                    exit(1);
                }
            }
-           if (etcd_watches_initialised == ETCD_TOTAL_WATCHES && !data["result"]["header"]["revision"].is_null())
+           if (etcd_watches_initialised == 4)
            {
-               // Protect against a revision being split into multiple messages and some
-               // of them being lost. Even though I'm not sure if etcd actually splits them
-               // Also sometimes etcd sends something without a header, like:
-               // {"error": {"grpc_code": 14, "http_code": 503, "http_status": "Service Unavailable", "message": "error reading from server: EOF"}}
-               etcd_watch_revision = data["result"]["header"]["revision"].uint64_value();
+               etcd_watch_revision = data["result"]["header"]["revision"].uint64_value()+1;
                addresses_to_try.clear();
            }
            // First gather all changes into a hash to remove multiple overwrites
@@ -511,7 +507,7 @@ void etcd_state_client_t::start_ws_keepalive()
    {
        ws_keepalive_timer = tfd->set_timer(etcd_ws_keepalive_interval*1000, true, [this](int)
        {
-           if (!etcd_watch_ws || etcd_watches_initialised < ETCD_TOTAL_WATCHES)
+           if (!etcd_watch_ws)
            {
                // Do nothing
            }
@@ -640,28 +636,18 @@ void etcd_state_client_t::load_pgs()
            on_load_pgs_hook(false);
            return;
        }
-       reset_pg_exists();
        if (!etcd_watch_revision)
        {
            etcd_watch_revision = data["header"]["revision"].uint64_value()+1;
-           if (this->log_level > 3)
-           {
-               fprintf(stderr, "Loaded revision %lu of PG configuration\n", etcd_watch_revision-1);
-           }
        }
        for (auto & res: data["responses"].array_items())
        {
            for (auto & kv_json: res["response_range"]["kvs"].array_items())
            {
                auto kv = parse_etcd_kv(kv_json);
-               if (this->log_level > 3)
-               {
-                   fprintf(stderr, "Loaded key: %s -> %s\n", kv.key.c_str(), kv.value.dump().c_str());
-               }
                parse_state(kv);
            }
        }
-       clean_nonexistent_pgs();
        on_load_pgs_hook(true);
        start_etcd_watcher();
    });
@@ -682,73 +668,6 @@ void etcd_state_client_t::load_pgs()
}
#endif
void etcd_state_client_t::reset_pg_exists()
{
for (auto & pool_item: pool_config)
{
for (auto & pg_item: pool_item.second.pg_config)
{
pg_item.second.state_exists = false;
pg_item.second.history_exists = false;
}
}
seen_peers.clear();
}
void etcd_state_client_t::clean_nonexistent_pgs()
{
for (auto & pool_item: pool_config)
{
for (auto pg_it = pool_item.second.pg_config.begin(); pg_it != pool_item.second.pg_config.end(); )
{
auto & pg_cfg = pg_it->second;
if (!pg_cfg.config_exists && !pg_cfg.state_exists && !pg_cfg.history_exists)
{
if (this->log_level > 3)
{
fprintf(stderr, "PG %u/%u disappeared after reload, forgetting it\n", pool_item.first, pg_it->first);
}
pool_item.second.pg_config.erase(pg_it++);
}
else
{
if (!pg_cfg.state_exists)
{
if (this->log_level > 3)
{
fprintf(stderr, "PG %u/%u primary OSD disappeared after reload, forgetting it\n", pool_item.first, pg_it->first);
}
parse_state((etcd_kv_t){
.key = etcd_prefix+"/pg/state/"+std::to_string(pool_item.first)+"/"+std::to_string(pg_it->first),
});
}
if (!pg_cfg.history_exists)
{
if (this->log_level > 3)
{
fprintf(stderr, "PG %u/%u history disappeared after reload, forgetting it\n", pool_item.first, pg_it->first);
}
parse_state((etcd_kv_t){
.key = etcd_prefix+"/pg/history/"+std::to_string(pool_item.first)+"/"+std::to_string(pg_it->first),
});
}
pg_it++;
}
}
}
for (auto & peer_item: peer_states)
{
if (seen_peers.find(peer_item.first) == seen_peers.end())
{
fprintf(stderr, "OSD %lu state disappeared after reload, forgetting it\n", peer_item.first);
parse_state((etcd_kv_t){
.key = etcd_prefix+"/osd/state/"+std::to_string(peer_item.first),
});
}
}
seen_peers.clear();
}
void etcd_state_client_t::parse_state(const etcd_kv_t & kv) void etcd_state_client_t::parse_state(const etcd_kv_t & kv)
{ {
const std::string & key = kv.key; const std::string & key = kv.key;
@ -903,7 +822,7 @@ void etcd_state_client_t::parse_state(const etcd_kv_t & kv)
{ {
for (auto & pg_item: pool_item.second.pg_config) for (auto & pg_item: pool_item.second.pg_config)
{ {
pg_item.second.config_exists = false; pg_item.second.exists = false;
} }
} }
for (auto & pool_item: value["items"].object_items()) for (auto & pool_item: value["items"].object_items())
@ -926,7 +845,7 @@ void etcd_state_client_t::parse_state(const etcd_kv_t & kv)
continue; continue;
} }
auto & parsed_cfg = this->pool_config[pool_id].pg_config[pg_num]; auto & parsed_cfg = this->pool_config[pool_id].pg_config[pg_num];
parsed_cfg.config_exists = true; parsed_cfg.exists = true;
parsed_cfg.pause = pg_item.second["pause"].bool_value(); parsed_cfg.pause = pg_item.second["pause"].bool_value();
parsed_cfg.primary = pg_item.second["primary"].uint64_value(); parsed_cfg.primary = pg_item.second["primary"].uint64_value();
parsed_cfg.target_set.clear(); parsed_cfg.target_set.clear();
@ -947,7 +866,7 @@ void etcd_state_client_t::parse_state(const etcd_kv_t & kv)
int n = 0; int n = 0;
for (auto pg_it = pool_item.second.pg_config.begin(); pg_it != pool_item.second.pg_config.end(); pg_it++) for (auto pg_it = pool_item.second.pg_config.begin(); pg_it != pool_item.second.pg_config.end(); pg_it++)
{ {
if (pg_it->second.config_exists && pg_it->first != ++n) if (pg_it->second.exists && pg_it->first != ++n)
{ {
fprintf( fprintf(
stderr, "Invalid pool %u PG configuration: PG numbers don't cover whole 1..%lu range\n", stderr, "Invalid pool %u PG configuration: PG numbers don't cover whole 1..%lu range\n",
@ -955,7 +874,7 @@ void etcd_state_client_t::parse_state(const etcd_kv_t & kv)
); );
for (pg_it = pool_item.second.pg_config.begin(); pg_it != pool_item.second.pg_config.end(); pg_it++) for (pg_it = pool_item.second.pg_config.begin(); pg_it != pool_item.second.pg_config.end(); pg_it++)
{ {
pg_it->second.config_exists = false; pg_it->second.exists = false;
} }
n = 0; n = 0;
break; break;
@ -980,7 +899,6 @@ void etcd_state_client_t::parse_state(const etcd_kv_t & kv)
auto & pg_cfg = this->pool_config[pool_id].pg_config[pg_num]; auto & pg_cfg = this->pool_config[pool_id].pg_config[pg_num];
pg_cfg.target_history.clear(); pg_cfg.target_history.clear();
pg_cfg.all_peers.clear(); pg_cfg.all_peers.clear();
pg_cfg.history_exists = !value.is_null();
// Refuse to start PG if any set of the <osd_sets> has no live OSDs // Refuse to start PG if any set of the <osd_sets> has no live OSDs
for (auto & hist_item: value["osd_sets"].array_items()) for (auto & hist_item: value["osd_sets"].array_items())
{ {
@ -1033,15 +951,11 @@ void etcd_state_client_t::parse_state(const etcd_kv_t & kv)
} }
else if (value.is_null()) else if (value.is_null())
{ {
auto & pg_cfg = this->pool_config[pool_id].pg_config[pg_num]; this->pool_config[pool_id].pg_config[pg_num].cur_primary = 0;
pg_cfg.state_exists = false; this->pool_config[pool_id].pg_config[pg_num].cur_state = 0;
pg_cfg.cur_primary = 0;
pg_cfg.cur_state = 0;
} }
else else
{ {
auto & pg_cfg = this->pool_config[pool_id].pg_config[pg_num];
pg_cfg.state_exists = true;
osd_num_t cur_primary = value["primary"].uint64_value(); osd_num_t cur_primary = value["primary"].uint64_value();
int state = 0; int state = 0;
for (auto & e: value["state"].array_items()) for (auto & e: value["state"].array_items())
@ -1069,8 +983,8 @@ void etcd_state_client_t::parse_state(const etcd_kv_t & kv)
fprintf(stderr, "Unexpected pool %u PG %u state in etcd: primary=%lu, state=%s\n", pool_id, pg_num, cur_primary, value["state"].dump().c_str()); fprintf(stderr, "Unexpected pool %u PG %u state in etcd: primary=%lu, state=%s\n", pool_id, pg_num, cur_primary, value["state"].dump().c_str());
return; return;
} }
pg_cfg.cur_primary = cur_primary; this->pool_config[pool_id].pg_config[pg_num].cur_primary = cur_primary;
pg_cfg.cur_state = state; this->pool_config[pool_id].pg_config[pg_num].cur_state = state;
} }
} }
else if (key.substr(0, etcd_prefix.length()+11) == etcd_prefix+"/osd/state/") else if (key.substr(0, etcd_prefix.length()+11) == etcd_prefix+"/osd/state/")
@ -1084,7 +998,6 @@ void etcd_state_client_t::parse_state(const etcd_kv_t & kv)
value["port"].int64_value() > 0 && value["port"].int64_value() < 65536) value["port"].int64_value() > 0 && value["port"].int64_value() < 65536)
{ {
this->peer_states[peer_osd] = value; this->peer_states[peer_osd] = value;
this->seen_peers.insert(peer_osd);
} }
else else
{ {
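
The three `*_exists` flags and `seen_peers` set removed above implement a mark-and-sweep reload: `reset_pg_exists()` clears the marks before re-reading etcd, and `clean_nonexistent_pgs()` afterwards forgets every PG or OSD whose keys were not seen again. A condensed sketch of the pattern over a simplified, hypothetical map of entries:

#include <map>

struct pg_entry_t
{
    bool config_exists = false, state_exists = false, history_exists = false;
};

// Mark phase: forget what we think exists before re-reading everything.
static void reset_exists(std::map<int, pg_entry_t> & pgs)
{
    for (auto & kv: pgs)
        kv.second = {};
}

// Sweep phase: drop entries whose keys did not reappear during the reload.
static void sweep_nonexistent(std::map<int, pg_entry_t> & pgs)
{
    for (auto it = pgs.begin(); it != pgs.end(); )
    {
        auto & e = it->second;
        if (!e.config_exists && !e.state_exists && !e.history_exists)
            pgs.erase(it++);
        else
            it++;
    }
}
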


@@ -3,8 +3,6 @@
 #pragma once
 
-#include <set>
-
 #include "json11/json11.hpp"
 #include "osd_id.h"
 #include "timerfd_manager.h"
@@ -13,7 +11,6 @@
 #define ETCD_PG_STATE_WATCH_ID 2
 #define ETCD_PG_HISTORY_WATCH_ID 3
 #define ETCD_OSD_STATE_WATCH_ID 4
-#define ETCD_TOTAL_WATCHES 4
 
 #define DEFAULT_BLOCK_SIZE 128*1024
 #define MIN_DATA_BLOCK_SIZE 4*1024
@@ -33,7 +30,7 @@ struct etcd_kv_t
 struct pg_config_t
 {
-    bool config_exists, history_exists, state_exists;
+    bool exists;
     osd_num_t primary;
     std::vector<osd_num_t> target_set;
     std::vector<std::vector<osd_num_t>> target_history;
@@ -64,21 +61,21 @@ struct pool_config_t
 struct inode_config_t
 {
-    uint64_t num = 0;
+    uint64_t num;
     std::string name;
-    uint64_t size = 0;
-    inode_t parent_id = 0;
-    bool readonly = false;
+    uint64_t size;
+    inode_t parent_id;
+    bool readonly;
     // Arbitrary metadata
     json11::Json meta;
     // Change revision of the metadata in etcd
-    uint64_t mod_revision = 0;
+    uint64_t mod_revision;
 };
 
 struct inode_watch_t
 {
     std::string name;
-    inode_config_t cfg = {};
+    inode_config_t cfg;
 };
 
 struct http_co_t;
@@ -116,7 +113,6 @@ public:
    uint64_t etcd_watch_revision = 0;
    std::map<pool_id_t, pool_config_t> pool_config;
    std::map<osd_num_t, json11::Json> peer_states;
-    std::set<osd_num_t> seen_peers;
    std::map<inode_t, inode_config_t> inode_config;
    std::map<std::string, inode_t> inode_by_name;
@@ -142,8 +138,6 @@ public:
    void start_ws_keepalive();
    void load_global_config();
    void load_pgs();
-    void reset_pg_exists();
-    void clean_nonexistent_pgs();
    void parse_state(const etcd_kv_t & kv);
    void parse_config(const json11::Json & config);
    void insert_inode_config(const inode_config_t & cfg);


@@ -149,7 +149,7 @@ public:
    std::map<osd_num_t, osd_wanted_peer_t> wanted_peers;
    std::map<uint64_t, int> osd_peer_fds;
    // op statistics
-    osd_op_stats_t stats, recovery_stats;
+    osd_op_stats_t stats;
 
    void init();
    void parse_config(const json11::Json & config);
@@ -175,7 +175,6 @@ public:
    bool connect_rdma(int peer_fd, std::string rdma_address, uint64_t client_max_msg);
 #endif
 
-    void inc_op_stats(osd_op_stats_t & stats, uint64_t opcode, timespec & tv_begin, timespec & tv_end, uint64_t len);
    void measure_exec(osd_op_t *cur_op);
 
 protected:


@@ -24,17 +24,3 @@ osd_op_t::~osd_op_t()
        free(buf);
    }
 }
-
-bool osd_op_t::is_recovery_related()
-{
-    return (req.hdr.opcode == OSD_OP_SEC_READ ||
-        req.hdr.opcode == OSD_OP_SEC_WRITE ||
-        req.hdr.opcode == OSD_OP_SEC_WRITE_STABLE) &&
-        (req.sec_rw.flags & OSD_OP_RECOVERY_RELATED) ||
-        req.hdr.opcode == OSD_OP_SEC_DELETE &&
-        (req.sec_del.flags & OSD_OP_RECOVERY_RELATED) ||
-        req.hdr.opcode == OSD_OP_SEC_STABILIZE &&
-        (req.sec_stab.flags & OSD_OP_RECOVERY_RELATED) ||
-        req.hdr.opcode == OSD_OP_SEC_SYNC &&
-        (req.sec_sync.flags & OSD_OP_RECOVERY_RELATED);
-}
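
The removed `is_recovery_related()` leans on `&&` binding tighter than `||`, so each opcode test pairs with the flags field of the matching request type. The same structure with the grouping made explicit, as a self-contained sketch over a hypothetical simplified request shape:

#include <stdint.h>

// Hypothetical, simplified request; only what the predicate needs.
enum { OP_SEC_READ = 1, OP_SEC_WRITE, OP_SEC_WRITE_STABLE, OP_SEC_DELETE, OP_SEC_STABILIZE, OP_SEC_SYNC };
static const uint32_t RECOVERY_RELATED = 1;

struct req_t { uint64_t opcode; uint32_t rw_flags, del_flags, stab_flags, sync_flags; };

// Same logic as the removed is_recovery_related(), with the implicit
// && / || precedence spelled out by parentheses:
static bool is_recovery_related(const req_t & req)
{
    return ((req.opcode == OP_SEC_READ || req.opcode == OP_SEC_WRITE || req.opcode == OP_SEC_WRITE_STABLE)
            && (req.rw_flags & RECOVERY_RELATED))
        || (req.opcode == OP_SEC_DELETE && (req.del_flags & RECOVERY_RELATED))
        || (req.opcode == OP_SEC_STABILIZE && (req.stab_flags & RECOVERY_RELATED))
        || (req.opcode == OP_SEC_SYNC && (req.sync_flags & RECOVERY_RELATED));
}
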


@@ -173,6 +173,4 @@ struct osd_op_t
    osd_op_buf_list_t iov;
 
    ~osd_op_t();
-
-    bool is_recovery_related();
 };


@@ -131,23 +131,6 @@ void osd_messenger_t::outbox_push(osd_op_t *cur_op)
    }
 }
 
-void osd_messenger_t::inc_op_stats(osd_op_stats_t & stats, uint64_t opcode, timespec & tv_begin, timespec & tv_end, uint64_t len)
-{
-    uint64_t usecs = (
-        (tv_end.tv_sec - tv_begin.tv_sec)*1000000 +
-        (tv_end.tv_nsec - tv_begin.tv_nsec)/1000
-    );
-    stats.op_stat_count[opcode]++;
-    if (!stats.op_stat_count[opcode])
-    {
-        stats.op_stat_count[opcode] = 1;
-        stats.op_stat_sum[opcode] = 0;
-        stats.op_stat_bytes[opcode] = 0;
-    }
-    stats.op_stat_sum[opcode] += usecs;
-    stats.op_stat_bytes[opcode] += len;
-}
-
 void osd_messenger_t::measure_exec(osd_op_t *cur_op)
 {
    // Measure execution latency
@@ -159,24 +142,29 @@ void osd_messenger_t::measure_exec(osd_op_t *cur_op)
    {
        clock_gettime(CLOCK_REALTIME, &cur_op->tv_end);
    }
-    uint64_t len = 0;
+    stats.op_stat_count[cur_op->req.hdr.opcode]++;
+    if (!stats.op_stat_count[cur_op->req.hdr.opcode])
+    {
+        stats.op_stat_count[cur_op->req.hdr.opcode]++;
+        stats.op_stat_sum[cur_op->req.hdr.opcode] = 0;
+        stats.op_stat_bytes[cur_op->req.hdr.opcode] = 0;
+    }
+    stats.op_stat_sum[cur_op->req.hdr.opcode] += (
+        (cur_op->tv_end.tv_sec - cur_op->tv_begin.tv_sec)*1000000 +
+        (cur_op->tv_end.tv_nsec - cur_op->tv_begin.tv_nsec)/1000
+    );
    if (cur_op->req.hdr.opcode == OSD_OP_READ ||
        cur_op->req.hdr.opcode == OSD_OP_WRITE ||
        cur_op->req.hdr.opcode == OSD_OP_SCRUB)
    {
        // req.rw.len is internally set to the full object size for scrubs
-        len = cur_op->req.rw.len;
+        stats.op_stat_bytes[cur_op->req.hdr.opcode] += cur_op->req.rw.len;
    }
    else if (cur_op->req.hdr.opcode == OSD_OP_SEC_READ ||
        cur_op->req.hdr.opcode == OSD_OP_SEC_WRITE ||
        cur_op->req.hdr.opcode == OSD_OP_SEC_WRITE_STABLE)
    {
-        len = cur_op->req.sec_rw.len;
+        stats.op_stat_bytes[cur_op->req.hdr.opcode] += cur_op->req.sec_rw.len;
    }
-    inc_op_stats(stats, cur_op->req.hdr.opcode, cur_op->tv_begin, cur_op->tv_end, len);
-    if (cur_op->is_recovery_related())
-    {
-        inc_op_stats(recovery_stats, cur_op->req.hdr.opcode, cur_op->tv_begin, cur_op->tv_end, len);
-    }
 }
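
The factored-out `inc_op_stats()` on the `-` side guards against 64-bit counter wraparound: when `count` overflows back to zero, all accumulators restart together so that average latency and bandwidth stay derivable as sum/count. A standalone sketch of that idea, with a hypothetical minimal stats struct:

#include <stdint.h>

struct op_stats_t
{
    uint64_t count = 0, sum_usec = 0, bytes = 0;
};

// Sketch: on uint64 wraparound (count becomes 0 again after ++), reset the
// sums too, otherwise "sum/count" would divide by a tiny count while the
// sums still hold ~2^64 operations worth of data.
static void inc_op_stats(op_stats_t & st, uint64_t usecs, uint64_t len)
{
    st.count++;
    if (!st.count)
    {
        st.count = 1;
        st.sum_usec = 0;
        st.bytes = 0;
    }
    st.sum_usec += usecs;
    st.bytes += len;
}
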


@@ -30,7 +30,7 @@ protected:
    std::string image_name;
    uint64_t inode = 0;
    uint64_t device_size = 0;
-    int nbd_timeout = 300;
+    int nbd_timeout = 30;
    int nbd_max_devices = 64;
    int nbd_max_part = 3;
    inode_watch_t *watch = NULL;
@@ -135,16 +135,14 @@ public:
            " %s unmap /dev/nbd0\n"
            " %s ls [--json]\n"
            "OPTIONS:\n"
-            " All usual Vitastor config options like --config_file <path_to_config> plus NBD-specific:\n"
-            " --nbd_timeout 300\n"
+            " All usual Vitastor config options like --etcd_address <etcd_address> plus NBD-specific:\n"
+            " --nbd_timeout 30\n"
            " Timeout for I/O operations in seconds after exceeding which the kernel stops\n"
            " the device. You can set it to 0 to disable the timeout, but beware that you\n"
            " won't be able to stop the device at all if vitastor-nbd process dies.\n"
            " --nbd_max_devices 64 --nbd_max_part 3\n"
            " Options for the \"nbd\" kernel module when modprobing it (nbds_max and max_part).\n"
            " note that maximum allowed (nbds_max)*(1+max_part) is 256.\n"
-            " Note that nbd_timeout, nbd_max_devices and nbd_max_part options may also be specified\n"
-            " in /etc/vitastor/vitastor.conf or in other configuration file specified with --config_file.\n"
            " --logfile /path/to/log/file.txt\n"
            " Wite log messages to the specified file instead of dropping them (in background mode)\n"
            " or printing them to the standard output (in foreground mode).\n"
@@ -206,18 +204,17 @@ public:
                exit(1);
            }
        }
-        auto file_config = osd_messenger_t::read_config(cfg);
-        if (file_config["nbd_max_devices"].is_number() || file_config["nbd_max_devices"].is_string())
+        if (cfg["nbd_max_devices"].is_number() || cfg["nbd_max_devices"].is_string())
        {
-            nbd_max_devices = file_config["nbd_max_devices"].uint64_value();
+            nbd_max_devices = cfg["nbd_max_devices"].uint64_value();
        }
-        if (file_config["nbd_max_part"].is_number() || file_config["nbd_max_part"].is_string())
+        if (cfg["nbd_max_part"].is_number() || cfg["nbd_max_part"].is_string())
        {
-            nbd_max_part = file_config["nbd_max_part"].uint64_value();
+            nbd_max_part = cfg["nbd_max_part"].uint64_value();
        }
-        if (file_config["nbd_timeout"].is_number() || file_config["nbd_timeout"].is_string())
+        if (cfg["nbd_timeout"].is_number() || cfg["nbd_timeout"].is_string())
        {
-            nbd_timeout = file_config["nbd_timeout"].uint64_value();
+            nbd_timeout = cfg["nbd_timeout"].uint64_value();
        }
        if (cfg["client_writeback_allowed"].is_null())
        {
@@ -275,7 +272,7 @@ public:
        int i = 0;
        while (true)
        {
-            int r = run_nbd(sockfd, i, device_size, NBD_FLAG_SEND_FLUSH, nbd_timeout, bg);
+            int r = run_nbd(sockfd, i, device_size, NBD_FLAG_SEND_FLUSH, 30, bg);
            if (r == 0)
            {
                printf("/dev/nbd%d\n", i);


@@ -56,7 +56,7 @@ json11::Json::object nfs_proxy_t::parse_args(int narg, const char *args[])
            "(c) Vitaliy Filippov, 2021-2022 (VNPL-1.1)\n"
            "\n"
            "USAGE:\n"
-            " %s [STANDARD OPTIONS] [OTHER OPTIONS]\n"
+            " %s [--etcd_address ADDR] [OTHER OPTIONS]\n"
            " --subdir <DIR> export images prefixed <DIR>/ (default empty - export all images)\n"
            " --portmap 0 do not listen on port 111 (portmap/rpcbind, requires root)\n"
            " --bind <IP> bind service to <IP> address (default 0.0.0.0)\n"


@@ -68,21 +68,14 @@ osd_t::osd_t(const json11::Json & config, ring_loop_t *ringloop)
        }
    }
 
-    if (print_stats_timer_id == -1)
-    {
-        print_stats_timer_id = this->tfd->set_timer(print_stats_interval*1000, true, [this](int timer_id)
-        {
-            print_stats();
-        });
-    }
-    if (slow_log_timer_id == -1)
-    {
-        slow_log_timer_id = this->tfd->set_timer(slow_log_interval*1000, true, [this](int timer_id)
-        {
-            print_slow();
-        });
-    }
-    apply_recovery_tune_interval();
+    print_stats_timer_id = this->tfd->set_timer(print_stats_interval*1000, true, [this](int timer_id)
+    {
+        print_stats();
+    });
+    slow_log_timer_id = this->tfd->set_timer(slow_log_interval*1000, true, [this](int timer_id)
+    {
+        print_slow();
+    });
 
    msgr.tfd = this->tfd;
    msgr.ringloop = this->ringloop;
@@ -104,11 +97,6 @@ osd_t::~osd_t()
        tfd->clear_timer(slow_log_timer_id);
        slow_log_timer_id = -1;
    }
-    if (rtune_timer_id >= 0)
-    {
-        tfd->clear_timer(rtune_timer_id);
-        rtune_timer_id = -1;
-    }
    if (print_stats_timer_id >= 0)
    {
        tfd->clear_timer(print_stats_timer_id);
@@ -208,30 +196,6 @@ void osd_t::parse_config(bool init)
        recovery_queue_depth = config["recovery_queue_depth"].uint64_value();
    if (recovery_queue_depth < 1 || recovery_queue_depth > MAX_RECOVERY_QUEUE)
        recovery_queue_depth = DEFAULT_RECOVERY_QUEUE;
-    recovery_sleep_us = config["recovery_sleep_us"].uint64_value();
-    recovery_tune_util_low = config["recovery_tune_util_low"].is_null()
-        ? 0.1 : config["recovery_tune_util_low"].number_value();
-    if (recovery_tune_util_low < 0.01)
-        recovery_tune_util_low = 0.01;
-    recovery_tune_util_high = config["recovery_tune_util_high"].is_null()
-        ? 1.0 : config["recovery_tune_util_high"].number_value();
-    if (recovery_tune_util_high < 0.01)
-        recovery_tune_util_high = 0.01;
-    recovery_tune_client_util_low = config["recovery_tune_client_util_low"].is_null()
-        ? 0 : config["recovery_tune_client_util_low"].number_value();
-    if (recovery_tune_client_util_low < 0.01)
-        recovery_tune_client_util_low = 0.01;
-    recovery_tune_client_util_high = config["recovery_tune_client_util_high"].is_null()
-        ? 0.5 : config["recovery_tune_client_util_high"].number_value();
-    if (recovery_tune_client_util_high < 0.01)
-        recovery_tune_client_util_high = 0.01;
-    auto old_recovery_tune_interval = recovery_tune_interval;
-    recovery_tune_interval = config["recovery_tune_interval"].is_null()
-        ? 1 : config["recovery_tune_interval"].uint64_value();
-    recovery_tune_agg_interval = config["recovery_tune_agg_interval"].is_null()
-        ? 10 : config["recovery_tune_agg_interval"].uint64_value();
-    recovery_tune_sleep_min_us = config["recovery_tune_sleep_min_us"].is_null()
-        ? 10 : config["recovery_tune_sleep_min_us"].uint64_value();
    recovery_pg_switch = config["recovery_pg_switch"].uint64_value();
    if (recovery_pg_switch < 1)
        recovery_pg_switch = DEFAULT_RECOVERY_PG_SWITCH;
@@ -310,10 +274,6 @@ void osd_t::parse_config(bool init)
            print_slow();
        });
    }
-    if (old_recovery_tune_interval != recovery_tune_interval)
-    {
-        apply_recovery_tune_interval();
-    }
 }
 
 void osd_t::bind_socket()
@@ -461,6 +421,14 @@ void osd_t::exec_op(osd_op_t *cur_op)
    }
 }
 
+void osd_t::reset_stats()
+{
+    msgr.stats = {};
+    prev_stats = {};
+    memset(recovery_stat_count, 0, sizeof(recovery_stat_count));
+    memset(recovery_stat_bytes, 0, sizeof(recovery_stat_bytes));
+}
+
 void osd_t::print_stats()
 {
    for (int i = OSD_OP_MIN; i <= OSD_OP_MAX; i++)
@@ -498,20 +466,19 @@ void osd_t::print_stats()
    }
    for (int i = 0; i < 2; i++)
    {
-        if (recovery_stat[i].count > recovery_print_prev[i].count)
+        if (recovery_stat_count[0][i] != recovery_stat_count[1][i])
        {
-            uint64_t bw = (recovery_stat[i].bytes - recovery_print_prev[i].bytes) / print_stats_interval;
+            uint64_t bw = (recovery_stat_bytes[0][i] - recovery_stat_bytes[1][i]) / print_stats_interval;
            printf(
-                "[OSD %lu] %s recovery: %.1f op/s, B/W: %.2f %s, avg latency %ld us, delay %ld us\n", osd_num, recovery_stat_names[i],
-                (recovery_stat[i].count - recovery_print_prev[i].count) * 1.0 / print_stats_interval,
+                "[OSD %lu] %s recovery: %.1f op/s, B/W: %.2f %s\n", osd_num, recovery_stat_names[i],
+                (recovery_stat_count[0][i] - recovery_stat_count[1][i]) * 1.0 / print_stats_interval,
                (bw > 1024*1024*1024 ? bw/1024.0/1024/1024 : (bw > 1024*1024 ? bw/1024.0/1024 : bw/1024.0)),
-                (bw > 1024*1024*1024 ? "GB/s" : (bw > 1024*1024 ? "MB/s" : "KB/s")),
-                (recovery_stat[i].usec - recovery_print_prev[i].usec) / (recovery_stat[i].count - recovery_print_prev[i].count),
-                recovery_target_sleep_us
+                (bw > 1024*1024*1024 ? "GB/s" : (bw > 1024*1024 ? "MB/s" : "KB/s"))
            );
+            recovery_stat_count[1][i] = recovery_stat_count[0][i];
+            recovery_stat_bytes[1][i] = recovery_stat_bytes[0][i];
        }
    }
-    memcpy(recovery_print_prev, recovery_stat, sizeof(recovery_stat));
    if (corrupted_objects > 0)
    {
        printf("[OSD %lu] %lu object(s) corrupted\n", osd_num, corrupted_objects);
@@ -605,8 +572,8 @@ void osd_t::print_slow()
                op->req.hdr.opcode == OSD_OP_SEC_STABILIZE || op->req.hdr.opcode == OSD_OP_SEC_ROLLBACK ||
                op->req.hdr.opcode == OSD_OP_SEC_READ_BMP)
            {
-                bufprintf(" state=%d", op->bs_op ? PRIV(op->bs_op)->op_state : -1);
-                int wait_for = op->bs_op ? PRIV(op->bs_op)->wait_for : 0;
+                bufprintf(" state=%d", PRIV(op->bs_op)->op_state);
+                int wait_for = PRIV(op->bs_op)->wait_for;
                if (wait_for)
                {
                    bufprintf(" wait=%d (detail=%lu)", wait_for, PRIV(op->bs_op)->wait_detail);


@@ -34,7 +34,7 @@
 #define DEFAULT_AUTOSYNC_INTERVAL 5
 #define DEFAULT_AUTOSYNC_WRITES 128
 #define MAX_RECOVERY_QUEUE 2048
-#define DEFAULT_RECOVERY_QUEUE 1
+#define DEFAULT_RECOVERY_QUEUE 4
 #define DEFAULT_RECOVERY_PG_SWITCH 128
 #define DEFAULT_RECOVERY_BATCH 16
@@ -87,11 +87,6 @@ struct osd_chain_read_t
 
 struct osd_rmw_stripe_t;
 
-struct recovery_stat_t
-{
-    uint64_t count, usec, bytes;
-};
-
 class osd_t
 {
    // config
@@ -116,15 +111,7 @@ class osd_t
    int immediate_commit = IMMEDIATE_NONE;
    int autosync_interval = DEFAULT_AUTOSYNC_INTERVAL; // "emergency" sync every 5 seconds
    int autosync_writes = DEFAULT_AUTOSYNC_WRITES;
-    uint64_t recovery_queue_depth = 1;
-    uint64_t recovery_sleep_us = 0;
-    double recovery_tune_util_low = 0.1;
-    double recovery_tune_client_util_low = 0;
-    double recovery_tune_util_high = 1.0;
-    double recovery_tune_client_util_high = 0.5;
-    int recovery_tune_interval = 1;
-    int recovery_tune_agg_interval = 10;
-    int recovery_tune_sleep_min_us = 10;
+    int recovery_queue_depth = DEFAULT_RECOVERY_QUEUE;
    int recovery_pg_switch = DEFAULT_RECOVERY_PG_SWITCH;
    int recovery_sync_batch = DEFAULT_RECOVERY_BATCH;
    int inode_vanish_time = 60;
@@ -202,18 +189,8 @@ class osd_t
    std::map<uint64_t, inode_stats_t> inode_stats;
    std::map<uint64_t, timespec> vanishing_inodes;
    const char* recovery_stat_names[2] = { "degraded", "misplaced" };
-    recovery_stat_t recovery_stat[2];
-    recovery_stat_t recovery_print_prev[2];
-    // recovery auto-tuning
-    int rtune_timer_id = -1;
-    uint64_t rtune_avg_lat = 0;
-    double rtune_client_util = 0, rtune_target_util = 1;
-    osd_op_stats_t rtune_prev_stats, rtune_prev_recovery_stats;
-    std::vector<uint64_t> recovery_target_sleep_items;
-    uint64_t recovery_target_sleep_us = 0;
-    uint64_t recovery_target_sleep_total = 0;
-    int recovery_target_sleep_cur = 0, recovery_target_sleep_count = 0;
+    uint64_t recovery_stat_count[2][2] = {};
+    uint64_t recovery_stat_bytes[2][2] = {};
 
    // cluster connection
    void parse_config(bool init);
@@ -231,9 +208,8 @@ class osd_t
    void create_osd_state();
    void renew_lease(bool reload);
    void print_stats();
-    void tune_recovery();
-    void apply_recovery_tune_interval();
    void print_slow();
+    void reset_stats();
    json11::Json get_statistics();
    void report_statistics();
    void report_pg_state(pg_t & pg);
@@ -262,7 +238,6 @@ class osd_t
    bool submit_flush_op(pool_id_t pool_id, pg_num_t pg_num, pg_flush_batch_t *fb, bool rollback, osd_num_t peer_osd, int count, obj_ver_id *data);
    bool pick_next_recovery(osd_recovery_op_t &op);
    void submit_recovery_op(osd_recovery_op_t *op);
-    void finish_recovery_op(osd_recovery_op_t *op);
    bool continue_recovery();
    pg_osd_set_state_t* change_osd_set(pg_osd_set_state_t *st, pg_t *pg);
@@ -304,7 +279,7 @@ class osd_t
    bool remember_unstable_write(osd_op_t *cur_op, pg_t & pg, pg_osd_set_t & loc_set, int base_state);
    void handle_primary_subop(osd_op_t *subop, osd_op_t *cur_op);
    void handle_primary_bs_subop(osd_op_t *subop);
-    void add_bs_subop_stats(osd_op_t *subop, bool recovery_related = false);
+    void add_bs_subop_stats(osd_op_t *subop);
    void pg_cancel_write_queue(pg_t & pg, osd_op_t *first_op, object_id oid, int retval);
    void submit_primary_subops(int submit_type, uint64_t op_version, const uint64_t* osd_set, osd_op_t *cur_op);


@@ -213,14 +213,12 @@ json11::Json osd_t::get_statistics()
    st["subop_stats"] = subop_stats;
    st["recovery_stats"] = json11::Json::object {
        { recovery_stat_names[0], json11::Json::object {
-            { "count", recovery_stat[0].count },
-            { "bytes", recovery_stat[0].bytes },
-            { "usec", recovery_stat[0].usec },
+            { "count", recovery_stat_count[0][0] },
+            { "bytes", recovery_stat_bytes[0][0] },
        } },
        { recovery_stat_names[1], json11::Json::object {
-            { "count", recovery_stat[1].count },
-            { "bytes", recovery_stat[1].bytes },
-            { "usec", recovery_stat[1].usec },
+            { "count", recovery_stat_count[0][1] },
+            { "bytes", recovery_stat_bytes[0][1] },
        } },
    };
    return st;
@@ -651,7 +649,7 @@ void osd_t::apply_pg_config()
        {
            pg_num_t pg_num = kv.first;
            auto & pg_cfg = kv.second;
-            bool take = pg_cfg.config_exists && pg_cfg.primary == this->osd_num &&
+            bool take = pg_cfg.exists && pg_cfg.primary == this->osd_num &&
                !pg_cfg.pause && (!pg_cfg.cur_primary || pg_cfg.cur_primary == this->osd_num);
            auto pg_it = this->pgs.find({ .pool_id = pool_id, .pg_num = pg_num });
            bool currently_taken = pg_it != this->pgs.end() && pg_it->second.state != PG_OFFLINE;


@@ -325,37 +325,10 @@ void osd_t::submit_recovery_op(osd_recovery_op_t *op)
        {
            printf("Recovery operation done for %lx:%lx\n", op->oid.inode, op->oid.stripe);
        }
-        finish_recovery_op(op);
-    };
-    exec_op(op->osd_op);
-}
-
-void osd_t::apply_recovery_tune_interval()
-{
-    if (rtune_timer_id >= 0)
-    {
-        tfd->clear_timer(rtune_timer_id);
-        rtune_timer_id = -1;
-    }
-    if (recovery_tune_interval != 0)
-    {
-        rtune_timer_id = this->tfd->set_timer(recovery_tune_interval*1000, true, [this](int timer_id)
-        {
-            tune_recovery();
-        });
-    }
-    else
-    {
-        recovery_target_sleep_us = recovery_sleep_us;
-    }
-}
-
-void osd_t::finish_recovery_op(osd_recovery_op_t *op)
-{
        // CAREFUL! op = &recovery_ops[op->oid]. Don't access op->* after recovery_ops.erase()
-    delete op->osd_op;
        op->osd_op = NULL;
        recovery_ops.erase(op->oid);
+        delete osd_op;
        if (immediate_commit != IMMEDIATE_ALL)
        {
            recovery_done++;
@@ -368,84 +341,8 @@ void osd_t::finish_recovery_op(osd_recovery_op_t *op)
        }
    }
    continue_recovery();
-}
-
-void osd_t::tune_recovery()
-{
-    static int accounted_ops[] = {
-        OSD_OP_SEC_READ, OSD_OP_SEC_WRITE, OSD_OP_SEC_WRITE_STABLE,
-        OSD_OP_SEC_STABILIZE, OSD_OP_SEC_SYNC, OSD_OP_SEC_DELETE
-    };
-    uint64_t total_client_usec = 0, total_recovery_usec = 0, recovery_count = 0;
-    for (int i = 0; i < sizeof(accounted_ops)/sizeof(accounted_ops[0]); i++)
-    {
-        total_client_usec += (msgr.stats.op_stat_sum[accounted_ops[i]]
-            - rtune_prev_stats.op_stat_sum[accounted_ops[i]]);
-        total_recovery_usec += (msgr.recovery_stats.op_stat_sum[accounted_ops[i]]
-            - rtune_prev_recovery_stats.op_stat_sum[accounted_ops[i]]);
-        recovery_count += (msgr.recovery_stats.op_stat_count[accounted_ops[i]]
-            - rtune_prev_recovery_stats.op_stat_count[accounted_ops[i]]);
-        rtune_prev_stats.op_stat_sum[accounted_ops[i]] = msgr.stats.op_stat_sum[accounted_ops[i]];
-        rtune_prev_recovery_stats.op_stat_sum[accounted_ops[i]] = msgr.recovery_stats.op_stat_sum[accounted_ops[i]];
-        rtune_prev_recovery_stats.op_stat_count[accounted_ops[i]] = msgr.recovery_stats.op_stat_count[accounted_ops[i]];
-    }
-    total_client_usec -= total_recovery_usec;
-    if (recovery_count == 0)
-    {
-        return;
-    }
-    // example:
-    // total 3 GB/s
-    // recovery queue 1
-    // 120 OSDs
-    // EC 5+3
-    // 128kb block_size => 640kb object
-    // 3000*1024/640/120 = 40 MB/s per OSD = 64 recovered objects per OSD
-    // = 64*8*2 subops = 1024 recovery subop iops
-    // 8 recovery subop queue
-    // => subop avg latency = 0.0078125 sec
-    // utilisation = 8
-    // target util 1
-    // intuitively target latency should be 8x of real
-    // target_lat = rtune_avg_lat * utilisation / target_util
-    //            = rtune_avg_lat * rtune_avg_lat * rtune_avg_iops / target_util
-    //            = 0.0625
-    // recovery utilisation will be 1
-    rtune_client_util = total_client_usec/1000000.0/recovery_tune_interval;
-    rtune_target_util = (rtune_client_util < recovery_tune_client_util_low
-        ? recovery_tune_util_high
-        : recovery_tune_util_low + (rtune_client_util >= recovery_tune_client_util_high
-            ? 0 : (recovery_tune_util_high-recovery_tune_util_low)*
-                (recovery_tune_client_util_high-rtune_client_util)/(recovery_tune_client_util_high-recovery_tune_client_util_low)
-        )
-    );
-    rtune_avg_lat = total_recovery_usec/recovery_count;
-    uint64_t target_lat = rtune_avg_lat * rtune_avg_lat/1000000.0 * recovery_count/recovery_tune_interval / rtune_target_util;
-    auto sleep_us = target_lat > rtune_avg_lat+recovery_tune_sleep_min_us ? target_lat-rtune_avg_lat : 0;
-    if (recovery_target_sleep_items.size() != recovery_tune_agg_interval)
-    {
-        recovery_target_sleep_items.resize(recovery_tune_agg_interval);
-        for (int i = 0; i < recovery_tune_agg_interval; i++)
-            recovery_target_sleep_items[i] = 0;
-        recovery_target_sleep_total = 0;
-        recovery_target_sleep_cur = 0;
-        recovery_target_sleep_count = 0;
-    }
-    recovery_target_sleep_total -= recovery_target_sleep_items[recovery_target_sleep_cur];
-    recovery_target_sleep_items[recovery_target_sleep_cur] = sleep_us;
-    recovery_target_sleep_cur = (recovery_target_sleep_cur+1) % recovery_tune_agg_interval;
-    recovery_target_sleep_total += sleep_us;
-    if (recovery_target_sleep_count < recovery_tune_agg_interval)
-        recovery_target_sleep_count++;
-    recovery_target_sleep_us = recovery_target_sleep_total / recovery_target_sleep_count;
-    if (log_level > 4)
-    {
-        printf(
-            "[OSD %lu] auto-tune: client util: %.2f, recovery util: %.2f, lat: %lu us -> target util %.2f, delay %lu us\n",
-            osd_num, rtune_client_util, total_recovery_usec/1000000.0/recovery_tune_interval,
-            rtune_avg_lat, rtune_target_util, recovery_target_sleep_us
-        );
-    }
+    };
+    exec_op(op->osd_op);
 }
 
 // Just trigger write requests for degraded objects. They'll be recovered during writing
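
The worked example in the removed `tune_recovery()` comment can be checked numerically: with 8 recovery subops in flight at 7812.5 us average latency, utilisation = latency * iops = 8, and with a target utilisation of 1 the target latency becomes 62500 us, so roughly 54.7 ms of delay would be injected per subop. A small sketch that just redoes this arithmetic; all numbers come from the comment above:

#include <stdio.h>

int main()
{
    // Numbers from the removed comment: 1024 recovery subop iops with an
    // average subop latency of 0.0078125 s give utilisation = lat * iops = 8.
    double avg_lat_us = 7812.5;
    double iops = 1024;
    double util = avg_lat_us/1e6 * iops;                     // = 8
    double target_util = 1;
    double target_lat_us = avg_lat_us * util / target_util;  // = 62500 us = 0.0625 s
    // The delay injected per subop is then target_lat - avg_lat (if positive).
    printf("util=%.1f target_lat=%.1f us sleep=%.1f us\n",
        util, target_lat_us, target_lat_us - avg_lat_us);
    return 0;
}
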


@@ -34,7 +34,6 @@
 #define OSD_OP_MAX 18
 #define OSD_RW_MAX 64*1024*1024
 #define OSD_PROTOCOL_VERSION 1
-#define OSD_OP_RECOVERY_RELATED (uint32_t)1
 
 // Memory alignment for direct I/O (usually 512 bytes)
 #ifndef DIRECT_IO_ALIGNMENT
@@ -89,8 +88,7 @@ struct __attribute__((__packed__)) osd_op_sec_rw_t
    uint32_t len;
    // bitmap/attribute length - bitmap comes after header, but before data
    uint32_t attr_len;
-    // the only possible flag is OSD_OP_RECOVERY_RELATED
-    uint32_t flags;
+    uint32_t pad0;
 };
 
 struct __attribute__((__packed__)) osd_reply_sec_rw_t
@@ -111,9 +109,6 @@ struct __attribute__((__packed__)) osd_op_sec_del_t
    object_id oid;
    // delete version (automatic or specific)
    uint64_t version;
-    // the only possible flag is OSD_OP_RECOVERY_RELATED
-    uint32_t flags;
-    uint32_t pad0;
 };
 
 struct __attribute__((__packed__)) osd_reply_sec_del_t
@@ -126,9 +121,6 @@ struct __attribute__((__packed__)) osd_reply_sec_del_t
 struct __attribute__((__packed__)) osd_op_sec_sync_t
 {
    osd_op_header_t header;
-    // the only possible flag is OSD_OP_RECOVERY_RELATED
-    uint32_t flags;
-    uint32_t pad0;
 };
 
 struct __attribute__((__packed__)) osd_reply_sec_sync_t
@@ -142,9 +134,6 @@ struct __attribute__((__packed__)) osd_op_sec_stab_t
    osd_op_header_t header;
    // obj_ver_id array length in bytes
    uint64_t len;
-    // the only possible flag is OSD_OP_RECOVERY_RELATED
-    uint32_t flags;
-    uint32_t pad0;
 };
 
 typedef osd_op_sec_stab_t osd_op_sec_rollback_t;
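
A layout note on the hunk above: in `osd_op_sec_rw_t` the `-` side carries `uint32_t flags` exactly where the other side reserves `uint32_t pad0`, so that packed struct keeps the same size on both sides, while the del/sync/stabilize requests differ by one aligned 8-byte slot (`flags` plus `pad0`). A sketch of that sizing rule with hypothetical simplified structs:

#include <stdint.h>

// Sketch: a 4-byte flags field in a packed request is paired with 4 bytes of
// explicit padding so the total size stays a multiple of 8.
struct __attribute__((__packed__)) hdr_t { uint64_t id, opcode; };
struct __attribute__((__packed__)) sec_sync_v1_t { hdr_t header; };
struct __attribute__((__packed__)) sec_sync_v2_t
{
    hdr_t header;
    uint32_t flags; // e.g. OSD_OP_RECOVERY_RELATED
    uint32_t pad0;  // keeps the total size 8-byte aligned
};
static_assert(sizeof(sec_sync_v2_t) == sizeof(sec_sync_v1_t) + 8, "one aligned slot added");
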


@@ -3,15 +3,13 @@
 #include "osd_primary.h"
 
-#define SELF_FD -1
-
 void osd_t::autosync()
 {
    if (immediate_commit != IMMEDIATE_ALL && !autosync_op)
    {
        autosync_op = new osd_op_t();
        autosync_op->op_type = OSD_OP_IN;
-        autosync_op->peer_fd = SELF_FD;
+        autosync_op->peer_fd = -1;
        autosync_op->req = (osd_any_op_t){
            .sync = {
                .header = {
@@ -87,13 +85,9 @@ void osd_t::finish_op(osd_op_t *cur_op, int retval)
    cur_op->reply.hdr.id = cur_op->req.hdr.id;
    cur_op->reply.hdr.opcode = cur_op->req.hdr.opcode;
    cur_op->reply.hdr.retval = retval;
-    if (cur_op->peer_fd == SELF_FD)
-    {
-        // Do not include internal primary writes (recovery/rebalance) into client op statistics
-        if (cur_op->req.hdr.opcode != OSD_OP_WRITE)
-        {
-            msgr.measure_exec(cur_op);
-        }
+    if (cur_op->peer_fd == -1)
+    {
+        msgr.measure_exec(cur_op);
        // Copy lambda to be unaffected by `delete op`
        std::function<void(osd_op_t*)>(cur_op->callback)(cur_op);
    }
@@ -221,7 +215,6 @@ int osd_t::submit_primary_subop_batch(int submit_type, inode_t inode, uint64_t o
                .offset = wr ? si->write_start : si->read_start,
                .len = subop_len,
                .attr_len = wr ? clean_entry_bitmap_size : 0,
-                .flags = cur_op->peer_fd == SELF_FD && cur_op->req.hdr.opcode != OSD_OP_SCRUB ? OSD_OP_RECOVERY_RELATED : 0,
            };
 #ifdef OSD_DEBUG
            printf(
@@ -301,8 +294,7 @@ void osd_t::handle_primary_bs_subop(osd_op_t *subop)
            " retval = "+std::to_string(bs_op->retval)+")"
        );
    }
-    bool recovery_related = cur_op->peer_fd == SELF_FD && cur_op->req.hdr.opcode != OSD_OP_SCRUB;
-    add_bs_subop_stats(subop, recovery_related);
+    add_bs_subop_stats(subop);
    subop->req.hdr.opcode = bs_op_to_osd_op[bs_op->opcode];
    subop->reply.hdr.retval = bs_op->retval;
    if (bs_op->opcode == BS_OP_READ || bs_op->opcode == BS_OP_WRITE || bs_op->opcode == BS_OP_WRITE_STABLE)
@@ -314,33 +306,30 @@ void osd_t::handle_primary_bs_subop(osd_op_t *subop)
    }
    delete bs_op;
    subop->bs_op = NULL;
-    subop->peer_fd = SELF_FD;
-    if (recovery_related && recovery_target_sleep_us)
-    {
-        tfd->set_timer_us(recovery_target_sleep_us, false, [=](int timer_id)
-        {
-            handle_primary_subop(subop, cur_op);
-        });
-    }
-    else
-    {
-        handle_primary_subop(subop, cur_op);
-    }
+    subop->peer_fd = -1;
+    handle_primary_subop(subop, cur_op);
 }
 
-void osd_t::add_bs_subop_stats(osd_op_t *subop, bool recovery_related)
+void osd_t::add_bs_subop_stats(osd_op_t *subop)
 {
    // Include local blockstore ops in statistics
    uint64_t opcode = bs_op_to_osd_op[subop->bs_op->opcode];
    timespec tv_end;
    clock_gettime(CLOCK_REALTIME, &tv_end);
-    uint64_t len = (opcode == OSD_OP_SEC_READ || opcode == OSD_OP_SEC_WRITE)
-        ? subop->bs_op->len : 0;
-    msgr.inc_op_stats(msgr.stats, opcode, subop->tv_begin, tv_end, len);
-    if (recovery_related)
-    {
-        // It is OSD_OP_RECOVERY_RELATED
-        msgr.inc_op_stats(msgr.recovery_stats, opcode, subop->tv_begin, tv_end, len);
-    }
+    msgr.stats.op_stat_count[opcode]++;
+    if (!msgr.stats.op_stat_count[opcode])
+    {
+        msgr.stats.op_stat_count[opcode] = 1;
+        msgr.stats.op_stat_sum[opcode] = 0;
+        msgr.stats.op_stat_bytes[opcode] = 0;
+    }
+    msgr.stats.op_stat_sum[opcode] += (
+        (tv_end.tv_sec - subop->tv_begin.tv_sec)*1000000 +
+        (tv_end.tv_nsec - subop->tv_begin.tv_nsec)/1000
+    );
+    if (opcode == OSD_OP_SEC_READ || opcode == OSD_OP_SEC_WRITE)
+    {
+        msgr.stats.op_stat_bytes[opcode] += subop->bs_op->len;
+    }
 }
@@ -563,7 +552,6 @@ void osd_t::submit_primary_del_batch(osd_op_t *cur_op, obj_ver_osd_t *chunks_to_
            },
            .oid = chunk.oid,
            .version = chunk.version,
-            .flags = cur_op->peer_fd == SELF_FD && cur_op->req.hdr.opcode != OSD_OP_SCRUB ? OSD_OP_RECOVERY_RELATED : 0,
        } };
        subops[i].callback = [cur_op, this](osd_op_t *subop)
        {
@@ -621,7 +609,6 @@ int osd_t::submit_primary_sync_subops(osd_op_t *cur_op)
                .id = msgr.next_subop_id++,
                .opcode = OSD_OP_SEC_SYNC,
            },
-            .flags = cur_op->peer_fd == SELF_FD && cur_op->req.hdr.opcode != OSD_OP_SCRUB ? OSD_OP_RECOVERY_RELATED : 0,
        } };
        subops[i].callback = [cur_op, this](osd_op_t *subop)
        {
@@ -681,7 +668,6 @@ void osd_t::submit_primary_stab_subops(osd_op_t *cur_op)
                .opcode = OSD_OP_SEC_STABILIZE,
            },
            .len = (uint64_t)(stab_osd.len * sizeof(obj_ver_id)),
-            .flags = cur_op->peer_fd == SELF_FD && cur_op->req.hdr.opcode != OSD_OP_SCRUB ? OSD_OP_RECOVERY_RELATED : 0,
        } };
        subops[i].iov.push_back(op_data->unstable_writes + stab_osd.start, stab_osd.len * sizeof(obj_ver_id));
        subops[i].callback = [cur_op, this](osd_op_t *subop)


@@ -292,26 +292,16 @@ resume_7:
    {
        {
            int recovery_type = op_data->object_state->state & (OBJ_DEGRADED|OBJ_INCOMPLETE) ? 0 : 1;
-            recovery_stat[recovery_type].count++;
-            if (!recovery_stat[recovery_type].count) // wrapped
+            recovery_stat_count[0][recovery_type]++;
+            if (!recovery_stat_count[0][recovery_type])
            {
-                memset(&recovery_print_prev[recovery_type], 0, sizeof(recovery_print_prev[recovery_type]));
-                memset(&recovery_stat[recovery_type], 0, sizeof(recovery_stat[recovery_type]));
-                recovery_stat[recovery_type].count++;
+                recovery_stat_count[0][recovery_type]++;
+                recovery_stat_bytes[0][recovery_type] = 0;
            }
            for (int role = 0; role < (op_data->scheme == POOL_SCHEME_REPLICATED ? 1 : pg.pg_size); role++)
            {
-                recovery_stat[recovery_type].bytes += op_data->stripes[role].write_end - op_data->stripes[role].write_start;
+                recovery_stat_bytes[0][recovery_type] += op_data->stripes[role].write_end - op_data->stripes[role].write_start;
            }
-            if (!cur_op->tv_end.tv_sec)
-            {
-                clock_gettime(CLOCK_REALTIME, &cur_op->tv_end);
-            }
-            uint64_t usec = (
-                (cur_op->tv_end.tv_sec - cur_op->tv_begin.tv_sec)*1000000 +
-                (cur_op->tv_end.tv_nsec - cur_op->tv_begin.tv_nsec)/1000
-            );
-            recovery_stat[recovery_type].usec += usec;
        }
        // Any kind of a non-clean object can have extra chunks, because we don't record objects
        // as degraded & misplaced or incomplete & misplaced at the same time. So try to remove extra chunks


@@ -42,21 +42,7 @@ void osd_t::secondary_op_callback(osd_op_t *op)
    int retval = op->bs_op->retval;
    delete op->bs_op;
    op->bs_op = NULL;
-    if (op->is_recovery_related() && recovery_target_sleep_us)
-    {
-        if (!op->tv_end.tv_sec)
-        {
-            clock_gettime(CLOCK_REALTIME, &op->tv_end);
-        }
-        tfd->set_timer_us(recovery_target_sleep_us, false, [this, op, retval](int timer_id)
-        {
-            finish_op(op, retval);
-        });
-    }
-    else
-    {
-        finish_op(op, retval);
-    }
+    finish_op(op, retval);
 }
 
 void osd_t::exec_secondary(osd_op_t *cur_op)


@@ -196,11 +196,10 @@ static void vitastor_parse_filename(const char *filename, QDict *options, Error
        !strcmp(name, "rdma-gid-index") ||
        !strcmp(name, "rdma-mtu"))
    {
-#if QEMU_VERSION_MAJOR < 8 || QEMU_VERSION_MAJOR == 8 && QEMU_VERSION_MINOR < 1
        unsigned long long num_val;
+#if QEMU_VERSION_MAJOR < 8 || QEMU_VERSION_MAJOR == 8 && QEMU_VERSION_MINOR < 1
        if (parse_uint_full(value, &num_val, 0))
 #else
-        uint64_t num_val;
        if (parse_uint_full(value, 0, &num_val))
 #endif
        {


@@ -90,12 +90,6 @@ void timerfd_manager_t::clear_timer(int timer_id)
 
 void timerfd_manager_t::set_nearest()
 {
-    if (onstack > 0)
-    {
-        // Prevent re-entry
-        return;
-    }
-    onstack++;
 again:
    if (!timers.size())
    {
@@ -145,7 +139,6 @@ again:
        }
        wait_state = wait_state | 1;
    }
-    onstack--;
 }
 
 void timerfd_manager_t::handle_readable()
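
The `onstack` counter removed above is a plain re-entrancy latch: if a callback fired from inside `set_nearest()` ends up calling it again, the nested call returns immediately and the outer invocation continues with the updated timer list. The same pattern in isolation, with hypothetical names:

// Sketch: a simple re-entrancy latch. If reschedule() is called again from
// inside itself (e.g. from a timer callback it triggers), the inner call
// returns at once and the outer invocation picks up the changes.
struct scheduler_t
{
    int onstack = 0;

    void reschedule()
    {
        if (onstack > 0)
            return; // prevent re-entry
        onstack++;
        // ... scan timers, possibly run callbacks that call reschedule() ...
        onstack--;
    }
};
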


@@ -22,7 +22,6 @@ class timerfd_manager_t
    int timerfd;
    int nearest = -1;
    int id = 1;
-    int onstack = 0;
    std::vector<timerfd_timer_t> timers;
 
    void inc_timer(timerfd_timer_t & t);


@@ -6,7 +6,7 @@ includedir=${prefix}/@CMAKE_INSTALL_INCLUDEDIR@
 
 Name: Vitastor
 Description: Vitastor client library
-Version: 1.3.1
+Version: 1.2.0
 Libs: -L${libdir} -lvitastor_client
 Cflags: -I${includedir}


@@ -19,10 +19,10 @@ fi
 if [ "$IMMEDIATE_COMMIT" != "" ]; then
     NO_SAME="--journal_no_same_sector_overwrites true --journal_sector_buffer_count 1024 --disable_data_fsync 1 --immediate_commit all --log_level 10 --etcd_stats_interval 5"
-    $ETCDCTL put /vitastor/config/global '{"recovery_queue_depth":1,"recovery_tune_util_low":1,"osd_out_time":1,"immediate_commit":"all","client_enable_writeback":true}'
+    $ETCDCTL put /vitastor/config/global '{"recovery_queue_depth":1,"osd_out_time":1,"immediate_commit":"all","client_enable_writeback":true}'
 else
     NO_SAME="--journal_sector_buffer_count 1024 --log_level 10 --etcd_stats_interval 5"
-    $ETCDCTL put /vitastor/config/global '{"recovery_queue_depth":1,"recovery_tune_util_low":1,"osd_out_time":1,"client_enable_writeback":true}'
+    $ETCDCTL put /vitastor/config/global '{"recovery_queue_depth":1,"osd_out_time":1,"client_enable_writeback":true}'
 fi
 
 start_osd_on()
@@ -53,7 +53,7 @@ for i in $(seq 1 $OSD_COUNT); do
     start_osd $i
 done
 
-(while true; do node mon/mon-main.js --etcd_address $ETCD_URL --etcd_prefix "/vitastor" --verbose 1 || true; done) >>./testdata/mon.log 2>&1 &
+(while true; do node mon/mon-main.js --etcd_url $ETCD_URL --etcd_prefix "/vitastor" --verbose 1 || true; done) &>./testdata/mon.log &
 MON_PID=$!
 
 if [ "$SCHEME" = "ec" ]; then


@@ -18,7 +18,6 @@ try_change()
     for i in {1..6}; do
         echo --- Change PG count to $n --- >>testdata/osd$i.log
     done
-    echo --- Change PG count to $n --- >>testdata/mon.log
 
     $ETCDCTL put /vitastor/config/pools '{"1":{'$POOLCFG',"pg_size":'$PG_SIZE',"pg_minsize":'$PG_MINSIZE',"pg_count":'$n'}}'


@@ -15,7 +15,7 @@ $ETCDCTL put /vitastor/osd/stats/7 '{"host":"host4","size":1073741824,"time":"'$
 $ETCDCTL put /vitastor/osd/stats/8 '{"host":"host4","size":1073741824,"time":"'$TIME'"}'
 $ETCDCTL put /vitastor/config/pools '{"1":{"name":"testpool","scheme":"replicated","pg_size":2,"pg_minsize":1,"pg_count":4,"failure_domain":"rack"}}'
 
-node mon/mon-main.js --etcd_address $ETCD_URL --etcd_prefix "/vitastor" >>./testdata/mon.log 2>&1 &
+node mon/mon-main.js --etcd_url $ETCD_URL --etcd_prefix "/vitastor" &>./testdata/mon.log &
 MON_PID=$!
 
 sleep 2


@@ -7,7 +7,7 @@ OSD_COUNT=5
 OSD_ARGS="$OSD_ARGS"
 for i in $(seq 1 $OSD_COUNT); do
     dd if=/dev/zero of=./testdata/test_osd$i.bin bs=1024 count=1 seek=$((OSD_SIZE*1024-1))
-    build/src/vitastor-osd --log_level 10 --osd_num $i --bind_address 127.0.0.1 --etcd_stats_interval 5 $OSD_ARGS --etcd_address $ETCD_URL $(build/src/vitastor-disk simple-offsets --format options ./testdata/test_osd$i.bin 2>/dev/null) >>./testdata/osd$i.log 2>&1 &
+    build/src/vitastor-osd --osd_num $i --bind_address 127.0.0.1 --etcd_stats_interval 5 $OSD_ARGS --etcd_address $ETCD_URL $(build/src/vitastor-disk simple-offsets --format options ./testdata/test_osd$i.bin 2>/dev/null) >>./testdata/osd$i.log 2>&1 &
     eval OSD${i}_PID=$!
 done
@@ -53,11 +53,6 @@ for i in {1..30}; do
     fi
 done
 
-# Sync so all moved objects are removed from OSD 1 (they aren't removed without a sync)
-LD_PRELOAD="build/src/libfio_vitastor.so" \
-    fio -thread -name=test -ioengine=build/src/libfio_vitastor.so -bs=4k -direct=1 -iodepth=1 -fsync=1 -number_ios=2 -rw=write \
-        -etcd=$ETCD_URL -pool=1 -inode=2 -size=32M -cluster_log_level=10
-
 $ETCDCTL put /vitastor/config/pgs '{"items":{"1":{"1":{"osd_set":[4,5],"primary":0}}}}'
 $ETCDCTL put /vitastor/pg/history/1/1 '{"all_peers":[1,2,3]}'


@@ -1,54 +0,0 @@
-#!/bin/bash -ex
-# Test changing EC 4+1 into EC 4+3
-
-OSD_COUNT=7
-PG_COUNT=16
-SCHEME=ec
-PG_SIZE=5
-PG_DATA_SIZE=4
-PG_MINSIZE=5
-. `dirname $0`/run_3osds.sh
-
-try_change()
-{
-    n=$1
-    s=$2
-    for i in {1..10}; do
-        ($ETCDCTL get /vitastor/config/pgs --print-value-only |\
-            jq -s -e '(.[0].items["1"] | map( ([ .osd_set[] | select(. != 0) ] | length) == '$s' ) | length == '$n')
-                and ([ .[0].items["1"] | map(.osd_set)[][] ] | sort | unique == ["1","2","3","4","5","6","7"])') && \
-            ($ETCDCTL get --prefix /vitastor/pg/state/ --print-value-only | jq -s -e '([ .[] | select(.state == ["active"]) ] | length) == '$n'') && \
-            break
-        sleep 1
-    done
-    if ! ($ETCDCTL get /vitastor/config/pgs --print-value-only |\
-        jq -s -e '(.[0].items["1"] | map( ([ .osd_set[] | select(. != 0) ] | length) == '$s' ) | length == '$n')
-            and ([ .[0].items["1"] | map(.osd_set)[][] ] | sort | unique == ["1","2","3","4","5","6","7"])'); then
-        $ETCDCTL get /vitastor/config/pgs
-        $ETCDCTL get --prefix /vitastor/pg/state/
-        format_error "FAILED: PG SIZE NOT CHANGED OR SOME OSDS DO NOT HAVE PGS"
-    fi
-    if ! ($ETCDCTL get --prefix /vitastor/pg/state/ --print-value-only | jq -s -e '([ .[] | select(.state == ["active"]) ] | length) == '$n); then
-        $ETCDCTL get /vitastor/config/pgs
-        $ETCDCTL get --prefix /vitastor/pg/state/
-        format_error "FAILED: PGS NOT UP AFTER PG SIZE CHANGE"
-    fi
-}
-
-LD_PRELOAD="build/src/libfio_vitastor.so" \
-    fio -thread -name=test -ioengine=build/src/libfio_vitastor.so -bs=1M -direct=1 -iodepth=4 \
-        -rw=write -etcd=$ETCD_URL -pool=1 -inode=1 -size=128M -runtime=10
-
-PG_SIZE=7
-POOLCFG='"name":"testpool","failure_domain":"osd","scheme":"ec","parity_chunks":'$((PG_SIZE-PG_DATA_SIZE))
-$ETCDCTL put /vitastor/config/pools '{"1":{'$POOLCFG',"pg_size":'$PG_SIZE',"pg_minsize":'$PG_MINSIZE',"pg_count":'$PG_COUNT'}}'
-
-sleep 2
-try_change 16 7
-
-format_green OK


@@ -15,7 +15,7 @@ for i in $(seq 1 $OSD_COUNT); do
     eval OSD${i}_PID=$!
 done
 
-(while true; do node mon/mon-main.js --etcd_address $ETCD_URL --etcd_prefix "/vitastor" --verbose 1 || true; done) >>./testdata/mon.log 2>&1 &
+(while true; do node mon/mon-main.js --etcd_url $ETCD_URL --etcd_prefix "/vitastor" --verbose 1 || true; done) &>./testdata/mon.log &
 MON_PID=$!
 
 sleep 3