Compare commits


No commits in common. "master" and "v2.1.0" have entirely different histories.

109 changed files with 510 additions and 2310 deletions

@@ -720,24 +720,6 @@ jobs:
echo ""
done
test_heal_local_read:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 10
run: TEST_NAME=local_read POOLCFG='"local_reads":"random",' /root/vitastor/tests/test_heal.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_heal_ec:
runs-on: ubuntu-latest
needs: build

@@ -2,6 +2,6 @@ cmake_minimum_required(VERSION 2.8.12)
project(vitastor)
set(VITASTOR_VERSION "2.2.0")
set(VITASTOR_VERSION "2.1.0")
add_subdirectory(src)

@@ -1,4 +1,4 @@
VITASTOR_VERSION ?= v2.2.0
VITASTOR_VERSION ?= v2.1.0
all: build push

@@ -49,7 +49,7 @@ spec:
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
image: vitalif/vitastor-csi:v2.2.0
image: vitalif/vitastor-csi:v2.1.0
args:
- "--node=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"

@@ -121,7 +121,7 @@ spec:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
image: vitalif/vitastor-csi:v2.2.0
image: vitalif/vitastor-csi:v2.1.0
args:
- "--node=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"

@@ -5,7 +5,7 @@ package vitastor
const (
vitastorCSIDriverName = "csi.vitastor.io"
vitastorCSIDriverVersion = "2.2.0"
vitastorCSIDriverVersion = "2.1.0"
)
// Config struct fills the parameters of request or user input

debian/changelog

@@ -1,4 +1,4 @@
vitastor (2.2.0-1) unstable; urgency=medium
vitastor (2.1.0-1) unstable; urgency=medium
* Bugfixes

@@ -1,9 +1,9 @@
VITASTOR_VERSION ?= v2.2.0
VITASTOR_VERSION ?= v2.1.0
all: build push
build:
@docker build --no-cache --rm -t vitalif/vitastor:$(VITASTOR_VERSION) .
@docker build --rm -t vitalif/vitastor:$(VITASTOR_VERSION) .
push:
@docker push vitalif/vitastor:$(VITASTOR_VERSION)

@@ -1,2 +1 @@
deb http://vitastor.io/debian bookworm main
deb http://http.debian.net/debian/ bookworm-backports main

@@ -4,7 +4,7 @@
#
# Desired Vitastor version
VITASTOR_VERSION=v2.2.0
VITASTOR_VERSION=v2.1.0
# Additional arguments for all containers
# For example, you may want to specify a custom logging driver here

@@ -24,7 +24,6 @@ affect their interaction with the cluster.
- [nbd_max_devices](#nbd_max_devices)
- [nbd_max_part](#nbd_max_part)
- [osd_nearfull_ratio](#osd_nearfull_ratio)
- [hostname](#hostname)
## client_iothread_count
@@ -216,12 +215,3 @@ just one OSD becomes 100 % full!
However, unlike in Ceph, 100 % full Vitastor OSDs don't crash (in Ceph they're
unable to start at all), so you'll be able to recover from "out of space" errors
without destroying and recreating OSDs.
## hostname
- Type: string
- Can be changed online: yes
Clients use host name to find their distance to OSDs when [localized reads](pool.en.md#local_reads)
are enabled. By default, standard [gethostname](https://man7.org/linux/man-pages/man2/gethostname.2.html)
function is used to determine host name, but you can also override it with this parameter.
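As a sketch of how this override could look in practice (the config file path and the host name value are assumptions for illustration; Vitastor clients read local configuration from `/etc/vitastor/vitastor.conf` in a typical setup):

```shell
# Hypothetical example: pin the client host name used for localized-read
# distance calculation instead of relying on gethostname().
cat > /etc/vitastor/vitastor.conf <<'EOF'
{
  "etcd_address": ["10.0.0.1:2379"],
  "hostname": "node1-dc1"
}
EOF
```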

@@ -24,7 +24,6 @@
- [nbd_max_devices](#nbd_max_devices)
- [nbd_max_part](#nbd_max_part)
- [osd_nearfull_ratio](#osd_nearfull_ratio)
- [hostname](#hostname)
## client_iothread_count
@@ -220,13 +219,3 @@ RDMA and want to increase peak performance
OSDs that are 100% full cannot start at all), so you will be able to restore
cluster operation after out-of-space errors
without destroying and recreating OSDs.
## hostname
- Type: string
- Can be changed online: yes
Clients use the host name to determine their distance to OSDs when [localized reads](pool.ru.md#local_reads)
are enabled. By default, the standard [gethostname](https://man7.org/linux/man-pages/man2/gethostname.2.html)
function is used to determine the host name, but you can also override it with this parameter.

@@ -34,7 +34,6 @@ between clients, OSDs and etcd.
- [etcd_ws_keepalive_interval](#etcd_ws_keepalive_interval)
- [etcd_min_reload_interval](#etcd_min_reload_interval)
- [tcp_header_buffer_size](#tcp_header_buffer_size)
- [min_zerocopy_send_size](#min_zerocopy_send_size)
- [use_sync_send_recv](#use_sync_send_recv)
## osd_network
@@ -314,34 +313,6 @@ is received without an additional copy. You can try to play with this
parameter and see how it affects random iops and linear bandwidth if you
want.
## min_zerocopy_send_size
- Type: integer
- Default: 32768
OSDs and clients will attempt to use io_uring-based zero-copy TCP send
for buffers larger than this number of bytes. Zero-copy send with io_uring is
supported since Linux kernel version 6.1. Support is auto-detected and disabled
automatically when not available. It can also be disabled explicitly by setting
this parameter to a negative value.
⚠️ Warning! Zero-copy send performance may vary greatly from CPU to CPU and from
one kernel version to another. Generally, it tends to be beneficial only with larger
messages. With smaller messages (say, 4 KB), it may actually be slower. 32 KB is
enough for almost all CPUs, but even smaller values are optimal for some of them.
For example, 4 KB is OK for EPYC Milan/Genoa and 12 KB is OK for Xeon Ice Lake
(but verify it yourself please).
Verification instructions:
1. Add `iommu=pt` into your Linux kernel command line and reboot.
2. Upgrade your kernel. For example, it's very important to use 6.11+ with recent AMD EPYCs.
3. Run some tests with the [send-zerocopy liburing example](https://github.com/axboe/liburing/blob/master/examples/send-zerocopy.c)
to find the minimal message size for which zero-copy is optimal.
Use `./send-zerocopy tcp -4 -R` at the server side and
`time ./send-zerocopy tcp -4 -b 0 -s BUFFER_SIZE -D SERVER_IP` at the client side with
`-z 0` (no zero-copy) and `-z 1` (zero-copy), and compare MB/s and used CPU time
(user+system).
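Collected into a runnable sequence, the verification steps above look like this (`SERVER_IP` and the 12 KB buffer size are placeholders to adjust; the binary comes from the liburing examples linked above):

```shell
# Server side: receive benchmark traffic.
./send-zerocopy tcp -4 -R

# Client side: compare ordinary send (-z 0) with zero-copy send (-z 1)
# for a candidate buffer size; compare MB/s and user+system CPU time.
time ./send-zerocopy tcp -4 -b 0 -s 12288 -D SERVER_IP -z 0
time ./send-zerocopy tcp -4 -b 0 -s 12288 -D SERVER_IP -z 1
```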
## use_sync_send_recv
- Type: boolean

@@ -34,7 +34,6 @@
- [etcd_ws_keepalive_interval](#etcd_ws_keepalive_interval)
- [etcd_min_reload_interval](#etcd_min_reload_interval)
- [tcp_header_buffer_size](#tcp_header_buffer_size)
- [min_zerocopy_send_size](#min_zerocopy_send_size)
- [use_sync_send_recv](#use_sync_send_recv)
## osd_network
@@ -322,34 +321,6 @@ Vitastor contain 128-byte headers, due to which
change this parameter and see how it affects the performance
of random and linear access.
## min_zerocopy_send_size
- Type: integer
- Default: 32768
OSDs and clients will attempt to use io_uring-based zero-copy TCP send
for buffers larger than this number of bytes. Zero-copy send with io_uring is
supported since Linux kernel version 6.1. Support is auto-detected and zero-copy
is disabled automatically when not available. It can also be disabled explicitly
by setting this parameter to a negative value.
⚠️ Warning! The performance of this feature may vary greatly on different
CPUs and on different Linux kernel versions. In general, zero-copy is usually faster
with larger messages, while with small ones (say, 4 KB) it may even be slower.
32 KB is enough for almost all CPUs, but even smaller values can be used for
some of them. For example, 4 KB is fine for EPYC Milan/Genoa and 12 KB for
Xeon Ice Lake (but please verify it yourself).
Verification instructions:
1. Add `iommu=pt` to your Linux kernel command line and reboot.
2. Upgrade your kernel. For example, for AMD EPYC it is very important to use version 6.11+.
3. Run tests with [send-zerocopy from the liburing examples](https://github.com/axboe/liburing/blob/master/examples/send-zerocopy.c)
to find the minimal message size for which zero-copy send is optimal.
Run `./send-zerocopy tcp -4 -R` on the server side and
`time ./send-zerocopy tcp -4 -b 0 -s BUFFER_SIZE -D SERVER_ADDRESS` on the client side
with `-z 0` (ordinary send) and `-z 1` (zero-copy send), and compare
the speed in MB/s and the consumed CPU time (user+system).
## use_sync_send_recv
- Type: boolean

@@ -63,8 +63,6 @@ with an OSD restart or, for some of them, even without restarting by updating configuration
- [discard_on_start](#discard_on_start)
- [min_discard_size](#min_discard_size)
- [allow_net_split](#allow_net_split)
- [enable_pg_locks](#enable_pg_locks)
- [pg_lock_retry_interval_ms](#pg_lock_retry_interval_ms)
## bind_address
@@ -649,20 +647,3 @@ The downside is that it increases the probability of writing data into just pg_m
OSDs during failover which can lead to PGs becoming incomplete after additional outages.
The old behaviour in versions up to 2.0.0 was equal to enabled allow_net_split.
## enable_pg_locks
- Type: boolean
Vitastor 2.2.0 introduces a new layer of split-brain prevention mechanism in
addition to etcd: PG locks. They prevent split-brain even in abnormal theoretical cases
when etcd is extremely laggy. As a new feature, by default, PG locks are only enabled
for pools where they're required - pools with [localized reads](pool.en.md#local_reads).
Use this parameter to enable or disable this function for all pools.
## pg_lock_retry_interval_ms
- Type: milliseconds
- Default: 100
Retry interval for failed PG lock attempts.
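For illustration, PG locks could be enabled cluster-wide by merging the parameter into the etcd-based global configuration (the etcd endpoint is an assumption; merge with your existing `/vitastor/config/global` JSON rather than overwriting it):

```shell
# Inspect the current global config first.
etcdctl --endpoints=http://10.0.0.1:2379 get /vitastor/config/global

# Enable PG locks for all pools, keeping the default retry interval.
etcdctl --endpoints=http://10.0.0.1:2379 put /vitastor/config/global \
  '{"enable_pg_locks": true, "pg_lock_retry_interval_ms": 100}'
```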

@@ -64,8 +64,6 @@
- [discard_on_start](#discard_on_start)
- [min_discard_size](#min_discard_size)
- [allow_net_split](#allow_net_split)
- [enable_pg_locks](#enable_pg_locks)
- [pg_lock_retry_interval_ms](#pg_lock_retry_interval_ms)
## bind_address
@@ -681,21 +679,3 @@ pg_minsize OSDs during failover, which can lead to PGs
becoming incomplete if some more OSDs fail.
The old behaviour in versions up to 2.0.0 was identical to enabled allow_net_split.
## enable_pg_locks
- Type: boolean
Vitastor 2.2.0 introduces a new layer of split-brain protection in addition to
etcd: PG locks. They guarantee ordering even in theoretical abnormal cases
when etcd is extremely laggy. As the feature is new, by default it is enabled
only for pools where it is required - namely, pools with
[local reads](pool.ru.md#local_reads) enabled. This parameter lets you
enable PG locks for all pools.
## pg_lock_retry_interval_ms
- Type: milliseconds
- Default: 100
Retry interval for failed PG lock attempts.

@@ -34,7 +34,6 @@ Parameters:
- [failure_domain](#failure_domain)
- [level_placement](#level_placement)
- [raw_placement](#raw_placement)
- [local_reads](#local_reads)
- [max_osd_combinations](#max_osd_combinations)
- [block_size](#block_size)
- [bitmap_granularity](#bitmap_granularity)
@@ -134,8 +133,8 @@ Pool name.
## scheme
- Type: string
- One of: "replicated", "xor", "ec" or "jerasure"
- Required
- One of: "replicated", "xor", "ec" or "jerasure"
Redundancy scheme used for data in this pool. "jerasure" is an alias for "ec",
both use Reed-Solomon-Vandermonde codes based on ISA-L or jerasure libraries.
@@ -290,30 +289,6 @@ Examples:
- EC 4+2 in 3 DC: `any, dc=1 host!=1, dc!=1, dc=3 host!=3, dc!=(1,3), dc=5 host!=5`
- 1 replica in fixed DC + 2 in random DCs: `dc?=meow, dc!=1, dc!=(1,2)`
## local_reads
- Type: string
- One of: "primary", "nearest" or "random"
- Default: primary
By default, Vitastor serves all read and write requests from the primary OSD of each PG.
But it can also serve read requests for replicated pools from secondary OSDs in clean PGs
(active or active+left_on_dead) which may be useful if you have OSDs with different network
latency to the client - for example, if you have a cross-datacenter setup.
If you set this parameter to "nearest", clients will try to read from the nearest OSD
in the [Placement Tree](#placement-tree), i.e. from an OSD from the same host or datacenter.
Distance to different OSDs will be calculated based on client hostname, determined
automatically or set manually in the [hostname](client.en.md#hostname) parameter.
If you set this parameter to "random", clients will try to distribute read requests over
all available secondary OSDs. This mode is mainly useful for tests, but, probably, not
really required in production setups.
[PG locks](osd.en.md#enable_pg_locks) are required for local reads to function. However,
PG locks are enabled automatically by default for pools with enabled local reads, so you
don't have to enable them explicitly.
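As a sketch, a replicated pool with nearest-OSD reads could be created like this (pool name, PG size and PG count are placeholders; flag spellings follow the pool-create parameter table, so verify them against your vitastor-cli version):

```shell
# Create a 3-replica pool that serves reads from the nearest OSD
# (same host or datacenter as the client, per the placement tree).
vitastor-cli create-pool testpool --scheme replicated \
  --pg_size 3 --pg_minsize 2 --pg_count 256 \
  --local_reads nearest
```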
## max_osd_combinations
- Type: integer
@@ -349,8 +324,7 @@ Read more about this parameter in [Cluster-Wide Disk Layout Parameters](layout-c
## immediate_commit
- Type: string
- One of: "all", "small" or "none"
- Type: string, one of "all", "small" and "none"
- Default: none
Immediate commit setting for this pool. The value from /vitastor/config/global

@@ -33,7 +33,6 @@
- [failure_domain](#failure_domain)
- [level_placement](#level_placement)
- [raw_placement](#raw_placement)
- [local_reads](#local_reads)
- [max_osd_combinations](#max_osd_combinations)
- [block_size](#block_size)
- [bitmap_granularity](#bitmap_granularity)
@@ -134,8 +133,8 @@ the OSD is ignored and the OSD is not removed from the distribution
## scheme
- Type: string
- One of: "replicated", "xor", "ec" or "jerasure"
- Required
- One of: "replicated", "xor", "ec" or "jerasure"
Redundancy scheme used in this pool. "jerasure" is an alias for "ec"; both
schemes use Reed-Solomon-Vandermonde codes implemented with
@@ -288,30 +287,6 @@ meow is unavailable".
- EC 4+2 in 3 datacenters: `any, dc=1 host!=1, dc!=1, dc=3 host!=3, dc!=(1,3), dc=5 host!=5`
- 1 replica in a fixed DC + 2 in other DCs: `dc?=meow, dc!=1, dc!=(1,2)`
## local_reads
- Type: string
- One of: "primary", "nearest" or "random"
- Default: primary
By default, Vitastor serves all read and write requests from the primary OSD of each PG.
However, in clean PGs (active or active+left_on_dead) of replicated pools it is also
possible to serve read requests from secondary OSDs, which may be useful if
network latency from the client to different OSDs differs a lot - for example,
if you have several datacenters.
If this parameter is set to "nearest", clients will try to read from the OSDs nearest
to them in the [placement tree](#дерево-размещения), i.e. from an OSD on the same host
or in the same datacenter. Distance to different OSDs is calculated using the client's
host name, determined automatically or set manually with the [hostname](client.ru.md#hostname) parameter.
If this parameter is set to "random", clients will try to spread read requests
across all available secondary OSDs. This mode is mainly useful for tests
and is probably rarely needed in real installations.
[PG locks](osd.ru.md#enable_pg_locks) are required for local reads to work. You don't
need to enable them explicitly - they are enabled automatically for pools with local reads enabled.
## max_osd_combinations
- Type: integer
@@ -349,8 +324,7 @@ meow is unavailable".
## immediate_commit
- Type: string
- One of: "all", "small" or "none"
- Type: string, one of "all", "small" or "none"
- Default: none
Immediate commit setting for this pool. If not set, the value

@@ -271,15 +271,3 @@
заполненные на 100% OSD вообще не могут стартовать), так что вы сможете
восстановить работу кластера после ошибок отсутствия свободного места
без уничтожения и пересоздания OSD.
- name: hostname
type: string
online: true
info: |
Clients use host name to find their distance to OSDs when [localized reads](pool.en.md#local_reads)
are enabled. By default, standard [gethostname](https://man7.org/linux/man-pages/man2/gethostname.2.html)
function is used to determine host name, but you can also override it with this parameter.
info_ru: |
Клиенты используют имя хоста для определения расстояния до OSD, когда включены
[локальные чтения](pool.ru.md#local_reads). По умолчанию для определения имени
хоста используется стандартная функция [gethostname](https://man7.org/linux/man-pages/man2/gethostname.2.html),
но вы также можете задать имя хоста вручную данным параметром.

@@ -373,55 +373,6 @@
параметра читается без дополнительного копирования. Вы можете попробовать
поменять этот параметр и посмотреть, как он влияет на производительность
случайного и линейного доступа.
- name: min_zerocopy_send_size
type: int
default: 32768
info: |
OSDs and clients will attempt to use io_uring-based zero-copy TCP send
for buffers larger than this number of bytes. Zero-copy send with io_uring is
supported since Linux kernel version 6.1. Support is auto-detected and disabled
automatically when not available. It can also be disabled explicitly by setting
this parameter to a negative value.
⚠️ Warning! Zero-copy send performance may vary greatly from CPU to CPU and from
one kernel version to another. Generally, it tends to be beneficial only with larger
messages. With smaller messages (say, 4 KB), it may actually be slower. 32 KB is
enough for almost all CPUs, but even smaller values are optimal for some of them.
For example, 4 KB is OK for EPYC Milan/Genoa and 12 KB is OK for Xeon Ice Lake
(but verify it yourself please).
Verification instructions:
1. Add `iommu=pt` into your Linux kernel command line and reboot.
2. Upgrade your kernel. For example, it's very important to use 6.11+ with recent AMD EPYCs.
3. Run some tests with the [send-zerocopy liburing example](https://github.com/axboe/liburing/blob/master/examples/send-zerocopy.c)
to find the minimal message size for which zero-copy is optimal.
Use `./send-zerocopy tcp -4 -R` at the server side and
`time ./send-zerocopy tcp -4 -b 0 -s BUFFER_SIZE -D SERVER_IP` at the client side with
`-z 0` (no zero-copy) and `-z 1` (zero-copy), and compare MB/s and used CPU time
(user+system).
info_ru: |
OSD и клиенты будут пробовать использовать TCP-отправку без копирования (zero-copy) на
основе io_uring для буферов, больших, чем это число байт. Отправка без копирования
поддерживается в io_uring, начиная с версии ядра Linux 6.1. Наличие поддержки
проверяется автоматически и zero-copy отключается, когда поддержки нет. Также
её можно отключить явно, установив данный параметр в отрицательное значение.
⚠️ Внимание! Производительность данной функции может сильно отличаться на разных
процессорах и на разных версиях ядра Linux. В целом, zero-copy обычно быстрее с
большими сообщениями, а с мелкими (например, 4 КБ) zero-copy может быть даже
медленнее. 32 КБ достаточно почти для всех процессоров, но для каких-то можно
использовать даже меньшие значения. Например, для EPYC Milan/Genoa подходит 4 КБ,
а для Xeon Ice Lake - 12 КБ (но, пожалуйста, перепроверьте это сами).
Инструкция по проверке:
1. Добавьте `iommu=pt` в командную строку загрузки вашего ядра Linux и перезагрузитесь.
2. Обновите ядро. Например, для AMD EPYC очень важно использовать версию 6.11+.
3. Позапускайте тесты с помощью [send-zerocopy из примеров liburing](https://github.com/axboe/liburing/blob/master/examples/send-zerocopy.c),
чтобы найти минимальный размер сообщения, для которого zero-copy отправка оптимальна.
Запускайте `./send-zerocopy tcp -4 -R` на стороне сервера и
`time ./send-zerocopy tcp -4 -b 0 -s РАЗМЕРУФЕРА -D АДРЕС_СЕРВЕРА` на стороне клиента
с опцией `-z 0` (обычная отправка) и `-z 1` (отправка без копирования), и сравнивайте
скорость в МБ/с и занятое процессорное время (user+system).
- name: use_sync_send_recv
type: bool
default: false

@@ -781,23 +781,3 @@
неполными (incomplete), если упадут ещё какие-то OSD.
Старое поведение в версиях до 2.0.0 было идентично включённому allow_net_split.
- name: enable_pg_locks
type: bool
info: |
Vitastor 2.2.0 introduces a new layer of split-brain prevention mechanism in
addition to etcd: PG locks. They prevent split-brain even in abnormal theoretical cases
when etcd is extremely laggy. As a new feature, by default, PG locks are only enabled
for pools where they're required - pools with [localized reads](pool.en.md#local_reads).
Use this parameter to enable or disable this function for all pools.
info_ru: |
В Vitastor 2.2.0 появился новый слой защиты от сплитбрейна в дополнение к etcd -
блокировки PG. Они гарантируют порядок даже в теоретических ненормальных случаях,
когда etcd очень сильно тормозит. Так как функция новая, по умолчанию она включается
только для пулов, в которых она необходима - а именно, в пулах с включёнными
[локальными чтениями](pool.ru.md#local_reads). Ну а с помощью данного параметра
можно включить блокировки PG для всех пулов.
- name: pg_lock_retry_interval_ms
type: ms
default: 100
info: Retry interval for failed PG lock attempts.
info_ru: Интервал повтора неудачных попыток блокировки PG.

@@ -26,9 +26,9 @@ at Vitastor Kubernetes operator: https://github.com/Antilles7227/vitastor-operat
The instructions are very simple.
1. Download a Docker image of the desired version: \
`docker pull vitastor:v2.2.0`
`docker pull vitastor:2.1.0`
2. Install scripts to the host system: \
`docker run --rm -it -v /etc:/host-etc -v /usr/bin:/host-bin vitastor:v2.2.0 install.sh`
`docker run --rm -it -v /etc:/host-etc -v /usr/bin:/host-bin vitastor:2.1.0 install.sh`
3. Reload udev rules: \
`udevadm control --reload-rules`

@@ -25,9 +25,9 @@ Vitastor can be installed in Docker/Podman. In this case etcd,
The installation instructions are as simple as possible.
1. Download a Docker image of the desired version: \
`docker pull vitastor:v2.2.0`
`docker pull vitastor:2.1.0`
2. Install the scripts into the host system with: \
`docker run --rm -it -v /etc:/host-etc -v /usr/bin:/host-bin vitastor:v2.2.0 install.sh`
`docker run --rm -it -v /etc:/host-etc -v /usr/bin:/host-bin vitastor:2.1.0 install.sh`
3. Reload udev rules: \
`udevadm control --reload-rules`

@@ -125,13 +125,6 @@ All other client-side components are based on the client library:
all current read/write operations to it fail with EPIPE error and are retried by clients.
- After completing all secondary read/write requests, primary OSD sends the response to
the client.
- When [localized reads](../config/pool.en.md#local_reads) are enabled for a PG in a
replicated pool, and the PG is in an active and clean state (active or
active+left_on_dead), the client can send the request to one of secondary OSDs instead
of the primary. Secondary OSD checks the [PG lock](../config/osd.en.md#enable_pg_locks)
and handles the request locally without communicating to the primary. PG lock is required
for the secondary OSD to know for sure that the PG is in clean state and not switching
primary at the moment.
### Nuances of request handling

@@ -125,12 +125,6 @@
and if any of these connections is dropped, the PG is restarted, and all current read
and write operations to it fail with EPIPE error and are retried by clients.
- After completing all secondary read/write operations, the primary OSD sends the response to the client.
- When [localized reads](../config/pool.ru.md#local_reads) are enabled in a replicated pool,
and the PG is in a clean active state (active or active+left_on_dead), the client may
send the request to one of the secondary OSDs instead of the primary. The secondary OSD checks
the [PG lock](../config/osd.ru.md#enable_pg_locks) and handles the request locally, without
contacting the primary. The PG lock is needed here so that the secondary OSD knows for sure
that the PG is in a clean state and is not switching to another primary OSD.
### Nuances of request handling

@@ -12,14 +12,6 @@
License: VNPL 1.1 for server code and dual VNPL 1.1 + GPL 2.0+ for client code.
Server components are distributed under the VNPL only.
Client libraries are distributed under a dual license: VNPL 1.0
and also GNU GPL 2.0 or later. This is done for compatibility
with software such as QEMU and fio.
## VNPL
VNPL is a "network copyleft" license - Vitastor's own free copyleft license,
the Vitastor Network Public License 1.1, based on GNU GPL 3.0 with an additional
"Network Interaction" clause requiring distribution of all programs
@@ -37,70 +29,9 @@ Vitastor Network Public License 1.1, based on GNU GPL 3.0 with an
No restrictions are imposed on Windows or any other software not developed
*specifically* for use with Vitastor.
## Explanation
Client libraries are distributed under a dual license: VNPL 1.0
and also GNU GPL 2.0 or later. This is done for compatibility
with software such as QEMU and fio.
Network copyleft is governed by license clause **13. Remote Network Interaction**.
A program is considered a "proxy program" if both conditions hold:
- It was created specifically to work with Vitastor. Essentially this means that the program
must have Vitastor-specific functionality, i.e. it must "know" that it interacts
with Vitastor specifically.
- It interacts with Vitastor directly or indirectly through absolutely any programming
interface, including any invocation method: API, CLI, network, or via some wrapper (which
is itself also a proxy program).
If, in addition to that:
- You give any user the ability to interact with Vitastor over a network,
again through any interface or any chain of "wrappers" (proxy programs),
then, under the VNPL, you must open the source code of the "proxy programs" **to those users**
under any GPL-compatible license - i.e. GPL, LGPL, MIT/BSD or Apache 2 - where "GPL compatibility"
is understood as the ability to include the licensed code in a GPL application.
Accordingly, if you have a "proxy program" but its code is not open to a user
who directly or indirectly interacts with Vitastor, you are not allowed to use Vitastor
under the VNPL and you need a commercial license without source-opening requirements.
## Examples
- The Vitastor Kubernetes CSI driver, which creates PersistentVolumes by calling vitastor-cli create.
- Yes, it interacts with Vitastor via vitastor-cli.
- Yes, it was created specifically to work with Vitastor (otherwise what would be the point).
- Therefore, the CSI driver **definitely counts** as a "proxy program" and must be published
under a free license.
- Windows installed in a virtual machine on a Vitastor disk.
- Yes, it interacts with Vitastor "directly or indirectly" - it reads and writes data through
the block device interface emulated by QEMU.
- No, it was definitely not created *specifically to work with Vitastor* - when it was created,
Vitastor did not even exist yet.
- Therefore, Windows **definitely does not count** as a "proxy program" and the VNPL requirements do not apply to it.
- A cloud control panel that makes requests to the Vitastor Kubernetes CSI driver.
- Yes, it interacts with Vitastor indirectly through the CSI driver, which is a "proxy program".
- It is not immediately known whether it was created specifically to work with Vitastor. How to tell?
Imagine that Vitastor is replaced with any other storage system (for example, a proprietary one).
Would the panel's operation change? If yes (for example, snapshots stop working), the panel
contains specific functionality and was "created specifically to work with Vitastor".
If not, the panel contains no specific functionality and is in principle universal.
- Whether the panel must be opened **depends** on whether it contains specific functionality or not.
## Why?
Because I simultaneously believe in the spirit of copyleft licenses (Linux would not have become
so popular without the GPL!) and want to be able to monetize the product.
At the same time, using even the AGPL for software-defined storage is pointless - it is deeply internal
software that the user will almost certainly never see at all, so nobody would ever
have to open any code, even when building a derived product.
And in general, the current situation in the world, where the effect of the GPL is limited only
to direct linking into a single executable, is not quite correct. Nowadays programs
are much more often integrated via network calls rather than via /usr/bin/ld, and a combined software
product may consist of dozens of microservices communicating over the network.
That is why the VNPL was invented - to preserve sufficient "copyleft-ness".
## License texts
- VNPL 1.1 in English: [VNPL-1.1.txt](../../VNPL-1.1.txt)
- VNPL 1.1 in Russian: [VNPL-1.1-RU.txt](../../VNPL-1.1-RU.txt)
- GPL 2.0: [GPL-2.0.txt](../../GPL-2.0.txt)
You can find the full text of VNPL 1.1 in English in the file [VNPL-1.1.txt](../../VNPL-1.1.txt),
VNPL 1.1 in Russian in [VNPL-1.1-RU.txt](../../VNPL-1.1-RU.txt), and GPL 2.0 in [GPL-2.0.txt](../../GPL-2.0.txt).

@@ -25,7 +25,6 @@
- Recovery of degraded blocks
- Rebalancing (data movement between OSDs)
- [Lazy fsync support](../config/layout-cluster.en.md#immediate_commit)
- [Localized read support](../config/pool.en.md#local_reads) for cross-datacenter setup optimization
- Per-OSD and per-image I/O and space usage statistics in etcd
- Snapshots and copy-on-write image clones
- [Write throttling to smooth random write workloads in SSD+HDD configurations](../config/osd.en.md#throttle_small_writes)

@@ -25,7 +25,6 @@
- Recovery of degraded blocks
- Rebalancing, i.e. moving data between OSDs (disks)
- [Lazy fsync support (no fsync per operation)](../config/layout-cluster.ru.md#immediate_commit)
- [Local reads](../config/pool.ru.md#local_reads) for multi-datacenter optimization
- I/O statistics collection in etcd
- Per-inode I/O and space usage statistics
- Inode naming via metadata stored in etcd

@@ -14,7 +14,6 @@
- [Removing a failed disk](#removing-a-failed-disk)
- [Adding a disk](#adding-a-disk)
- [Restoring from lost pool configuration](#restoring-from-lost-pool-configuration)
- [Incompatibility problems](#Incompatibility-problems)
- [Upgrading Vitastor](#upgrading-vitastor)
- [OSD memory usage](#osd-memory-usage)
@@ -167,17 +166,6 @@ done
After that all PGs should peer and find all previous data.
## Incompatibility problems
### ISA-L 2.31
⚠ It is FORBIDDEN to use Vitastor 2.1.0 and earlier versions with ISA-L 2.31 and newer if
you use EC N+K pools and K > 1 on a CPU with GF-NI instruction support, because it WILL
lead to **data loss** during EC recovery.
If you accidentally upgraded ISA-L to 2.31 but didn't upgrade Vitastor and restarted OSDs,
then stop them as soon as possible and either update Vitastor or roll back ISA-L.
## Upgrading Vitastor
Every upcoming Vitastor version is usually compatible with previous both forward

@@ -14,7 +14,6 @@
- [Removing a failed disk](#удаление-неисправного-диска)
- [Adding a disk](#добавление-диска)
- [Restoring from lost pool configuration](#восстановление-потерянной-конфигурации-пулов)
- [Incompatibility problems](#проблемы-несовместимости)
- [Upgrading Vitastor](#обновление-vitastor)
- [OSD memory usage](#потребление-памяти-osd)
@@ -164,17 +163,6 @@ done
After that, all PGs should peer and find all previous data.
## Incompatibility problems
### ISA-L 2.31
⚠ It is FORBIDDEN to use Vitastor 2.1.0 and earlier versions with ISA-L 2.31
or newer if you use EC N+K pools with K > 1 on a CPU with GF-NI instruction support,
because it will lead to **data loss** during EC recovery.
If you accidentally upgraded ISA-L to 2.31 but didn't upgrade Vitastor and have already restarted OSDs,
stop them all as soon as possible and either upgrade Vitastor or roll back ISA-L.
## Upgrading Vitastor
Usually, every next Vitastor version is compatible with previous versions both forward and backward

@@ -397,7 +397,6 @@ Optional parameters:
| `--immediate_commit none` | Put pool only on OSDs with this or larger immediate_commit (none < small < all) |
| `--level_placement <rules>` | Use additional failure domain rules (example: "dc=112233") |
| `--raw_placement <rules>` | Specify raw PG generation rules ([details](../config/pool.en.md#raw_placement)) |
| `--local_reads primary` | Local read policy for replicated pools: primary, nearest or random |
| `--primary_affinity_tags tags` | Prefer to put primary copies on OSDs with all specified tags |
| `--scrub_interval <time>` | Enable regular scrubbing for this pool. Format: number + unit s/m/h/d/M/y |
| `--used_for_app fs:<name>` | Mark pool as used for VitastorFS with metadata in image `<name>` |

View File

@ -414,7 +414,6 @@ OSD PARENT UP SIZE USED% TAGS WEIGHT BLOCK BITMAP
| `--immediate_commit none` | ...only OSDs with this or a larger immediate_commit (none < small < all) |
| `--level_placement <rules>` | Set additional failure domain rules (example: "dc=112233") |
| `--raw_placement <rules>` | Set low-level PG generation rules ([details](../config/pool.ru.md#raw_placement)) |
| `--local_reads primary` | Local read policy for replicated pools: primary, nearest or random |
| `--primary_affinity_tags tags` | Prefer OSDs with all the given tags for the primary role |
| `--scrub_interval <time>` | Enable scrubs with the given time interval (number + unit s/m/h/d/M/y) |
| `--pg_stripe_size <number>` | Increase the object grouping block size per PG |

View File

@ -14,9 +14,6 @@ Commands:
- [upgrade](#upgrade)
- [defrag](#defrag)
⚠️ Important: follow the instructions from [Linux NFS write size](#linux-nfs-write-size)
for optimal Vitastor NFS performance if you use EC and HDD and mount your NFS from Linux.
## Pseudo-FS
A simplified pseudo-FS proxy is used to emulate file-based image access. It's not
@ -103,62 +100,6 @@ Other notable missing features which should be addressed in the future:
in the DB. The FS is implemented in such a way that this garbage doesn't affect its
function, but having a tool to clean it up still seems the right thing to do.
## Linux NFS write size
Linux NFS client (nfs/nfsv3/nfsv4 kernel modules) has a hard-coded maximum I/O size,
currently set to 1 MB - see `rsize` and `wsize` in [man 5 nfs](https://linux.die.net/man/5/nfs).
This means that when you write to a file in an FS mounted over NFS, the maximum write
request size is 1 MB, even in O_DIRECT mode and even if the original write request
is larger.
However, for optimal linear write performance in Vitastor EC (erasure-coded) pools,
the size of write requests should be a multiple of [block_size](../config/layout-cluster.en.md#block_size),
multiplied by the data chunk count of the pool ([pg_size](../config/pool.en.md#pg_size)-[parity_chunks](../config/pool.en.md#parity_chunks)).
When write requests are smaller or not a multiple of this number, Vitastor has to first
read paired data blocks from disks, calculate new parity blocks and only then write them
back. Obviously this is 2-3 times slower than a simple disk write.
Vitastor HDD setups use a 1 MB block_size by default. So, for optimal performance, if
you use EC 2+1 and HDD, you need your NFS client to send 2 MB write requests; with
EC 4+1 - 4 MB, and so on.
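The rule above can be captured in a tiny helper (a hypothetical illustration, not part of Vitastor; parameter names follow the pool options referenced above):

```javascript
// Full-stripe write size for an EC pool: writes should be a multiple of
// block_size * (pg_size - parity_chunks) to avoid the read-modify-write
// penalty described above.
function optimalWriteSize(blockSize, pgSize, parityChunks)
{
    return blockSize * (pgSize - parityChunks);
}

console.log(optimalWriteSize(1048576, 3, 1)); // EC 2+1, 1 MB blocks -> 2097152 (2 MB)
console.log(optimalWriteSize(1048576, 5, 1)); // EC 4+1, 1 MB blocks -> 4194304 (4 MB)
```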
But Linux NFS client only writes in 1 MB chunks. 😢
The good news is that you can fix it by rebuilding Linux NFS kernel modules 😉 🤩!
You need to change NFS_MAX_FILE_IO_SIZE in nfs_xdr.h and then rebuild and reload modules.
The instructions, using Debian as an example (should be run as root):
```
# download current Linux kernel headers required to build modules
apt-get install linux-headers-`uname -r`
# replace NFS_MAX_FILE_IO_SIZE with a desired number (here it's 4194304 - 4 MB)
sed -i 's/NFS_MAX_FILE_IO_SIZE\s*.*/NFS_MAX_FILE_IO_SIZE\t(4194304U)/' /lib/modules/`uname -r`/source/include/linux/nfs_xdr.h
# download current Linux kernel source
mkdir linux_src
cd linux_src
apt-get source linux-image-`uname -r`-unsigned
# build NFS modules
cd linux-*/fs/nfs
make -C /lib/modules/`uname -r`/build M=$PWD -j8 modules
make -C /lib/modules/`uname -r`/build M=$PWD modules_install
# move default NFS modules away
mv /lib/modules/`uname -r`/kernel/fs/nfs ~/nfs_orig_`uname -r`
depmod -a
# unload old modules and load the new ones
rmmod nfsv3 nfs
modprobe nfsv3
```
After these (not too complicated 🙂) manipulations, NFS is mounted
with the new wsize and rsize by default, which fixes Vitastor-NFS linear write performance.
## Horizontal scaling
Linux NFS 3.0 client doesn't support built-in scaling or failover, i.e. you can't

View File

@ -14,9 +14,6 @@
- [upgrade](#upgrade)
- [defrag](#defrag)
⚠️ Important: for optimal Vitastor NFS performance in Linux when using
HDD and EC (erasure codes), follow the instructions from the [Linux NFS write size](#размер-записи-linux-nfs) section.
## Pseudo-FS
A simplified pseudo-FS implementation is used to emulate file-based access to block
@ -107,66 +104,6 @@ JSON-формате :-). Для инспекции содержимого БД
entries. The FS is designed so that they don't affect its operation, but for
tidiness it's still worth being able to clean them up.
## Linux NFS write size
The Linux NFS client (the nfs/nfsv3/nfsv4 kernel modules) has a maximum I/O request
size fixed in the code, equal to 1 MB - see `rsize` and `wsize` in [man 5 nfs](https://linux.die.net/man/5/nfs).
This means that when you write to a file in a file system mounted over NFS,
the maximum write request size is 1 MB, even in O_DIRECT mode and even if
the original write request was larger.
However, for optimal linear write speed in Vitastor when using EC pools
(erasure-coded pools), write requests should be a multiple of
[block_size](../config/layout-cluster.ru.md#block_size) multiplied by the number of
data chunks of the pool ([pg_size](../config/pool.ru.md#pg_size)-[parity_chunks](../config/pool.ru.md#parity_chunks)).
If write requests are smaller than or not a multiple of that, Vitastor first has to read
the old versions of the paired data blocks from disks, calculate the new parity blocks,
and only then write them to disk. Naturally, this is 2-3 times slower than a simple
disk write.
At the same time, block_size on hard drives is set to 1 MB by default.
Thus, if you use EC 2+1 and HDD, for optimal write speed you need the NFS client
to write in 2 MB chunks; with EC 4+1 and HDD - in 4 MB chunks, and so on.
And the Linux NFS client only writes in 1 MB chunks. 😢
But this can be fixed by rebuilding the Linux NFS kernel modules 😉 🤩! To do that,
change the NFS_MAX_FILE_IO_SIZE variable in the nfs_xdr.h header file
and then rebuild the NFS modules.
Rebuild instructions, using Debian as an example (run as root):
```
# download the headers needed to build modules for the current Linux kernel
apt-get install linux-headers-`uname -r`
# replace NFS_MAX_FILE_IO_SIZE in the headers with the desired value (here 4194304 - 4 MB)
sed -i 's/NFS_MAX_FILE_IO_SIZE\s*.*/NFS_MAX_FILE_IO_SIZE\t(4194304U)/' /lib/modules/`uname -r`/source/include/linux/nfs_xdr.h
# download the source code of the current kernel
mkdir linux_src
cd linux_src
apt-get source linux-image-`uname -r`-unsigned
# build the NFS modules
cd linux-*/fs/nfs
make -C /lib/modules/`uname -r`/build M=$PWD -j8 modules
make -C /lib/modules/`uname -r`/build M=$PWD modules_install
# move the stock NFS modules out of the way
mv /lib/modules/`uname -r`/kernel/fs/nfs ~/nfs_orig_`uname -r`
depmod -a
# unload the old modules and load the new ones
rmmod nfsv3 nfs
modprobe nfsv3
```
After this (relatively simple 🙂) manipulation, NFS is mounted with the new
wsize and rsize by default, which fixes Vitastor-NFS linear write performance.
## Horizontal scaling
The Linux NFS 3.0 client doesn't support built-in scaling or failover.

View File

@ -162,12 +162,10 @@ apt-get install linux-headers-`uname -r`
apt-get build-dep linux-image-`uname -r`-unsigned
apt-get source linux-image-`uname -r`-unsigned
cd linux*/drivers/vdpa
make -C /lib/modules/`uname -r`/build M=$PWD CONFIG_VDPA=m CONFIG_VDPA_USER=m CONFIG_VIRTIO_VDPA=m -j8 modules
make -C /lib/modules/`uname -r`/build M=$PWD CONFIG_VDPA=m CONFIG_VDPA_USER=m CONFIG_VIRTIO_VDPA=m modules_install
make -C /lib/modules/`uname -r`/build M=$PWD CONFIG_VDPA=m CONFIG_VDPA_USER=m CONFIG_VIRTIO_VDPA=m -j8 modules modules_install
cat Module.symvers >> /lib/modules/`uname -r`/build/Module.symvers
cd ../virtio
make -C /lib/modules/`uname -r`/build M=$PWD CONFIG_VDPA=m CONFIG_VDPA_USER=m CONFIG_VIRTIO_VDPA=m -j8 modules
make -C /lib/modules/`uname -r`/build M=$PWD CONFIG_VDPA=m CONFIG_VDPA_USER=m CONFIG_VIRTIO_VDPA=m modules_install
make -C /lib/modules/`uname -r`/build M=$PWD CONFIG_VDPA=m CONFIG_VDPA_USER=m CONFIG_VIRTIO_VDPA=m -j8 modules modules_install
depmod -a
```

View File

@ -165,12 +165,10 @@ apt-get install linux-headers-`uname -r`
apt-get build-dep linux-image-`uname -r`-unsigned
apt-get source linux-image-`uname -r`-unsigned
cd linux*/drivers/vdpa
make -C /lib/modules/`uname -r`/build M=$PWD CONFIG_VDPA=m CONFIG_VDPA_USER=m CONFIG_VIRTIO_VDPA=m -j8 modules
make -C /lib/modules/`uname -r`/build M=$PWD CONFIG_VDPA=m CONFIG_VDPA_USER=m CONFIG_VIRTIO_VDPA=m modules_install
make -C /lib/modules/`uname -r`/build M=$PWD CONFIG_VDPA=m CONFIG_VDPA_USER=m CONFIG_VIRTIO_VDPA=m -j8 modules modules_install
cat Module.symvers >> /lib/modules/`uname -r`/build/Module.symvers
cd ../virtio
make -C /lib/modules/`uname -r`/build M=$PWD CONFIG_VDPA=m CONFIG_VDPA_USER=m CONFIG_VIRTIO_VDPA=m -j8 modules
make -C /lib/modules/`uname -r`/build M=$PWD CONFIG_VDPA=m CONFIG_VDPA_USER=m CONFIG_VIRTIO_VDPA=m modules_install
make -C /lib/modules/`uname -r`/build M=$PWD CONFIG_VDPA=m CONFIG_VDPA_USER=m CONFIG_VIRTIO_VDPA=m -j8 modules modules_install
depmod -a
```

View File

@ -253,7 +253,7 @@ function random_custom_combinations(osd_tree, rules, count, ordered)
for (let i = 1; i < rules.length; i++)
{
const filtered = filter_tree_by_rules(osd_tree, rules[i], selected);
const idx = select_murmur3(filtered.length, i => 'p:'+f.id+':'+(filtered[i].name || filtered[i].id));
const idx = select_murmur3(filtered.length, i => 'p:'+f.id+':'+filtered[i].id);
selected.push(idx == null ? { levels: {}, id: null } : filtered[idx]);
}
const size = selected.filter(s => s.id !== null).length;
@ -270,7 +270,7 @@ function random_custom_combinations(osd_tree, rules, count, ordered)
for (const item_rules of rules)
{
const filtered = selected.length ? filter_tree_by_rules(osd_tree, item_rules, selected) : first;
const idx = select_murmur3(filtered.length, i => n+':'+(filtered[i].name || filtered[i].id));
const idx = select_murmur3(filtered.length, i => n+':'+filtered[i].id);
selected.push(idx == null ? { levels: {}, id: null } : filtered[idx]);
}
const size = selected.filter(s => s.id !== null).length;
@ -340,9 +340,9 @@ function filter_tree_by_rules(osd_tree, rules, selected)
}
// Convert from
// node_list = { id: string|number, name?: string, level: string, size?: number, parent?: string|number }[]
// node_list = { id: string|number, level: string, size?: number, parent?: string|number }[]
// to
// node_tree = { [node_id]: { id, name?, level, size?, parent?, children?: child_node[], levels: { [level]: id, ... } } }
// node_tree = { [node_id]: { id, level, size?, parent?, children?: child_node_id[], levels: { [level]: id, ... } } }
function index_tree(node_list)
{
const tree = { '': { children: [], levels: {} } };
@ -357,7 +357,7 @@ function index_tree(node_list)
tree[parent_id].children = tree[parent_id].children || [];
tree[parent_id].children.push(tree[node.id]);
}
const cur = [ ...tree[''].children ];
const cur = tree[''].children;
for (let i = 0; i < cur.length; i++)
{
cur[i].levels[cur[i].level] = cur[i].id;
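The node_list → node_tree conversion described in the comment above can be sketched standalone (an illustrative reimplementation, not the exact index_tree() from the diff):

```javascript
// Build a tree keyed by node id from a flat node list; nodes without a
// (known) parent are attached to a synthetic '' root.
function indexTree(nodeList)
{
    const tree = { '': { children: [], levels: {} } };
    for (const node of nodeList)
        tree[node.id] = { ...node, levels: {} };
    for (const node of nodeList)
    {
        const parentId = node.parent && tree[node.parent] ? node.parent : '';
        tree[parentId].children = tree[parentId].children || [];
        tree[parentId].children.push(tree[node.id]);
    }
    return tree;
}
```

For example, an OSD with `parent: 'host1'` ends up in `tree['host1'].children`, and a parentless host or DC lands under the synthetic root.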

View File

@ -1,244 +0,0 @@
// Copyright (c) Vitaliy Filippov, 2019+
// License: VNPL-1.1 (see README.md for details)
// Extract OSDs from the lowest affected tree level into a separate (flat) map
// to run PG optimisation on failure domains instead of individual OSDs
//
// node_list = same input as for index_tree()
// rules = [ level, operator, value ][][]
// returns { nodes: new_node_list, leaves: { new_folded_node_id: [ extracted_leaf_nodes... ] } }
function fold_failure_domains(node_list, rules)
{
const interest = {};
for (const level_rules of rules)
{
for (const rule of level_rules)
interest[rule[0]] = true;
}
const max_numeric_id = node_list.reduce((a, c) => a < (0|c.id) ? (0|c.id) : a, 0);
let next_id = max_numeric_id;
const node_map = node_list.reduce((a, c) => { a[c.id||''] = c; return a; }, {});
const old_ids_by_new = {};
const extracted_nodes = {};
let folded = true;
while (folded)
{
const per_parent = {};
for (const node_id in node_map)
{
const node = node_map[node_id];
const p = node.parent || '';
per_parent[p] = per_parent[p]||[];
per_parent[p].push(node);
}
folded = false;
for (const node_id in per_parent)
{
const fold_node = node_id !== '' && per_parent[node_id].length > 0 && per_parent[node_id].filter(child => per_parent[child.id||''] || interest[child.level]).length == 0;
if (fold_node)
{
const old_node = node_map[node_id];
const new_id = ++next_id;
node_map[new_id] = {
...old_node,
id: new_id,
name: node_id, // for use in murmur3 hashes
size: per_parent[node_id].reduce((a, c) => a + (Number(c.size)||0), 0),
};
delete node_map[node_id];
old_ids_by_new[new_id] = node_id;
extracted_nodes[new_id] = [];
for (const child of per_parent[node_id])
{
if (old_ids_by_new[child.id])
{
extracted_nodes[new_id].push(...extracted_nodes[child.id]);
delete extracted_nodes[child.id];
}
else
extracted_nodes[new_id].push(child);
delete node_map[child.id];
}
folded = true;
}
}
}
return { nodes: Object.values(node_map), leaves: extracted_nodes };
}
// Distribute PGs mapped to "folded" nodes to individual OSDs according to their weights
// folded_pgs = optimize_result.int_pgs before folding
// prev_pgs = optional previous PGs from optimize_change() input
// extracted_nodes = output from fold_failure_domains
function unfold_failure_domains(folded_pgs, prev_pgs, extracted_nodes)
{
const maps = {};
let found = false;
for (const new_id in extracted_nodes)
{
const weights = {};
for (const sub_node of extracted_nodes[new_id])
{
weights[sub_node.id] = sub_node.size;
}
maps[new_id] = { weights, prev: [], next: [], pos: 0 };
found = true;
}
if (!found)
{
return folded_pgs;
}
for (let i = 0; i < folded_pgs.length; i++)
{
for (let j = 0; j < folded_pgs[i].length; j++)
{
if (maps[folded_pgs[i][j]])
{
maps[folded_pgs[i][j]].prev.push(prev_pgs && prev_pgs[i] && prev_pgs[i][j] || 0);
}
}
}
for (const new_id in maps)
{
maps[new_id].next = adjust_distribution(maps[new_id].weights, maps[new_id].prev);
}
const mapped_pgs = [];
for (let i = 0; i < folded_pgs.length; i++)
{
mapped_pgs.push(folded_pgs[i].map(osd => (maps[osd] ? maps[osd].next[maps[osd].pos++] : osd)));
}
return mapped_pgs;
}
// Return the new array of items re-distributed as close as possible to weights in wanted_weights
// wanted_weights = { [key]: weight }
// cur_items = key[]
function adjust_distribution(wanted_weights, cur_items)
{
const item_map = {};
for (let i = 0; i < cur_items.length; i++)
{
const item = cur_items[i];
item_map[item] = (item_map[item] || { target: 0, cur: [] });
item_map[item].cur.push(i);
}
let total_weight = 0;
for (const item in wanted_weights)
{
total_weight += Number(wanted_weights[item]) || 0;
}
for (const item in wanted_weights)
{
const weight = wanted_weights[item] / total_weight * cur_items.length;
if (weight > 0)
{
item_map[item] = (item_map[item] || { target: 0, cur: [] });
item_map[item].target = weight;
}
}
const diff = (item) => (item_map[item].cur.length - item_map[item].target);
const most_underweighted = Object.keys(item_map)
.filter(item => item_map[item].target > 0)
.sort((a, b) => diff(a) - diff(b));
// Items with zero target weight MUST never be selected - remove them
// and remap each of them to a most underweighted item
for (const item in item_map)
{
if (!item_map[item].target)
{
const prev = item_map[item];
delete item_map[item];
for (const idx of prev.cur)
{
const move_to = most_underweighted[0];
item_map[move_to].cur.push(idx);
move_leftmost(most_underweighted, diff);
}
}
}
// Other over-weighted items are only moved if it improves the distribution
while (most_underweighted.length > 1)
{
const first = most_underweighted[0];
const last = most_underweighted[most_underweighted.length-1];
const first_diff = diff(first);
const last_diff = diff(last);
if (Math.abs(first_diff+1)+Math.abs(last_diff-1) < Math.abs(first_diff)+Math.abs(last_diff))
{
item_map[first].cur.push(item_map[last].cur.pop());
move_leftmost(most_underweighted, diff);
move_rightmost(most_underweighted, diff);
}
else
{
break;
}
}
const new_items = new Array(cur_items.length);
for (const item in item_map)
{
for (const idx of item_map[item].cur)
{
new_items[idx] = item;
}
}
return new_items;
}
function move_leftmost(sorted_array, diff)
{
// Re-sort by moving the leftmost item to the right if it changes position
const first = sorted_array[0];
const new_diff = diff(first);
let r = 0;
while (r < sorted_array.length-1 && diff(sorted_array[r+1]) <= new_diff)
r++;
if (r > 0)
{
for (let i = 0; i < r; i++)
sorted_array[i] = sorted_array[i+1];
sorted_array[r] = first;
}
}
function move_rightmost(sorted_array, diff)
{
// Re-sort by moving the rightmost item to the left if it changes position
const last = sorted_array[sorted_array.length-1];
const new_diff = diff(last);
let r = sorted_array.length-1;
while (r > 0 && diff(sorted_array[r-1]) > new_diff)
r--;
if (r < sorted_array.length-1)
{
for (let i = sorted_array.length-1; i > r; i--)
sorted_array[i] = sorted_array[i-1];
sorted_array[r] = last;
}
}
// map previous PGs to folded nodes
function fold_prev_pgs(pgs, extracted_nodes)
{
const unmap = {};
for (const new_id in extracted_nodes)
{
for (const sub_node of extracted_nodes[new_id])
{
unmap[sub_node.id] = new_id;
}
}
const mapped_pgs = [];
for (let i = 0; i < pgs.length; i++)
{
mapped_pgs.push(pgs[i].map(osd => (unmap[osd] || osd)));
}
return mapped_pgs;
}
module.exports = {
fold_failure_domains,
unfold_failure_domains,
adjust_distribution,
fold_prev_pgs,
};
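The proportional redistribution idea behind adjust_distribution() can be illustrated with a standalone sketch (a simplified greedy version, not the implementation above):

```javascript
// Move items from over-weighted keys to the most under-weighted keys until
// per-key counts roughly match the wanted proportions; items that already
// fit stay in place to minimize movement.
function redistribute(wantedWeights, curItems)
{
    const total = Object.values(wantedWeights).reduce((a, c) => a + c, 0);
    const target = {}, count = {};
    for (const k in wantedWeights)
    {
        target[k] = wantedWeights[k] / total * curItems.length;
        count[k] = 0;
    }
    const items = [ ...curItems ];
    for (const it of items)
        count[it] = (count[it] || 0) + 1;
    for (let i = 0; i < items.length; i++)
    {
        const it = items[i];
        // keep the item if its key is wanted and not over-weighted
        if (target[it] !== undefined && count[it] <= Math.ceil(target[it]))
            continue;
        // otherwise move it to the most under-weighted key
        let best = null;
        for (const k in target)
            if (best === null || count[k] - target[k] < count[best] - target[best])
                best = k;
        count[it]--;
        count[best]++;
        items[i] = best;
    }
    return items;
}
```

Keys with zero target weight never receive items, mirroring the "zero-weight items MUST never be selected" rule in the real implementation.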

View File

@ -98,7 +98,6 @@ async function optimize_initial({ osd_weights, combinator, pg_count, pg_size = 3
score: lp_result.score,
weights: lp_result.vars,
int_pgs,
pg_effsize,
space: eff * pg_effsize,
total_space: total_weight,
};
@ -410,7 +409,6 @@ async function optimize_change({ prev_pgs: prev_int_pgs, osd_weights, combinator
int_pgs: new_pgs,
differs,
osd_differs,
pg_effsize,
space: pg_effsize * pg_list_space_efficiency(new_pgs, osd_weights, pg_minsize, parity_space),
total_space: total_weight,
};

View File

@ -1,108 +0,0 @@
// Copyright (c) Vitaliy Filippov, 2019+
// License: VNPL-1.1 (see README.md for details)
const assert = require('assert');
const { fold_failure_domains, unfold_failure_domains, adjust_distribution } = require('./fold.js');
const DSL = require('./dsl_pgs.js');
const LPOptimizer = require('./lp_optimizer.js');
const stableStringify = require('../stable-stringify.js');
async function run()
{
// Test run adjust_distribution
console.log('adjust_distribution');
const rand = [];
for (let i = 0; i < 100; i++)
{
rand.push(1 + Math.floor(10*Math.random()));
// or rand.push(0);
}
const adj = adjust_distribution({ 1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1, 10: 1 }, rand);
//console.log(rand.join(' '));
console.log(rand.reduce((a, c) => { a[c] = (a[c]||0)+1; return a; }, {}));
//console.log(adj.join(' '));
console.log(adj.reduce((a, c) => { a[c] = (a[c]||0)+1; return a; }, {}));
console.log('Movement: '+rand.reduce((a, c, i) => a+(rand[i] != adj[i] ? 1 : 0), 0)+'/'+rand.length);
console.log('\nfold_failure_domains');
console.log(JSON.stringify(fold_failure_domains(
[
{ id: 1, level: 'osd', size: 1, parent: 'disk1' },
{ id: 2, level: 'osd', size: 2, parent: 'disk1' },
{ id: 'disk1', level: 'disk', parent: 'host1' },
{ id: 'host1', level: 'host', parent: 'dc1' },
{ id: 'dc1', level: 'dc' },
],
[ [ [ 'dc' ], [ 'host' ] ] ]
), 0, 2));
console.log('\nfold_failure_domains empty rules');
console.log(JSON.stringify(fold_failure_domains(
[
{ id: 1, level: 'osd', size: 1, parent: 'disk1' },
{ id: 2, level: 'osd', size: 2, parent: 'disk1' },
{ id: 'disk1', level: 'disk', parent: 'host1' },
{ id: 'host1', level: 'host', parent: 'dc1' },
{ id: 'dc1', level: 'dc' },
],
[]
), 0, 2));
console.log('\noptimize_folded');
// 5 DCs, 2 hosts per DC, 10 OSD per host
const nodes = [];
for (let i = 1; i <= 100; i++)
{
nodes.push({ id: i, level: 'osd', size: 1, parent: 'host'+(1+(0|((i-1)/10))) });
}
for (let i = 1; i <= 10; i++)
{
nodes.push({ id: 'host'+i, level: 'host', parent: 'dc'+(1+(0|((i-1)/2))) });
}
for (let i = 1; i <= 5; i++)
{
nodes.push({ id: 'dc'+i, level: 'dc' });
}
// Check rules
const rules = DSL.parse_level_indexes({ dc: '112233', host: '123456' }, [ 'dc', 'host', 'osd' ]);
assert.deepEqual(rules, [[],[["dc","=",1],["host","!=",[1]]],[["dc","!=",[1]]],[["dc","=",3],["host","!=",[3]]],[["dc","!=",[1,3]]],[["dc","=",5],["host","!=",[5]]]]);
// Check tree folding
const { nodes: folded_nodes, leaves: folded_leaves } = fold_failure_domains(nodes, rules);
const expected_folded = [];
const expected_leaves = {};
for (let i = 1; i <= 10; i++)
{
expected_folded.push({ id: 100+i, name: 'host'+i, level: 'host', size: 10, parent: 'dc'+(1+(0|((i-1)/2))) });
expected_leaves[100+i] = [ ...new Array(10).keys() ].map(k => ({ id: 10*(i-1)+k+1, level: 'osd', size: 1, parent: 'host'+i }));
}
for (let i = 1; i <= 5; i++)
{
expected_folded.push({ id: 'dc'+i, level: 'dc' });
}
assert.equal(stableStringify(folded_nodes), stableStringify(expected_folded));
assert.equal(stableStringify(folded_leaves), stableStringify(expected_leaves));
// Now optimise it
console.log('1000 PGs, EC 112233');
const leaf_weights = folded_nodes.reduce((a, c) => { if (Number(c.id)) { a[c.id] = c.size; } return a; }, {});
let res = await LPOptimizer.optimize_initial({
osd_weights: leaf_weights,
combinator: new DSL.RuleCombinator(folded_nodes, rules, 10000, false),
pg_size: 6,
pg_count: 1000,
ordered: false,
});
LPOptimizer.print_change_stats(res, false);
assert.equal(res.space, 100, 'Initial distribution');
const unfolded_res = { ...res };
unfolded_res.int_pgs = unfold_failure_domains(res.int_pgs, null, folded_leaves);
const osd_weights = nodes.reduce((a, c) => { if (Number(c.id)) { a[c.id] = c.size; } return a; }, {});
unfolded_res.space = unfolded_res.pg_effsize * LPOptimizer.pg_list_space_efficiency(unfolded_res.int_pgs, osd_weights, 0, 1);
LPOptimizer.print_change_stats(unfolded_res, false);
assert.equal(res.space, 100, 'Initial distribution');
}
run().catch(console.error);

View File

@ -15,7 +15,7 @@ function get_osd_tree(global_config, state)
const stat = state.osd.stats[osd_num];
const osd_cfg = state.config.osd[osd_num];
let reweight = osd_cfg == null ? 1 : Number(osd_cfg.reweight);
if (isNaN(reweight) || reweight < 0 || reweight > 1)
if (reweight < 0 || isNaN(reweight))
reweight = 1;
if (stat && stat.size && reweight && (state.osd.state[osd_num] || Number(stat.time) >= down_time ||
osd_cfg && osd_cfg.noout))
@ -87,7 +87,7 @@ function make_hier_tree(global_config, tree)
tree[''] = { children: [] };
for (const node_id in tree)
{
if (node_id === '' || !(tree[node_id].children||[]).length && (tree[node_id].size||0) <= 0)
if (node_id === '' || tree[node_id].level === 'osd' && (!tree[node_id].size || tree[node_id].size <= 0))
{
continue;
}
@ -107,10 +107,10 @@ function make_hier_tree(global_config, tree)
deleted = 0;
for (const node_id in tree)
{
if (!(tree[node_id].children||[]).length && (tree[node_id].size||0) <= 0)
if (tree[node_id].level !== 'osd' && (!tree[node_id].children || !tree[node_id].children.length))
{
const parent = tree[node_id].parent;
if (parent && tree[parent])
if (parent)
{
tree[parent].children = tree[parent].children.filter(c => c != tree[node_id]);
}

View File

@ -1,6 +1,6 @@
{
"name": "vitastor-mon",
"version": "2.2.0",
"version": "2.1.0",
"description": "Vitastor SDS monitor service",
"main": "mon-main.js",
"scripts": {
@ -19,6 +19,6 @@
"eslint-plugin-node": "^11.1.0"
},
"engines": {
"node": ">=12.0.0"
"node": ">=12.1.0"
}
}

View File

@ -3,7 +3,6 @@
const { RuleCombinator } = require('./lp_optimizer/dsl_pgs.js');
const { SimpleCombinator, flatten_tree } = require('./lp_optimizer/simple_pgs.js');
const { fold_failure_domains, unfold_failure_domains, fold_prev_pgs } = require('./lp_optimizer/fold.js');
const { validate_pool_cfg, get_pg_rules } = require('./pool_config.js');
const LPOptimizer = require('./lp_optimizer/lp_optimizer.js');
const { scale_pg_count } = require('./pg_utils.js');
@ -161,6 +160,7 @@ async function generate_pool_pgs(state, global_config, pool_id, osd_tree, levels
pool_cfg.bitmap_granularity || global_config.bitmap_granularity || 4096,
pool_cfg.immediate_commit || global_config.immediate_commit || 'all'
);
pool_tree = make_hier_tree(global_config, pool_tree);
// First try last_clean_pgs to minimize data movement
let prev_pgs = [];
for (const pg in ((state.history.last_clean_pgs.items||{})[pool_id]||{}))
@ -175,19 +175,14 @@ async function generate_pool_pgs(state, global_config, pool_id, osd_tree, levels
prev_pgs[pg-1] = [ ...state.pg.config.items[pool_id][pg].osd_set ];
}
}
const use_rules = !global_config.use_old_pg_combinator || pool_cfg.level_placement || pool_cfg.raw_placement;
const rules = use_rules ? get_pg_rules(pool_id, pool_cfg, global_config.placement_levels) : null;
const folded = fold_failure_domains(Object.values(pool_tree), use_rules ? rules : [ [ [ pool_cfg.failure_domain ] ] ]);
// FIXME: Remove/merge make_hier_tree() step somewhere, however it's needed to remove empty nodes
const folded_tree = make_hier_tree(global_config, folded.nodes);
const old_pg_count = prev_pgs.length;
const optimize_cfg = {
osd_weights: folded.nodes.reduce((a, c) => { if (Number(c.id)) { a[c.id] = c.size; } return a; }, {}),
combinator: use_rules
osd_weights: Object.values(pool_tree).filter(item => item.level === 'osd').reduce((a, c) => { a[c.id] = c.size; return a; }, {}),
combinator: !global_config.use_old_pg_combinator || pool_cfg.level_placement || pool_cfg.raw_placement
// new algorithm:
? new RuleCombinator(folded_tree, rules, pool_cfg.max_osd_combinations)
? new RuleCombinator(pool_tree, get_pg_rules(pool_id, pool_cfg, global_config.placement_levels), pool_cfg.max_osd_combinations)
// old algorithm:
: new SimpleCombinator(flatten_tree(folded_tree[''].children, levels, pool_cfg.failure_domain, 'osd'), pool_cfg.pg_size, pool_cfg.max_osd_combinations),
: new SimpleCombinator(flatten_tree(pool_tree[''].children, levels, pool_cfg.failure_domain, 'osd'), pool_cfg.pg_size, pool_cfg.max_osd_combinations),
pg_count: pool_cfg.pg_count,
pg_size: pool_cfg.pg_size,
pg_minsize: pool_cfg.pg_minsize,
@ -207,11 +202,12 @@ async function generate_pool_pgs(state, global_config, pool_id, osd_tree, levels
for (const pg of prev_pgs)
{
while (pg.length < pool_cfg.pg_size)
{
pg.push(0);
}
}
const folded_prev_pgs = fold_prev_pgs(prev_pgs, folded.leaves);
optimize_result = await LPOptimizer.optimize_change({
prev_pgs: folded_prev_pgs,
prev_pgs,
...optimize_cfg,
});
}
@ -219,10 +215,6 @@ async function generate_pool_pgs(state, global_config, pool_id, osd_tree, levels
{
optimize_result = await LPOptimizer.optimize_initial(optimize_cfg);
}
optimize_result.int_pgs = unfold_failure_domains(optimize_result.int_pgs, prev_pgs, folded.leaves);
const osd_weights = Object.values(pool_tree).reduce((a, c) => { if (c.level === 'osd') { a[c.id] = c.size; } return a; }, {});
optimize_result.space = optimize_result.pg_effsize * LPOptimizer.pg_list_space_efficiency(optimize_result.int_pgs,
osd_weights, optimize_cfg.pg_minsize, 1);
console.log(`Pool ${pool_id} (${pool_cfg.name || 'unnamed'}):`);
LPOptimizer.print_change_stats(optimize_result);
let pg_effsize = pool_cfg.pg_size;

View File

@ -40,11 +40,6 @@ async function run()
console.log("/etc/systemd/system/vitastor-etcd.service already exists");
process.exit(1);
}
if (!in_docker && fs.existsSync("/etc/systemd/system/etcd.service"))
{
console.log("/etc/systemd/system/etcd.service already exists");
process.exit(1);
}
const config = JSON.parse(fs.readFileSync(config_path, { encoding: 'utf-8' }));
if (!config.etcd_address)
{
@ -71,7 +66,7 @@ async function run()
console.log('etcd for Vitastor configured. Run `systemctl enable --now vitastor-etcd` to start etcd');
process.exit(0);
}
await system(`mkdir -p /var/lib/etcd/vitastor`);
await system(`mkdir -p /var/lib/etcd`);
fs.writeFileSync(
"/etc/systemd/system/vitastor-etcd.service",
`[Unit]
@ -82,14 +77,14 @@ Wants=network-online.target local-fs.target time-sync.target
[Service]
Restart=always
Environment=GOGC=50
ExecStart=etcd --name ${etcd_name} --data-dir /var/lib/etcd/vitastor \\
ExecStart=etcd --name ${etcd_name} --data-dir /var/lib/etcd \\
--snapshot-count 10000 --advertise-client-urls http://${etcds[num]}:2379 --listen-client-urls http://${etcds[num]}:2379 \\
--initial-advertise-peer-urls http://${etcds[num]}:2380 --listen-peer-urls http://${etcds[num]}:2380 \\
--initial-cluster-token vitastor-etcd-1 --initial-cluster ${etcd_cluster} \\
--initial-cluster-state new --max-txn-ops=100000 --max-request-bytes=104857600 \\
--auto-compaction-retention=10 --auto-compaction-mode=revision
WorkingDirectory=/var/lib/etcd/vitastor
ExecStartPre=+chown -R etcd /var/lib/etcd/vitastor
WorkingDirectory=/var/lib/etcd
ExecStartPre=+chown -R etcd /var/lib/etcd
User=etcd
PrivateTmp=false
TasksMax=infinity
@ -102,9 +97,8 @@ WantedBy=multi-user.target
`);
await system(`useradd etcd`);
await system(`systemctl daemon-reload`);
// Disable the distribution etcd unit and enable ours
await system(`systemctl disable --now etcd`);
await system(`systemctl enable --now vitastor-etcd`);
await system(`systemctl enable etcd`);
await system(`systemctl start etcd`);
process.exit(0);
}

View File

@ -87,25 +87,11 @@ function sum_op_stats(all_osd, prev_stats)
for (const k in derived[type][op])
{
sum_diff[type][op] = sum_diff[type][op] || {};
if (k == 'lat')
sum_diff[type][op].lat = (sum_diff[type][op].lat || 0n) + derived[type][op].lat*derived[type][op].iops;
else
sum_diff[type][op][k] = (sum_diff[type][op][k] || 0n) + derived[type][op][k];
sum_diff[type][op][k] = (sum_diff[type][op][k] || 0n) + derived[type][op][k];
}
}
}
}
// Calculate average (weighted by iops) op latency across all OSDs
for (const type in sum_diff)
{
for (const op in sum_diff[type])
{
if (sum_diff[type][op].lat)
{
sum_diff[type][op].lat /= sum_diff[type][op].iops;
}
}
}
return sum_diff;
}
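The iops-weighted latency averaging used above can be shown as a standalone sketch (hypothetical per-OSD stat shape; BigInt arithmetic as in the monitor code):

```javascript
// Combine per-OSD latencies weighted by each OSD's iops instead of
// averaging them naively: sum(lat * iops) / sum(iops).
function weightedLatency(osdStats)
{
    let latSum = 0n, iops = 0n;
    for (const st of osdStats)
    {
        latSum += st.lat * st.iops;
        iops += st.iops;
    }
    return iops ? latSum / iops : 0n;
}

console.log(weightedLatency([ { lat: 100n, iops: 1n }, { lat: 200n, iops: 3n } ])); // 175n
```

An OSD doing 3x the iops contributes 3x the weight, so a busy slow OSD dominates the average as it should.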
@ -285,7 +271,8 @@ function sum_inode_stats(state, prev_stats)
const op_st = inode_stats[pool_id][inode_num][op];
op_st.bps += op_diff.bps;
op_st.iops += op_diff.iops;
op_st.lat += op_diff.lat*op_diff.iops;
op_st.lat += op_diff.lat;
op_st.n_osd = (op_st.n_osd || 0) + 1;
}
}
}
@ -298,8 +285,11 @@ function sum_inode_stats(state, prev_stats)
for (const op of [ 'read', 'write', 'delete' ])
{
const op_st = inode_stats[pool_id][inode_num][op];
if (op_st.lat)
op_st.lat /= op_st.iops;
if (op_st.n_osd)
{
op_st.lat /= BigInt(op_st.n_osd);
delete op_st.n_osd;
}
if (op_st.bps > 0 || op_st.iops > 0)
nonzero = true;
}

View File

@ -1,6 +1,6 @@
{
"name": "vitastor",
"version": "2.2.0",
"version": "2.1.0",
"description": "Low-level native bindings to Vitastor client library",
"main": "index.js",
"keywords": [

View File

@ -50,7 +50,7 @@ from cinder.volume import configuration
from cinder.volume import driver
from cinder.volume import volume_utils
VITASTOR_VERSION = '2.2.0'
VITASTOR_VERSION = '2.1.0'
LOG = logging.getLogger(__name__)

View File

@ -1,11 +1,11 @@
Name: vitastor
Version: 2.2.0
Version: 2.1.0
Release: 1%{?dist}
Summary: Vitastor, a fast software-defined clustered block storage
License: Vitastor Network Public License 1.1
URL: https://vitastor.io/
Source0: vitastor-2.2.0.el7.tar.gz
Source0: vitastor-2.1.0.el7.tar.gz
BuildRequires: liburing-devel >= 0.6
BuildRequires: gperftools-devel

View File

@ -1,11 +1,11 @@
Name: vitastor
Version: 2.2.0
Version: 2.1.0
Release: 1%{?dist}
Summary: Vitastor, a fast software-defined clustered block storage
License: Vitastor Network Public License 1.1
URL: https://vitastor.io/
Source0: vitastor-2.2.0.el8.tar.gz
Source0: vitastor-2.1.0.el8.tar.gz
BuildRequires: liburing-devel >= 0.6
BuildRequires: gperftools-devel


@ -1,11 +1,11 @@
Name: vitastor
Version: 2.2.0
Version: 2.1.0
Release: 1%{?dist}
Summary: Vitastor, a fast software-defined clustered block storage
License: Vitastor Network Public License 1.1
URL: https://vitastor.io/
Source0: vitastor-2.2.0.el9.tar.gz
Source0: vitastor-2.1.0.el9.tar.gz
BuildRequires: liburing-devel >= 0.6
BuildRequires: gperftools-devel


@ -19,7 +19,7 @@ if("${CMAKE_INSTALL_PREFIX}" MATCHES "^/usr/local/?$")
set(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_LIBDIR}")
endif()
add_definitions(-DVITASTOR_VERSION="2.2.0")
add_definitions(-DVITASTOR_VERSION="2.1.0")
add_definitions(-D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -Wall -Wno-sign-compare -Wno-comment -Wno-parentheses -Wno-pointer-arith -fdiagnostics-color=always -fno-omit-frame-pointer -I ${CMAKE_SOURCE_DIR}/src)
add_link_options(-fno-omit-frame-pointer)
if (${WITH_ASAN})


@ -266,8 +266,6 @@ class blockstore_impl_t
int throttle_threshold_us = 50;
// Maximum writes between automatically added fsync operations
uint64_t autosync_writes = 128;
// Log level (0-10)
int log_level = 0;
/******* END OF OPTIONS *******/
struct ring_consumer_t ring_consumer;


@ -113,13 +113,10 @@ int blockstore_journal_check_t::check_available(blockstore_op_t *op, int entries
if (!right_dir && next_pos >= bs->journal.used_start-bs->journal.block_size)
{
// No space in the journal. Wait until used_start changes.
if (bs->log_level > 5)
{
printf(
"Ran out of journal space (used_start=%08jx, next_free=%08jx, dirty_start=%08jx)\n",
bs->journal.used_start, bs->journal.next_free, bs->journal.dirty_start
);
}
printf(
"Ran out of journal space (used_start=%08jx, next_free=%08jx, dirty_start=%08jx)\n",
bs->journal.used_start, bs->journal.next_free, bs->journal.dirty_start
);
PRIV(op)->wait_for = WAIT_JOURNAL;
bs->flusher->request_trim();
PRIV(op)->wait_detail = bs->journal.used_start;


@ -101,7 +101,6 @@ void blockstore_impl_t::parse_config(blockstore_config_t & config, bool init)
config["journal_no_same_sector_overwrites"] == "1" || config["journal_no_same_sector_overwrites"] == "yes";
journal.inmemory = config["inmemory_journal"] != "false" && config["inmemory_journal"] != "0" &&
config["inmemory_journal"] != "no";
log_level = strtoull(config["log_level"].c_str(), NULL, 10);
// Validate
if (journal.sector_count < 2)
{


@ -93,7 +93,7 @@ add_executable(test_cluster_client
EXCLUDE_FROM_ALL
../test/test_cluster_client.cpp
pg_states.cpp osd_ops.cpp cluster_client.cpp cluster_client_list.cpp cluster_client_wb.cpp msgr_op.cpp ../test/mock/messenger.cpp msgr_stop.cpp
etcd_state_client.cpp ../util/timerfd_manager.cpp ../util/addr_util.cpp ../util/str_util.cpp ../util/json_util.cpp ../../json11/json11.cpp
etcd_state_client.cpp ../util/timerfd_manager.cpp ../util/str_util.cpp ../util/json_util.cpp ../../json11/json11.cpp
)
target_compile_definitions(test_cluster_client PUBLIC -D__MOCK__)
target_include_directories(test_cluster_client BEFORE PUBLIC ${CMAKE_SOURCE_DIR}/src/test/mock)


@ -3,7 +3,6 @@
#include <stdexcept>
#include <assert.h>
#include "pg_states.h"
#include "cluster_client_impl.h"
#include "json_util.h"
@ -58,7 +57,6 @@ cluster_client_t::cluster_client_t(ring_loop_t *ringloop, timerfd_manager_t *tfd
st_cli.on_change_osd_state_hook = [this](uint64_t peer_osd) { on_change_osd_state_hook(peer_osd); };
st_cli.on_change_pool_config_hook = [this]() { on_change_pool_config_hook(); };
st_cli.on_change_pg_state_hook = [this](pool_id_t pool_id, pg_num_t pg_num, osd_num_t prev_primary) { on_change_pg_state_hook(pool_id, pg_num, prev_primary); };
st_cli.on_change_node_placement_hook = [this]() { on_change_node_placement_hook(); };
st_cli.on_load_pgs_hook = [this](bool success) { on_load_pgs_hook(success); };
st_cli.on_reload_hook = [this]() { st_cli.load_global_config(); };
@ -472,95 +470,11 @@ void cluster_client_t::on_load_config_hook(json11::Json::object & etcd_global_co
}
// log_level
log_level = config["log_level"].uint64_value();
// hostname
conf_hostname = config["hostname"].string_value();
auto new_hostname = conf_hostname != "" ? conf_hostname : gethostname_str();
if (new_hostname != client_hostname)
{
self_tree_metrics.clear();
client_hostname = new_hostname;
}
msgr.parse_config(config);
st_cli.parse_config(config);
st_cli.load_pgs();
}
osd_num_t cluster_client_t::select_random_osd(const std::vector<osd_num_t> & osds)
{
osd_num_t alive_set[osds.size()];
int alive_count = 0;
for (auto & osd_num: osds)
{
if (!st_cli.peer_states[osd_num].is_null())
alive_set[alive_count++] = osd_num;
}
if (!alive_count)
return 0;
return alive_set[lrand48() % alive_count];
}
osd_num_t cluster_client_t::select_nearest_osd(const std::vector<osd_num_t> & osds)
{
if (!self_tree_metrics.size())
{
std::string cur_id = client_hostname;
int metric = 0;
while (self_tree_metrics.find(cur_id) == self_tree_metrics.end())
{
self_tree_metrics[cur_id] = metric++;
json11::Json cur_placement = st_cli.node_placement[cur_id];
cur_id = cur_placement["parent"].string_value();
}
if (cur_id != "")
{
self_tree_metrics[""] = metric++;
}
}
osd_num_t best_osd = 0;
int best_metric = -1;
for (auto & osd_num: osds)
{
int metric = -1;
auto met_it = osd_tree_metrics.find(osd_num);
if (met_it != osd_tree_metrics.end())
{
metric = met_it->second;
}
else
{
auto & peer_state = st_cli.peer_states[osd_num];
if (!peer_state.is_null())
{
metric = self_tree_metrics[""];
bool first = true;
std::string cur_id = std::to_string(osd_num);
std::set<std::string> seen;
while (seen.find(cur_id) == seen.end())
{
seen.insert(cur_id);
json11::Json cur_placement = st_cli.node_placement[cur_id];
std::string cur_parent = cur_placement["parent"].string_value();
cur_id = (!first || cur_parent != "" ? cur_parent : peer_state["host"].string_value());
first = false;
auto self_it = self_tree_metrics.find(cur_id);
if (self_it != self_tree_metrics.end())
{
metric = self_it->second;
break;
}
}
}
osd_tree_metrics[osd_num] = metric;
}
if (metric >= 0 && (best_metric < 0 || metric < best_metric))
{
best_metric = metric;
best_osd = osd_num;
}
}
return best_osd;
}
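The removed `select_nearest_osd` walks the placement tree upward from the client's hostname, assigning increasing distance metrics to each ancestor, then picks the OSD whose first shared ancestor has the lowest metric. A hypothetical JavaScript sketch of the same idea (function and parameter names are illustrative, not Vitastor's API; `placement` maps a node id to its parent id, `osdHosts` maps OSD number to host):

```javascript
// Hypothetical sketch of tree-distance OSD selection, mirroring the
// removed select_nearest_osd() logic above.
function selectNearestOsd(clientHost, placement, osdHosts, osds)
{
    // Metric 0 for our own host, +1 per ancestor up to the root ("").
    const selfMetrics = {};
    let id = clientHost, m = 0;
    while (!(id in selfMetrics))
    {
        selfMetrics[id] = m++;
        id = placement[id] || '';
    }
    if (!('' in selfMetrics))
        selfMetrics[''] = m++; // cycle in placement: treat root as farthest
    let best = 0, bestMetric = -1;
    for (const osd of osds)
    {
        // Walk up from the OSD's host until we reach one of our ancestors.
        let cur = osdHosts[osd], metric = selfMetrics[''];
        const seen = new Set();
        while (cur !== undefined && !seen.has(cur))
        {
            seen.add(cur);
            if (cur in selfMetrics) { metric = selfMetrics[cur]; break; }
            cur = placement[cur] || '';
        }
        if (bestMetric < 0 || metric < bestMetric)
        {
            bestMetric = metric;
            best = osd;
        }
    }
    return best;
}
```

For a client on `host1` under `rack1`, an OSD on another host in `rack1` (metric 1) is preferred over an OSD in `rack2` (metric 2, shared only at the root).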
void cluster_client_t::on_load_pgs_hook(bool success)
{
for (auto pool_item: st_cli.pool_config)
@ -632,7 +546,6 @@ bool cluster_client_t::get_immediate_commit(uint64_t inode)
void cluster_client_t::on_change_osd_state_hook(uint64_t peer_osd)
{
osd_tree_metrics.erase(peer_osd);
if (msgr.wanted_peers.find(peer_osd) != msgr.wanted_peers.end())
{
msgr.connect_peer(peer_osd, st_cli.peer_states[peer_osd]);
@ -640,12 +553,6 @@ void cluster_client_t::on_change_osd_state_hook(uint64_t peer_osd)
}
}
void cluster_client_t::on_change_node_placement_hook()
{
osd_tree_metrics.clear();
self_tree_metrics.clear();
}
bool cluster_client_t::is_ready()
{
return pgs_loaded;
@ -1314,17 +1221,6 @@ int cluster_client_t::try_send(cluster_op_t *op, int i)
!pg_it->second.pause && pg_it->second.cur_primary)
{
osd_num_t primary_osd = pg_it->second.cur_primary;
if (pool_cfg.local_reads != POOL_LOCAL_READ_PRIMARY &&
pool_cfg.scheme == POOL_SCHEME_REPLICATED &&
(op->opcode == OSD_OP_READ || op->opcode == OSD_OP_READ_BITMAP || op->opcode == OSD_OP_READ_CHAIN_BITMAP) &&
(pg_it->second.cur_state == PG_ACTIVE || pg_it->second.cur_state == (PG_ACTIVE|PG_LEFT_ON_DEAD)))
{
osd_num_t nearest_osd = pool_cfg.local_reads == POOL_LOCAL_READ_NEAREST
? select_nearest_osd(pg_it->second.target_set)
: select_random_osd(pg_it->second.target_set);
if (nearest_osd)
primary_osd = nearest_osd;
}
part->osd_num = primary_osd;
auto peer_it = msgr.osd_peer_fds.find(primary_osd);
if (peer_it != msgr.osd_peer_fds.end())
@ -1348,6 +1244,7 @@ int cluster_client_t::try_send(cluster_op_t *op, int i)
.req = { .rw = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = next_op_id(),
.opcode = op->opcode == OSD_OP_READ_BITMAP || op->opcode == OSD_OP_READ_CHAIN_BITMAP ? OSD_OP_READ : op->opcode,
},
.inode = op->cur_inode,
@ -1456,6 +1353,7 @@ void cluster_client_t::send_sync(cluster_op_t *op, cluster_op_part_t *part)
.req = {
.hdr = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = next_op_id(),
.opcode = OSD_OP_SYNC,
},
},
@ -1600,3 +1498,8 @@ void cluster_client_t::copy_part_bitmap(cluster_op_t *op, cluster_op_part_t *par
part_len--;
}
}
uint64_t cluster_client_t::next_op_id()
{
return msgr.next_subop_id++;
}


@ -86,8 +86,8 @@ class cluster_client_t
#ifdef __MOCK__
public:
#endif
timerfd_manager_t *tfd = NULL;
ring_loop_t *ringloop = NULL;
timerfd_manager_t *tfd;
ring_loop_t *ringloop;
std::map<pool_id_t, uint64_t> pg_counts;
std::map<pool_pg_num_t, osd_num_t> pg_primary;
@ -100,7 +100,6 @@ public:
uint64_t client_max_buffered_bytes = 0;
uint64_t client_max_buffered_ops = 0;
uint64_t client_max_writeback_iodepth = 0;
std::string conf_hostname;
int log_level = 0;
int client_retry_interval = 50; // ms
@ -108,10 +107,6 @@ public:
bool client_retry_enospc = true;
int client_wait_up_timeout = 16; // sec (for listings)
std::string client_hostname;
std::map<std::string, int> self_tree_metrics;
std::map<osd_num_t, int> osd_tree_metrics;
int retry_timeout_id = -1;
int retry_timeout_duration = 0;
std::vector<cluster_op_t*> offline_ops;
@ -157,6 +152,7 @@ public:
//inline uint32_t get_bs_bitmap_granularity() { return st_cli.global_bitmap_granularity; }
//inline uint64_t get_bs_block_size() { return st_cli.global_block_size; }
uint64_t next_op_id();
#ifndef __MOCK__
protected:
@ -166,14 +162,11 @@ protected:
protected:
bool affects_osd(uint64_t inode, uint64_t offset, uint64_t len, osd_num_t osd);
bool affects_pg(uint64_t inode, uint64_t offset, uint64_t len, pool_id_t pool_id, pg_num_t pg_num);
void on_load_config_hook(json11::Json::object & config);
void on_load_pgs_hook(bool success);
void on_change_pool_config_hook();
void on_change_pg_state_hook(pool_id_t pool_id, pg_num_t pg_num, osd_num_t prev_primary);
void on_change_osd_state_hook(uint64_t peer_osd);
void on_change_node_placement_hook();
void execute_internal(cluster_op_t *op);
void unshift_op(cluster_op_t *op);
int continue_rw(cluster_op_t *op);
@ -199,8 +192,5 @@ protected:
bool check_finish_listing(inode_list_t *lst);
void continue_raw_ops(osd_num_t peer_osd);
osd_num_t select_random_osd(const std::vector<osd_num_t> & osds);
osd_num_t select_nearest_osd(const std::vector<osd_num_t> & osds);
friend class writeback_cache_t;
};

View File

@ -342,6 +342,7 @@ void cluster_client_t::send_list(inode_list_osd_t *cur_list)
.sec_list = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = next_op_id(),
.opcode = OSD_OP_SEC_LIST,
},
.list_pg = cur_list->pg->pg_num,


@ -922,19 +922,6 @@ void etcd_state_client_t::parse_state(const etcd_kv_t & kv)
pc.used_for_app = "fs:"+pc.used_for_app;
else
pc.used_for_app = pool_item.second["used_for_app"].as_string();
// Local Read Configuration
std::string local_reads = pool_item.second["local_reads"].string_value();
if (local_reads == "nearest")
pc.local_reads = POOL_LOCAL_READ_NEAREST;
else if (local_reads == "random")
pc.local_reads = POOL_LOCAL_READ_RANDOM;
else if (local_reads == "" || local_reads == "primary")
pc.local_reads = POOL_LOCAL_READ_PRIMARY;
else
{
pc.local_reads = POOL_LOCAL_READ_PRIMARY;
fprintf(stderr, "Warning: Pool %u has invalid local_reads, using 'primary'\n", pool_id);
}
// Immediate Commit Mode
pc.immediate_commit = pool_item.second["immediate_commit"].is_string()
? parse_immediate_commit(pool_item.second["immediate_commit"].string_value(), IMMEDIATE_ALL)
@ -1269,13 +1256,6 @@ void etcd_state_client_t::parse_state(const etcd_kv_t & kv)
}
}
}
else if (key == etcd_prefix+"/config/node_placement")
{
// <etcd_prefix>/config/node_placement
node_placement = value;
if (on_change_node_placement_hook)
on_change_node_placement_hook();
}
}
uint32_t etcd_state_client_t::parse_immediate_commit(const std::string & immediate_commit_str, uint32_t default_value)


@ -25,10 +25,6 @@
#define IMMEDIATE_ALL 2
#endif
#define POOL_LOCAL_READ_PRIMARY 0
#define POOL_LOCAL_READ_NEAREST 1
#define POOL_LOCAL_READ_RANDOM 2
struct etcd_kv_t
{
std::string key;
@ -52,22 +48,21 @@ struct pg_config_t
struct pool_config_t
{
bool exists = false;
pool_id_t id = 0;
bool exists;
pool_id_t id;
std::string name;
uint64_t scheme = 0;
uint64_t pg_size = 0, pg_minsize = 0, parity_chunks = 0;
uint32_t data_block_size = 0, bitmap_granularity = 0, immediate_commit = 0;
uint64_t pg_count = 0;
uint64_t real_pg_count = 0;
uint64_t scheme;
uint64_t pg_size, pg_minsize, parity_chunks;
uint32_t data_block_size, bitmap_granularity, immediate_commit;
uint64_t pg_count;
uint64_t real_pg_count;
std::string failure_domain;
uint64_t max_osd_combinations = 0;
uint64_t pg_stripe_size = 0;
uint64_t max_osd_combinations;
uint64_t pg_stripe_size;
std::map<pg_num_t, pg_config_t> pg_config;
uint64_t scrub_interval = 0;
uint64_t scrub_interval;
std::string used_for_app;
int backfillfull = 0;
int local_reads = 0;
int backfillfull;
};
struct inode_config_t
@ -135,7 +130,6 @@ public:
std::set<osd_num_t> seen_peers;
std::map<inode_t, inode_config_t> inode_config;
std::map<std::string, inode_t> inode_by_name;
json11::Json node_placement;
std::function<void(std::map<std::string, etcd_kv_t> &)> on_change_hook;
std::function<void(json11::Json::object &)> on_load_config_hook;
@ -146,7 +140,6 @@ public:
std::function<void(pool_id_t, pg_num_t, osd_num_t)> on_change_pg_state_hook;
std::function<void(pool_id_t, pg_num_t)> on_change_pg_history_hook;
std::function<void(osd_num_t)> on_change_osd_state_hook;
std::function<void()> on_change_node_placement_hook;
std::function<void()> on_reload_hook;
std::function<void(inode_t, bool)> on_inode_change_hook;
std::function<void(http_co_t *)> on_start_watcher_hook;


@ -167,10 +167,6 @@ void osd_messenger_t::init()
}
}
#endif
if (ringloop)
{
has_sendmsg_zc = ringloop->has_sendmsg_zc();
}
if (ringloop && iothread_count > 0)
{
for (int i = 0; i < iothread_count; i++)
@ -217,6 +213,7 @@ void osd_messenger_t::init()
op->req = (osd_any_op_t){
.hdr = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = this->next_subop_id++,
.opcode = OSD_OP_PING,
},
};
@ -332,9 +329,6 @@ void osd_messenger_t::parse_config(const json11::Json & config)
this->receive_buffer_size = 65536;
this->use_sync_send_recv = config["use_sync_send_recv"].bool_value() ||
config["use_sync_send_recv"].uint64_value();
this->min_zerocopy_send_size = config["min_zerocopy_send_size"].is_null()
? DEFAULT_MIN_ZEROCOPY_SEND_SIZE
: (int)config["min_zerocopy_send_size"].int64_value();
this->peer_connect_interval = config["peer_connect_interval"].uint64_value();
if (!this->peer_connect_interval)
this->peer_connect_interval = 5;
@ -628,19 +622,13 @@ void osd_messenger_t::check_peer_config(osd_client_t *cl)
.show_conf = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = this->next_subop_id++,
.opcode = OSD_OP_SHOW_CONFIG,
},
},
};
json11::Json::object payload;
if (osd_num)
{
// Inform that we're OSD <osd_num>
payload["osd_num"] = osd_num;
}
payload["features"] = json11::Json::object{ { "check_sequencing", true } };
#ifdef WITH_RDMA
if (!use_rdmacm && rdma_contexts.size())
if (rdma_contexts.size())
{
// Choose the right context for the selected network
msgr_rdma_context_t *selected_ctx = choose_rdma_context(cl);
@ -654,20 +642,19 @@ void osd_messenger_t::check_peer_config(osd_client_t *cl)
cl->rdma_conn = msgr_rdma_connection_t::create(selected_ctx, rdma_max_send, rdma_max_recv, rdma_max_sge, rdma_max_msg);
if (cl->rdma_conn)
{
payload["connect_rdma"] = cl->rdma_conn->addr.to_string();
payload["rdma_max_msg"] = cl->rdma_conn->max_msg;
json11::Json payload = json11::Json::object {
{ "connect_rdma", cl->rdma_conn->addr.to_string() },
{ "rdma_max_msg", cl->rdma_conn->max_msg },
};
std::string payload_str = payload.dump();
op->req.show_conf.json_len = payload_str.size();
op->buf = malloc_or_die(payload_str.size());
op->iov.push_back(op->buf, payload_str.size());
memcpy(op->buf, payload_str.c_str(), payload_str.size());
}
}
}
#endif
if (payload.size())
{
std::string payload_str = json11::Json(payload).dump();
op->req.show_conf.json_len = payload_str.size();
op->buf = malloc_or_die(payload_str.size());
op->iov.push_back(op->buf, payload_str.size());
memcpy(op->buf, payload_str.c_str(), payload_str.size());
}
op->callback = [this, cl](osd_op_t *op)
{
std::string json_err;
@ -714,7 +701,7 @@ void osd_messenger_t::check_peer_config(osd_client_t *cl)
return;
}
#ifdef WITH_RDMA
if (!use_rdmacm && cl->rdma_conn && config["rdma_address"].is_string())
if (cl->rdma_conn && config["rdma_address"].is_string())
{
msgr_rdma_address_t addr;
if (!msgr_rdma_address_t::from_string(config["rdma_address"].string_value().c_str(), &addr) ||
@ -773,15 +760,12 @@ void osd_messenger_t::accept_connections(int listen_fd)
fcntl(peer_fd, F_SETFL, fcntl(peer_fd, F_GETFL, 0) | O_NONBLOCK);
int one = 1;
setsockopt(peer_fd, SOL_TCP, TCP_NODELAY, &one, sizeof(one));
auto cl = new osd_client_t();
clients[peer_fd] = cl;
cl->is_incoming = true;
cl->peer_addr = addr;
cl->peer_port = ntohs(((sockaddr_in*)&addr)->sin_port);
cl->peer_fd = peer_fd;
cl->peer_state = PEER_CONNECTED;
cl->in_buf = malloc_or_die(receive_buffer_size);
clients[peer_fd] = new osd_client_t();
clients[peer_fd]->peer_addr = addr;
clients[peer_fd]->peer_port = ntohs(((sockaddr_in*)&addr)->sin_port);
clients[peer_fd]->peer_fd = peer_fd;
clients[peer_fd]->peer_state = PEER_CONNECTED;
clients[peer_fd]->in_buf = malloc_or_die(receive_buffer_size);
// Add FD to epoll
tfd->set_fd_handler(peer_fd, false, [this](int peer_fd, int epoll_events)
{
@ -816,8 +800,7 @@ bool osd_messenger_t::is_rdma_enabled()
{
return rdma_contexts.size() > 0;
}
#endif
#ifdef WITH_RDMACM
bool osd_messenger_t::is_use_rdmacm()
{
return use_rdmacm;
@ -913,7 +896,6 @@ static const char* local_only_params[] = {
"tcp_header_buffer_size",
"use_rdma",
"use_sync_send_recv",
"min_zerocopy_send_size",
};
static const char **local_only_end = local_only_params + (sizeof(local_only_params)/sizeof(local_only_params[0]));


@ -32,8 +32,6 @@
#define VITASTOR_CONFIG_PATH "/etc/vitastor/vitastor.conf"
#define DEFAULT_MIN_ZEROCOPY_SEND_SIZE 32*1024
#define MSGR_SENDP_HDR 1
#define MSGR_SENDP_FREE 2
@ -60,7 +58,6 @@ struct osd_client_t
int ping_time_remaining = 0;
int idle_time_remaining = 0;
osd_num_t osd_num = 0;
bool is_incoming = false;
void *in_buf = NULL;
@ -76,16 +73,12 @@ struct osd_client_t
int read_remaining = 0;
int read_state = 0;
osd_op_buf_list_t recv_list;
uint64_t read_op_id = 1;
bool check_sequencing = false;
bool enable_pg_locks = false;
// Incoming operations
std::vector<osd_op_t*> received_ops;
// Outbound operations
std::map<uint64_t, osd_op_t*> sent_ops;
uint64_t send_op_id = 0;
// PGs dirtied by this client's primary-writes
std::set<pool_pg_num_t> dirty_pgs;
@ -95,7 +88,6 @@ struct osd_client_t
int write_state = 0;
std::vector<iovec> send_list, next_send_list;
std::vector<msgr_sendp_t> outbox, next_outbox;
std::vector<osd_op_t*> zc_free_list;
~osd_client_t();
};
@ -105,7 +97,6 @@ struct osd_wanted_peer_t
json11::Json raw_address_list;
json11::Json address_list;
int port = 0;
// FIXME: Remove separate WITH_RDMACM?
#ifdef WITH_RDMACM
int rdmacm_port = 0;
#endif
@ -184,7 +175,6 @@ protected:
int osd_ping_timeout = 0;
int log_level = 0;
bool use_sync_send_recv = false;
int min_zerocopy_send_size = DEFAULT_MIN_ZEROCOPY_SEND_SIZE;
int iothread_count = 0;
#ifdef WITH_RDMA
@ -211,11 +201,11 @@ protected:
std::vector<osd_op_t*> set_immediate_ops;
public:
timerfd_manager_t *tfd = NULL;
ring_loop_t *ringloop = NULL;
bool has_sendmsg_zc = false;
timerfd_manager_t *tfd;
ring_loop_t *ringloop;
// osd_num_t is only for logging and asserts
osd_num_t osd_num;
uint64_t next_subop_id = 1;
std::map<int, osd_client_t*> clients;
std::map<osd_num_t, osd_wanted_peer_t> wanted_peers;
std::map<uint64_t, int> osd_peer_fds;
@ -271,7 +261,7 @@ protected:
void cancel_op(osd_op_t *op);
bool try_send(osd_client_t *cl);
void handle_send(int result, bool prev, bool more, osd_client_t *cl);
void handle_send(int result, osd_client_t *cl);
bool handle_read(int result, osd_client_t *cl);
bool handle_read_buffer(osd_client_t *cl, void *curbuf, int remain);
@ -296,7 +286,6 @@ protected:
msgr_rdma_context_t* rdmacm_create_qp(rdma_cm_id *cmid);
void rdmacm_accept(rdma_cm_event *ev);
void rdmacm_try_connect_peer(uint64_t peer_osd, const std::string & addr, int rdmacm_port, int fallback_tcp_port);
void rdmacm_set_conn_timeout(rdmacm_connecting_t *conn);
void rdmacm_on_connect_peer_error(rdma_cm_id *cmid, int res);
void rdmacm_address_resolved(rdma_cm_event *ev);
void rdmacm_route_resolved(rdma_cm_event *ev);


@ -70,7 +70,6 @@ msgr_rdma_context_t::~msgr_rdma_context_t()
msgr_rdma_connection_t::~msgr_rdma_connection_t()
{
ctx->reserve_cqe(-max_send-max_recv);
#ifdef WITH_RDMACM
if (qp && !cmid)
ibv_destroy_qp(qp);
if (cmid)
@ -80,10 +79,6 @@ msgr_rdma_connection_t::~msgr_rdma_connection_t()
rdma_destroy_qp(cmid);
rdma_destroy_id(cmid);
}
#else
if (qp)
ibv_destroy_qp(qp);
#endif
if (recv_buffers.size())
{
for (auto b: recv_buffers)
@ -803,9 +798,6 @@ void osd_messenger_t::handle_rdma_events(msgr_rdma_context_t *rdma_context)
}
if (!is_send)
{
// Reset OSD ping state - client is obviously alive
cl->ping_time_remaining = 0;
cl->idle_time_remaining = osd_idle_timeout;
rc->cur_recv--;
if (!handle_read_buffer(cl, rc->recv_buffers[rc->next_recv_buf].buf, wc[i].byte_len))
{


@ -70,7 +70,7 @@ void osd_messenger_t::rdmacm_destroy_listener(rdma_cm_id *listener)
void osd_messenger_t::handle_rdmacm_events()
{
// rdma_destroy_id infinitely waits for pthread_cond if called before all events are acked :-(...
// rdma_destroy_id infinitely waits for pthread_cond if called before all events are acked :-(
std::vector<rdma_cm_event> events_copy;
while (1)
{
@ -83,15 +83,7 @@ void osd_messenger_t::handle_rdmacm_events()
fprintf(stderr, "Failed to get RDMA-CM event: %s (code %d)\n", strerror(errno), errno);
exit(1);
}
// ...so we save a copy of all events EXCEPT connection requests, otherwise they sometimes fail with EVENT_DISCONNECT
if (ev->event == RDMA_CM_EVENT_CONNECT_REQUEST)
{
rdmacm_accept(ev);
}
else
{
events_copy.push_back(*ev);
}
events_copy.push_back(*ev);
r = rdma_ack_cm_event(ev);
if (r != 0)
{
@ -104,7 +96,7 @@ void osd_messenger_t::handle_rdmacm_events()
auto ev = &evl;
if (ev->event == RDMA_CM_EVENT_CONNECT_REQUEST)
{
// Do nothing, handled above
rdmacm_accept(ev);
}
else if (ev->event == RDMA_CM_EVENT_CONNECT_ERROR ||
ev->event == RDMA_CM_EVENT_REJECTED ||
@ -295,34 +287,29 @@ void osd_messenger_t::rdmacm_accept(rdma_cm_event *ev)
rdma_destroy_id(ev->id);
return;
}
// Wait for RDMA_CM_ESTABLISHED, and enable the connection only after it
auto conn = new rdmacm_connecting_t;
rdma_context->cm_refs++;
// Wrap into a new msgr_rdma_connection_t
msgr_rdma_connection_t *conn = new msgr_rdma_connection_t;
conn->ctx = rdma_context;
conn->max_send = rdma_max_send;
conn->max_recv = rdma_max_recv;
conn->max_sge = rdma_max_sge > rdma_context->attrx.orig_attr.max_sge
? rdma_context->attrx.orig_attr.max_sge : rdma_max_sge;
conn->max_msg = rdma_max_msg;
conn->cmid = ev->id;
conn->peer_fd = fake_fd;
conn->parsed_addr = *(sockaddr_storage*)rdma_get_peer_addr(ev->id);
conn->rdma_context = rdma_context;
rdmacm_set_conn_timeout(conn);
rdmacm_connecting[ev->id] = conn;
fprintf(stderr, "[OSD %ju] new client %d: connection from %s via RDMA-CM\n", this->osd_num, conn->peer_fd,
addr_to_string(conn->parsed_addr).c_str());
}
void osd_messenger_t::rdmacm_set_conn_timeout(rdmacm_connecting_t *conn)
{
conn->timeout_ms = peer_connect_timeout*1000;
if (peer_connect_timeout > 0)
{
conn->timeout_id = tfd->set_timer(1000*peer_connect_timeout, false, [this, cmid = conn->cmid](int timer_id)
{
auto conn = rdmacm_connecting.at(cmid);
conn->timeout_id = -1;
if (conn->peer_osd)
fprintf(stderr, "RDMA-CM connection to %s timed out\n", conn->addr.c_str());
else
fprintf(stderr, "Incoming RDMA-CM connection from %s timed out\n", addr_to_string(conn->parsed_addr).c_str());
rdmacm_on_connect_peer_error(cmid, -EPIPE);
});
}
conn->qp = ev->id->qp;
auto cl = new osd_client_t();
cl->peer_fd = fake_fd;
cl->peer_state = PEER_RDMA;
cl->peer_addr = *(sockaddr_storage*)rdma_get_peer_addr(ev->id);
cl->in_buf = malloc_or_die(receive_buffer_size);
cl->rdma_conn = conn;
clients[fake_fd] = cl;
rdmacm_connections[ev->id] = cl;
// Add initial receive request(s)
try_recv_rdma(cl);
fprintf(stderr, "[OSD %ju] new client %d: connection from %s via RDMA-CM\n", this->osd_num, fake_fd,
addr_to_string(cl->peer_addr).c_str());
}
void osd_messenger_t::rdmacm_on_connect_peer_error(rdma_cm_id *cmid, int res)
@ -345,18 +332,15 @@ void osd_messenger_t::rdmacm_on_connect_peer_error(rdma_cm_id *cmid, int res)
}
rdmacm_connecting.erase(cmid);
delete conn;
if (peer_osd)
if (!disable_tcp)
{
if (!disable_tcp)
{
// Fall back to TCP instead of just reporting the error to on_connect_peer()
try_connect_peer_tcp(peer_osd, addr.c_str(), tcp_port);
}
else
{
// TCP is disabled
on_connect_peer(peer_osd, res == 0 ? -EINVAL : (res > 0 ? -res : res));
}
// Fall back to TCP instead of just reporting the error to on_connect_peer()
try_connect_peer_tcp(peer_osd, addr.c_str(), tcp_port);
}
else
{
// TCP is disabled
on_connect_peer(peer_osd, res == 0 ? -EINVAL : (res > 0 ? -res : res));
}
}
@ -390,8 +374,6 @@ void osd_messenger_t::rdmacm_try_connect_peer(uint64_t peer_osd, const std::stri
on_connect_peer(peer_osd, res);
return;
}
if (log_level > 0)
fprintf(stderr, "Trying to connect to OSD %ju at %s:%d via RDMA-CM\n", peer_osd, addr.c_str(), rdmacm_port);
auto conn = new rdmacm_connecting_t;
rdmacm_connecting[cmid] = conn;
conn->cmid = cmid;
@ -401,7 +383,19 @@ void osd_messenger_t::rdmacm_try_connect_peer(uint64_t peer_osd, const std::stri
conn->parsed_addr = sa;
conn->rdmacm_port = rdmacm_port;
conn->tcp_port = fallback_tcp_port;
rdmacm_set_conn_timeout(conn);
conn->timeout_ms = peer_connect_timeout*1000;
conn->timeout_id = -1;
if (peer_connect_timeout > 0)
{
conn->timeout_id = tfd->set_timer(1000*peer_connect_timeout, false, [this, cmid](int timer_id)
{
auto conn = rdmacm_connecting.at(cmid);
conn->timeout_id = -1;
fprintf(stderr, "RDMA-CM connection to %s timed out\n", conn->addr.c_str());
rdmacm_on_connect_peer_error(cmid, -EPIPE);
return;
});
}
if (rdma_resolve_addr(cmid, NULL, (sockaddr*)&conn->parsed_addr, conn->timeout_ms) != 0)
{
auto res = -errno;
@ -500,7 +494,7 @@ void osd_messenger_t::rdmacm_established(rdma_cm_event *ev)
// Wrap into a new msgr_rdma_connection_t
msgr_rdma_connection_t *rc = new msgr_rdma_connection_t;
rc->ctx = conn->rdma_context;
rc->ctx->cm_refs++; // FIXME now unused, count also connecting_t's when used
rc->ctx->cm_refs++;
rc->max_send = rdma_max_send;
rc->max_recv = rdma_max_recv;
rc->max_sge = rdma_max_sge > rc->ctx->attrx.orig_attr.max_sge
@ -510,7 +504,6 @@ void osd_messenger_t::rdmacm_established(rdma_cm_event *ev)
rc->qp = conn->cmid->qp;
// And an osd_client_t
auto cl = new osd_client_t();
cl->is_incoming = true;
cl->peer_addr = conn->parsed_addr;
cl->peer_port = conn->rdmacm_port;
cl->peer_fd = conn->peer_fd;
@ -521,20 +514,14 @@ void osd_messenger_t::rdmacm_established(rdma_cm_event *ev)
cl->rdma_conn = rc;
clients[conn->peer_fd] = cl;
if (conn->timeout_id >= 0)
{
tfd->clear_timer(conn->timeout_id);
}
delete conn;
rdmacm_connecting.erase(cmid);
rdmacm_connections[cmid] = cl;
if (log_level > 0 && peer_osd)
{
if (log_level > 0)
fprintf(stderr, "Successfully connected with OSD %ju using RDMA-CM\n", peer_osd);
}
// Add initial receive request(s)
try_recv_rdma(cl);
if (peer_osd)
{
check_peer_config(cl);
}
osd_peer_fds[peer_osd] = cl->peer_fd;
on_connect_peer(peer_osd, cl->peer_fd);
}


@ -214,7 +214,6 @@ bool osd_messenger_t::handle_read_buffer(osd_client_t *cl, void *curbuf, int rem
bool osd_messenger_t::handle_finished_read(osd_client_t *cl)
{
// Reset OSD ping state
cl->ping_time_remaining = 0;
cl->idle_time_remaining = osd_idle_timeout;
cl->recv_list.reset();
@ -223,19 +222,7 @@ bool osd_messenger_t::handle_finished_read(osd_client_t *cl)
if (cl->read_op->req.hdr.magic == SECONDARY_OSD_REPLY_MAGIC)
return handle_reply_hdr(cl);
else if (cl->read_op->req.hdr.magic == SECONDARY_OSD_OP_MAGIC)
{
if (cl->check_sequencing)
{
if (cl->read_op->req.hdr.id != cl->read_op_id)
{
fprintf(stderr, "Warning: operation sequencing is broken on client %d, stopping client\n", cl->peer_fd);
stop_client(cl->peer_fd);
return false;
}
cl->read_op_id++;
}
handle_op_hdr(cl);
}
else
{
fprintf(stderr, "Received garbage: magic=%jx id=%ju opcode=%jx from %d\n", cl->read_op->req.hdr.magic, cl->read_op->req.hdr.id, cl->read_op->req.hdr.opcode, cl->peer_fd);


@ -14,7 +14,6 @@ void osd_messenger_t::outbox_push(osd_op_t *cur_op)
if (cur_op->op_type == OSD_OP_OUT)
{
clock_gettime(CLOCK_REALTIME, &cur_op->tv_begin);
cur_op->req.hdr.id = ++cl->send_op_id;
}
else
{
@ -204,24 +203,8 @@ bool osd_messenger_t::try_send(osd_client_t *cl)
cl->write_msg.msg_iovlen = cl->send_list.size() < IOV_MAX ? cl->send_list.size() : IOV_MAX;
cl->refs++;
ring_data_t* data = ((ring_data_t*)sqe->user_data);
data->callback = [this, cl](ring_data_t *data) { handle_send(data->res, data->prev, data->more, cl); };
bool use_zc = has_sendmsg_zc && min_zerocopy_send_size >= 0;
if (use_zc && min_zerocopy_send_size > 0)
{
size_t avg_size = 0;
for (size_t i = 0; i < cl->write_msg.msg_iovlen; i++)
avg_size += cl->write_msg.msg_iov[i].iov_len;
if (avg_size/cl->write_msg.msg_iovlen < min_zerocopy_send_size)
use_zc = false;
}
if (use_zc)
{
my_uring_prep_sendmsg_zc(sqe, peer_fd, &cl->write_msg, MSG_WAITALL);
}
else
{
my_uring_prep_sendmsg(sqe, peer_fd, &cl->write_msg, MSG_WAITALL);
}
data->callback = [this, cl](ring_data_t *data) { handle_send(data->res, cl); };
my_uring_prep_sendmsg(sqe, peer_fd, &cl->write_msg, 0);
if (iothread)
{
iothread->add_sqe(sqe_local);
@ -237,7 +220,7 @@ bool osd_messenger_t::try_send(osd_client_t *cl)
{
result = -errno;
}
handle_send(result, false, false, cl);
handle_send(result, cl);
}
return true;
}
@ -257,16 +240,10 @@ void osd_messenger_t::send_replies()
write_ready_clients.clear();
}
void osd_messenger_t::handle_send(int result, bool prev, bool more, osd_client_t *cl)
void osd_messenger_t::handle_send(int result, osd_client_t *cl)
{
if (!prev)
{
cl->write_msg.msg_iovlen = 0;
}
if (!more)
{
cl->refs--;
}
cl->write_msg.msg_iovlen = 0;
cl->refs--;
if (cl->peer_state == PEER_STOPPED)
{
if (cl->refs <= 0)
@ -284,16 +261,6 @@ void osd_messenger_t::handle_send(int result, bool prev, bool more, osd_client_t
}
if (result >= 0)
{
if (prev)
{
// Second notification - only free a batch of postponed ops
int i = 0;
for (; i < cl->zc_free_list.size() && cl->zc_free_list[i]; i++)
delete cl->zc_free_list[i];
if (i > 0)
cl->zc_free_list.erase(cl->zc_free_list.begin(), cl->zc_free_list.begin()+i+1);
return;
}
int done = 0;
while (result > 0 && done < cl->send_list.size())
{
@ -303,10 +270,7 @@ void osd_messenger_t::handle_send(int result, bool prev, bool more, osd_client_t
if (cl->outbox[done].flags & MSGR_SENDP_FREE)
{
// Reply fully sent
if (more)
cl->zc_free_list.push_back(cl->outbox[done].op);
else
delete cl->outbox[done].op;
delete cl->outbox[done].op;
}
result -= iov.iov_len;
done++;
@ -318,12 +282,6 @@ void osd_messenger_t::handle_send(int result, bool prev, bool more, osd_client_t
break;
}
}
if (more)
{
auto expected = cl->send_list.size() < IOV_MAX ? cl->send_list.size() : IOV_MAX;
assert(done == expected);
cl->zc_free_list.push_back(NULL); // end marker
}
if (done > 0)
{
cl->send_list.erase(cl->send_list.begin(), cl->send_list.begin()+done);


@ -23,5 +23,4 @@ const char* osd_op_names[] = {
"sec_read_bmp",
"scrub",
"describe",
"sec_lock",
};


@ -31,13 +31,10 @@
#define OSD_OP_SEC_READ_BMP 16
#define OSD_OP_SCRUB 17
#define OSD_OP_DESCRIBE 18
#define OSD_OP_SEC_LOCK 19
#define OSD_OP_MAX 19
#define OSD_OP_MAX 18
#define OSD_RW_MAX 64*1024*1024
#define OSD_PROTOCOL_VERSION 1
#define OSD_OP_RECOVERY_RELATED (uint32_t)1
#define OSD_OP_IGNORE_PG_LOCK (uint32_t)2
// Memory alignment for direct I/O (usually 512 bytes)
#ifndef DIRECT_IO_ALIGNMENT
@@ -59,9 +56,6 @@
#define OSD_DEL_SUPPORT_LEFT_ON_DEAD 1
#define OSD_DEL_LEFT_ON_DEAD 2
#define OSD_SEC_LOCK_PG 1
#define OSD_SEC_UNLOCK_PG 2
// common request and reply headers
struct __attribute__((__packed__)) osd_op_header_t
{
@@ -100,7 +94,7 @@ struct __attribute__((__packed__)) osd_op_sec_rw_t
uint32_t len;
// bitmap/attribute length - bitmap comes after header, but before data
uint32_t attr_len;
// OSD_OP_RECOVERY_RELATED, OSD_OP_IGNORE_PG_LOCK
// the only possible flag is OSD_OP_RECOVERY_RELATED
uint32_t flags;
};
@@ -122,7 +116,7 @@ struct __attribute__((__packed__)) osd_op_sec_del_t
object_id oid;
// delete version (automatic or specific)
uint64_t version;
// OSD_OP_RECOVERY_RELATED, OSD_OP_IGNORE_PG_LOCK
// the only possible flag is OSD_OP_RECOVERY_RELATED
uint32_t flags;
uint32_t pad0;
};
@@ -137,7 +131,7 @@ struct __attribute__((__packed__)) osd_reply_sec_del_t
struct __attribute__((__packed__)) osd_op_sec_sync_t
{
osd_op_header_t header;
// OSD_OP_RECOVERY_RELATED, OSD_OP_IGNORE_PG_LOCK
// the only possible flag is OSD_OP_RECOVERY_RELATED
uint32_t flags;
uint32_t pad0;
};
@@ -153,7 +147,7 @@ struct __attribute__((__packed__)) osd_op_sec_stab_t
osd_op_header_t header;
// obj_ver_id array length in bytes
uint64_t len;
// OSD_OP_RECOVERY_RELATED, OSD_OP_IGNORE_PG_LOCK
// the only possible flag is OSD_OP_RECOVERY_RELATED
uint32_t flags;
uint32_t pad0;
};
@@ -171,8 +165,6 @@ struct __attribute__((__packed__)) osd_op_sec_read_bmp_t
osd_op_header_t header;
// obj_ver_id array length in bytes
uint64_t len;
// OSD_OP_RECOVERY_RELATED, OSD_OP_IGNORE_PG_LOCK
uint32_t flags;
};
struct __attribute__((__packed__)) osd_reply_sec_read_bmp_t
@@ -181,7 +173,7 @@ struct __attribute__((__packed__)) osd_reply_sec_read_bmp_t
osd_reply_header_t header;
};
// show configuration and remember peer information
// show configuration
struct __attribute__((__packed__)) osd_op_show_config_t
{
osd_op_header_t header;
@@ -311,25 +303,6 @@ struct __attribute__((__packed__)) osd_reply_describe_item_t
osd_num_t osd_num; // OSD number
};
// lock/unlock PG for use by a primary OSD
struct __attribute__((__packed__)) osd_op_sec_lock_t
{
osd_op_header_t header;
// OSD_SEC_LOCK_PG or OSD_SEC_UNLOCK_PG
uint64_t flags;
// Pool ID and PG number
uint64_t pool_id;
uint64_t pg_num;
// PG state as calculated by the primary OSD
uint64_t pg_state;
};
struct __attribute__((__packed__)) osd_reply_sec_lock_t
{
osd_reply_header_t header;
uint64_t cur_primary;
};
// FIXME it would be interesting to try to unify blockstore_op and osd_op formats
union osd_any_op_t
{
@@ -340,7 +313,6 @@ union osd_any_op_t
osd_op_sec_stab_t sec_stab;
osd_op_sec_read_bmp_t sec_read_bmp;
osd_op_sec_list_t sec_list;
osd_op_sec_lock_t sec_lock;
osd_op_show_config_t show_conf;
osd_op_rw_t rw;
osd_op_sync_t sync;
@@ -357,7 +329,6 @@ union osd_any_reply_t
osd_reply_sec_stab_t sec_stab;
osd_reply_sec_read_bmp_t sec_read_bmp;
osd_reply_sec_list_t sec_list;
osd_reply_sec_lock_t sec_lock;
osd_reply_show_config_t show_conf;
osd_reply_rw_t rw;
osd_reply_del_t del;

View File

@@ -6,7 +6,7 @@ includedir=${prefix}/@CMAKE_INSTALL_INCLUDEDIR@
Name: Vitastor
Description: Vitastor client library
Version: 2.2.0
Version: 2.1.0
Libs: -L${libdir} -lvitastor_client
Cflags: -I${includedir}

View File

@@ -185,7 +184,6 @@ static const char* help_text =
" --immediate_commit all Put pool only on OSDs with this or larger immediate_commit (none < small < all)\n"
" --level_placement <rules> Use additional failure domain rules (example: \"dc=112233\")\n"
" --raw_placement <rules> Specify raw PG generation rules (see documentation for details)\n"
" --local_reads primary Local read policy for replicated pools: primary, nearest or random\n"
" --primary_affinity_tags tags Prefer to put primary copies on OSDs with all specified tags\n"
" --scrub_interval <time> Enable regular scrubbing for this pool. Format: number + unit s/m/h/d/M/y\n"
" --used_for_app fs:<name> Mark pool as used for VitastorFS with metadata in image <name>\n"
@@ -283,7 +282,6 @@ static json11::Json::object parse_args(int narg, const char *args[])
!strcmp(opt, "readonly") || !strcmp(opt, "readwrite") ||
!strcmp(opt, "force") || !strcmp(opt, "reverse") ||
!strcmp(opt, "allow-data-loss") || !strcmp(opt, "allow_data_loss") ||
!strcmp(opt, "allow-up") || !strcmp(opt, "allow_up") ||
!strcmp(opt, "down-ok") || !strcmp(opt, "down_ok") ||
!strcmp(opt, "dry-run") || !strcmp(opt, "dry_run") ||
!strcmp(opt, "help") || !strcmp(opt, "all") ||

View File

@@ -147,6 +147,7 @@ struct cli_describe_t
.describe = (osd_op_describe_t){
.header = (osd_op_header_t){
.magic = SECONDARY_OSD_OP_MAGIC,
.id = parent->cli->next_op_id(),
.opcode = OSD_OP_DESCRIBE,
},
.object_state = object_state,

View File

@@ -159,6 +159,7 @@ struct cli_fix_t
.describe = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = parent->cli->next_op_id(),
.opcode = OSD_OP_DESCRIBE,
},
.min_inode = obj.inode,
@@ -193,6 +194,7 @@ struct cli_fix_t
.sec_del = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = parent->cli->next_op_id(),
.opcode = OSD_OP_SEC_DELETE,
},
.oid = {
@@ -200,7 +202,6 @@ struct cli_fix_t
.stripe = op->req.describe.min_offset | items[i].role,
},
.version = 0,
.flags = OSD_OP_IGNORE_PG_LOCK,
},
};
rm_op->callback = [this, primary_osd, rm_osd_num, rm_count, &obj](osd_op_t *rm_op)
@@ -241,6 +242,7 @@ struct cli_fix_t
.rw = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = parent->cli->next_op_id(),
.opcode = OSD_OP_SCRUB,
},
.inode = obj.inode,

View File

@@ -58,12 +58,6 @@ struct osd_changer_t
state = 100;
return;
}
if (set_reweight && new_reweight > 1)
{
result = (cli_result_t){ .err = EINVAL, .text = "Reweight can't be larger than 1" };
state = 100;
return;
}
parent->etcd_txn(json11::Json::object {
{ "success", json11::Json::array {
json11::Json::object {

View File

@@ -44,10 +44,10 @@ std::string validate_pool_config(json11::Json::object & new_cfg, json11::Json ol
new_cfg["parity_chunks"] = parity_chunks;
}
if (new_cfg["scheme"].string_value() == "")
if (old_cfg.is_null() && new_cfg["scheme"].string_value() == "")
{
// Default scheme
new_cfg["scheme"] = old_cfg.is_null() ? "replicated" : old_cfg["scheme"];
new_cfg["scheme"] = "replicated";
}
if (new_cfg.find("pg_minsize") == new_cfg.end() && (old_cfg.is_null() || new_cfg.find("pg_size") != new_cfg.end()))
{
@@ -91,7 +91,7 @@ std::string validate_pool_config(json11::Json::object & new_cfg, json11::Json ol
}
else if (key == "name" || key == "scheme" || key == "immediate_commit" ||
key == "failure_domain" || key == "root_node" || key == "scrub_interval" || key == "used_for_app" ||
key == "used_for_fs" || key == "raw_placement" || key == "local_reads")
key == "used_for_fs" || key == "raw_placement")
{
if (!value.is_string())
{
@@ -165,10 +165,6 @@ std::string validate_pool_config(json11::Json::object & new_cfg, json11::Json ol
new_cfg["used_for_app"] = "fs:"+new_cfg["used_for_fs"].string_value();
new_cfg.erase("used_for_fs");
}
if (new_cfg.find("local_reads") != new_cfg.end() && new_cfg["local_reads"].string_value() == "primary")
{
new_cfg.erase("local_reads");
}
// Prevent autovivification of object keys. Now we don't modify the config, we just check it
json11::Json cfg = new_cfg;
@@ -344,19 +340,5 @@ std::string validate_pool_config(json11::Json::object & new_cfg, json11::Json ol
}
}
// local_reads
if (!cfg["local_reads"].is_null())
{
auto lr = cfg["local_reads"].string_value();
if (lr != "" && lr != "primary" && lr != "nearest" && lr != "random")
{
return "local_reads must be '', 'primary', 'nearest' or 'random', but it is "+cfg["local_reads"].string_value();
}
if (lr != "" && lr != "primary" && scheme != POOL_SCHEME_REPLICATED)
{
return "EC pools don't support localized reads, please clear local_reads or set it to 'primary'";
}
}
return "";
}

View File

@@ -504,7 +504,6 @@ resume_3:
{ "failure_domain", "Failure domain" },
{ "root_node", "Root node" },
{ "osd_tags_fmt", "OSD tags" },
{ "local_reads", "Local read policy" },
{ "primary_affinity_tags_fmt", "Primary affinity" },
{ "block_size_fmt", "Block size" },
{ "bitmap_granularity_fmt", "Bitmap granularity" },

View File

@@ -5,7 +5,6 @@
#include "cli.h"
#include "cluster_client.h"
#include "str_util.h"
#include "json_util.h"
#include "epoll_manager.h"
#include <algorithm>
@@ -15,7 +14,7 @@ struct rm_osd_t
{
cli_tool_t *parent;
bool dry_run, force_warning, force_dataloss, allow_up;
bool dry_run, force_warning, force_dataloss;
uint64_t etcd_tx_retry_ms = 500;
uint64_t etcd_tx_retries = 10000;
std::vector<uint64_t> osd_ids;
@@ -23,8 +22,8 @@ struct rm_osd_t
int state = 0;
cli_result_t result;
std::set<osd_num_t> to_remove;
std::vector<osd_num_t> still_up;
std::set<uint64_t> to_remove;
std::set<uint64_t> to_restart;
json11::Json::array pool_effects;
json11::Json::array history_updates, history_checks;
json11::Json new_pgs, new_clean_pgs;
@@ -64,17 +63,8 @@ struct rm_osd_t
}
to_remove.insert(osd_id);
}
is_warning = is_dataloss = false;
// Check if OSDs are still up
for (auto osd_id: to_remove)
{
if (parent->cli->st_cli.peer_states.find(osd_id) != parent->cli->st_cli.peer_states.end())
{
is_warning = true;
still_up.push_back(osd_id);
}
}
// Check if OSDs are still used in data distribution
is_warning = is_dataloss = false;
for (auto & pp: parent->cli->st_cli.pool_config)
{
// Will OSD deletion make pool incomplete / down / degraded?
@@ -168,9 +158,6 @@ struct rm_osd_t
: strtoupper(e["effect"].string_value())+" PGs"))
)+" after deleting OSD(s).\n";
}
if (still_up.size() && !allow_up)
error += (still_up.size() == 1 ? "OSD " : "OSDs ") + implode(", ", still_up) +
(still_up.size() == 1 ? "is" : "are") + " still up. Use `vitastor-disk purge` to delete them.\n";
if (is_dataloss && !force_dataloss && !dry_run)
error += "OSDs not deleted. Please move data to other OSDs or bypass this check with --allow-data-loss if you know what you are doing.\n";
else if (is_warning && !force_warning && !dry_run)
@@ -476,7 +463,6 @@ std::function<bool(cli_result_t &)> cli_tool_t::start_rm_osd(json11::Json cfg)
auto rm_osd = new rm_osd_t();
rm_osd->parent = this;
rm_osd->dry_run = cfg["dry_run"].bool_value();
rm_osd->allow_up = cfg["allow_up"].bool_value();
rm_osd->force_dataloss = cfg["allow_data_loss"].bool_value();
rm_osd->force_warning = rm_osd->force_dataloss || cfg["force"].bool_value();
if (!cfg["etcd_tx_retries"].is_null())

View File

@@ -435,7 +435,7 @@ int disk_tool_t::purge_devices(const std::vector<std::string> & devices)
printf("%s\n", json11::Json(result).dump().c_str());
return 0;
}
std::vector<std::string> rm_osd_cli = { "vitastor-cli", "rm-osd", "--allow-up" };
std::vector<std::string> rm_osd_cli = { "vitastor-cli", "rm-osd" };
for (auto osd_num: osd_numbers)
{
rm_osd_cli.push_back(std::to_string(osd_num));

View File

@@ -17,8 +17,6 @@
#include "str_util.h"
#include "vitastor_kv.h"
#define KV_LIST_BUF_SIZE 65536
const char *exe_name = NULL;
class kv_cli_t
@@ -292,26 +290,10 @@ void kv_cli_t::next_cmd()
struct kv_cli_list_t
{
vitastorkv_dbw_t *db = NULL;
std::string buf;
void *handle = NULL;
int format = 0;
int n = 0;
std::function<void(int)> cb;
void write(const std::string & str)
{
if (buf.capacity() < KV_LIST_BUF_SIZE)
buf.reserve(KV_LIST_BUF_SIZE);
if (buf.size() + str.size() > buf.capacity())
flush();
buf.append(str.data(), str.size());
}
void flush()
{
::write(1, buf.data(), buf.size());
buf.resize(0);
}
};
std::vector<std::string> kv_cli_t::parse_cmd(const std::string & str)
@@ -622,10 +604,11 @@ void kv_cli_t::handle_cmd(const std::vector<std::string> & cmd, std::function<vo
if (res < 0)
{
if (res != -ENOENT)
{
fprintf(stderr, "Error: %s (code %d)\n", strerror(-res), res);
}
if (lst->format == 2)
lst->write("\n}\n");
lst->flush();
printf("\n}\n");
lst->db->list_close(lst->handle);
lst->cb(res == -ENOENT ? 0 : res);
delete lst;
@@ -633,27 +616,11 @@ void kv_cli_t::handle_cmd(const std::vector<std::string> & cmd, std::function<vo
else
{
if (lst->format == 2)
{
lst->write(lst->n ? ",\n " : "{\n ");
lst->write(addslashes(key));
lst->write(": ");
lst->write(addslashes(value));
}
printf(lst->n ? ",\n %s: %s" : "{\n %s: %s", addslashes(key).c_str(), addslashes(value).c_str());
else if (lst->format == 1)
{
lst->write("set ");
lst->write(auto_addslashes(key));
lst->write(" ");
lst->write(value);
lst->write("\n");
}
printf("set %s %s\n", auto_addslashes(key).c_str(), value.c_str());
else
{
lst->write(key);
lst->write(" = ");
lst->write(value);
lst->write("\n");
}
printf("%s = %s\n", key.c_str(), value.c_str());
lst->n++;
lst->db->list_next(lst->handle, NULL);
}

View File

@@ -870,7 +870,7 @@ static void get_block(kv_db_t *db, uint64_t offset, int cur_level, int recheck_p
}
// Block already in cache, we can proceed
blk->usage = db->usage_counter;
db->cli->msgr.ringloop->set_immediate([=] { cb(0, BLK_UPDATING); });
cb(0, BLK_UPDATING);
return;
}
cluster_op_t *op = new cluster_op_t;

View File

@@ -22,8 +22,8 @@ int nfs3_fsstat_proc(void *opaque, rpc_op_t *rop)
{
auto ttb = pst_it->second["total_raw_tb"].number_value();
auto ftb = (pst_it->second["total_raw_tb"].number_value() - pst_it->second["used_raw_tb"].number_value());
tbytes = ttb / pst_it->second["raw_to_usable"].number_value() * ((uint64_t)1<<40);
fbytes = ftb / pst_it->second["raw_to_usable"].number_value() * ((uint64_t)1<<40);
tbytes = ttb / pst_it->second["raw_to_usable"].number_value() * ((uint64_t)2<<40);
fbytes = ftb / pst_it->second["raw_to_usable"].number_value() * ((uint64_t)2<<40);
}
*reply = (FSSTAT3res){
.status = NFS3_OK,

View File

@@ -210,7 +209,6 @@ resume_4:
st->res = res;
kv_continue_create(st, 5);
});
return;
resume_5:
if (st->res < 0)
{

View File

@@ -13,12 +13,6 @@ void kv_read_inode(nfs_proxy_t *proxy, uint64_t ino,
std::function<void(int res, const std::string & value, json11::Json ientry)> cb,
bool allow_cache)
{
if (!ino)
{
// Zero value can not exist
cb(-ENOENT, "", json11::Json());
return;
}
auto key = kv_inode_key(ino);
proxy->db->get(key, [=](int res, const std::string & value)
{
@@ -55,7 +49,7 @@ int kv_nfs3_getattr_proc(void *opaque, rpc_op_t *rop)
auto ino = kv_fh_inode(fh);
if (self->parent->trace)
fprintf(stderr, "[%d] GETATTR %ju\n", self->nfs_fd, ino);
if (!kv_fh_valid(fh) || !ino)
if (!kv_fh_valid(fh))
{
*reply = (GETATTR3res){ .status = NFS3ERR_INVAL };
rpc_queue_reply(rop);

View File

@@ -43,30 +43,9 @@ int kv_nfs3_lookup_proc(void *opaque, rpc_op_t *rop)
uint64_t ino = direntry["ino"].uint64_value();
kv_read_inode(self->parent, ino, [=](int res, const std::string & value, json11::Json ientry)
{
if (res == -ENOENT)
if (res < 0)
{
*reply = (LOOKUP3res){
.status = NFS3_OK,
.resok = (LOOKUP3resok){
.object = xdr_copy_string(rop->xdrs, kv_fh(ino)),
.obj_attributes = {
.attributes_follow = 1,
.attributes = (fattr3){
.type = (ftype3)0,
.mode = 0666,
.nlink = 1,
.fsid = self->parent->fsid,
.fileid = ino,
},
},
},
};
rpc_queue_reply(rop);
return;
}
else if (res < 0)
{
*reply = (LOOKUP3res){ .status = vitastor_nfs_map_err(res) };
*reply = (LOOKUP3res){ .status = vitastor_nfs_map_err(res == -ENOENT ? -EIO : res) };
rpc_queue_reply(rop);
return;
}

View File

@@ -89,23 +89,12 @@ resume_1:
resume_2:
if (st->res < 0)
{
if (st->res == -ENOENT)
{
// Just delete direntry and skip inode
fprintf(stderr, "direntry %s references a non-existing inode %ju, deleting\n",
kv_direntry_key(st->dir_ino, st->filename).c_str(), st->ino);
st->ino = 0;
}
else
{
fprintf(stderr, "error reading inode %s: %s (code %d)\n",
kv_inode_key(st->ino).c_str(), strerror(-st->res), st->res);
auto cb = std::move(st->cb);
cb(st->res);
return;
}
fprintf(stderr, "error reading inode %s: %s (code %d)\n",
kv_inode_key(st->ino).c_str(), strerror(-st->res), st->res);
auto cb = std::move(st->cb);
cb(st->res);
return;
}
else
{
std::string err;
st->ientry = json11::Json::parse(st->ientry_text, err);

View File

@@ -271,12 +271,6 @@ void osd_t::parse_config(bool init)
inode_vanish_time = config["inode_vanish_time"].uint64_value();
if (!inode_vanish_time)
inode_vanish_time = 60;
enable_pg_locks = config["enable_pg_locks"].is_null() || json_is_true(config["enable_pg_locks"]);
bool old_pg_locks_localize_only = pg_locks_localize_only;
pg_locks_localize_only = config["enable_pg_locks"].is_null();
pg_lock_retry_interval_ms = config["pg_lock_retry_interval"].uint64_value();
if (pg_lock_retry_interval_ms <= 1)
pg_lock_retry_interval_ms = 100;
auto old_auto_scrub = auto_scrub;
auto_scrub = json_is_true(config["auto_scrub"]);
global_scrub_interval = parse_time(config["scrub_interval"].string_value());
@@ -342,10 +336,6 @@ void osd_t::parse_config(bool init)
{
apply_recovery_tune_interval();
}
if (old_pg_locks_localize_only != pg_locks_localize_only)
{
apply_pg_locks_localize_only();
}
}
void osd_t::bind_socket()
@@ -457,7 +447,6 @@ void osd_t::exec_op(osd_op_t *cur_op)
}
if (readonly &&
cur_op->req.hdr.opcode != OSD_OP_SEC_READ &&
cur_op->req.hdr.opcode != OSD_OP_SEC_LOCK &&
cur_op->req.hdr.opcode != OSD_OP_SEC_LIST &&
cur_op->req.hdr.opcode != OSD_OP_READ &&
cur_op->req.hdr.opcode != OSD_OP_SEC_READ_BMP &&

View File

@@ -92,12 +92,6 @@ struct recovery_stat_t
uint64_t count, usec, bytes;
};
struct osd_pg_lock_t
{
osd_num_t primary_osd = 0;
uint64_t state = 0;
};
class osd_t
{
// config
@@ -146,9 +140,6 @@ class osd_t
uint32_t scrub_list_limit = 1000;
bool scrub_find_best = true;
uint64_t scrub_ec_max_bruteforce = 100;
bool enable_pg_locks = false;
bool pg_locks_localize_only = false;
uint64_t pg_lock_retry_interval_ms = 100;
// cluster state
@@ -168,7 +159,6 @@ class osd_t
// peers and PGs
std::map<pool_pg_num_t, osd_pg_lock_t> pg_locks;
std::map<pool_id_t, pg_num_t> pg_counts;
std::map<pool_pg_num_t, pg_t> pgs;
std::set<pool_pg_num_t> dirty_pgs;
@@ -249,8 +239,6 @@ class osd_t
void on_change_etcd_state_hook(std::map<std::string, etcd_kv_t> & changes);
void on_load_config_hook(json11::Json::object & changes);
void on_reload_config_hook(json11::Json::object & changes);
void on_change_pool_config_hook();
void apply_pg_locks_localize_only();
json11::Json on_load_pgs_checks_hook();
void on_load_pgs_hook(bool success);
void bind_socket();
@@ -280,16 +268,11 @@ class osd_t
void repeer_pgs(osd_num_t osd_num);
void start_pg_peering(pg_t & pg);
void drop_dirty_pg_connections(pool_pg_num_t pg);
void record_pg_lock(pg_t & pg, osd_num_t peer_osd, uint64_t pg_state);
void relock_pg(pg_t & pg);
void submit_list_subop(osd_num_t role_osd, pg_peering_state_t *ps);
void discard_list_subop(osd_op_t *list_op);
bool stop_pg(pg_t & pg);
void reset_pg(pg_t & pg);
void finish_stop_pg(pg_t & pg);
void rm_inflight(pg_t & pg);
void continue_pg(pg_t & pg);
bool continue_pg_peering(pg_t & pg);
// flushing, recovery and backfill
void submit_pg_flush_ops(pg_t & pg);
@@ -316,13 +299,10 @@ class osd_t
void finish_op(osd_op_t *cur_op, int retval);
// secondary ops
bool sec_check_pg_lock(osd_num_t primary_osd, const object_id &oid);
void exec_sync_stab_all(osd_op_t *cur_op);
void exec_show_config(osd_op_t *cur_op);
void exec_secondary(osd_op_t *cur_op);
void exec_secondary_real(osd_op_t *cur_op);
void exec_sec_read_bmp(osd_op_t *cur_op);
void exec_sec_lock(osd_op_t *cur_op);
void secondary_op_callback(osd_op_t *cur_op);
// primary ops
@@ -330,7 +310,6 @@ class osd_t
bool prepare_primary_rw(osd_op_t *cur_op);
void continue_primary_read(osd_op_t *cur_op);
void continue_primary_scrub(osd_op_t *cur_op);
void continue_local_secondary_read(osd_op_t *cur_op);
void continue_primary_describe(osd_op_t *cur_op);
void continue_primary_list(osd_op_t *cur_op);
void continue_primary_write(osd_op_t *cur_op);
@@ -368,13 +347,13 @@ class osd_t
uint64_t* get_object_osd_set(pg_t &pg, object_id &oid, pg_osd_set_state_t **object_state);
void continue_chained_read(osd_op_t *cur_op);
int submit_chained_read_requests(pg_t *pg, osd_op_t *cur_op);
int submit_chained_read_requests(pg_t & pg, osd_op_t *cur_op);
void check_corrupted_chained(pg_t & pg, osd_op_t *cur_op);
void send_chained_read_results(pg_t *pg, osd_op_t *cur_op);
void send_chained_read_results(pg_t & pg, osd_op_t *cur_op);
std::vector<osd_chain_read_t> collect_chained_read_requests(osd_op_t *cur_op);
int collect_bitmap_requests(osd_op_t *cur_op, pg_t & pg, std::vector<bitmap_request_t> & bitmap_requests);
int submit_bitmap_subops(osd_op_t *cur_op, pg_t & pg);
int read_bitmaps(osd_op_t *cur_op, pg_t *pg, int base_state);
int read_bitmaps(osd_op_t *cur_op, pg_t & pg, int base_state);
inline pg_num_t map_to_pg(object_id oid, uint64_t pg_stripe_size)
{

View File

@@ -65,7 +65,6 @@ void osd_t::init_cluster()
st_cli.tfd = tfd;
st_cli.log_level = log_level;
st_cli.on_change_osd_state_hook = [this](osd_num_t peer_osd) { on_change_osd_state_hook(peer_osd); };
st_cli.on_change_pool_config_hook = [this]() { on_change_pool_config_hook(); };
st_cli.on_change_backfillfull_hook = [this](pool_id_t pool_id) { on_change_backfillfull_hook(pool_id); };
st_cli.on_change_pg_history_hook = [this](pool_id_t pool_id, pg_num_t pg_num) { on_change_pg_history_hook(pool_id, pg_num); };
st_cli.on_change_hook = [this](std::map<std::string, etcd_kv_t> & changes) { on_change_etcd_state_hook(changes); };
@@ -154,19 +153,23 @@ bool osd_t::check_peer_config(osd_client_t *cl, json11::Json conf)
return false;
}
}
cl->enable_pg_locks = conf["features"]["pg_locks"].bool_value();
return true;
}
json11::Json osd_t::get_osd_state()
{
std::vector<char> hostname;
hostname.resize(1024);
while (gethostname(hostname.data(), hostname.size()) < 0 && errno == ENAMETOOLONG)
hostname.resize(hostname.size()+1024);
hostname.resize(strnlen(hostname.data(), hostname.size()));
json11::Json::object st;
st["state"] = "up";
if (bind_addresses.size() != 1 || bind_addresses[0] != "0.0.0.0")
st["addresses"] = bind_addresses;
else
st["addresses"] = getifaddr_list();
st["host"] = gethostname_str();
st["host"] = std::string(hostname.data(), hostname.size());
st["version"] = VITASTOR_VERSION;
st["port"] = listening_port;
#ifdef WITH_RDMACM
@@ -416,28 +419,6 @@ void osd_t::on_change_osd_state_hook(osd_num_t peer_osd)
}
}
void osd_t::on_change_pool_config_hook()
{
apply_pg_locks_localize_only();
}
void osd_t::apply_pg_locks_localize_only()
{
for (auto & pp: pgs)
{
auto pool_it = st_cli.pool_config.find(pp.first.pool_id);
if (pool_it == st_cli.pool_config.end())
{
continue;
}
auto & pool_cfg = pool_it->second;
auto & pg = pp.second;
pg.disable_pg_locks = pg_locks_localize_only &&
pool_cfg.scheme == POOL_SCHEME_REPLICATED &&
pool_cfg.local_reads == POOL_LOCAL_READ_PRIMARY;
}
}
void osd_t::on_change_backfillfull_hook(pool_id_t pool_id)
{
if (!(peering_state & (OSD_RECOVERING | OSD_FLUSHING_PGS)))
@@ -715,27 +696,20 @@ void osd_t::apply_pg_count()
// The external tool must wait for all PGs to come down before changing PG count
// If it doesn't wait, a restarted OSD may apply the new count immediately which will lead to bugs
// So an OSD just dies if it detects PG count change while there are active PGs
int still_active_primary = 0;
int still_active = 0;
for (auto & kv: pgs)
{
if (kv.first.pool_id == pool_item.first && (kv.second.state & PG_ACTIVE))
{
still_active_primary++;
still_active++;
}
}
int still_active_secondary = 0;
for (auto lock_it = pg_locks.lower_bound((pool_pg_num_t){ .pool_id = pool_item.first, .pg_num = 0 });
lock_it != pg_locks.end() && lock_it->first.pool_id == pool_item.first; lock_it++)
{
still_active_secondary++;
}
if (still_active_primary > 0 || still_active_secondary > 0)
if (still_active > 0)
{
printf(
"[OSD %ju] PG count change detected for pool %u (new is %ju, old is %u),"
" but %u PG(s) are still active as primary and %u as secondary. This is not allowed. Exiting\n",
this->osd_num, pool_item.first, pool_item.second.real_pg_count, pg_counts[pool_item.first],
still_active_primary, still_active_secondary
" but %u PG(s) are still active. This is not allowed. Exiting\n",
this->osd_num, pool_item.first, pool_item.second.real_pg_count, pg_counts[pool_item.first], still_active
);
force_stop(1);
return;
@@ -862,23 +836,22 @@ void osd_t::apply_pg_config()
}
}
auto & pg = this->pgs[{ .pool_id = pool_id, .pg_num = pg_num }];
pg.state = pg_cfg.cur_primary == this->osd_num ? PG_PEERING : PG_STARTING;
pg.scheme = pool_item.second.scheme;
pg.pg_cursize = 0;
pg.pg_size = pool_item.second.pg_size;
pg.pg_minsize = pool_item.second.pg_minsize;
pg.pg_data_size = pool_item.second.scheme == POOL_SCHEME_REPLICATED
? 1 : pool_item.second.pg_size - pool_item.second.parity_chunks;
pg.pool_id = pool_id;
pg.pg_num = pg_num;
pg.reported_epoch = pg_cfg.epoch;
pg.target_history = pg_cfg.target_history;
pg.all_peers = vec_all_peers;
pg.next_scrub = pg_cfg.next_scrub;
pg.target_set = pg_cfg.target_set;
pg.disable_pg_locks = pg_locks_localize_only &&
pool_item.second.scheme == POOL_SCHEME_REPLICATED &&
pool_item.second.local_reads == POOL_LOCAL_READ_PRIMARY;
pg = (pg_t){
.state = pg_cfg.cur_primary == this->osd_num ? PG_PEERING : PG_STARTING,
.scheme = pool_item.second.scheme,
.pg_cursize = 0,
.pg_size = pool_item.second.pg_size,
.pg_minsize = pool_item.second.pg_minsize,
.pg_data_size = pool_item.second.scheme == POOL_SCHEME_REPLICATED
? 1 : pool_item.second.pg_size - pool_item.second.parity_chunks,
.pool_id = pool_id,
.pg_num = pg_num,
.reported_epoch = pg_cfg.epoch,
.target_history = pg_cfg.target_history,
.all_peers = vec_all_peers,
.next_scrub = pg_cfg.next_scrub,
.target_set = pg_cfg.target_set,
};
if (pg.scheme == POOL_SCHEME_EC)
{
use_ec(pg.pg_size, pg.pg_data_size, true);

View File

@@ -150,7 +150,14 @@ void osd_t::handle_flush_op(bool rollback, pool_id_t pool_id, pg_num_t pg_num, p
{
continue_primary_write(op);
}
continue_pg(pg);
if ((pg.state & PG_STOPPING) && pg.inflight == 0 && !pg.flush_batch)
{
finish_stop_pg(pg);
}
else if ((pg.state & PG_REPEERING) && pg.inflight == 0 && !pg.flush_batch)
{
start_pg_peering(pg);
}
}
}
@@ -202,6 +209,7 @@ bool osd_t::submit_flush_op(pool_id_t pool_id, pg_num_t pg_num, pg_flush_batch_t
.sec_stab = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = msgr.next_subop_id++,
.opcode = (uint64_t)(rollback ? OSD_OP_SEC_ROLLBACK : OSD_OP_SEC_STABILIZE),
},
.len = count * sizeof(obj_ver_id),
@@ -247,8 +255,7 @@ bool osd_t::pick_next_recovery(osd_recovery_op_t &op)
restart:
for (auto pg_it = pgs.lower_bound(recovery_last_pg); pg_it != pgs.end(); pg_it++)
{
auto & src = recovery_last_degraded ? pg_it->second.degraded_objects : pg_it->second.misplaced_objects;
if ((pg_it->second.state & mask) == check && src.size() > 0)
if ((pg_it->second.state & mask) == check)
{
auto pool_it = st_cli.pool_config.find(pg_it->first.pool_id);
if (pool_it != st_cli.pool_config.end() && pool_it->second.backfillfull)
@@ -257,6 +264,8 @@ bool osd_t::pick_next_recovery(osd_recovery_op_t &op)
recovery_last_pg.pool_id++;
goto restart;
}
auto & src = recovery_last_degraded ? pg_it->second.degraded_objects : pg_it->second.misplaced_objects;
assert(src.size() > 0);
// Restart scanning from the next object
for (auto obj_it = src.upper_bound(recovery_last_oid); obj_it != src.end(); obj_it++)
{

View File

@@ -21,8 +21,28 @@ void osd_t::handle_peers()
{
if (p.second.state == PG_PEERING)
{
if (continue_pg_peering(p.second))
if (!p.second.peering_state->list_ops.size())
{
p.second.calc_object_states(log_level);
report_pg_state(p.second);
schedule_scrub(p.second);
incomplete_objects += p.second.incomplete_objects.size();
misplaced_objects += p.second.misplaced_objects.size();
// FIXME: degraded objects may currently include misplaced, too! Report them separately?
degraded_objects += p.second.degraded_objects.size();
if (p.second.state & PG_HAS_UNCLEAN)
peering_state = peering_state | OSD_FLUSHING_PGS;
else if (p.second.state & (PG_HAS_DEGRADED | PG_HAS_MISPLACED))
{
peering_state = peering_state | OSD_RECOVERING;
if (p.second.state & PG_HAS_DEGRADED)
{
// Restart recovery from degraded objects
recovery_last_degraded = true;
recovery_last_pg = {};
recovery_last_oid = {};
}
}
ringloop->wakeup();
return;
}
@@ -75,16 +95,6 @@ void osd_t::handle_peers()
void osd_t::repeer_pgs(osd_num_t peer_osd)
{
if (msgr.osd_peer_fds.find(peer_osd) == msgr.osd_peer_fds.end())
{
for (auto lock_it = pg_locks.begin(); lock_it != pg_locks.end(); )
{
if (lock_it->second.primary_osd == peer_osd)
pg_locks.erase(lock_it++);
else
lock_it++;
}
}
// Re-peer affected PGs
for (auto & p: pgs)
{
@@ -104,7 +114,7 @@ void osd_t::repeer_pgs(osd_num_t peer_osd)
{
// Repeer this pg
printf("[PG %u/%u] Repeer because of OSD %ju\n", pg.pool_id, pg.pg_num, peer_osd);
if (!(pg.state & (PG_ACTIVE | PG_REPEERING)) || pg.can_repeer())
if (!(pg.state & (PG_ACTIVE | PG_REPEERING)) || pg.inflight == 0 && !pg.flush_batch)
{
start_pg_peering(pg);
}
@@ -185,6 +195,7 @@ void osd_t::start_pg_peering(pg_t & pg)
pg.state = PG_PEERING;
this->peering_state |= OSD_PEERING_PGS;
reset_pg(pg);
report_pg_state(pg);
drop_dirty_pg_connections({ .pool_id = pg.pool_id, .pg_num = pg.pg_num });
// Try to connect with current peers if they're up, but we don't have connections to them
// Otherwise we may erroneously decide that the pg is incomplete :-)
@@ -204,7 +215,8 @@ void osd_t::start_pg_peering(pg_t & pg)
{
// Wait until all OSDs are either connected or their /osd/state disappears from etcd
pg.state = PG_INCOMPLETE;
// Fall through to cleanup list results
report_pg_state(pg);
return;
}
// Calculate current write OSD set
pg.pg_cursize = 0;
@@ -230,6 +242,8 @@ void osd_t::start_pg_peering(pg_t & pg)
// because such PGs don't flush unstable entries on secondary OSDs so they can't remove these
// entries from their journals...
pg.state = PG_INCOMPLETE;
report_pg_state(pg);
return;
}
std::set<osd_num_t> cur_peers;
std::set<osd_num_t> dead_peers;
@@ -264,6 +278,8 @@ void osd_t::start_pg_peering(pg_t & pg)
if (nonzero >= pg.pg_data_size && found < pg.pg_data_size)
{
pg.state = PG_INCOMPLETE;
report_pg_state(pg);
return;
}
}
}
@@ -302,7 +318,6 @@ void osd_t::start_pg_peering(pg_t & pg)
delete pg.peering_state;
pg.peering_state = NULL;
}
report_pg_state(pg);
return;
}
if (!pg.peering_state)
@@ -311,203 +326,16 @@ void osd_t::start_pg_peering(pg_t & pg)
pg.peering_state->pool_id = pg.pool_id;
pg.peering_state->pg_num = pg.pg_num;
}
pg.peering_state->locked = false;
pg.peering_state->lists_done = false;
report_pg_state(pg);
}
bool osd_t::continue_pg_peering(pg_t & pg)
{
if (pg.peering_state->locked)
for (osd_num_t peer_osd: cur_peers)
{
pg.peering_state->lists_done = true;
for (osd_num_t peer_osd: pg.cur_peers)
if (pg.peering_state->list_ops.find(peer_osd) != pg.peering_state->list_ops.end() ||
pg.peering_state->list_results.find(peer_osd) != pg.peering_state->list_results.end())
{
if (pg.peering_state->list_results.find(peer_osd) == pg.peering_state->list_results.end())
{
pg.peering_state->lists_done = false;
}
if (pg.peering_state->list_ops.find(peer_osd) != pg.peering_state->list_ops.end() ||
pg.peering_state->list_results.find(peer_osd) != pg.peering_state->list_results.end())
{
continue;
}
submit_list_subop(peer_osd, pg.peering_state);
}
}
if (pg.peering_state->lists_done)
{
pg.calc_object_states(log_level);
report_pg_state(pg);
schedule_scrub(pg);
incomplete_objects += pg.incomplete_objects.size();
misplaced_objects += pg.misplaced_objects.size();
// FIXME: degraded objects may currently include misplaced, too! Report them separately?
degraded_objects += pg.degraded_objects.size();
if (pg.state & PG_HAS_UNCLEAN)
this->peering_state = peering_state | OSD_FLUSHING_PGS;
else if (pg.state & (PG_HAS_DEGRADED | PG_HAS_MISPLACED))
{
this->peering_state = peering_state | OSD_RECOVERING;
if (pg.state & PG_HAS_DEGRADED)
{
// Restart recovery from degraded objects
this->recovery_last_degraded = true;
this->recovery_last_pg = {};
this->recovery_last_oid = {};
}
}
return true;
}
return false;
}
void osd_t::record_pg_lock(pg_t & pg, osd_num_t peer_osd, uint64_t pg_state)
{
if (!pg_state)
pg.lock_peers.erase(peer_osd);
else
pg.lock_peers[peer_osd] = pg_state;
}
void osd_t::relock_pg(pg_t & pg)
{
if (!enable_pg_locks || pg.disable_pg_locks && !pg.lock_peers.size())
{
if (pg.state & PG_PEERING)
pg.peering_state->locked = true;
continue_pg(pg);
return;
}
if (pg.inflight_locks > 0 || pg.lock_waiting)
{
return;
}
// Check that lock_peers are equal to cur_peers and correct the difference, if any
uint64_t wanted_state = pg.state;
std::vector<osd_num_t> diff_osds;
if (!(pg.state & (PG_STOPPING | PG_OFFLINE | PG_INCOMPLETE)) && !pg.disable_pg_locks)
{
for (osd_num_t peer_osd: pg.cur_peers)
{
if (peer_osd != this->osd_num)
{
auto lock_it = pg.lock_peers.find(peer_osd);
if (lock_it == pg.lock_peers.end())
diff_osds.push_back(peer_osd);
else
{
if (lock_it->second != wanted_state)
diff_osds.push_back(peer_osd);
lock_it->second |= ((uint64_t)1 << 63);
}
}
}
}
int relock_osd_count = diff_osds.size();
for (auto & lp: pg.lock_peers)
{
if (!(lp.second & ((uint64_t)1 << 63)))
diff_osds.push_back(lp.first);
lp.second &= ~((uint64_t)1 << 63);
}
if (!diff_osds.size())
{
if (pg.state & PG_PEERING)
pg.peering_state->locked = true;
continue_pg(pg);
return;
}
pg.inflight_locks++;
for (int i = 0; i < diff_osds.size(); i++)
{
bool unlock_peer = (i >= relock_osd_count);
uint64_t new_state = unlock_peer ? 0 : pg.state;
auto peer_osd = diff_osds[i];
auto peer_fd_it = msgr.osd_peer_fds.find(peer_osd);
if (peer_fd_it == msgr.osd_peer_fds.end())
{
if (unlock_peer)
{
// Peer is dead - unlocked automatically
record_pg_lock(pg, peer_osd, new_state);
diff_osds.erase(diff_osds.begin()+(i--));
}
continue;
}
int peer_fd = peer_fd_it->second;
auto cl = msgr.clients.at(peer_fd);
if (!cl->enable_pg_locks)
{
// Peer does not support locking - just instantly remember the lock as successful
record_pg_lock(pg, peer_osd, new_state);
diff_osds.erase(diff_osds.begin()+(i--));
continue;
}
pg.inflight_locks++;
osd_op_t *op = new osd_op_t();
op->op_type = OSD_OP_OUT;
op->peer_fd = peer_fd;
op->req = (osd_any_op_t){
.sec_lock = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.opcode = OSD_OP_SEC_LOCK,
},
.flags = (uint64_t)(unlock_peer ? OSD_SEC_UNLOCK_PG : OSD_SEC_LOCK_PG),
.pool_id = pg.pool_id,
.pg_num = pg.pg_num,
.pg_state = new_state,
},
};
op->callback = [this, peer_osd](osd_op_t *op)
{
pool_pg_num_t pg_id = { .pool_id = (pool_id_t)op->req.sec_lock.pool_id, .pg_num = (pg_num_t)op->req.sec_lock.pg_num };
auto pg_it = pgs.find(pg_id);
if (pg_it == pgs.end())
{
return;
}
auto & pg = pg_it->second;
if (op->reply.hdr.retval == 0)
{
record_pg_lock(pg_it->second, peer_osd, op->req.sec_lock.pg_state);
}
else if (op->reply.hdr.retval != -EPIPE)
{
printf(
(op->reply.hdr.retval == -ENOENT
? "Failed to %1$s PG %2$u/%3$u on OSD %4$ju - peer didn't load PG info yet\n"
: (op->reply.sec_lock.cur_primary
? "Failed to %1$s PG %2$u/%3$u on OSD %4$ju - taken by OSD %6$ju (retval=%5$jd)\n"
: "Failed to %1$s PG %2$u/%3$u on OSD %4$ju - retval=%5$jd\n")),
op->req.sec_lock.flags == OSD_SEC_UNLOCK_PG ? "unlock" : "lock",
pg_id.pool_id, pg_id.pg_num, peer_osd, op->reply.hdr.retval, op->reply.sec_lock.cur_primary
);
// Retry relocking/unlocking PG after a short time
pg.lock_waiting = true;
tfd->set_timer(pg_lock_retry_interval_ms, false, [this, pg_id](int)
{
auto pg_it = pgs.find(pg_id);
if (pg_it != pgs.end())
{
pg_it->second.lock_waiting = false;
relock_pg(pg_it->second);
}
});
}
pg.inflight_locks--;
relock_pg(pg);
delete op;
};
msgr.outbox_push(op);
submit_list_subop(peer_osd, pg.peering_state);
}
if (pg.state & PG_PEERING)
{
pg.peering_state->locked = !diff_osds.size();
}
pg.inflight_locks--;
continue_pg(pg);
ringloop->wakeup();
}
void osd_t::submit_list_subop(osd_num_t role_osd, pg_peering_state_t *ps)
@@ -555,20 +383,15 @@ void osd_t::submit_list_subop(osd_num_t role_osd, pg_peering_state_t *ps)
}
else
{
auto role_fd_it = msgr.osd_peer_fds.find(role_osd);
if (role_fd_it == msgr.osd_peer_fds.end())
{
printf("Failed to get object list from OSD %ju because it is disconnected\n", role_osd);
return;
}
// Peer
osd_op_t *op = new osd_op_t();
op->op_type = OSD_OP_OUT;
op->peer_fd = role_fd_it->second;
op->peer_fd = msgr.osd_peer_fds.at(role_osd);
op->req = (osd_any_op_t){
.sec_list = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = msgr.next_subop_id++,
.opcode = OSD_OP_SEC_LIST,
},
.list_pg = ps->pg_num,
@@ -656,8 +479,13 @@ bool osd_t::stop_pg(pg_t & pg)
return false;
}
drop_dirty_pg_connections({ .pool_id = pg.pool_id, .pg_num = pg.pg_num });
pg.state = pg.state & ~PG_STARTING & ~PG_PEERING & ~PG_INCOMPLETE & ~PG_ACTIVE & ~PG_REPEERING & ~PG_OFFLINE | PG_STOPPING;
if (pg.can_stop())
if (!(pg.state & (PG_ACTIVE | PG_REPEERING)))
{
finish_stop_pg(pg);
return true;
}
pg.state = pg.state & ~PG_ACTIVE & ~PG_REPEERING | PG_STOPPING;
if (pg.inflight == 0 && !pg.flush_batch)
{
finish_stop_pg(pg);
}
@@ -738,33 +566,9 @@ void osd_t::report_pg_state(pg_t & pg)
pg_cfg.target_history = pg.target_history;
pg_cfg.all_peers = pg.all_peers;
}
relock_pg(pg);
if (pg.state == PG_OFFLINE && !this->pg_config_applied)
{
apply_pg_config();
}
report_pg_states();
}
void osd_t::rm_inflight(pg_t & pg)
{
pg.inflight--;
assert(pg.inflight >= 0);
continue_pg(pg);
}
void osd_t::continue_pg(pg_t & pg)
{
if ((pg.state & PG_STOPPING) && pg.can_stop())
{
finish_stop_pg(pg);
}
else if ((pg.state & PG_REPEERING) && pg.can_repeer())
{
start_pg_peering(pg);
}
else if ((pg.state & PG_PEERING) && pg.peering_state->locked)
{
continue_pg_peering(pg);
}
}


@@ -489,13 +489,3 @@ void pg_t::print_state()
total_count
);
}
bool pg_t::can_stop()
{
return inflight == 0 && inflight_locks == 0 && !lock_peers.size() && !flush_batch;
}
bool pg_t::can_repeer()
{
return inflight == 0 && !flush_batch;
}


@@ -49,8 +49,6 @@ struct pg_peering_state_t
std::map<osd_num_t, pg_list_result_t> list_results;
pool_id_t pool_id = 0;
pg_num_t pg_num = 0;
bool locked = false;
bool lists_done = false;
};
struct obj_piece_id_t
@@ -89,7 +87,6 @@ struct pg_t
pool_id_t pool_id = 0;
pg_num_t pg_num = 0;
uint64_t clean_count = 0, total_count = 0;
bool disable_pg_locks = false;
// epoch number - should increase with each non-clean activation of the PG
uint64_t epoch = 0, reported_epoch = 0;
// target history and all potential peers
@@ -107,10 +104,6 @@ struct pg_t
// cur_set is the current set of connected peer OSDs for this PG
// cur_set = (role => osd_num or UINT64_MAX if missing). role numbers begin with zero
std::vector<osd_num_t> cur_set;
// locked peer list => pg state reported to the peer
std::map<osd_num_t, uint64_t> lock_peers;
int inflight_locks = 0;
bool lock_waiting = false;
// same thing in state_dict-like format
pg_osd_set_t cur_loc_set;
// moved object map. by default, each object is considered to reside on cur_set.
@@ -132,9 +125,6 @@ struct pg_t
pg_osd_set_state_t* add_object_to_state(const object_id oid, const uint64_t state, const pg_osd_set_t & osd_set);
void calc_object_states(int log_level);
void print_state();
bool can_stop();
bool can_repeer();
void rm_inflight();
};
inline bool operator < (const pg_obj_loc_t &a, const pg_obj_loc_t &b)


@@ -37,20 +37,7 @@ bool osd_t::prepare_primary_rw(osd_op_t *cur_op)
};
pg_num_t pg_num = (oid.stripe/pool_cfg.pg_stripe_size) % pg_counts[pool_id] + 1; // like map_to_pg()
auto pg_it = pgs.find({ .pool_id = pool_id, .pg_num = pg_num });
if (pg_it == pgs.end() || pg_it->second.state == PG_OFFLINE)
{
// Check for a local replicated read from secondary OSD
auto lock_it = cur_op->req.hdr.opcode == OSD_OP_READ && pool_cfg.scheme == POOL_SCHEME_REPLICATED
? pg_locks.find({ .pool_id = pool_id, .pg_num = pg_num })
: pg_locks.end();
if (lock_it == pg_locks.end() || lock_it->second.state != PG_ACTIVE && lock_it->second.state != (PG_ACTIVE|PG_LEFT_ON_DEAD))
{
// FIXME: Change EPIPE to something else
finish_op(cur_op, -EPIPE);
return false;
}
}
else if (!(pg_it->second.state & PG_ACTIVE))
if (pg_it == pgs.end() || !(pg_it->second.state & PG_ACTIVE))
{
// This OSD is not primary for this PG or the PG is inactive
// FIXME: Allow reads from PGs degraded under pg_minsize, but don't allow writes
@@ -82,7 +69,7 @@ bool osd_t::prepare_primary_rw(osd_op_t *cur_op)
// Find parents from the same pool. Optimized reads only work within pools
while (inode_it != st_cli.inode_config.end() &&
inode_it->second.parent_id &&
INODE_POOL(inode_it->second.parent_id) == pool_cfg.id)
INODE_POOL(inode_it->second.parent_id) == pg_it->second.pool_id)
{
// Check for loops - FIXME check it in etcd_state_client
if (inode_it->second.parent_id == cur_op->req.rw.inode ||
@@ -122,7 +109,7 @@ bool osd_t::prepare_primary_rw(osd_op_t *cur_op)
);
void *data_buf = (uint8_t*)op_data + sizeof(osd_primary_op_data_t);
op_data->pg_num = pg_num;
op_data->pg = pg_it == pgs.end() ? NULL : &pg_it->second;
op_data->pg = &pg_it->second;
op_data->oid = oid;
op_data->stripes = (osd_rmw_stripe_t*)data_buf;
op_data->stripe_count = stripe_count;
@@ -157,7 +144,7 @@ bool osd_t::prepare_primary_rw(osd_op_t *cur_op)
chain_num++;
auto inode_it = st_cli.inode_config.find(cur_op->req.rw.inode);
while (inode_it != st_cli.inode_config.end() && inode_it->second.parent_id &&
INODE_POOL(inode_it->second.parent_id) == pool_cfg.id &&
INODE_POOL(inode_it->second.parent_id) == pg_it->second.pool_id &&
// Check for loops
inode_it->second.parent_id != cur_op->req.rw.inode)
{
@@ -167,10 +154,7 @@ bool osd_t::prepare_primary_rw(osd_op_t *cur_op)
chain_num++;
}
}
if (op_data->pg)
{
op_data->pg->inflight++;
}
pg_it->second.inflight++;
return true;
}
@@ -210,7 +194,6 @@ void osd_t::continue_primary_read(osd_op_t *cur_op)
return;
}
osd_primary_op_data_t *op_data = cur_op->op_data;
pg_t *pg = op_data->pg;
if (op_data->chain_size)
{
continue_chained_read(cur_op);
@@ -223,10 +206,11 @@ void osd_t::continue_primary_read(osd_op_t *cur_op)
resume_0:
cur_op->reply.rw.bitmap_len = 0;
{
auto & pg = *op_data->pg;
if (cur_op->req.rw.len == 0)
{
// len=0 => bitmap read
for (int role = 0; role < (pg ? pg->pg_data_size : 1); role++)
for (int role = 0; role < pg.pg_data_size; role++)
{
op_data->stripes[role].read_start = 0;
op_data->stripes[role].read_end = UINT32_MAX;
@@ -234,48 +218,40 @@ resume_0:
}
else
{
for (int role = 0; role < (pg ? pg->pg_data_size : 1); role++)
for (int role = 0; role < pg.pg_data_size; role++)
{
op_data->stripes[role].read_start = op_data->stripes[role].req_start;
op_data->stripes[role].read_end = op_data->stripes[role].req_end;
}
}
// Determine version
if (pg)
{
auto vo_it = pg->ver_override.find(op_data->oid);
op_data->target_ver = vo_it != pg->ver_override.end() ? vo_it->second : UINT64_MAX;
// PG may have degraded or misplaced objects
op_data->prev_set = get_object_osd_set(*pg, op_data->oid, &op_data->object_state);
}
else
{
op_data->target_ver = UINT64_MAX;
op_data->prev_set = &this->osd_num;
}
if (!pg || pg->state == PG_ACTIVE || pg->scheme == POOL_SCHEME_REPLICATED)
auto vo_it = pg.ver_override.find(op_data->oid);
op_data->target_ver = vo_it != pg.ver_override.end() ? vo_it->second : UINT64_MAX;
// PG may have degraded or misplaced objects
op_data->prev_set = get_object_osd_set(pg, op_data->oid, &op_data->object_state);
if (pg.state == PG_ACTIVE || pg.scheme == POOL_SCHEME_REPLICATED)
{
// Fast happy-path
if (pg && pg->scheme == POOL_SCHEME_REPLICATED &&
if (pg.scheme == POOL_SCHEME_REPLICATED &&
op_data->object_state && (op_data->object_state->state & OBJ_INCOMPLETE))
{
finish_op(cur_op, -EIO);
return;
}
cur_op->buf = alloc_read_buffer(op_data->stripes, pg ? pg->pg_data_size : 1, 0);
cur_op->buf = alloc_read_buffer(op_data->stripes, pg.pg_data_size, 0);
submit_primary_subops(SUBMIT_RMW_READ, op_data->target_ver, op_data->prev_set, cur_op);
op_data->st = 1;
}
else
{
if (extend_missing_stripes(op_data->stripes, op_data->prev_set, pg->pg_data_size, pg->pg_size) < 0)
if (extend_missing_stripes(op_data->stripes, op_data->prev_set, pg.pg_data_size, pg.pg_size) < 0)
{
finish_op(cur_op, -EIO);
return;
}
// Submit reads
op_data->degraded = 1;
cur_op->buf = alloc_read_buffer(op_data->stripes, pg->pg_size, 0);
cur_op->buf = alloc_read_buffer(op_data->stripes, pg.pg_size, 0);
submit_primary_subops(SUBMIT_RMW_READ, op_data->target_ver, op_data->prev_set, cur_op);
op_data->st = 1;
}
@@ -285,32 +261,32 @@ resume_1:
resume_2:
if (op_data->errors > 0)
{
if (pg && (op_data->errcode == -EIO || op_data->errcode == -EDOM))
if (op_data->errcode == -EIO || op_data->errcode == -EDOM)
{
// I/O or checksum error
// FIXME: ref = true ideally... because new_state != state is not necessarily true if it's freed and recreated
op_data->object_state = mark_object_corrupted(*pg, op_data->oid, op_data->object_state, op_data->stripes, false);
op_data->object_state = mark_object_corrupted(*op_data->pg, op_data->oid, op_data->object_state, op_data->stripes, false);
goto resume_0;
}
finish_op(cur_op, op_data->errcode);
return;
}
cur_op->reply.rw.version = op_data->fact_ver;
cur_op->reply.rw.bitmap_len = (pg ? pg->pg_data_size : 1) * clean_entry_bitmap_size;
cur_op->reply.rw.bitmap_len = op_data->pg->pg_data_size * clean_entry_bitmap_size;
if (op_data->degraded)
{
// Reconstruct missing stripes
osd_rmw_stripe_t *stripes = op_data->stripes;
if (pg->scheme == POOL_SCHEME_XOR)
if (op_data->pg->scheme == POOL_SCHEME_XOR)
{
reconstruct_stripes_xor(stripes, pg->pg_size, clean_entry_bitmap_size);
reconstruct_stripes_xor(stripes, op_data->pg->pg_size, clean_entry_bitmap_size);
}
else if (pg->scheme == POOL_SCHEME_EC)
else if (op_data->pg->scheme == POOL_SCHEME_EC)
{
reconstruct_stripes_ec(stripes, pg->pg_size, pg->pg_data_size, clean_entry_bitmap_size);
reconstruct_stripes_ec(stripes, op_data->pg->pg_size, op_data->pg->pg_data_size, clean_entry_bitmap_size);
}
cur_op->iov.push_back(op_data->stripes[0].bmp_buf, cur_op->reply.rw.bitmap_len);
for (int role = 0; role < pg->pg_size; role++)
for (int role = 0; role < op_data->pg->pg_size; role++)
{
if (stripes[role].req_end != 0)
{
@@ -656,7 +632,7 @@ void osd_t::remove_object_from_state(object_id & oid, pg_osd_set_state_t **objec
{
this->misplaced_objects--;
pg.misplaced_objects.erase(oid);
if (!pg.misplaced_objects.size() && !pg.copies_to_delete_after_sync.size())
if (!pg.misplaced_objects.size())
{
pg.state = pg.state & ~PG_HAS_MISPLACED;
changed = true;


@@ -7,7 +7,7 @@
void osd_t::continue_chained_read(osd_op_t *cur_op)
{
osd_primary_op_data_t *op_data = cur_op->op_data;
auto pg = op_data->pg;
auto & pg = *op_data->pg;
if (op_data->st == 1)
goto resume_1;
else if (op_data->st == 2)
@@ -17,7 +17,7 @@ void osd_t::continue_chained_read(osd_op_t *cur_op)
else if (op_data->st == 4)
goto resume_4;
cur_op->reply.rw.bitmap_len = 0;
for (int role = 0; role < (pg ? pg->pg_data_size : 1); role++)
for (int role = 0; role < pg.pg_data_size; role++)
{
op_data->stripes[role].read_start = op_data->stripes[role].req_start;
op_data->stripes[role].read_end = op_data->stripes[role].req_end;
@@ -40,10 +40,10 @@ resume_3:
resume_4:
if (op_data->errors > 0)
{
if (pg && (op_data->errcode == -EIO || op_data->errcode == -EDOM))
if (op_data->errcode == -EIO || op_data->errcode == -EDOM)
{
// Handle corrupted reads and retry...
check_corrupted_chained(*pg, cur_op);
check_corrupted_chained(pg, cur_op);
free(cur_op->buf);
cur_op->buf = NULL;
free(op_data->chain_reads);
@@ -63,30 +63,31 @@ resume_4:
finish_op(cur_op, cur_op->req.rw.len);
}
int osd_t::read_bitmaps(osd_op_t *cur_op, pg_t *pg, int base_state)
int osd_t::read_bitmaps(osd_op_t *cur_op, pg_t & pg, int base_state)
{
osd_primary_op_data_t *op_data = cur_op->op_data;
if (op_data->st == base_state)
goto resume_0;
else if (op_data->st == base_state+1)
goto resume_1;
if (!pg || pg->state == PG_ACTIVE && pg->scheme == POOL_SCHEME_REPLICATED)
if (pg.state == PG_ACTIVE && pg.scheme == POOL_SCHEME_REPLICATED)
{
// Happy path for clean replicated PGs (all bitmaps are available locally)
osd_primary_op_data_t *op_data = cur_op->op_data;
for (int chain_num = 0; chain_num < op_data->chain_size; chain_num++)
{
object_id cur_oid = { .inode = op_data->read_chain[chain_num], .stripe = op_data->oid.stripe };
auto vo_it = pg.ver_override.find(cur_oid);
auto read_version = (vo_it != pg.ver_override.end() ? vo_it->second : UINT64_MAX);
// Read bitmap synchronously from the local database
bs->read_bitmap(
cur_oid, UINT64_MAX, (uint8_t*)op_data->snapshot_bitmaps + chain_num*clean_entry_bitmap_size,
cur_oid, read_version, (uint8_t*)op_data->snapshot_bitmaps + chain_num*clean_entry_bitmap_size,
!chain_num ? &cur_op->reply.rw.version : NULL
);
}
}
else
{
if (submit_bitmap_subops(cur_op, *pg) < 0)
if (submit_bitmap_subops(cur_op, pg) < 0)
{
// Failure
finish_op(cur_op, -EIO);
@@ -100,32 +101,32 @@ resume_0:
return 1;
}
resume_1:
if (pg->scheme != POOL_SCHEME_REPLICATED)
if (pg.scheme != POOL_SCHEME_REPLICATED)
{
for (int chain_num = 0; chain_num < op_data->chain_size; chain_num++)
{
// Check if we need to reconstruct any bitmaps
for (int i = 0; i < pg->pg_size; i++)
for (int i = 0; i < pg.pg_size; i++)
{
if (op_data->missing_flags[chain_num*pg->pg_size + i])
if (op_data->missing_flags[chain_num*pg.pg_size + i])
{
osd_rmw_stripe_t local_stripes[pg->pg_size];
for (i = 0; i < pg->pg_size; i++)
osd_rmw_stripe_t local_stripes[pg.pg_size];
for (i = 0; i < pg.pg_size; i++)
{
local_stripes[i] = (osd_rmw_stripe_t){
.bmp_buf = (uint8_t*)op_data->snapshot_bitmaps + (chain_num*pg->pg_size + i)*clean_entry_bitmap_size,
.bmp_buf = (uint8_t*)op_data->snapshot_bitmaps + (chain_num*pg.pg_size + i)*clean_entry_bitmap_size,
.read_start = 1,
.read_end = 1,
.missing = op_data->missing_flags[chain_num*pg->pg_size + i] && true,
.missing = op_data->missing_flags[chain_num*pg.pg_size + i] && true,
};
}
if (pg->scheme == POOL_SCHEME_XOR)
if (pg.scheme == POOL_SCHEME_XOR)
{
reconstruct_stripes_xor(local_stripes, pg->pg_size, clean_entry_bitmap_size);
reconstruct_stripes_xor(local_stripes, pg.pg_size, clean_entry_bitmap_size);
}
else if (pg->scheme == POOL_SCHEME_EC)
else if (pg.scheme == POOL_SCHEME_EC)
{
reconstruct_stripes_ec(local_stripes, pg->pg_size, pg->pg_data_size, clean_entry_bitmap_size);
reconstruct_stripes_ec(local_stripes, pg.pg_size, pg.pg_data_size, clean_entry_bitmap_size);
}
break;
}
@@ -138,7 +139,6 @@ resume_1:
int osd_t::collect_bitmap_requests(osd_op_t *cur_op, pg_t & pg, std::vector<bitmap_request_t> & bitmap_requests)
{
assert(&pg);
osd_primary_op_data_t *op_data = cur_op->op_data;
for (int chain_num = 0; chain_num < op_data->chain_size; chain_num++)
{
@@ -216,7 +216,6 @@ int osd_t::collect_bitmap_requests(osd_op_t *cur_op, pg_t & pg, std::vector<bitm
int osd_t::submit_bitmap_subops(osd_op_t *cur_op, pg_t & pg)
{
assert(&pg);
osd_primary_op_data_t *op_data = cur_op->op_data;
std::vector<bitmap_request_t> *bitmap_requests = new std::vector<bitmap_request_t>();
if (collect_bitmap_requests(cur_op, pg, *bitmap_requests) < 0)
@@ -267,6 +266,7 @@ int osd_t::submit_bitmap_subops(osd_op_t *cur_op, pg_t & pg)
.sec_read_bmp = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = msgr.next_subop_id++,
.opcode = OSD_OP_SEC_READ_BMP,
},
.len = sizeof(obj_ver_id)*(i+1-prev),
@@ -383,12 +383,12 @@ std::vector<osd_chain_read_t> osd_t::collect_chained_read_requests(osd_op_t *cur
return chain_reads;
}
int osd_t::submit_chained_read_requests(pg_t *pg, osd_op_t *cur_op)
int osd_t::submit_chained_read_requests(pg_t & pg, osd_op_t *cur_op)
{
// Decide which parts of which objects we need to read based on bitmaps
osd_primary_op_data_t *op_data = cur_op->op_data;
auto chain_reads = collect_chained_read_requests(cur_op);
int stripe_count = (!pg || pg->scheme == POOL_SCHEME_REPLICATED ? 1 : pg->pg_size);
int stripe_count = (pg.scheme == POOL_SCHEME_REPLICATED ? 1 : pg.pg_size);
op_data->chain_read_count = chain_reads.size();
op_data->chain_reads = (osd_chain_read_t*)calloc_or_die(
1, sizeof(osd_chain_read_t) * chain_reads.size()
@@ -409,23 +409,23 @@ int osd_t::submit_chained_read_requests(pg_t *pg, osd_op_t *cur_op)
object_id cur_oid = { .inode = chain_reads[cri].inode, .stripe = op_data->oid.stripe };
// FIXME: maybe introduce split_read_stripes to shorten these lines and to remove read_start=req_start
osd_rmw_stripe_t *stripes = chain_stripes + chain_reads[cri].chain_pos*stripe_count;
split_stripes(pg ? pg->pg_data_size : 1, bs_block_size, chain_reads[cri].offset, chain_reads[cri].len, stripes);
if ((!pg || pg->scheme == POOL_SCHEME_REPLICATED) && !stripes[0].req_end)
split_stripes(pg.pg_data_size, bs_block_size, chain_reads[cri].offset, chain_reads[cri].len, stripes);
if (pg.scheme == POOL_SCHEME_REPLICATED && !stripes[0].req_end)
{
continue;
}
for (int role = 0; role < (pg ? pg->pg_data_size : 1); role++)
for (int role = 0; role < pg.pg_data_size; role++)
{
stripes[role].read_start = stripes[role].req_start;
stripes[role].read_end = stripes[role].req_end;
}
uint64_t *cur_set = pg ? pg->cur_set.data() : &this->osd_num;
if (pg && pg->state != PG_ACTIVE)
uint64_t *cur_set = pg.cur_set.data();
if (pg.state != PG_ACTIVE)
{
cur_set = get_object_osd_set(*pg, cur_oid, &op_data->chain_states[chain_reads[cri].chain_pos]);
if (pg->scheme != POOL_SCHEME_REPLICATED)
cur_set = get_object_osd_set(pg, cur_oid, &op_data->chain_states[chain_reads[cri].chain_pos]);
if (pg.scheme != POOL_SCHEME_REPLICATED)
{
if (extend_missing_stripes(stripes, cur_set, pg->pg_data_size, pg->pg_size) < 0)
if (extend_missing_stripes(stripes, cur_set, pg.pg_data_size, pg.pg_size) < 0)
{
free(op_data->chain_reads);
op_data->chain_reads = NULL;
@@ -446,14 +446,14 @@ int osd_t::submit_chained_read_requests(pg_t *pg, osd_op_t *cur_op)
}
}
}
if (!pg || pg->scheme == POOL_SCHEME_REPLICATED)
if (pg.scheme == POOL_SCHEME_REPLICATED)
{
n_subops++;
read_buffer_size += stripes[0].read_end - stripes[0].read_start;
}
else
{
for (int role = 0; role < pg->pg_size; role++)
for (int role = 0; role < pg.pg_size; role++)
{
if (stripes[role].read_end > 0 && cur_set[role] != 0)
n_subops++;
@@ -491,23 +491,19 @@ int osd_t::submit_chained_read_requests(pg_t *pg, osd_op_t *cur_op)
for (int cri = 0; cri < chain_reads.size(); cri++)
{
osd_rmw_stripe_t *stripes = chain_stripes + chain_reads[cri].chain_pos*stripe_count;
if ((!pg || pg->scheme == POOL_SCHEME_REPLICATED) && !stripes[0].req_end)
if (pg.scheme == POOL_SCHEME_REPLICATED && !stripes[0].req_end)
{
continue;
}
object_id cur_oid = { .inode = chain_reads[cri].inode, .stripe = op_data->oid.stripe };
uint64_t target_ver = UINT64_MAX;
if (pg)
{
auto vo_it = pg->ver_override.find(cur_oid);
target_ver = vo_it != pg->ver_override.end() ? vo_it->second : UINT64_MAX;
}
auto vo_it = pg.ver_override.find(cur_oid);
uint64_t target_ver = vo_it != pg.ver_override.end() ? vo_it->second : UINT64_MAX;
auto cur_state = op_data->chain_states[chain_reads[cri].chain_pos];
uint64_t *cur_set = (!pg ? &this->osd_num : (pg->state != PG_ACTIVE && cur_state ? cur_state->read_target.data() : pg->cur_set.data()));
uint64_t *cur_set = (pg.state != PG_ACTIVE && cur_state ? cur_state->read_target.data() : pg.cur_set.data());
int zero_read = -1;
if (!pg || pg->scheme == POOL_SCHEME_REPLICATED)
if (pg.scheme == POOL_SCHEME_REPLICATED)
{
for (int role = 0; role < (pg ? pg->pg_size : 1); role++)
for (int role = 0; role < pg.pg_size; role++)
if (cur_set[role] == this->osd_num || zero_read == -1)
zero_read = role;
}
@@ -519,7 +515,6 @@ int osd_t::submit_chained_read_requests(pg_t *pg, osd_op_t *cur_op)
void osd_t::check_corrupted_chained(pg_t & pg, osd_op_t *cur_op)
{
assert(&pg);
osd_primary_op_data_t *op_data = cur_op->op_data;
int stripe_count = (pg.scheme == POOL_SCHEME_REPLICATED ? 1 : pg.pg_size);
osd_rmw_stripe_t *chain_stripes = (osd_rmw_stripe_t*)(
@@ -545,32 +540,33 @@ void osd_t::check_corrupted_chained(pg_t & pg, osd_op_t *cur_op)
}
}
void osd_t::send_chained_read_results(pg_t *pg, osd_op_t *cur_op)
void osd_t::send_chained_read_results(pg_t & pg, osd_op_t *cur_op)
{
osd_primary_op_data_t *op_data = cur_op->op_data;
int stripe_count = (!pg || pg->scheme == POOL_SCHEME_REPLICATED ? 1 : pg->pg_size);
int stripe_count = (pg.scheme == POOL_SCHEME_REPLICATED ? 1 : pg.pg_size);
osd_rmw_stripe_t *chain_stripes = (osd_rmw_stripe_t*)(
(uint8_t*)op_data->chain_reads + sizeof(osd_chain_read_t) * op_data->chain_read_count
);
// Reconstruct parts if needed
if (op_data->degraded)
{
int stripe_count = (pg.scheme == POOL_SCHEME_REPLICATED ? 1 : pg.pg_size);
for (int cri = 0; cri < op_data->chain_read_count; cri++)
{
// Reconstruct missing stripes
osd_rmw_stripe_t *stripes = chain_stripes + op_data->chain_reads[cri].chain_pos*stripe_count;
if (pg->scheme == POOL_SCHEME_XOR)
if (pg.scheme == POOL_SCHEME_XOR)
{
reconstruct_stripes_xor(stripes, pg->pg_size, clean_entry_bitmap_size);
reconstruct_stripes_xor(stripes, pg.pg_size, clean_entry_bitmap_size);
}
else if (pg->scheme == POOL_SCHEME_EC)
else if (pg.scheme == POOL_SCHEME_EC)
{
reconstruct_stripes_ec(stripes, pg->pg_size, pg->pg_data_size, clean_entry_bitmap_size);
reconstruct_stripes_ec(stripes, pg.pg_size, pg.pg_data_size, clean_entry_bitmap_size);
}
}
}
// Send bitmap
cur_op->reply.rw.bitmap_len = (pg ? pg->pg_data_size : 1) * clean_entry_bitmap_size;
cur_op->reply.rw.bitmap_len = pg.pg_data_size * clean_entry_bitmap_size;
cur_op->iov.push_back(op_data->stripes[0].bmp_buf, cur_op->reply.rw.bitmap_len);
// And finally compose the result
uint64_t sent = 0;


@@ -67,20 +67,26 @@ void osd_t::finish_op(osd_op_t *cur_op, int retval)
if (cur_op->req.hdr.opcode == OSD_OP_DELETE)
{
if (cur_op->op_data)
{
inode_stats[cur_op->req.rw.inode].op_bytes[inode_st_op] += (cur_op->op_data->pg
? cur_op->op_data->pg->pg_data_size : 1) * bs_block_size;
}
inode_stats[cur_op->req.rw.inode].op_bytes[inode_st_op] += cur_op->op_data->pg->pg_data_size * bs_block_size;
}
else
inode_stats[cur_op->req.rw.inode].op_bytes[inode_st_op] += cur_op->req.rw.len;
}
if (cur_op->op_data)
{
if (cur_op->op_data->pg)
if (cur_op->op_data->pg_num > 0)
{
auto & pg = *cur_op->op_data->pg;
rm_inflight(pg);
pg.inflight--;
assert(pg.inflight >= 0);
if ((pg.state & PG_STOPPING) && pg.inflight == 0 && !pg.flush_batch)
{
finish_stop_pg(pg);
}
else if ((pg.state & PG_REPEERING) && pg.inflight == 0 && !pg.flush_batch)
{
start_pg_peering(pg);
}
}
assert(!cur_op->op_data->subops);
free(cur_op->op_data);
@@ -120,10 +126,10 @@ void osd_t::submit_primary_subops(int submit_type, uint64_t op_version, const ui
bool wr = submit_type == SUBMIT_WRITE;
osd_primary_op_data_t *op_data = cur_op->op_data;
osd_rmw_stripe_t *stripes = op_data->stripes;
bool rep = !op_data->pg || op_data->pg->scheme == POOL_SCHEME_REPLICATED;
bool rep = op_data->pg->scheme == POOL_SCHEME_REPLICATED;
// Allocate subops
int n_subops = 0, zero_read = -1;
for (int role = 0; role < (op_data->pg ? op_data->pg->pg_size : 1); role++)
for (int role = 0; role < op_data->pg->pg_size; role++)
{
if (osd_set[role] == this->osd_num || osd_set[role] != 0 && zero_read == -1)
zero_read = role;
@@ -146,11 +152,11 @@ int osd_t::submit_primary_subop_batch(int submit_type, inode_t inode, uint64_t o
int osd_t::submit_primary_subop_batch(int submit_type, inode_t inode, uint64_t op_version,
osd_rmw_stripe_t *stripes, const uint64_t* osd_set, osd_op_t *cur_op, int subop_idx, int zero_read)
{
bool rep = !cur_op->op_data->pg || cur_op->op_data->pg->scheme == POOL_SCHEME_REPLICATED;
bool rep = cur_op->op_data->pg->scheme == POOL_SCHEME_REPLICATED;
bool wr = submit_type == SUBMIT_WRITE;
osd_primary_op_data_t *op_data = cur_op->op_data;
int i = subop_idx;
for (int role = 0; role < (op_data->pg ? op_data->pg->pg_size : 1); role++)
for (int role = 0; role < op_data->pg->pg_size; role++)
{
// We always submit zero-length writes to all replicas, even if the stripe is not modified
if (!(wr || !rep && stripes[role].read_end != 0 || zero_read == role || submit_type == SUBMIT_SCRUB_READ))
@@ -227,6 +233,7 @@ void osd_t::submit_primary_subop(osd_op_t *cur_op, osd_op_t *subop,
subop->req.sec_rw = (osd_op_sec_rw_t){
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = msgr.next_subop_id++,
.opcode = (uint64_t)(wr ? (cur_op->op_data->pg->scheme == POOL_SCHEME_REPLICATED ? OSD_OP_SEC_WRITE_STABLE : OSD_OP_SEC_WRITE) : OSD_OP_SEC_READ),
},
.oid = {
@@ -428,14 +435,6 @@ void osd_t::handle_primary_subop(osd_op_t *subop, osd_op_t *cur_op)
retval, expected, peer_osd
);
}
else if (opcode == OSD_OP_SEC_DELETE)
{
printf(
"delete subop to %jx:%jx v%ju failed on osd %jd: retval = %d (expected %d)\n",
subop->req.sec_del.oid.inode, subop->req.sec_del.oid.stripe, subop->req.sec_del.version,
peer_osd, retval, expected
);
}
else
{
printf(
@@ -453,16 +452,15 @@ void osd_t::handle_primary_subop(osd_op_t *subop, osd_op_t *cur_op)
{
op_data->errcode = retval;
}
op_data->errors++;
if (subop->peer_fd >= 0 && retval != -EDOM && retval != -ERANGE &&
(retval != -ENOSPC || opcode != OSD_OP_SEC_WRITE && opcode != OSD_OP_SEC_WRITE_STABLE) &&
(retval != -EIO || opcode != OSD_OP_SEC_READ))
{
// Drop connection on unexpected errors
msgr.stop_client(subop->peer_fd);
op_data->drops++;
msgr.stop_client(subop->peer_fd);
}
// Increase op_data->errors after stop_client to prevent >= n_subops running twice
op_data->errors++;
}
else
{
@@ -595,6 +593,7 @@ void osd_t::submit_primary_del_batch(osd_op_t *cur_op, obj_ver_osd_t *chunks_to_
subops[i].req = (osd_any_op_t){ .sec_del = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = msgr.next_subop_id++,
.opcode = OSD_OP_SEC_DELETE,
},
.oid = chunk.oid,
@@ -654,6 +653,7 @@ int osd_t::submit_primary_sync_subops(osd_op_t *cur_op)
subops[i].req = (osd_any_op_t){ .sec_sync = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = msgr.next_subop_id++,
.opcode = OSD_OP_SEC_SYNC,
},
.flags = cur_op->peer_fd == SELF_FD && cur_op->req.hdr.opcode != OSD_OP_SCRUB ? OSD_OP_RECOVERY_RELATED : 0,
@@ -712,6 +712,7 @@ void osd_t::submit_primary_stab_subops(osd_op_t *cur_op)
subops[i].req = (osd_any_op_t){ .sec_stab = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = msgr.next_subop_id++,
.opcode = OSD_OP_SEC_STABILIZE,
},
.len = (uint64_t)(stab_osd.len * sizeof(obj_ver_id)),
@@ -805,6 +806,7 @@ void osd_t::submit_primary_rollback_subops(osd_op_t *cur_op, const uint64_t* osd
subop->req = (osd_any_op_t){ .sec_stab = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = msgr.next_subop_id++,
.opcode = OSD_OP_SEC_ROLLBACK,
},
.len = sizeof(obj_ver_id),


@@ -80,17 +80,15 @@ resume_2:
this->unstable_writes.clear();
}
{
op_data->dirty_pg_count = dirty_pgs.size();
op_data->dirty_osd_count = dirty_osds.size();
void *dirty_buf = malloc_or_die(
sizeof(pool_pg_num_t)*dirty_pgs.size() +
sizeof(uint64_t)*dirty_pgs.size() +
sizeof(osd_num_t)*dirty_osds.size() +
sizeof(obj_ver_osd_t)*this->copies_to_delete_after_sync_count
);
op_data->dirty_pgs = (pool_pg_num_t*)dirty_buf;
uint64_t *pg_del_counts = (uint64_t*)((uint8_t*)op_data->dirty_pgs + (sizeof(pool_pg_num_t))*op_data->dirty_pg_count);
op_data->dirty_osds = (osd_num_t*)((uint8_t*)pg_del_counts + 8*op_data->dirty_pg_count);
op_data->dirty_osds = (osd_num_t*)((uint8_t*)dirty_buf + sizeof(pool_pg_num_t)*dirty_pgs.size());
op_data->dirty_pg_count = dirty_pgs.size();
op_data->dirty_osd_count = dirty_osds.size();
if (this->copies_to_delete_after_sync_count)
{
op_data->copies_to_delete_count = 0;
@@ -105,16 +103,16 @@ resume_2:
sizeof(obj_ver_osd_t)*pg.copies_to_delete_after_sync.size()
);
op_data->copies_to_delete_count += pg.copies_to_delete_after_sync.size();
this->copies_to_delete_after_sync_count -= pg.copies_to_delete_after_sync.size();
pg.copies_to_delete_after_sync.clear();
}
assert(this->copies_to_delete_after_sync_count == 0);
}
int dpg = 0;
for (auto dirty_pg_num: dirty_pgs)
{
auto & pg = pgs.at(dirty_pg_num);
pg.inflight++;
op_data->dirty_pgs[dpg] = dirty_pg_num;
pg_del_counts[dpg] = pg.copies_to_delete_after_sync.size();
dpg++;
pgs.at(dirty_pg_num).inflight++;
op_data->dirty_pgs[dpg++] = dirty_pg_num;
}
dirty_pgs.clear();
dpg = 0;
@@ -185,6 +183,23 @@ resume_6:
}
}
}
if (op_data->copies_to_delete)
{
// Return 'copies to delete' back into respective PGs
for (int i = 0; i < op_data->copies_to_delete_count; i++)
{
auto & w = op_data->copies_to_delete[i];
auto & pg = pgs.at((pool_pg_num_t){
.pool_id = INODE_POOL(w.oid.inode),
.pg_num = map_to_pg(w.oid, st_cli.pool_config.at(INODE_POOL(w.oid.inode)).pg_stripe_size),
});
if (pg.state & PG_ACTIVE)
{
pg.copies_to_delete_after_sync.push_back(w);
copies_to_delete_after_sync_count++;
}
}
}
}
else if (op_data->copies_to_delete)
{
@@ -198,22 +213,6 @@ resume_8:
{
goto resume_6;
}
{
uint64_t *pg_del_counts = (uint64_t*)((uint8_t*)op_data->dirty_pgs + (sizeof(pool_pg_num_t))*op_data->dirty_pg_count);
for (int i = 0; i < op_data->dirty_pg_count; i++)
{
auto & pg = pgs.at(op_data->dirty_pgs[i]);
auto n = pg_del_counts[i];
assert(copies_to_delete_after_sync_count >= n);
copies_to_delete_after_sync_count -= n;
pg.copies_to_delete_after_sync.erase(pg.copies_to_delete_after_sync.begin(), pg.copies_to_delete_after_sync.begin()+n);
if (!pg.misplaced_objects.size() && !pg.copies_to_delete_after_sync.size() && (pg.state & PG_HAS_MISPLACED))
{
pg.state = pg.state & ~PG_HAS_MISPLACED;
report_pg_state(pg);
}
}
}
if (immediate_commit == IMMEDIATE_NONE)
{
// Mark OSDs as dirty because deletions have to be synced too!
@@ -227,7 +226,15 @@ resume_8:
for (int i = 0; i < op_data->dirty_pg_count; i++)
{
auto & pg = pgs.at(op_data->dirty_pgs[i]);
rm_inflight(pg);
pg.inflight--;
if ((pg.state & PG_STOPPING) && pg.inflight == 0 && !pg.flush_batch)
{
finish_stop_pg(pg);
}
else if ((pg.state & PG_REPEERING) && pg.inflight == 0 && !pg.flush_batch)
{
start_pg_peering(pg);
}
}
// FIXME: Free those in the destructor (not here)?
free(op_data->dirty_pgs);
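The hunk above packs three arrays into the single `dirty_buf` allocation: the dirty PG list, the per-PG "copies to delete" counts, and the dirty OSD list. A minimal sketch of that layout arithmetic, using simplified stand-in types (not the real Vitastor structs, and omitting the trailing `obj_ver_osd_t` region reserved for `copies_to_delete_after_sync_count`):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Simplified stand-ins for the real Vitastor types (layout illustration only)
struct pool_pg_num_t { uint32_t pool_id; uint32_t pg_num; };
typedef uint64_t osd_num_t;

// Byte offsets of the arrays packed into one malloc'ed buffer:
// [pool_pg_num_t x pg_count][uint64_t del-count x pg_count][osd_num_t x osd_count]
struct dirty_buf_layout_t
{
    size_t pg_del_counts_off;
    size_t dirty_osds_off;
    size_t total_size;
};

static dirty_buf_layout_t dirty_buf_layout(size_t pg_count, size_t osd_count)
{
    dirty_buf_layout_t l;
    l.pg_del_counts_off = sizeof(pool_pg_num_t) * pg_count;
    l.dirty_osds_off = l.pg_del_counts_off + sizeof(uint64_t) * pg_count;
    l.total_size = l.dirty_osds_off + sizeof(osd_num_t) * osd_count;
    return l;
}
```

Computing the offsets once, as above, keeps the pointer casts in the sync path consistent with the allocation size.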

View File

@@ -301,38 +301,6 @@ resume_12:
}
if (op_data->object_state)
{
// Any kind of a non-clean object can have extra chunks, because we don't record objects
// as degraded & misplaced or incomplete & misplaced at the same time. So try to remove extra chunks
if (immediate_commit != IMMEDIATE_ALL)
{
// We can't remove extra chunks yet if fsyncs are explicit, because
// new copies may not be committed to stable storage yet
// We can only remove extra chunks after a successful SYNC for this PG
for (auto & chunk: op_data->object_state->osd_set)
{
// Check is the same as in submit_primary_del_subops()
if (pg.scheme == POOL_SCHEME_REPLICATED
? !contains_osd(pg.cur_set.data(), pg.pg_size, chunk.osd_num)
: (chunk.osd_num != pg.cur_set[chunk.role]))
{
pg.copies_to_delete_after_sync.push_back((obj_ver_osd_t){
.osd_num = chunk.osd_num,
.oid = {
.inode = op_data->oid.inode,
.stripe = op_data->oid.stripe | (pg.scheme == POOL_SCHEME_REPLICATED ? 0 : chunk.role),
},
.version = op_data->fact_ver,
});
copies_to_delete_after_sync_count++;
}
}
if (pg.copies_to_delete_after_sync.size() && !(pg.state & PG_HAS_MISPLACED))
{
// PG can't be active+clean until extra copies aren't removed, so mark it as PG_HAS_MISPLACED
pg.state |= PG_HAS_MISPLACED;
//this->pg_state_dirty.insert({ .pool_id = pg.pool_id, .pg_num = pg.pg_num });
}
}
// We must forget the unclean state of the object before deleting it
// so the next reads don't accidentally read a deleted version
// And it should be done at the same time as the removal of the version override
@@ -341,7 +309,6 @@ resume_12:
}
resume_6:
resume_7:
op_data->n_subops = 0;
if (!remember_unstable_write(cur_op, pg, pg.cur_loc_set, 6))
{
return;
@@ -377,21 +344,48 @@ resume_7:
);
recovery_stat[recovery_type].usec += usec;
}
if (immediate_commit == IMMEDIATE_ALL)
// Any kind of a non-clean object can have extra chunks, because we don't record objects
// as degraded & misplaced or incomplete & misplaced at the same time. So try to remove extra chunks
if (immediate_commit != IMMEDIATE_ALL)
{
// We can't remove extra chunks yet if fsyncs are explicit, because
// new copies may not be committed to stable storage yet
// We can only remove extra chunks after a successful SYNC for this PG
for (auto & chunk: op_data->object_state->osd_set)
{
// Check is the same as in submit_primary_del_subops()
if (pg.scheme == POOL_SCHEME_REPLICATED
? !contains_osd(pg.cur_set.data(), pg.pg_size, chunk.osd_num)
: (chunk.osd_num != pg.cur_set[chunk.role]))
{
pg.copies_to_delete_after_sync.push_back((obj_ver_osd_t){
.osd_num = chunk.osd_num,
.oid = {
.inode = op_data->oid.inode,
.stripe = op_data->oid.stripe | (pg.scheme == POOL_SCHEME_REPLICATED ? 0 : chunk.role),
},
.version = op_data->fact_ver,
});
copies_to_delete_after_sync_count++;
}
}
deref_object_state(pg, &op_data->object_state, true);
}
else
{
submit_primary_del_subops(cur_op, pg.cur_set.data(), pg.pg_size, op_data->object_state->osd_set);
}
deref_object_state(pg, &op_data->object_state, true);
if (op_data->n_subops > 0)
{
resume_8:
op_data->st = 8;
return;
resume_9:
if (op_data->errors > 0)
deref_object_state(pg, &op_data->object_state, true);
if (op_data->n_subops > 0)
{
pg_cancel_write_queue(pg, cur_op, op_data->oid, op_data->errcode);
resume_8:
op_data->st = 8;
return;
resume_9:
if (op_data->errors > 0)
{
pg_cancel_write_queue(pg, cur_op, op_data->oid, op_data->errcode);
return;
}
}
}
}

View File

@@ -162,7 +162,6 @@ struct reed_sol_matrix_t
int refs = 0;
int *je_data;
uint8_t *isal_data;
int isal_item_size;
// 32 bytes = 256/8 = max pg_size/8
std::map<std::array<uint8_t, 32>, void*> subdata;
std::map<reed_sol_erased_t, void*> decodings;
@@ -182,42 +181,20 @@ void use_ec(int pg_size, int pg_minsize, bool use)
}
int *matrix = reed_sol_vandermonde_coding_matrix(pg_minsize, pg_size-pg_minsize, OSD_JERASURE_W);
uint8_t *isal_table = NULL;
int item_size = 8;
#ifdef WITH_ISAL
uint8_t *isal_matrix = (uint8_t*)malloc_or_die(pg_minsize*(pg_size-pg_minsize));
for (int i = 0; i < pg_minsize*(pg_size-pg_minsize); i++)
{
isal_matrix[i] = matrix[i];
}
isal_table = (uint8_t*)calloc_or_die(1, pg_minsize*(pg_size-pg_minsize)*32);
isal_table = (uint8_t*)malloc_or_die(pg_minsize*(pg_size-pg_minsize)*32);
ec_init_tables(pg_minsize, pg_size-pg_minsize, isal_matrix, isal_table);
free(isal_matrix);
for (int i = pg_minsize*(pg_size-pg_minsize)*8; i < pg_minsize*(pg_size-pg_minsize)*32; i++)
{
if (isal_table[i] != 0)
{
// Non-zero bytes past the 8-byte-per-item region mean this is an older
// (non-GF-NI) ISA-L build with 32-byte table items; GF-NI builds use 8 bytes
item_size = 32;
break;
}
}
// Sanity check: rows should never consist of all zeroes
uint8_t zero_row[pg_minsize*item_size];
memset(zero_row, 0, pg_minsize*item_size);
for (int i = 0; i < (pg_size-pg_minsize); i++)
{
if (memcmp(isal_table + i*pg_minsize*item_size, zero_row, pg_minsize*item_size) == 0)
{
fprintf(stderr, "BUG or ISA-L incompatibility: EC tables shouldn't have all-zero rows\n");
abort();
}
}
#endif
matrices[key] = (reed_sol_matrix_t){
.refs = 0,
.je_data = matrix,
.isal_data = isal_table,
.isal_item_size = item_size,
};
rs_it = matrices.find(key);
}
@@ -258,7 +235,7 @@ static reed_sol_matrix_t* get_ec_matrix(int pg_size, int pg_minsize)
// we don't need it. also it makes an extra allocation of int *erased on every call and doesn't cache
// the decoding matrix.
// all these flaws are fixed in this function:
static void* get_jerasure_decoding_matrix(osd_rmw_stripe_t *stripes, int pg_size, int pg_minsize, int *item_size)
static void* get_jerasure_decoding_matrix(osd_rmw_stripe_t *stripes, int pg_size, int pg_minsize)
{
int edd = 0;
int erased[pg_size];
@@ -315,7 +292,6 @@ static void* get_jerasure_decoding_matrix(osd_rmw_stripe_t *stripes, int pg_size
int *erased_copy = (int*)(rectable + 32*smrow*pg_minsize);
memcpy(erased_copy, erased, pg_size*sizeof(int));
matrix->decodings.emplace((reed_sol_erased_t){ .data = erased_copy, .size = pg_size }, rectable);
*item_size = matrix->isal_item_size;
return rectable;
#else
int *dm_ids = (int*)malloc_or_die(sizeof(int)*(pg_minsize + pg_minsize*pg_minsize + pg_size));
@@ -379,8 +355,7 @@ static void jerasure_matrix_encode_unaligned(int k, int m, int w, int *matrix, c
#ifdef WITH_ISAL
void reconstruct_stripes_ec(osd_rmw_stripe_t *stripes, int pg_size, int pg_minsize, uint32_t bitmap_size)
{
int item_size = 0;
uint8_t *dectable = (uint8_t*)get_jerasure_decoding_matrix(stripes, pg_size, pg_minsize, &item_size);
uint8_t *dectable = (uint8_t*)get_jerasure_decoding_matrix(stripes, pg_size, pg_minsize);
if (!dectable)
{
return;
@@ -403,7 +378,7 @@ void reconstruct_stripes_ec(osd_rmw_stripe_t *stripes, int pg_size, int pg_minsi
}
}
ec_encode_data(
read_end-read_start, pg_minsize, wanted, dectable + wanted_base*item_size*pg_minsize,
read_end-read_start, pg_minsize, wanted, dectable + wanted_base*32*pg_minsize,
data_ptrs, data_ptrs + pg_minsize
);
}
@@ -458,7 +433,7 @@ void reconstruct_stripes_ec(osd_rmw_stripe_t *stripes, int pg_size, int pg_minsi
#else
void reconstruct_stripes_ec(osd_rmw_stripe_t *stripes, int pg_size, int pg_minsize, uint32_t bitmap_size)
{
int *dm_ids = (int*)get_jerasure_decoding_matrix(stripes, pg_size, pg_minsize, NULL);
int *dm_ids = (int*)get_jerasure_decoding_matrix(stripes, pg_size, pg_minsize);
if (!dm_ids)
{
return;
@@ -1005,7 +980,7 @@ void calc_rmw_parity_ec(osd_rmw_stripe_t *stripes, int pg_size, int pg_minsize,
{
int item_size =
#ifdef WITH_ISAL
matrix->isal_item_size;
32;
#else
sizeof(int);
#endif

View File

@@ -65,6 +65,7 @@ void osd_t::scrub_list(pool_pg_num_t pg_id, osd_num_t role_osd, object_id min_oi
.sec_list = {
.header = {
.magic = SECONDARY_OSD_OP_MAGIC,
.id = msgr.next_subop_id++,
.opcode = OSD_OP_SEC_LIST,
},
.list_pg = pg_num,

View File

@@ -79,32 +79,6 @@ void osd_t::exec_secondary(osd_op_t *op)
}
}
bool osd_t::sec_check_pg_lock(osd_num_t primary_osd, const object_id &oid)
{
if (!enable_pg_locks)
{
return true;
}
pool_id_t pool_id = INODE_POOL(oid.inode);
auto pool_cfg_it = st_cli.pool_config.find(pool_id);
if (pool_cfg_it == st_cli.pool_config.end())
{
return false;
}
auto ppg = (pool_pg_num_t){ .pool_id = pool_id, .pg_num = map_to_pg(oid, pool_cfg_it->second.pg_stripe_size) };
auto pg_it = pgs.find(ppg);
if (pg_it != pgs.end() && pg_it->second.state != PG_OFFLINE)
{
return false;
}
if (pg_it != pgs.end() && pg_it->second.disable_pg_locks)
{
return true;
}
auto lock_it = pg_locks.find(ppg);
return lock_it != pg_locks.end() && lock_it->second.primary_osd == primary_osd;
}
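A condensed model of the decision order in `sec_check_pg_lock()` above (hypothetical stand-in types, keyed by a bare PG number instead of `pool_pg_num_t`): allow everything when locks are disabled, refuse ops for PGs that are active locally (this OSD is the primary itself), honour per-PG `disable_pg_locks`, and otherwise require that the requesting OSD holds the lock.

```cpp
#include <cstdint>
#include <map>

typedef uint64_t osd_num_t;

struct pg_lock_model_t
{
    bool enable_pg_locks = true;
    std::map<uint32_t, bool> local_active_pgs; // pg_num -> disable_pg_locks flag
    std::map<uint32_t, osd_num_t> pg_locks;    // pg_num -> current lock holder

    bool check(osd_num_t primary_osd, uint32_t pg_num)
    {
        if (!enable_pg_locks)
            return true;
        auto pg_it = local_active_pgs.find(pg_num);
        if (pg_it != local_active_pgs.end() && !pg_it->second)
            return false; // PG is active locally - this OSD is primary itself
        if (pg_it != local_active_pgs.end() && pg_it->second)
            return true;  // locks explicitly disabled for this PG
        auto lock_it = pg_locks.find(pg_num);
        return lock_it != pg_locks.end() && lock_it->second == primary_osd;
    }
};
```

Failing this check makes the secondary return `-EPIPE`, which forces the stale primary to reconnect and re-peer.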
void osd_t::exec_secondary_real(osd_op_t *cur_op)
{
if (cur_op->req.hdr.opcode == OSD_OP_SEC_LIST &&
@@ -115,15 +89,23 @@ void osd_t::exec_secondary_real(osd_op_t *cur_op)
}
if (cur_op->req.hdr.opcode == OSD_OP_SEC_READ_BMP)
{
exec_sec_read_bmp(cur_op);
int n = cur_op->req.sec_read_bmp.len / sizeof(obj_ver_id);
if (n > 0)
{
obj_ver_id *ov = (obj_ver_id*)cur_op->buf;
void *reply_buf = malloc_or_die(n * (8 + clean_entry_bitmap_size));
void *cur_buf = reply_buf;
for (int i = 0; i < n; i++)
{
bs->read_bitmap(ov[i].oid, ov[i].version, (uint8_t*)cur_buf + sizeof(uint64_t), (uint64_t*)cur_buf);
cur_buf = (uint8_t*)cur_buf + (8 + clean_entry_bitmap_size);
}
free(cur_op->buf);
cur_op->buf = reply_buf;
}
finish_op(cur_op, n * (8 + clean_entry_bitmap_size));
return;
}
else if (cur_op->req.hdr.opcode == OSD_OP_SEC_LOCK)
{
exec_sec_lock(cur_op);
return;
}
auto cl = msgr.clients.at(cur_op->peer_fd);
cur_op->bs_op = new blockstore_op_t();
cur_op->bs_op->callback = [this, cur_op](blockstore_op_t* bs_op) { secondary_op_callback(cur_op); };
cur_op->bs_op->opcode = (cur_op->req.hdr.opcode == OSD_OP_SEC_READ ? BS_OP_READ
@@ -139,13 +121,6 @@ void osd_t::exec_secondary_real(osd_op_t *cur_op)
cur_op->req.hdr.opcode == OSD_OP_SEC_WRITE ||
cur_op->req.hdr.opcode == OSD_OP_SEC_WRITE_STABLE)
{
if (!(cur_op->req.sec_rw.flags & OSD_OP_IGNORE_PG_LOCK) &&
!sec_check_pg_lock(cl->osd_num, cur_op->req.sec_rw.oid))
{
cur_op->bs_op->retval = -EPIPE;
secondary_op_callback(cur_op);
return;
}
if (cur_op->req.hdr.opcode == OSD_OP_SEC_READ)
{
// Allocate memory for the read operation
@@ -168,13 +143,6 @@ void osd_t::exec_secondary_real(osd_op_t *cur_op)
}
else if (cur_op->req.hdr.opcode == OSD_OP_SEC_DELETE)
{
if (!(cur_op->req.sec_del.flags & OSD_OP_IGNORE_PG_LOCK) &&
!sec_check_pg_lock(cl->osd_num, cur_op->req.sec_del.oid))
{
cur_op->bs_op->retval = -EPIPE;
secondary_op_callback(cur_op);
return;
}
cur_op->bs_op->oid = cur_op->req.sec_del.oid;
cur_op->bs_op->version = cur_op->req.sec_del.version;
#ifdef OSD_STUB
@@ -189,18 +157,6 @@ void osd_t::exec_secondary_real(osd_op_t *cur_op)
#ifdef OSD_STUB
cur_op->bs_op->retval = 0;
#endif
if (enable_pg_locks && !(cur_op->req.sec_stab.flags & OSD_OP_IGNORE_PG_LOCK))
{
for (int i = 0; i < cur_op->bs_op->len; i++)
{
if (!sec_check_pg_lock(cl->osd_num, ((obj_ver_id*)cur_op->buf)[i].oid))
{
cur_op->bs_op->retval = -EPIPE;
secondary_op_callback(cur_op);
return;
}
}
}
}
else if (cur_op->req.hdr.opcode == OSD_OP_SEC_LIST)
{
@@ -236,99 +192,12 @@ void osd_t::exec_secondary_real(osd_op_t *cur_op)
#endif
}
void osd_t::exec_sec_read_bmp(osd_op_t *cur_op)
{
auto cl = msgr.clients.at(cur_op->peer_fd);
int n = cur_op->req.sec_read_bmp.len / sizeof(obj_ver_id);
if (n > 0)
{
obj_ver_id *ov = (obj_ver_id*)cur_op->buf;
void *reply_buf = malloc_or_die(n * (8 + clean_entry_bitmap_size));
void *cur_buf = reply_buf;
for (int i = 0; i < n; i++)
{
if (!sec_check_pg_lock(cl->osd_num, ov[i].oid) &&
!(cur_op->req.sec_read_bmp.flags & OSD_OP_IGNORE_PG_LOCK))
{
free(reply_buf);
cur_op->bs_op->retval = -EPIPE;
secondary_op_callback(cur_op);
return;
}
bs->read_bitmap(ov[i].oid, ov[i].version, (uint8_t*)cur_buf + sizeof(uint64_t), (uint64_t*)cur_buf);
cur_buf = (uint8_t*)cur_buf + (8 + clean_entry_bitmap_size);
}
free(cur_op->buf);
cur_op->buf = reply_buf;
}
finish_op(cur_op, n * (8 + clean_entry_bitmap_size));
}
// Lock/Unlock PG
void osd_t::exec_sec_lock(osd_op_t *cur_op)
{
cur_op->reply.sec_lock.cur_primary = 0;
auto cl = msgr.clients.at(cur_op->peer_fd);
if (!cl->osd_num ||
(cur_op->req.sec_lock.flags != OSD_SEC_LOCK_PG &&
cur_op->req.sec_lock.flags != OSD_SEC_UNLOCK_PG) ||
cur_op->req.sec_lock.pool_id > ((uint64_t)1<<POOL_ID_BITS) ||
!cur_op->req.sec_lock.pg_num ||
cur_op->req.sec_lock.pg_num > UINT32_MAX)
{
finish_op(cur_op, -EINVAL);
return;
}
auto ppg = (pool_pg_num_t){ .pool_id = (pool_id_t)cur_op->req.sec_lock.pool_id, .pg_num = (pg_num_t)cur_op->req.sec_lock.pg_num };
auto pool_cfg_it = st_cli.pool_config.find(ppg.pool_id);
if (pool_cfg_it == st_cli.pool_config.end() ||
pool_cfg_it->second.real_pg_count < cur_op->req.sec_lock.pg_num)
{
finish_op(cur_op, -ENOENT);
return;
}
auto lock_it = pg_locks.find(ppg);
if (cur_op->req.sec_lock.flags == OSD_SEC_LOCK_PG)
{
if (lock_it != pg_locks.end() && lock_it->second.primary_osd != cl->osd_num)
{
cur_op->reply.sec_lock.cur_primary = lock_it->second.primary_osd;
finish_op(cur_op, -EBUSY);
return;
}
auto primary_pg_it = pgs.find(ppg);
if (primary_pg_it != pgs.end() && primary_pg_it->second.state != PG_OFFLINE)
{
cur_op->reply.sec_lock.cur_primary = this->osd_num;
finish_op(cur_op, -EBUSY);
return;
}
pg_locks[ppg] = (osd_pg_lock_t){
.primary_osd = cl->osd_num,
.state = cur_op->req.sec_lock.pg_state,
};
}
else if (lock_it != pg_locks.end() && lock_it->second.primary_osd == cl->osd_num)
{
pg_locks.erase(lock_it);
}
finish_op(cur_op, 0);
}
void osd_t::exec_show_config(osd_op_t *cur_op)
{
std::string json_err;
json11::Json req_json = cur_op->req.show_conf.json_len > 0
? json11::Json::parse(std::string((char *)cur_op->buf), json_err)
: json11::Json();
auto peer_osd_num = req_json["osd_num"].uint64_value();
auto cl = msgr.clients.at(cur_op->peer_fd);
cl->osd_num = peer_osd_num;
if (req_json["features"]["check_sequencing"].bool_value())
{
cl->check_sequencing = true;
cl->read_op_id = cur_op->req.hdr.id + 1;
}
// Expose sensitive configuration values so peers can check them
json11::Json::object wire_config = json11::Json::object {
{ "osd_num", osd_num },
@@ -341,7 +210,6 @@ void osd_t::exec_show_config(osd_op_t *cur_op)
{ "immediate_commit", (immediate_commit == IMMEDIATE_ALL ? "all" :
(immediate_commit == IMMEDIATE_SMALL ? "small" : "none")) },
{ "lease_timeout", etcd_report_interval+(st_cli.max_etcd_attempts*(2*st_cli.etcd_quick_timeout)+999)/1000 },
{ "features", json11::Json::object{ { "pg_locks", true } } },
};
#ifdef WITH_RDMA
if (msgr.is_rdma_enabled())
@@ -354,7 +222,7 @@ void osd_t::exec_show_config(osd_op_t *cur_op)
bool ok = msgr.connect_rdma(cur_op->peer_fd, req_json["connect_rdma"].string_value(), req_json["rdma_max_msg"].uint64_value());
if (ok)
{
auto rc = cl->rdma_conn;
auto rc = msgr.clients.at(cur_op->peer_fd)->rdma_conn;
wire_config["rdma_address"] = rc->addr.to_string();
wire_config["rdma_max_msg"] = rc->max_msg;
}
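The `SEC_READ_BMP` handler above sizes its reply as one 8-byte version plus one clean-entry bitmap per requested object. A small sketch of that arithmetic, with a stand-in `obj_ver_id` mirroring Vitastor's 24-byte (inode, stripe, version) layout:

```cpp
#include <cstddef>
#include <cstdint>

// Stand-in for Vitastor's obj_ver_id: object id (inode + stripe) + version
struct obj_ver_id { uint64_t inode, stripe, version; };

// Reply size for a SEC_READ_BMP request of request_len bytes:
// n entries in, n * (8-byte version + bitmap) bytes out
static size_t sec_read_bmp_reply_size(size_t request_len, size_t clean_entry_bitmap_size)
{
    size_t n = request_len / sizeof(obj_ver_id);
    return n * (8 + clean_entry_bitmap_size);
}
```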

View File

@@ -21,9 +21,7 @@ osd_messenger_t::~osd_messenger_t()
void osd_messenger_t::outbox_push(osd_op_t *cur_op)
{
auto cl = clients.at(cur_op->peer_fd);
cur_op->req.hdr.id = ++cl->send_op_id;
cl->sent_ops[cur_op->req.hdr.id] = cur_op;
clients[cur_op->peer_fd]->sent_ops[cur_op->req.hdr.id] = cur_op;
}
void osd_messenger_t::parse_config(const json11::Json & config)
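The last hunk stamps every outgoing op with `++cl->send_op_id`, which pairs with the `check_sequencing` / `read_op_id` handshake in `exec_show_config()` earlier in this diff. A minimal model of the two sides (hypothetical names, not the real messenger API):

```cpp
#include <cstdint>

// Sender side: each client connection carries a monotonically increasing id
struct seq_client_t { uint64_t send_op_id = 0; };

static uint64_t stamp_op(seq_client_t & cl)
{
    return ++cl.send_op_id; // mirrors cur_op->req.hdr.id = ++cl->send_op_id
}

// Receiver side with check_sequencing enabled: ids must arrive strictly in order
static bool accept_in_order(uint64_t & expected_read_op_id, uint64_t got_id)
{
    if (got_id != expected_read_op_id)
        return false; // out-of-order request - the peer connection is broken
    expected_read_op_id++;
    return true;
}
```

This is what lets a peer detect lost or reordered requests instead of silently misapplying them.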

Some files were not shown because too many files have changed in this diff.