Compare commits


3 Commits

Author           | SHA1       | Message                       | Date
Vitaliy Filippov | b75e69761f | Add pve-qemu 9.1 patch        | 2024-12-19 13:06:47 +03:00
Vitaliy Filippov | 75808c4149 | Document NFS-RDMA             | 2024-12-19 13:06:47 +03:00
Vitaliy Filippov | 9be3d27dc9 | Document VitastorFS-based CSI | 2024-12-19 13:06:47 +03:00
8 changed files with 61 additions and 12 deletions

csi/deploy/example-storage-class-fs.yaml (new file)

@@ -0,0 +1,25 @@
+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  namespace: vitastor-system
+  name: vitastor
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: csi.vitastor.io
+volumeBindingMode: Immediate
+parameters:
+  # CSI driver can create block-based volumes and VitastorFS-based volumes
+  # only VitastorFS-based volumes and raw block volumes (without FS) support ReadWriteMany mode
+  # set this parameter to the VitastorFS metadata volume name to use VitastorFS
+  # if unset, block-based volumes will be created
+  vitastorfs: "testfs"
+  # for block-based storage classes, pool ID may be either a string (name) or a number (ID)
+  # for VitastorFS-based storage classes it must be a string - the name of the default pool for FS data
+  poolId: "testpool"
+  # volume name prefix for block-based storage classes, or the NFS subdirectory (including /) for FS-based volumes
+  volumePrefix: "k8s/"
+  # you can choose another configuration file if you have it in the config map
+  # different etcd URLs and prefixes should also be put in the config
+  #configPath: "/etc/vitastor/vitastor.conf"
+allowVolumeExpansion: true
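
For illustration, a PVC requesting a VitastorFS-backed RWX volume from this storage class could look like the following sketch. The storage class name `vitastor` comes from the manifest above; the PVC name and size are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-fs-pvc           # hypothetical name
spec:
  storageClassName: vitastor  # the FS-based storage class defined above
  accessModes:
    - ReadWriteMany           # supported because the volume is backed by VitastorFS
  resources:
    requests:
      storage: 10Gi           # example size
```

Per the `volumePrefix` comment above, such a volume would live under the `k8s/` subdirectory of the `testfs` filesystem.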

docs/installation/kubernetes.en.md

@@ -6,9 +6,18 @@
 # Kubernetes CSI
-Vitastor has a CSI plugin for Kubernetes which supports RWO (and block RWX) volumes.
+Vitastor has a CSI plugin for Kubernetes which supports block-based and VitastorFS-based volumes.
-To deploy it, take manifests from [csi/deploy/](../../csi/deploy/) directory, put your
+Block-based volumes may be formatted and mounted with a normal FS (ext4 or xfs). Such volumes
+only support RWO (ReadWriteOnce) mode.
+Block-based volumes may also be left without a FS and attached to the container as a block
+device. Such volumes also support RWX (ReadWriteMany) mode.
+VitastorFS-based volumes use a clustered file system and support FS-based RWX (ReadWriteMany)
+mode. However, such volumes don't support quotas or snapshots.
+To deploy the CSI plugin, take the manifests from the [csi/deploy/](../../csi/deploy/) directory, put your
 Vitastor configuration in [001-csi-config-map.yaml](../../csi/deploy/001-csi-config-map.yaml),
 configure the storage class in [009-storage-class.yaml](../../csi/deploy/009-storage-class.yaml)
 and apply all `NNN-*.yaml` manifests to your Kubernetes installation:
@@ -23,16 +32,16 @@ After that you'll be able to create PersistentVolumes.
 kernel modules enabled (vdpa, vduse, virtio-vdpa). If your distribution doesn't
 have them pre-built - build them yourself ([instructions](../usage/qemu.en.md#vduse)),
 I promise it's worth it :-). When VDUSE is unavailable, the CSI driver uses [NBD](../usage/nbd.en.md)
-to map Vitastor devices. NBD is slower and prone to timeout issues: if the Vitastor
-cluster becomes unresponsive for more than [nbd_timeout](../config/client.en.md#nbd_timeout),
-the NBD device detaches and breaks pods using it.
+to map Vitastor devices. NBD is slower and, with kernels older than 5.19, unmountable
+if the cluster becomes unresponsive.
 ## Features
 Vitastor CSI supports:
 - Kubernetes starting with 1.20 (or 1.17 for older vitastor-csi <= 1.1.0)
-- Filesystem RWO (ReadWriteOnce) volumes. Example: [PVC](../../csi/deploy/example-pvc.yaml), [pod](../../csi/deploy/example-test-pod.yaml)
+- Block-based FS-formatted RWO (ReadWriteOnce) volumes. Example: [PVC](../../csi/deploy/example-pvc.yaml), [pod](../../csi/deploy/example-test-pod.yaml)
 - Raw block RWX (ReadWriteMany) volumes. Example: [PVC](../../csi/deploy/example-pvc-block.yaml), [pod](../../csi/deploy/example-test-pod-block.yaml)
+- VitastorFS-based RWX (ReadWriteMany) volumes. Example: [storage class](../../csi/deploy/example-storage-class-fs.yaml)
 - Volume expansion
 - Volume snapshots. Example: [snapshot class](../../csi/deploy/example-snapshot-class.yaml), [snapshot](../../csi/deploy/example-snapshot.yaml), [clone](../../csi/deploy/example-snapshot-clone.yaml)
 - [VDUSE](../usage/qemu.en.md#vduse) (preferred) and [NBD](../usage/nbd.en.md) device mapping methods
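
As a rough sketch of the raw-block RWX case from the feature list above (names and size are hypothetical; the linked example-pvc-block.yaml and example-test-pod-block.yaml are the canonical manifests): the PVC sets `volumeMode: Block`, and the pod attaches the device through `volumeDevices` rather than `volumeMounts`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-block-pvc          # hypothetical name
spec:
  storageClassName: vitastor    # assumes a block-based storage class (no vitastorfs parameter)
  volumeMode: Block             # raw block device, no filesystem
  accessModes:
    - ReadWriteMany             # raw block volumes support RWX
  resources:
    requests:
      storage: 10Gi             # example size
---
apiVersion: v1
kind: Pod
metadata:
  name: test-block-pod          # hypothetical name
spec:
  containers:
    - name: app
      image: debian:bookworm    # example image
      command: ["sleep", "infinity"]
      volumeDevices:            # block attachment instead of volumeMounts
        - name: data
          devicePath: /dev/xvda # the raw device appears at this path in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-block-pvc
```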

docs/installation/kubernetes.ru.md

@@ -6,7 +6,17 @@
 # Kubernetes CSI
-Vitastor has a CSI plugin for Kubernetes which supports RWO volumes, as well as block-based RWX volumes.
+Vitastor has a CSI plugin for Kubernetes which supports block-based volumes and volumes based on
+the clustered FS VitastorFS.
+Block-based volumes may be formatted and mounted with a standard FS (ext4 or xfs).
+Such volumes only support RWO mode (ReadWriteOnce, simultaneous access from a single node).
+Block-based volumes may also be left unformatted and attached to a container as a block device.
+In that case they can be attached in RWX mode (ReadWriteMany, simultaneous access from many nodes).
+Volumes based on VitastorFS use a clustered FS and thus also support RWX (ReadWriteMany)
+mode. However, such volumes don't support size limits (quotas) or snapshots.
 To install, take the manifests from the [csi/deploy/](../../csi/deploy/) directory, put
 your Vitastor connection configuration in [csi/deploy/001-csi-config-map.yaml](../../csi/deploy/001-csi-config-map.yaml),
@@ -33,6 +43,7 @@ The Vitastor CSI plugin supports:
 - Kubernetes versions starting with 1.20 (or 1.17 for older vitastor-csi <= 1.1.0)
 - Filesystem RWO (ReadWriteOnce) volumes. Example: [PVC](../../csi/deploy/example-pvc.yaml), [pod](../../csi/deploy/example-test-pod.yaml)
 - Raw block RWX (ReadWriteMany) volumes. Example: [PVC](../../csi/deploy/example-pvc-block.yaml), [pod](../../csi/deploy/example-test-pod-block.yaml)
+- VitastorFS-based RWX (ReadWriteMany) volumes. Example: [storage class](../../csi/deploy/example-storage-class-fs.yaml)
 - Volume expansion
 - Volume snapshots. Example: [snapshot class](../../csi/deploy/example-snapshot-class.yaml), [snapshot](../../csi/deploy/example-snapshot.yaml), [snapshot clone](../../csi/deploy/example-snapshot-clone.yaml)
 - [VDUSE](../usage/qemu.ru.md#vduse) (preferred) and [NBD](../usage/nbd.ru.md) device mapping methods

docs/intro/architecture.en.md

@@ -65,8 +65,9 @@ All other client-side components are based on the client library:
 (at least by now). NBD is an older, non-recommended way to attach disks — you should use
 VDUSE whenever you can.
 - **[CSI driver](../installation/kubernetes.en.md)** — driver for attaching Vitastor images
-as Kubernetes persistent volumes. Works through VDUSE (when available) or NBD — images are
-attached as kernel block devices and mounted into containers.
+and VitastorFS subdirectories as Kubernetes persistent volumes. Block-based CSI uses
+VDUSE (when available) or NBD — images are attached as kernel block devices and mounted
+into containers. FS-based CSI uses **[vitastor-nfs](../usage/nfs.en.md)**.
 - **Drivers for Proxmox, OpenStack and so on** — pluggable modules for the corresponding systems,
 allowing Vitastor to be used as storage in them.
 - **[vitastor-nfs](../usage/nfs.en.md)** — NFS 3.0 server allowing export of two file system variants:

docs/intro/architecture.ru.md

@@ -65,8 +65,9 @@
 for Vitastor (at least for now). NBD is an older, non-recommended way to attach
 disks — you should use VDUSE whenever you can.
 - **[CSI driver](../installation/kubernetes.ru.md)** — driver for attaching Vitastor images
-as Kubernetes persistent volumes (PV). Works through VDUSE (when available) or through
-NBD — images are mapped as block devices and mounted into containers.
+and VitastorFS subdirectories as Kubernetes persistent volumes (PV). Block-based CSI works through
+VDUSE (when possible) or through NBD — images are mapped as block devices and mounted
+into containers. FS-based CSI uses **[vitastor-nfs](../usage/nfs.ru.md)**.
 - **Drivers for Proxmox, OpenStack and so on** — pluggable modules for the corresponding systems,
 allowing Vitastor to be used as their storage.
 - **[vitastor-nfs](../usage/nfs.ru.md)** — NFS 3.0 server providing two file system variants:

docs/intro/features.en.md

@@ -36,6 +36,7 @@
 - [Clustered file system](../usage/nfs.en.md#vitastorfs)
 - [Experimental internal etcd replacement - antietcd](../config/monitor.en.md#use_antietcd)
 - [Built-in Prometheus metric exporter](../config/monitor.en.md#enable_prometheus)
+- [NFS RDMA support](../usage/nfs.en.md#rdma) (probably also usable for GPUDirect)
 ## Plugins and tools

docs/intro/features.ru.md

@@ -38,6 +38,7 @@
 - [Clustered file system](../usage/nfs.ru.md#vitastorfs)
 - [Experimental built-in etcd replacement - antietcd](../config/monitor.ru.md#use_antietcd)
 - [Built-in Prometheus metric exporter](../config/monitor.ru.md#enable_prometheus)
+- [NFS RDMA support](../usage/nfs.ru.md#rdma) (probably also usable for GPUDirect)
 ## Drivers and tools

docs/usage/nbd.en.md

@@ -96,7 +96,7 @@ Example output (JSON format):
 vitastor-nbd netlink-map [/dev/nbdN] (--image <image> | --pool <pool> --inode <inode> --size <size in bytes>)
 ```
-On recent kernel versions it's also possinle to map NBD devices using netlink interface.
+On recent kernel versions it's also possible to map NBD devices using the netlink interface.
 This is an experimental feature because it doesn't solve all issues of NBD. Differences from regular ioctl-based 'map':