Compare commits
2 Commits: b75e69761f...7492e0795d

| Author | SHA1 | Date |
|---|---|---|
| Vitaliy Filippov | 7492e0795d | |
| Vitaliy Filippov | 6febd1e2cc | |
```diff
@@ -1,25 +0,0 @@
----
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  namespace: vitastor-system
-  name: vitastor
-  annotations:
-    storageclass.kubernetes.io/is-default-class: "true"
-provisioner: csi.vitastor.io
-volumeBindingMode: Immediate
-parameters:
-  # CSI driver can create block-based volumes and VitastorFS-based volumes
-  # only VitastorFS-based volumes and raw block volumes (without FS) support ReadWriteMany mode
-  # set this parameter to VitastorFS metadata volume name to use VitastorFS
-  # if unset, block-based volumes will be created
-  vitastorfs: "testfs"
-  # for block-based storage classes, pool ID may be either a string (name) or a number (ID)
-  # for vitastorFS-based storage classes it must be a string - name of the default pool for FS data
-  poolId: "testpool"
-  # volume name prefix for block-based storage classes or NFS subdirectory (including /) for FS-based volumes
-  volumePrefix: "k8s/"
-  # you can choose other configuration file if you have it in the config map
-  # different etcd URLs and prefixes should also be put in the config
-  #configPath: "/etc/vitastor/vitastor.conf"
-allowVolumeExpansion: true
```
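For context, a PersistentVolumeClaim that consumes a storage class like the one deleted above would look roughly like this. This is a minimal sketch, not the repository's example-pvc.yaml: the claim name and size are invented, and only `storageClassName: vitastor` and the RWX capability of VitastorFS-backed classes come from the manifest above.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vitastor-pvc        # hypothetical name, not from the repo
spec:
  storageClassName: vitastor     # matches metadata.name of the storage class above
  accessModes:
    - ReadWriteMany              # allowed here because the class sets vitastorfs: "testfs"
  resources:
    requests:
      storage: 10Gi              # illustrative size
```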
```diff
@@ -6,18 +6,9 @@
 
 # Kubernetes CSI
 
-Vitastor has a CSI plugin for Kubernetes which supports block-based and VitastorFS-based volumes.
-
-Block-based volumes may be formatted and mounted with a normal FS (ext4 or xfs). Such volumes
-only support RWO (ReadWriteOnce) mode.
-
-Block-based volumes may also be left without FS and attached into the container as a block
-device. Such volumes also support RWX (ReadWriteMany) mode.
-
-VitastorFS-based volumes use a clustered file system and support FS-based RWX (ReadWriteMany)
-mode. However, such volumes don't support quotas and snapshots.
-
-To deploy the CSI plugin, take manifests from [csi/deploy/](../../csi/deploy/) directory, put your
+Vitastor has a CSI plugin for Kubernetes which supports RWO (and block RWX) volumes.
+
+To deploy it, take manifests from [csi/deploy/](../../csi/deploy/) directory, put your
 Vitastor configuration in [001-csi-config-map.yaml](../../csi/deploy/001-csi-config-map.yaml),
 configure storage class in [009-storage-class.yaml](../../csi/deploy/009-storage-class.yaml)
 and apply all `NNN-*.yaml` manifests to your Kubernetes installation:
```
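The hunk ends just before the apply step itself; a hedged sketch of what that step typically looks like (the exact command shown in the doc may differ):

```bash
# apply every numbered manifest from csi/deploy/ in order
for i in ./???-*.yaml; do kubectl apply -f $i; done
```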
```diff
@@ -32,16 +23,16 @@ After that you'll be able to create PersistentVolumes.
 kernel modules enabled (vdpa, vduse, virtio-vdpa). If your distribution doesn't
 have them pre-built - build them yourself ([instructions](../usage/qemu.en.md#vduse)),
 I promise it's worth it :-). When VDUSE is unavailable, CSI driver uses [NBD](../usage/nbd.en.md)
-to map Vitastor devices. NBD is slower and, with kernels older than 5.19, unmountable
-if the cluster becomes unresponsible.
+to map Vitastor devices. NBD is slower and prone to timeout issues: if Vitastor
+cluster becomes unresponsible for more than [nbd_timeout](../config/client.en.md#nbd_timeout),
+the NBD device detaches and breaks pods using it.
 
 ## Features
 
 Vitastor CSI supports:
 - Kubernetes starting with 1.20 (or 1.17 for older vitastor-csi <= 1.1.0)
-- Block-based FS-formatted RWO (ReadWriteOnce) volumes. Example: [PVC](../../csi/deploy/example-pvc.yaml), [pod](../../csi/deploy/example-test-pod.yaml)
+- Filesystem RWO (ReadWriteOnce) volumes. Example: [PVC](../../csi/deploy/example-pvc.yaml), [pod](../../csi/deploy/example-test-pod.yaml)
 - Raw block RWX (ReadWriteMany) volumes. Example: [PVC](../../csi/deploy/example-pvc-block.yaml), [pod](../../csi/deploy/example-test-pod-block.yaml)
-- VitastorFS-based volumes RWX (ReadWriteMany) volumes. Example: [storage class](../../csi/deploy/example-storage-class-fs.yaml)
 - Volume expansion
 - Volume snapshots. Example: [snapshot class](../../csi/deploy/example-snapshot-class.yaml), [snapshot](../../csi/deploy/example-snapshot.yaml), [clone](../../csi/deploy/example-snapshot-clone.yaml)
 - [VDUSE](../usage/qemu.en.md#vduse) (preferred) and [NBD](../usage/nbd.en.md) device mapping methods
```
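The "raw block RWX" item in the feature list above corresponds to a PVC with `volumeMode: Block`; a minimal sketch (names and size are invented, this is not the repository's example-pvc-block.yaml):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vitastor-block-pvc   # hypothetical name
spec:
  storageClassName: vitastor
  volumeMode: Block               # raw block device, no filesystem
  accessModes:
    - ReadWriteMany               # RWX is supported for raw block volumes per the list above
  resources:
    requests:
      storage: 10Gi               # illustrative size
```

A pod would then attach such a volume via `volumeDevices` (with a `devicePath`) instead of `volumeMounts`.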
```diff
@@ -6,17 +6,7 @@
 
 # Kubernetes CSI
 
-Vitastor has a CSI plugin for Kubernetes which supports block-based volumes and volumes based on
-the clustered FS VitastorFS.
-
-Block-based volumes may be formatted and mounted with a standard FS (ext4 or xfs).
-Such volumes only support the RWO (ReadWriteOnce, simultaneous access from one node) mode.
-
-Block-based volumes may also be left unformatted and attached into a container as a block device.
-In that case they can be attached in RWX (ReadWriteMany, simultaneous access from multiple nodes) mode.
-
-VitastorFS-based volumes use a clustered FS and therefore also support the RWX
-(ReadWriteMany) mode. However, such volumes don't support size limits and snapshots.
-
+Vitastor has a CSI plugin for Kubernetes which supports RWO, as well as block RWX, volumes.
+
 To deploy, take the manifests from the [csi/deploy/](../../csi/deploy/) directory, put
 your Vitastor connection configuration in [csi/deploy/001-csi-config-map.yaml](../../csi/deploy/001-csi-config-map.yaml),
```
```diff
@@ -43,7 +33,6 @@ The Vitastor CSI plugin supports:
 - Kubernetes versions starting with 1.20 (or 1.17 for older vitastor-csi <= 1.1.0)
 - Filesystem RWO (ReadWriteOnce) volumes. Example: [PVC](../../csi/deploy/example-pvc.yaml), [pod](../../csi/deploy/example-test-pod.yaml)
 - Raw block RWX (ReadWriteMany) volumes. Example: [PVC](../../csi/deploy/example-pvc-block.yaml), [pod](../../csi/deploy/example-test-pod-block.yaml)
-- VitastorFS-based RWX (ReadWriteMany) volumes. Example: [storage class](../../csi/deploy/example-storage-class-fs.yaml)
 - Volume expansion
 - Volume snapshots. Example: [snapshot class](../../csi/deploy/example-snapshot-class.yaml), [snapshot](../../csi/deploy/example-snapshot.yaml), [snapshot clone](../../csi/deploy/example-snapshot-clone.yaml)
 - [VDUSE](../usage/qemu.ru.md#vduse) (preferred) and [NBD](../usage/nbd.ru.md) device attachment methods
```
```diff
@@ -65,9 +65,8 @@ All other client-side components are based on the client library:
 (at least by now). NBD is an older, non-recommended way to attach disks — you should use
 VDUSE whenever you can.
 - **[CSI driver](../installation/kubernetes.en.md)** — driver for attaching Vitastor images
-  and VitastorFS subdirectories as Kubernetes persistent volumes. Block-based CSI uses
-  VDUSE (when available) or NBD — images are attached as kernel block devices and mounted
-  into containers. FS-based CSI uses **[vitastor-nfs](../usage/nfs.en.md)**.
+  as Kubernetes persistent volumes. Works through VDUSE (when available) or NBD — images are
+  attached as kernel block devices and mounted into containers.
 - **Drivers for Proxmox, OpenStack and so on** — pluggable modules for corresponding systems,
   allowing to use Vitastor as storage in them.
 - **[vitastor-nfs](../usage/nfs.en.md)** — NFS 3.0 server allowing export of two file system variants:
```
```diff
@@ -65,9 +65,8 @@
 in Vitastor (at least for now). NBD is an older, non-recommended way to attach
 disks — you should use VDUSE whenever possible.
 - **[CSI driver](../installation/kubernetes.ru.md)** — driver for attaching Vitastor images
-  and VitastorFS subdirectories as Kubernetes persistent volumes (PV). Block-based CSI works through
-  VDUSE (when possible) or through NBD — images are mapped as block devices and mounted
-  into containers. FS-based CSI uses **[vitastor-nfs](../usage/nfs.ru.md)**.
+  as Kubernetes persistent volumes (PV). Works through VDUSE (when available) or through
+  NBD — images are mapped as block devices and mounted into containers.
 - **Drivers for Proxmox, OpenStack, etc.** — pluggable modules for the corresponding systems,
   allowing Vitastor to be used as storage in them.
 - **[vitastor-nfs](../usage/nfs.ru.md)** — NFS 3.0 server providing two file system variants:
```
```diff
@@ -36,7 +36,6 @@
 - [Clustered file system](../usage/nfs.en.md#vitastorfs)
 - [Experimental internal etcd replacement - antietcd](../config/monitor.en.md#use_antietcd)
 - [Built-in Prometheus metric exporter](../config/monitor.en.md#enable_prometheus)
-- [NFS RDMA support](../usage/nfs.en.md#rdma) (probably also usable for GPUDirect)
 
 ## Plugins and tools
 
```
```diff
@@ -38,7 +38,6 @@
 - [Clustered file system](../usage/nfs.ru.md#vitastorfs)
 - [Experimental internal etcd replacement - antietcd](../config/monitor.ru.md#use_antietcd)
 - [Built-in Prometheus metric exporter](../config/monitor.ru.md#enable_prometheus)
-- [NFS RDMA support](../usage/nfs.ru.md#rdma) (probably also suitable for GPUDirect)
 
 ## Drivers and tools
 
```
````diff
@@ -96,7 +96,7 @@ Example output (JSON format):
 vitastor-nbd netlink-map [/dev/nbdN] (--image <image> | --pool <pool> --inode <inode> --size <size in bytes>)
 ```
 
-On recent kernel versions it's also possible to map NBD devices using netlink interface.
+On recent kernel versions it's also possinle to map NBD devices using netlink interface.
 
 This is an experimental feature because it doesn't solve all issues of NBD. Differences from regular ioctl-based 'map':
 
````
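For reference, the synopsis kept as context in this hunk maps an image through the netlink NBD interface; a hedged usage sketch with an invented image name:

```bash
# experimental netlink-based mapping; the kernel picks a free /dev/nbdN
vitastor-nbd netlink-map --image testimg

# regular ioctl-based mapping referenced in the hunk text
vitastor-nbd map --image testimg
```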