Compare commits


No commits in common. "17caaa59af64cbc3da06d1e9ea4f3d02c9d8e0b6" and "97f49d7d9406c83d046f3f74b8c49c54ed1ef6a9" have entirely different histories.

82 changed files with 51 additions and 4781 deletions


@ -558,24 +558,6 @@ jobs:
echo "" echo ""
done done
test_dd:
runs-on: ubuntu-latest
needs: build
container: ${{env.TEST_IMAGE}}:${{github.sha}}
steps:
- name: Run test
id: test
timeout-minutes: 3
run: /root/vitastor/tests/test_dd.sh
- name: Print logs
if: always() && steps.test.outcome == 'failure'
run: |
for i in /root/vitastor/testdata/*.log /root/vitastor/testdata/*.txt; do
echo "-------- $i --------"
cat $i
echo ""
done
test_root_node: test_root_node:
runs-on: ubuntu-latest runs-on: ubuntu-latest
needs: build needs: build


@ -42,7 +42,6 @@ Vitastor supports a QEMU driver, NBD and
- Installation
  - [Packages](docs/installation/packages.ru.md)
  - [Proxmox](docs/installation/proxmox.ru.md)
  - [OpenNebula](docs/installation/opennebula.ru.md)
  - [OpenStack](docs/installation/openstack.ru.md)
  - [Kubernetes CSI](docs/installation/kubernetes.ru.md)
  - [Building from Source](docs/installation/source.ru.md)


@ -42,7 +42,6 @@ Read more details below in the documentation.
- Installation
  - [Packages](docs/installation/packages.en.md)
  - [Proxmox](docs/installation/proxmox.en.md)
  - [OpenNebula](docs/installation/opennebula.en.md)
  - [OpenStack](docs/installation/openstack.en.md)
  - [Kubernetes CSI](docs/installation/kubernetes.en.md)
  - [Building from Source](docs/installation/source.en.md)

debian/control

@ -53,9 +53,3 @@ Architecture: amd64
Depends: ${shlibs:Depends}, ${misc:Depends}, vitastor-client (= ${binary:Version})
Description: Vitastor Proxmox Virtual Environment storage plugin
 Vitastor storage plugin for Proxmox Virtual Environment.
Package: vitastor-opennebula
Architecture: amd64
Depends: ${shlibs:Depends}, ${misc:Depends}, vitastor-client, patch, python3, jq
Description: Vitastor OpenNebula storage plugin
Vitastor storage plugin for OpenNebula.


@ -1,3 +0,0 @@
opennebula/remotes var/lib/one/
opennebula/sudoers.d etc/
opennebula/install.sh var/lib/one/remotes/datastore/vitastor/


@ -1,7 +0,0 @@
#!/bin/sh
set -e
if [ "$1" = "configure" ]; then
/var/lib/one/remotes/datastore/vitastor/install.sh
fi


@ -1,4 +0,0 @@
interest /var/lib/one/remotes/datastore/downloader.sh
interest /etc/one/oned.conf
interest /etc/one/vmm_exec/vmm_execrc
interest /etc/apparmor.d/local/abstractions/libvirt-qemu


@ -1,184 +0,0 @@
[Documentation](../../README.md#documentation) → Installation → OpenNebula
-----
[Read in Russian](opennebula.ru.md)
## Automatic Installation
The OpenNebula plugin has been packaged as the `vitastor-opennebula` Debian and RPM package since Vitastor 1.9.0. So:
- Run `apt-get install vitastor-opennebula` or `yum install vitastor-opennebula` after installing OpenNebula on all nodes
- Check that it prints "OK, Vitastor OpenNebula patches successfully applied" or "OK, Vitastor OpenNebula patches are already applied"
- If it does not, refer to [Manual Installation](#manual-installation) and apply configuration file changes manually
- Make sure that Vitastor patched versions of QEMU and libvirt are installed
(`dpkg -l qemu-system-x86`, `dpkg -l | grep libvirt`, `rpm -qa | grep qemu`, `rpm -qa | grep libvirt-libs` should show "vitastor" in version names)
- [Block VM access to Vitastor cluster](#block-vm-access-to-vitastor-cluster)
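For reference, a minimal verification sketch for a Debian-based node might look like the following; the `install.sh` path comes from the package's postinst shown later in this diff, while the exact `grep` patterns are assumptions:
```
# Re-run the packaged patch script: it prints the "OK, ..." messages again
/var/lib/one/remotes/datastore/vitastor/install.sh
# Check that Vitastor-patched QEMU and libvirt builds are installed
dpkg -l qemu-system-x86 | grep -i vitastor
dpkg -l | grep libvirt | grep -i vitastor
```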
## Manual Installation
Install OpenNebula. Then, on each node:
- Copy [opennebula/remotes](../../opennebula/remotes) into `/var/lib/one` recursively: `cp -r opennebula/remotes /var/lib/one/`
- Copy [opennebula/sudoers.d](../../opennebula/sudoers.d) to `/etc`: `cp -r opennebula/sudoers.d /etc/`
- Apply [downloader-vitastor.sh.diff](../../opennebula/remotes/datastore/vitastor/downloader-vitastor.sh.diff) to `/var/lib/one/remotes/datastore/downloader.sh`:
`patch /var/lib/one/remotes/datastore/downloader.sh < opennebula/remotes/datastore/vitastor/downloader-vitastor.sh.diff` - or read the patch and apply the same change manually
- Add `kvm-vitastor` to `LIVE_DISK_SNAPSHOTS` in `/etc/one/vmm_exec/vmm_execrc`
- If on Debian or Ubuntu (and AppArmor is used), add Vitastor config file path(s) to `/etc/apparmor.d/local/abstractions/libvirt-qemu`: for example,
`echo ' "/etc/vitastor/vitastor.conf" r,' >> /etc/apparmor.d/local/abstractions/libvirt-qemu`
- Apply changes to `/etc/one/oned.conf`
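Assuming the Vitastor source tree is checked out in the current directory, the file-copying part of the steps above could be scripted roughly like this (a sketch, not an official installer):
```
# Run from the root of the Vitastor source tree, on each node
cp -r opennebula/remotes /var/lib/one/
cp -r opennebula/sudoers.d /etc/
patch /var/lib/one/remotes/datastore/downloader.sh \
    < opennebula/remotes/datastore/vitastor/downloader-vitastor.sh.diff
# AppArmor is only relevant on Debian/Ubuntu
echo ' "/etc/vitastor/vitastor.conf" r,' >> /etc/apparmor.d/local/abstractions/libvirt-qemu
# LIVE_DISK_SNAPSHOTS in /etc/one/vmm_exec/vmm_execrc and /etc/one/oned.conf still need manual edits
```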
### oned.conf changes
1. Add deploy script override in kvm VM_MAD: add `-l deploy.vitastor` to ARGUMENTS.
```diff
VM_MAD = [
NAME = "kvm",
SUNSTONE_NAME = "KVM",
EXECUTABLE = "one_vmm_exec",
- ARGUMENTS = "-t 15 -r 0 kvm -p",
+ ARGUMENTS = "-t 15 -r 0 kvm -p -l deploy=deploy.vitastor",
DEFAULT = "vmm_exec/vmm_exec_kvm.conf",
TYPE = "kvm",
KEEP_SNAPSHOTS = "yes",
LIVE_RESIZE = "yes",
SUPPORT_SHAREABLE = "yes",
IMPORTED_VMS_ACTIONS = "terminate, terminate-hard, hold, release, suspend,
resume, delete, reboot, reboot-hard, resched, unresched, disk-attach,
disk-detach, nic-attach, nic-detach, snapshot-create, snapshot-delete,
resize, updateconf, update"
]
```
Optional: if you also want to save VM RAM checkpoints to Vitastor, use
`-l deploy=deploy.vitastor,save=save.vitastor,restore=restore.vitastor`
instead of just `-l deploy=deploy.vitastor`.
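With the optional save/restore handlers enabled, the resulting line would read:
```
ARGUMENTS = "-t 15 -r 0 kvm -p -l deploy=deploy.vitastor,save=save.vitastor,restore=restore.vitastor",
```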
2. Add `vitastor` to TM_MAD.ARGUMENTS and DATASTORE_MAD.ARGUMENTS:
```diff
TM_MAD = [
EXECUTABLE = "one_tm",
- ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt"
+ ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,vitastor,dev,vcenter,iscsi_libvirt"
]
DATASTORE_MAD = [
EXECUTABLE = "one_datastore",
- ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
+ ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,vitastor,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,vitastor,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
]
```
3. Add INHERIT_DATASTORE_ATTR for two Vitastor attributes:
```
INHERIT_DATASTORE_ATTR = "VITASTOR_CONF"
INHERIT_DATASTORE_ATTR = "IMAGE_PREFIX"
```
4. Add TM_MAD_CONF and DS_MAD_CONF for Vitastor:
```
TM_MAD_CONF = [
NAME = "vitastor", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "YES",
DS_MIGRATE = "NO", DRIVER = "raw", ALLOW_ORPHANS="format",
TM_MAD_SYSTEM = "ssh,shared", LN_TARGET_SSH = "SYSTEM", CLONE_TARGET_SSH = "SYSTEM",
DISK_TYPE_SSH = "FILE", LN_TARGET_SHARED = "NONE",
CLONE_TARGET_SHARED = "SELF", DISK_TYPE_SHARED = "FILE"
]
DS_MAD_CONF = [
NAME = "vitastor",
REQUIRED_ATTRS = "DISK_TYPE,BRIDGE_LIST",
PERSISTENT_ONLY = "NO",
MARKETPLACE_ACTIONS = "export"
]
```
## Create Datastores
Example Image and System Datastore definitions:
[opennebula/vitastor-imageds.conf](../../opennebula/vitastor-imageds.conf) and
[opennebula/vitastor-systemds.conf](../../opennebula/vitastor-systemds.conf).
Change the parameters as you need:
- POOL_NAME is the name of the Vitastor pool to store images in.
- IMAGE_PREFIX is a string prepended to all Vitastor image names.
- BRIDGE_LIST is a list of hosts with access to the Vitastor cluster, mostly used for image (not system) datastore operations.
- VITASTOR_CONF is the path to the cluster configuration. Note that it should also be added to `/etc/apparmor.d/local/abstractions/libvirt-qemu` if you use AppArmor.
- STAGING_DIR is a temporary directory used when importing external images. It should have enough free space to hold downloaded images.
Then create datastores using `onedatastore create vitastor-imageds.conf` and `onedatastore create vitastor-systemds.conf` (or use UI).
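The referenced .conf files are not included in this diff; purely as an illustration, an Image datastore template passed to `onedatastore create` could look roughly like this (every value below is a placeholder - check `opennebula/vitastor-imageds.conf` in the repository for the actual example):
```
NAME          = "vitastor-images"
TYPE          = "IMAGE_DS"
DS_MAD        = "vitastor"
TM_MAD        = "vitastor"
DISK_TYPE     = "BLOCK"
POOL_NAME     = "image-pool"
IMAGE_PREFIX  = "one"
BRIDGE_LIST   = "hv1 hv2 hv3"
VITASTOR_CONF = "/etc/vitastor/vitastor.conf"
STAGING_DIR   = "/var/tmp"
```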
## Block VM access to Vitastor cluster
Vitastor doesn't support any authentication yet, so you MUST block VM guest access to the Vitastor cluster at the network level.
If you use VLAN networking for VMs - make sure you use different VLANs for VMs and hypervisor/storage network and
block access between them using your firewall/switch configuration.
If you use something more stupid like bridged networking, you probably have to use manual firewall/iptables setup
to only allow access to Vitastor from hypervisor IPs.
You also need to switch the network to "Bridged & Security Groups" and enable IP spoofing filters in OpenNebula.
The problem is that OpenNebula's IP spoofing filter doesn't affect the hypervisor's local interfaces, i.e. when
it's enabled a VM can't talk to other VMs or to the outside world using a spoofed IP, but it CAN still talk to the
hypervisor if it takes an IP from its subnet. To fix that you also need some additional iptables rules.
So the complete "stupid" bridged network filter setup could look like the following
(here `10.0.3.0/24` is the VM subnet and `10.0.2.0/24` is the hypervisor subnet):
```
# Allow incoming traffic from physical device
iptables -A INPUT -m physdev --physdev-in eth0 -j ACCEPT
# Drop incoming traffic on the VM bridge unless it comes from the VM subnet
iptables -A INPUT ! -s 10.0.3.0/24 -i onebr0 -j DROP
# Drop traffic from VMs to hypervisor/storage subnet
iptables -I FORWARD 1 -s 10.0.3.0/24 -d 10.0.2.0/24 -j DROP
```
## Testing
The OpenNebula plugin includes quite a few bash scripts, so here is a description of them to give an idea of what they actually do.
| Script | Action | How to Test |
| ----------------------- | ----------------------------------------- | ------------------------------------------------------------------------------------ |
| vmm/kvm/deploy.vitastor | Start a VM | Create and start a VM with Vitastor disk(s): persistent / non-persistent / volatile. |
| vmm/kvm/save.vitastor | Save VM memory checkpoint | Stop a VM using "Stop" command. |
| vmm/kvm/restore.vitastor| Restore VM memory checkpoint | Start a VM back after stopping it. |
| datastore/clone | Copy an image as persistent | Create a VM template and instantiate it as persistent. |
| datastore/cp | Import an external image | Import a VM template with images from Marketplace. |
| datastore/export | Export an image as URL | Probably: export a VM template with images to Marketplace. |
| datastore/mkfs | Create an image with FS | Storage → Images → Create → Type: Datablock, Location: Empty disk image, Filesystem: Not empty. |
| datastore/monitor | Monitor used space in image datastore | Check reported used/free space in image datastore list. |
| datastore/rm | Remove a persistent image | Storage → Images → Select an image → Delete. |
| datastore/snap_delete | Delete a snapshot of a persistent image | Storage → Images → Select an image → Select a snapshot → Delete; <br> To create an image with snapshot: attach a persistent image to a VM; create a snapshot; detach the image. |
| datastore/snap_flatten | Revert an image to snapshot and delete other snapshots | Storage → Images → Select an image → Select a snapshot → Flatten. |
| datastore/snap_revert | Revert an image to snapshot | Storage → Images → Select an image → Select a snapshot → Revert. |
| datastore/stat | Get virtual size of an image in MB | No idea. Seems to be unused both in Vitastor and Ceph datastores. |
| tm/clone | Clone a non-persistent image to a VM disk | Attach a non-persistent image to a VM. |
| tm/context | Generate a contextualisation VM disk | Create a VM with enabled contextualisation (default). Common host FS-based version is used in Vitastor and Ceph datastores. |
| tm/cpds | Copy a VM disk / its snapshot to an image | Select a VM → Select a disk → Optionally select a snapshot → Save as. |
| tm/delete | Delete a cloned or volatile VM disk | Detach a volatile disk or a non-persistent image from a VM. |
| tm/failmigrate | Handle live migration failure | No action. Script is empty in Vitastor and Ceph. In other datastores, should roll back actions done by tm/premigrate. |
| tm/ln | Attach a persistent image to a VM | No action. Script is empty in Vitastor and Ceph. |
| tm/mkimage | Create a volatile disk, maybe with FS | Attach a volatile disk to a VM, with or without file system. |
| tm/mkswap | Create a volatile swap disk | Attach a volatile disk to a VM, formatted as swap. |
| tm/monitor | Monitor used space in system datastore | Check reported used/free space in system datastore list. |
| tm/mv | Move a migrated VM disk between hosts | Migrate a VM between hosts. In Vitastor and Ceph datastores, doesn't do any storage action. |
| tm/mvds | Detach a persistent image from a VM | No action. The opposite of tm/ln. Script is empty in Vitastor and Ceph. In other datastores, script may copy the image from VM host back to the datastore. |
| tm/postbackup | Executed after backup | Seems that the script just removes temporary files after backup. Perform a VM backup and check that temporary files are cleaned up. |
| tm/postbackup_live | Executed after backup of a running VM | Same as tm/postbackup, but for a running VM. |
| tm/postmigrate | Executed after VM live migration | No action. Only executed for system datastore, so the script tries to call other TMs for other disks. Except that, the script does nothing in Vitastor and Ceph datastores. |
| tm/prebackup | Actual backup script: backup VM disks | Set up "rsync" backup datastore → Backup a VM to it. |
| tm/prebackup_live | Backup VM disks of a running VM | Same as tm/prebackup, but also does fsfreeze/thaw. So perform a live backup, restore it and check that disks are consistent. |
| tm/premigrate | Executed before live migration | No action. Only executed for system datastore, so the script tries to call other TMs for other disks. Except that, the script does nothing in Vitastor and Ceph datastores. |
| tm/resize | Resize a VM disk | Select a VM → Select a non-persistent disk → Resize. |
| tm/restore | Restore VM disks from backup | Set up "rsync" backup datastore → Backup a VM to it → Restore it back. |
| tm/snap_create | Create a VM disk snapshot | Select a VM → Select a disk → Create snapshot. |
| tm/snap_create_live | Create a VM disk snapshot for a live VM | Select a running VM → Select a disk → Create snapshot. |
| tm/snap_delete | Delete a VM disk snapshot | Select a VM → Select a disk → Select a snapshot → Delete. |
| tm/snap_revert | Revert a VM disk to a snapshot | Select a VM → Select a disk → Select a snapshot → Revert. |
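While going through the table above, a quick way to cross-check most datastore actions from the hypervisor side is to look at the cluster directly (this assumes the default `one` image prefix used by the scripts in this diff and that image names appear in the `vitastor-cli ls` output):
```
# List plugin-managed images and snapshots in the Vitastor cluster
vitastor-cli ls | grep 'one-'
# Watch pool usage while testing datastore/monitor and tm/monitor
vitastor-cli df
```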


@ -1,187 +0,0 @@
[Documentation](../../README-ru.md#документация) → Installation → OpenNebula
-----
[Read in English](opennebula.en.md)
## Automatic Installation
The Vitastor OpenNebula plugin is distributed as the `vitastor-opennebula` Debian and RPM package since Vitastor 1.9.0. So:
- Run `apt-get install vitastor-opennebula` or `yum install vitastor-opennebula` after installing OpenNebula on all servers
- Check that it prints "OK, Vitastor OpenNebula patches successfully applied" or "OK, Vitastor OpenNebula patches are already applied" during installation
- If the message is not printed, follow the steps in [Manual Installation](#ручная-установка) and apply the configuration file changes manually
- Make sure that the Vitastor-patched versions of QEMU and libvirt are installed
(`dpkg -l qemu-system-x86`, `dpkg -l | grep libvirt`, `rpm -qa | grep qemu`, `rpm -qa | grep libvirt-libs` should show "vitastor" in the version number)
- [Block VM access to the Vitastor cluster](#блокировка-доступа-вм-в-vitastor)
## Manual Installation
First install OpenNebula itself. After that, on each server:
- Copy the [opennebula/remotes](../../opennebula/remotes) directory into `/var/lib/one`: `cp -r opennebula/remotes /var/lib/one/`
- Copy the [opennebula/sudoers.d](../../opennebula/sudoers.d) directory into `/etc`: `cp -r opennebula/sudoers.d /etc/`
- Apply the patch [downloader-vitastor.sh.diff](../../opennebula/remotes/datastore/vitastor/downloader-vitastor.sh.diff) to `/var/lib/one/remotes/datastore/downloader.sh`:
`patch /var/lib/one/remotes/datastore/downloader.sh < opennebula/remotes/datastore/vitastor/downloader-vitastor.sh.diff` - or read the patch and apply the change manually
- Add `kvm-vitastor` to the `LIVE_DISK_SNAPSHOTS` list in `/etc/one/vmm_exec/vmm_execrc`
- If you use Debian or Ubuntu (and AppArmor), add the path(s) to the Vitastor configuration file to `/etc/apparmor.d/local/abstractions/libvirt-qemu`: for example,
`echo ' "/etc/vitastor/vitastor.conf" r,' >> /etc/apparmor.d/local/abstractions/libvirt-qemu`
- Apply the changes to `/etc/one/oned.conf`
### oned.conf changes
1. Add a deploy script override to the kvm VM_MAD by adding `-l deploy.vitastor` to `ARGUMENTS`:
```diff
VM_MAD = [
NAME = "kvm",
SUNSTONE_NAME = "KVM",
EXECUTABLE = "one_vmm_exec",
- ARGUMENTS = "-t 15 -r 0 kvm -p",
+ ARGUMENTS = "-t 15 -r 0 kvm -p -l deploy=deploy.vitastor",
DEFAULT = "vmm_exec/vmm_exec_kvm.conf",
TYPE = "kvm",
KEEP_SNAPSHOTS = "yes",
LIVE_RESIZE = "yes",
SUPPORT_SHAREABLE = "yes",
IMPORTED_VMS_ACTIONS = "terminate, terminate-hard, hold, release, suspend,
resume, delete, reboot, reboot-hard, resched, unresched, disk-attach,
disk-detach, nic-attach, nic-detach, snapshot-create, snapshot-delete,
resize, updateconf, update"
]
```
Optionally: if you also want to save VM memory snapshots in Vitastor, add
`-l deploy=deploy.vitastor,save=save.vitastor,restore=restore.vitastor`
instead of just `-l deploy=deploy.vitastor`.
2. Add `vitastor` to the TM_MAD.ARGUMENTS and DATASTORE_MAD.ARGUMENTS values:
```diff
TM_MAD = [
EXECUTABLE = "one_tm",
- ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt"
+ ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,vitastor,dev,vcenter,iscsi_libvirt"
]
DATASTORE_MAD = [
EXECUTABLE = "one_datastore",
- ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
+ ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,vitastor,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,vitastor,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
]
```
3. Add INHERIT_DATASTORE_ATTR lines for the two Vitastor datastore attributes:
```
INHERIT_DATASTORE_ATTR = "VITASTOR_CONF"
INHERIT_DATASTORE_ATTR = "IMAGE_PREFIX"
```
4. Add TM_MAD_CONF and DS_MAD_CONF for Vitastor:
```
TM_MAD_CONF = [
NAME = "vitastor", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "YES",
DS_MIGRATE = "NO", DRIVER = "raw", ALLOW_ORPHANS="format",
TM_MAD_SYSTEM = "ssh,shared", LN_TARGET_SSH = "SYSTEM", CLONE_TARGET_SSH = "SYSTEM",
DISK_TYPE_SSH = "FILE", LN_TARGET_SHARED = "NONE",
CLONE_TARGET_SHARED = "SELF", DISK_TYPE_SHARED = "FILE"
]
DS_MAD_CONF = [
NAME = "vitastor",
REQUIRED_ATTRS = "DISK_TYPE,BRIDGE_LIST",
PERSISTENT_ONLY = "NO",
MARKETPLACE_ACTIONS = "export"
]
```
## Create Datastores
Example settings for an image datastore and a VM disk (system) datastore:
[opennebula/vitastor-imageds.conf](../../opennebula/vitastor-imageds.conf) and
[opennebula/vitastor-systemds.conf](../../opennebula/vitastor-systemds.conf).
Copy the settings and change the following parameters as you need:
- POOL_NAME is the name of the Vitastor pool to store disk images in.
- IMAGE_PREFIX is a string prepended to the names of all disk images.
- BRIDGE_LIST is a list of servers with access to the Vitastor cluster, used for image (not system) datastore operations.
- VITASTOR_CONF is the path to the Vitastor configuration. Keep in mind that this path also has to be added to `/etc/apparmor.d/local/abstractions/libvirt-qemu` if you use AppArmor.
- STAGING_DIR is the path to a temporary directory used when importing external images. It should have enough free space to hold the downloaded images.
After that, create the datastores using `onedatastore create vitastor-imageds.conf` and `onedatastore create vitastor-systemds.conf` (or via the UI).
## Block VM access to Vitastor
Vitastor does not support any authentication yet, so you MUST block guest VM access
to the Vitastor cluster at the network level.
If you use VLAN networks for VMs, make sure that the VMs and the hypervisor/storage network are placed
in separate VLANs isolated from each other.
If you use something more primitive, such as bridges, you will most likely have to manually configure
iptables / a firewall so that Vitastor is only accessible from hypervisor IPs.
In this case you will also have to switch the plain bridges to "Bridged & Security Groups" and enable the
IP spoofing filter in OpenNebula. However, the implementation of this filter is incomplete: it does not block
access to the hypervisor's local interfaces. That is, with the IP spoofing filter enabled, a VM cannot send
traffic with a foreign IP to other VMs or to the outside world, but it can still send it directly to the
hypervisor. To fix this, additional iptables rules are needed as well.
Thus, a more or less complete lockdown for a simple bridged network setup could
look like this (here `10.0.3.0/24` is the VM subnet and `10.0.2.0/24` is the hypervisor subnet):
```
# Allow incoming traffic from the physical device
iptables -A INPUT -m physdev --physdev-in eth0 -j ACCEPT
# Drop traffic from VMs that uses IPs outside the VM subnet
iptables -A INPUT ! -s 10.0.3.0/24 -i onebr0 -j DROP
# Drop traffic from VMs to the hypervisor network
iptables -I FORWARD 1 -s 10.0.3.0/24 -d 10.0.2.0/24 -j DROP
```
## Testing
The OpenNebula plugin mostly consists of bash scripts, and to make it clearer what they
actually do, below are descriptions of the procedures you can use to test each of them.
| Script | Description | How to Test |
| ----------------------- | --------------------------------------------- | ------------------------------------------------------------------------------------ |
| vmm/kvm/deploy.vitastor | Start a virtual machine | Create and start a VM with Vitastor disks: persistent / non-persistent / volatile (temporary). |
| vmm/kvm/save.vitastor | Save a VM memory checkpoint | Stop a virtual machine using the "Stop" command. |
| vmm/kvm/restore.vitastor| Restore a VM memory checkpoint | Start the VM back after stopping it. |
| datastore/clone | Copy an image as "persistent" | Create a VM template and instantiate a persistent VM from it. |
| datastore/cp | Import an external image | Import a VM template with disk images from the OpenNebula Marketplace. |
| datastore/export | Export an image as a URL | Probably: export a VM template with images to the Marketplace. |
| datastore/mkfs | Create an image with a file system | Storage → Images → Create → Type: Datablock, Location: Empty disk image, Filesystem: any non-empty. |
| datastore/monitor | Report used space in the image datastore | Check the reported used/free space in the image datastore list. |
| datastore/rm | Remove a "persistent" image | Storage → Images → Select an image → Delete. |
| datastore/snap_delete | Delete a snapshot of a "persistent" image | Storage → Images → Select an image → Select a snapshot → Delete; <br> To create an image with a snapshot: attach a persistent image to a VM, create a snapshot, detach the image. |
| datastore/snap_flatten | Revert an image to a snapshot, deleting other snapshots | Storage → Images → Select an image → Select a snapshot → Flatten. |
| datastore/snap_revert | Revert an image to a snapshot | Storage → Images → Select an image → Select a snapshot → Revert. |
| datastore/stat | Show the virtual size of an image in MB | Unknown. Apparently unused in both the Vitastor and Ceph plugins. |
| tm/clone | Clone a "non-persistent" image to a VM disk | Attach a "non-persistent" image to a VM. |
| tm/context | Create a VM contextualisation disk | Create a VM with contextualisation, as usual. There is not much to test, though: in the Vitastor and Ceph plugins the context image is stored in the hypervisor's local FS. |
| tm/cpds | Copy a VM disk / its snapshot to a new image | Select a VM → Select a disk → Optionally select a snapshot → "Save as". |
| tm/delete | Delete a cloned or volatile VM disk | Detach a volatile disk or a non-persistent image from a VM. |
| tm/failmigrate | Handle a failed migration | Nothing to test. The script is empty in the Vitastor and Ceph plugins. In other plugins the script should roll back the actions of tm/premigrate. |
| tm/ln | Attach a "persistent" image to a VM | Nothing to test. The script is empty in the Vitastor and Ceph plugins. |
| tm/mkimage | Create a volatile disk, with or without a FS | Attach a volatile disk to a VM, with or without a file system. |
| tm/mkswap | Create a volatile swap disk | Attach a volatile disk to a VM, formatted as swap. |
| tm/monitor | Report used space in the system datastore | Check the reported used/free space in the VM disk (system) datastore list. |
| tm/mv | Move a VM disk between hosts | Migrate a VM between servers. From the storage point of view, though, this script does nothing in the Vitastor and Ceph plugins. |
| tm/mvds | Detach a "persistent" image from a VM | Nothing to test. The script is empty in the Vitastor and Ceph plugins. In general it is the opposite of tm/ln, and in other datastores it may, for example, copy the VM image from the hypervisor disk back to the datastore. |
| tm/postbackup | Executed after a backup | Apparently the script just removes temporary files after a backup. So run a backup and check that no temporary files are left on the servers. |
| tm/postbackup_live | Executed after a backup of a running VM | Same as tm/postbackup, but for a running VM. |
| tm/postmigrate | Executed after a VM migration | Nothing to test. However, OpenNebula only runs the script for the system datastore, so it calls the same scripts for the datastores of the other disks of the same VM. Apart from that, the script does nothing in the Vitastor and Ceph plugins. |
| tm/prebackup | Perform a backup of VM disks | Create an "rsync" backup datastore → Back up a VM to it. |
| tm/prebackup_live | The same for a running VM | Same as tm/prebackup, but it also runs fsfreeze/thaw (suspending access to the disks). So the point of the test is to perform a live backup and check that the data was copied consistently. |
| tm/premigrate | Executed before a VM migration | Nothing to test. Like tm/postmigrate, it is only run for the system datastore. |
| tm/resize | Resize a VM disk | Select a VM → Select a non-persistent disk → Change its size. |
| tm/restore | Restore VM disks from a backup | Create a backup datastore → Back up a VM to it → Restore it back. |
| tm/snap_create | Create a VM disk snapshot | Select a VM → Select a disk → Create a snapshot. |
| tm/snap_create_live | Create a disk snapshot of a running VM | Select a running VM → Select a disk → Create a snapshot. |
| tm/snap_delete | Delete a VM disk snapshot | Select a VM → Select a disk → Select a snapshot → Delete. |
| tm/snap_revert | Revert a VM disk to a snapshot | Select a VM → Select a disk → Select a snapshot → Revert. |


@ -39,10 +39,6 @@
## Plugins and tools
- [Proxmox storage plugin and packages](../installation/proxmox.en.md)
- [OpenNebula storage plugin](../installation/opennebula.en.md)
- [CSI plugin for Kubernetes](../installation/kubernetes.en.md)
- [OpenStack support: Cinder driver, Nova and libvirt patches](../installation/openstack.en.md)
- [Debian and CentOS packages](../installation/packages.en.md)
- [Image management CLI (vitastor-cli)](../usage/cli.en.md)
- [Disk management CLI (vitastor-disk)](../usage/disk.en.md)
@ -50,6 +46,9 @@
- [Native QEMU driver](../usage/qemu.en.md)
- [Loadable fio engine for benchmarks](../usage/fio.en.md)
- [NBD proxy for kernel mounts](../usage/nbd.en.md)
- [CSI plugin for Kubernetes](../installation/kubernetes.en.md)
- [OpenStack support: Cinder driver, Nova and libvirt patches](../installation/openstack.en.md)
- [Proxmox storage plugin and packages](../installation/proxmox.en.md)
- [Simplified NFS proxy for file-based image access emulation (suitable for VMWare)](../usage/nfs.en.md#pseudo-fs)
## Roadmap
@ -59,6 +58,7 @@ The following features are planned for the future:
- Control plane optimisation
- Other administrative tools
- Web GUI
- OpenNebula plugin
- iSCSI and NVMeoF gateways
- Multi-threaded client
- Faster failover


@ -41,10 +41,6 @@
## Drivers and tools
- [Proxmox plugin](../installation/proxmox.ru.md)
- [OpenNebula plugin](../installation/opennebula.ru.md)
- [Kubernetes CSI plugin](../installation/kubernetes.ru.md)
- [Basic OpenStack support: Cinder driver, Nova and libvirt patches](../installation/openstack.ru.md)
- [Debian and CentOS packages](../installation/packages.ru.md)
- [Image management CLI (vitastor-cli)](../usage/cli.ru.md)
- [Disk management tool (vitastor-disk)](../usage/disk.ru.md)
@ -52,6 +48,9 @@
- [QEMU disk driver](../usage/qemu.ru.md)
- [Disk driver for the fio benchmarking tool](../usage/fio.ru.md)
- [NBD proxy for mounting images via the kernel](../usage/nbd.ru.md) ("userspace block device")
- [Kubernetes CSI plugin](../installation/kubernetes.ru.md)
- [Basic OpenStack support: Cinder driver, Nova and libvirt patches](../installation/openstack.ru.md)
- [Proxmox plugin](../installation/proxmox.ru.md)
- [Simplified NFS proxy for emulating file-based access to images (suitable for VMWare)](../usage/nfs.ru.md#псевдо-фс)
## Development plans
@ -59,6 +58,7 @@
- Control plane optimisation
- Other administration tools
- Web GUI
- OpenNebula plugin
- iSCSI and NVMeoF proxies
- Multi-threaded client
- Faster failover


@ -1 +0,0 @@
See [../docs/installation/opennebula.en.md](../docs/installation/opennebula.en.md).


@ -1,29 +0,0 @@
#!/bin/bash
set -e
reapply_patch() {
if ! patch -f --dry-run -F 0 -R $1 < $2 >/dev/null; then
already_applied=0
if ! patch --no-backup-if-mismatch -r - -F 0 -f $1 < $2; then
applied_ok=0
echo "ERROR: Failed to patch file $1, please apply the patch $2 manually"
fi
fi
}
echo "Reapplying Vitastor patches to OpenNebula's oned.conf, vmm_execrc and downloader.sh"
already_applied=1
applied_ok=1
reapply_patch /var/lib/one/remotes/datastore/downloader.sh /var/lib/one/remotes/datastore/vitastor/downloader-vitastor.sh.diff
reapply_patch /etc/one/oned.conf /var/lib/one/remotes/datastore/vitastor/oned.conf.diff
reapply_patch /etc/one/vmm_exec/vmm_execrc /var/lib/one/remotes/datastore/vitastor/vmm_execrc.diff
if [[ "$already_applied" = 1 ]]; then
echo "OK: Vitastor OpenNebula patches are already applied"
elif [[ "$applied_ok" = 1 ]]; then
echo "OK: Vitastor OpenNebula patches successfully applied"
fi
if [[ -f /etc/apparmor.d/local/abstractions/libvirt-qemu ]]; then
if ! grep -q /etc/vitastor/vitastor.conf /etc/apparmor.d/local/abstractions/libvirt-qemu; then
echo ' "/etc/vitastor/vitastor.conf" r,' >> /etc/apparmor.d/local/abstractions/libvirt-qemu
fi
fi


@ -1,76 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# This script is used to copy a VM image (SRC) to the image repository as DST
# -------- Set up the environment to source common tools & conf ------------
if [ -z "${ONE_LOCATION}" ]; then
LIB_LOCATION=/usr/lib/one
else
LIB_LOCATION=$ONE_LOCATION/lib
fi
. $LIB_LOCATION/sh/scripts_common.sh
DRIVER_PATH=$(dirname $0)
source ${DRIVER_PATH}/../libfs.sh
# -------- Get cp and datastore arguments from OpenNebula core ------------
DRV_ACTION=`cat -`
ID=$1
XPATH="${DRIVER_PATH}/../xpath.rb -b $DRV_ACTION"
unset i XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH \
/DS_DRIVER_ACTION_DATA/DATASTORE/BASE_PATH \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/BRIDGE_LIST \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/POOL_NAME \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/IMAGE_PREFIX \
/DS_DRIVER_ACTION_DATA/IMAGE/PATH \
/DS_DRIVER_ACTION_DATA/IMAGE/SIZE \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/VITASTOR_CONF)
unset i
BASE_PATH="${XPATH_ELEMENTS[i++]}"
BRIDGE_LIST="${XPATH_ELEMENTS[i++]}"
POOL_NAME="${XPATH_ELEMENTS[i++]}"
IMAGE_PREFIX="${XPATH_ELEMENTS[i++]:-one}"
SRC="${XPATH_ELEMENTS[i++]}"
SIZE="${XPATH_ELEMENTS[i++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[i++]}"
DST_HOST=`get_destination_host $ID`
if [ -z "$DST_HOST" ]; then
error_message "Datastore template missing 'BRIDGE_LIST' attribute."
exit -1
fi
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
if [ -n "$POOL_NAME" ]; then
CLI="$CLI --pool ${POOL_NAME}"
fi
SAFE_DIRS=""
DST="${IMAGE_PREFIX}-${ID}"
ssh_exec_and_log "$DST_HOST" "$CLI create --parent $SRC $DST" \
"Error during $CLI create --parent $SRC $DST in $DST_HOST"
ssh_exec_and_log "$DST_HOST" "$CLI flatten $DST" \
"Error during $CLI create flatten $DST in $DST_HOST"
echo "$DST raw"


@ -1,135 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# This script is used to copy a local image SRC to the image repository as DST
# -------- Set up the environment to source common tools & conf ------------
if [ -z "${ONE_LOCATION}" ]; then
LIB_LOCATION=/usr/lib/one
else
LIB_LOCATION=$ONE_LOCATION/lib
fi
. $LIB_LOCATION/sh/scripts_common.sh
DRIVER_PATH=$(dirname $0)
source ${DRIVER_PATH}/../libfs.sh
# -------- Get cp and datastore arguments from OpenNebula core ------------
DRV_ACTION=`cat -`
ID=$1
export DRV_ACTION
UTILS_PATH="${DRIVER_PATH}/.."
XPATH="$UTILS_PATH/xpath.rb -b $DRV_ACTION"
unset i XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH \
/DS_DRIVER_ACTION_DATA/DATASTORE/BASE_PATH \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/RESTRICTED_DIRS \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/SAFE_DIRS \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/BRIDGE_LIST \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/POOL_NAME \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/IMAGE_PREFIX \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/STAGING_DIR \
/DS_DRIVER_ACTION_DATA/IMAGE/PATH \
/DS_DRIVER_ACTION_DATA/IMAGE/SIZE \
/DS_DRIVER_ACTION_DATA/IMAGE/TEMPLATE/MD5 \
/DS_DRIVER_ACTION_DATA/IMAGE/TEMPLATE/SHA1 \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/NO_DECOMPRESS \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/LIMIT_TRANSFER_BW \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/VITASTOR_CONF)
unset i
BASE_PATH="${XPATH_ELEMENTS[i++]}"
RESTRICTED_DIRS="${XPATH_ELEMENTS[i++]}"
SAFE_DIRS="${XPATH_ELEMENTS[i++]}"
BRIDGE_LIST="${XPATH_ELEMENTS[i++]}"
POOL_NAME="${XPATH_ELEMENTS[i++]}"
IMAGE_PREFIX="${XPATH_ELEMENTS[i++]:-one}"
STAGING_DIR="${XPATH_ELEMENTS[i++]:-/var/tmp}"
SRC="${XPATH_ELEMENTS[i++]}"
SIZE="${XPATH_ELEMENTS[i++]}"
MD5="${XPATH_ELEMENTS[i++]}"
SHA1="${XPATH_ELEMENTS[i++]}"
NO_DECOMPRESS="${XPATH_ELEMENTS[i++]}"
LIMIT_TRANSFER_BW="${XPATH_ELEMENTS[i++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[i++]}"
DST_HOST=`get_destination_host $ID`
if [ -z "$DST_HOST" ]; then
error_message "Datastore template missing 'BRIDGE_LIST' attribute."
exit -1
fi
CLI=vitastor-cli
QEMU_ARG=""
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
QEMU_ARG=":config_path=${VITASTOR_CONF}"
fi
if [ -n "$POOL_NAME" ]; then
CLI="$CLI --pool ${POOL_NAME}"
fi
set_up_datastore "$BASE_PATH" "$RESTRICTED_DIRS" "$SAFE_DIRS"
IMAGE_HASH=`generate_image_hash`
TMP_DST="$STAGING_DIR/$IMAGE_HASH"
DST="${IMAGE_PREFIX}-${ID}"
DOWNLOADER_ARGS=`set_downloader_args "$MD5" "$SHA1" "$NO_DECOMPRESS" "$LIMIT_TRANSFER_BW" "$SRC" -`
COPY_COMMAND="$UTILS_PATH/downloader.sh $DOWNLOADER_ARGS"
case $SRC in
http://*|https://*)
log "Downloading $SRC to the image repository"
DUMP="$COPY_COMMAND"
;;
*)
if [ `check_restricted $SRC` -eq 1 ]; then
log_error "Not allowed to copy images from $RESTRICTED_DIRS"
error_message "Not allowed to copy image file $SRC"
exit -1
fi
log "Copying local image $SRC to the image repository"
DUMP="$COPY_COMMAND"
;;
esac
multiline_exec_and_log "set -e -o pipefail; $DUMP | $SSH $DST_HOST $DD of=$TMP_DST bs=1M" \
"Error copying $SRC to $DST_HOST:$TMP_DST"
REGISTER_CMD=$(cat <<EOF
set -e -o pipefail
SIZE=\$($QEMU_IMG info --output json "$TMP_DST" | jq -r '.["virtual-size"]')
$CLI create -s \$SIZE "$DST"
$QEMU_IMG convert -O raw "$TMP_DST" "vitastor:image=$DST$QEMU_ARG"
# remove original
$RM -f $TMP_DST
EOF
)
ssh_exec_and_log "$DST_HOST" "$REGISTER_CMD" "Error registering $DST in $DST_HOST"
echo "$DST raw"


@ -1,555 +0,0 @@
#!/bin/bash
# -------------------------------------------------------------------------- #
# Copyright 2002-2023, OpenNebula Project, OpenNebula Systems #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#--------------------------------------------------------------------------- #
if [ -z "${ONE_LOCATION}" ]; then
LIB_LOCATION=/usr/lib/one
VAR_LOCATION=/var/lib/one
else
LIB_LOCATION=$ONE_LOCATION/lib
VAR_LOCATION=$ONE_LOCATION/var
fi
. $LIB_LOCATION/sh/scripts_common.sh
DRIVER_PATH=$(dirname $0)
# Escape single quotes
function esc_sq
{
echo "$1" | sed -e "s/'/'\\\''/g"
}
# Execute a command (first parameter) and use the first kb of stdout
# to determine the file type
function get_type
{
if [ "$NO_DECOMPRESS" = "yes" ]; then
echo "application/octet-stream"
else
command=$1
( eval "$command" | head -n 1024 | file -b --mime-type - ) 2>/dev/null
fi
}
# Gets the command needed to decompress an stream.
function get_decompressor
{
type=$1
case "$type" in
"application/x-gzip"|"application/gzip")
echo "gunzip -c -"
;;
"application/x-bzip2")
echo "bunzip2 -qc -"
;;
"application/x-xz")
echo "unxz -c -"
;;
*)
echo "cat"
;;
esac
}
# Function called to decompress a stream. The first parameter is the command
# used to decompress the stream. Second parameter is the output file or
# - for stdout.
function decompress
{
command="$1"
to="$2"
if [ "$to" = "-" ]; then
$command
else
$command > "$to"
fi
}
# Function called to hash a stream. First parameter is the algorithm name.
function hasher
{
if [ -n "$1" ]; then
openssl dgst -$1 | awk '{print $NF}' > $HASH_FILE
else
# Needs something consuming stdin or the pipe will break
cat >/dev/null
fi
}
# Unarchives a tar or a zip a file to a directory with the same name.
function unarchive
{
TO="$1"
file_type=$(get_type "cat $TO")
tmp="$TO"
# Add full path if it is relative
if [ ${tmp:0:1} != "/" ]; then
tmp="$PWD/$tmp"
fi
IN="$tmp.tmp"
OUT="$tmp"
case "$file_type" in
"application/x-tar")
command="tar -xf $IN -C $OUT"
;;
"application/zip")
command="unzip -d $OUT $IN"
;;
*)
command=""
;;
esac
if [ -n "$command" ]; then
mv "$OUT" "$IN"
mkdir "$OUT"
$command
if [ "$?" != "0" ]; then
echo "Error uncompressing archive" >&2
exit -1
fi
rm "$IN"
fi
}
function s3_env
{
XPATH="$DRIVER_PATH/xpath.rb -b $DRV_ACTION"
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH /DS_DRIVER_ACTION_DATA/MARKETPLACE/TEMPLATE/ACCESS_KEY_ID \
/DS_DRIVER_ACTION_DATA/MARKETPLACE/TEMPLATE/SECRET_ACCESS_KEY \
/DS_DRIVER_ACTION_DATA/MARKETPLACE/TEMPLATE/REGION \
/DS_DRIVER_ACTION_DATA/MARKETPLACE/TEMPLATE/AWS \
/DS_DRIVER_ACTION_DATA/MARKETPLACE/TEMPLATE/ENDPOINT)
S3_ACCESS_KEY_ID="${XPATH_ELEMENTS[j++]}"
S3_SECRET_ACCESS_KEY="${XPATH_ELEMENTS[j++]}"
S3_REGION="${XPATH_ELEMENTS[j++]}"
S3_AWS="${XPATH_ELEMENTS[j++]}"
S3_ENDPOINT="${XPATH_ELEMENTS[j++]}"
CURRENT_DATE_DAY="$(date -u '+%Y%m%d')"
CURRENT_DATE_ISO8601="${CURRENT_DATE_DAY}T$(date -u '+%H%M%S')Z"
}
# Create an SHA-256 hash in hexadecimal.
# Usage:
# hash_sha256 <string>
function hash_sha256 {
printf "${1}" | openssl dgst -sha256 | sed 's/^.* //'
}
# Create an SHA-256 hmac in hexadecimal.
# Usage:
# hmac_sha256 <key> <data>
function hmac_sha256 {
printf "${2}" | openssl dgst -sha256 -mac HMAC -macopt "${1}" | sed 's/^.* //'
}
# Create the signature.
# Usage:
# create_signature
function create_signature {
stringToSign="AWS4-HMAC-SHA256\n${CURRENT_DATE_ISO8601}\n${CURRENT_DATE_DAY}/${S3_REGION}/s3/aws4_request\n$(hash_sha256 "${HTTP_CANONICAL_REQUEST}")"
dateKey=$(hmac_sha256 key:"AWS4${S3_SECRET_ACCESS_KEY}" "${CURRENT_DATE_DAY}")
regionKey=$(hmac_sha256 hexkey:"${dateKey}" "${S3_REGION}")
serviceKey=$(hmac_sha256 hexkey:"${regionKey}" "s3")
signingKey=$(hmac_sha256 hexkey:"${serviceKey}" "aws4_request")
printf "${stringToSign}" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"${signingKey}" | sed 's/.*(stdin)= //'
}
function s3_curl_args
{
FROM="$1"
ENDPOINT="$S3_ENDPOINT"
OBJECT=$(basename "$FROM")
BUCKET=$(basename $(dirname "$FROM"))
DATE="`date -u +'%a, %d %b %Y %H:%M:%S GMT'`"
AUTH_STRING="GET\n\n\n${DATE}\n/${BUCKET}/${OBJECT}"
SIGNED_AUTH_STRING=`echo -en "$AUTH_STRING" | \
openssl sha1 -hmac ${S3_SECRET_ACCESS_KEY} -binary | \
base64`
echo " -H \"Date: ${DATE}\"" \
" -H \"Authorization: AWS ${S3_ACCESS_KEY_ID}:${SIGNED_AUTH_STRING}\"" \
" '$(esc_sq "${ENDPOINT}/${BUCKET}/${OBJECT}")'"
}
function s3_curl_args_aws
{
FROM="$1"
OBJECT=$(basename "$FROM")
BUCKET=$(basename "$(dirname "$FROM")")
ENDPOINT="$BUCKET.s3.amazonaws.com"
AWS_S3_PATH="$(echo $OBJECT | sed 's;^\([^/]\);/\1;')"
HTTP_REQUEST_PAYLOAD_HASH="$(echo "" | openssl dgst -sha256 | sed 's/^.* //')"
HTTP_CANONICAL_REQUEST_URI="${AWS_S3_PATH}"
HTTP_REQUEST_CONTENT_TYPE='application/octet-stream'
HTTP_CANONICAL_REQUEST_HEADERS="content-type:${HTTP_REQUEST_CONTENT_TYPE}
host:${ENDPOINT}
x-amz-content-sha256:${HTTP_REQUEST_PAYLOAD_HASH}
x-amz-date:${CURRENT_DATE_ISO8601}"
HTTP_REQUEST_SIGNED_HEADERS="content-type;host;x-amz-content-sha256;x-amz-date"
HTTP_CANONICAL_REQUEST="GET
${HTTP_CANONICAL_REQUEST_URI}\n
${HTTP_CANONICAL_REQUEST_HEADERS}\n
${HTTP_REQUEST_SIGNED_HEADERS}
${HTTP_REQUEST_PAYLOAD_HASH}"
SIGNATURE="$(create_signature)"
HTTP_REQUEST_AUTHORIZATION_HEADER="AWS4-HMAC-SHA256 Credential=${S3_ACCESS_KEY_ID}/${CURRENT_DATE_DAY}/${S3_REGION}/s3/aws4_request, SignedHeaders=${HTTP_REQUEST_SIGNED_HEADERS}, Signature=${SIGNATURE}"
echo " -H \"Authorization: ${HTTP_REQUEST_AUTHORIZATION_HEADER}\"" \
" -H \"content-type: ${HTTP_REQUEST_CONTENT_TYPE}\"" \
" -H \"x-amz-content-sha256: ${HTTP_REQUEST_PAYLOAD_HASH}\"" \
" -H \"x-amz-date: ${CURRENT_DATE_ISO8601}\"" \
" \"https://${ENDPOINT}${HTTP_CANONICAL_REQUEST_URI}\""
}
function get_rbd_cmd
{
local i j URL_ELEMENTS
FROM="$1"
URL_RB="$DRIVER_PATH/url.rb"
while IFS= read -r -d '' element; do
URL_ELEMENTS[i++]="$element"
done < <($URL_RB "$FROM" \
USER \
HOST \
SOURCE \
PARAM_DS \
PARAM_CEPH_USER \
PARAM_CEPH_KEY \
PARAM_CEPH_CONF)
USER="${URL_ELEMENTS[j++]}"
DST_HOST="${URL_ELEMENTS[j++]}"
SOURCE="${URL_ELEMENTS[j++]}"
DS="${URL_ELEMENTS[j++]}"
CEPH_USER="${URL_ELEMENTS[j++]}"
CEPH_KEY="${URL_ELEMENTS[j++]}"
CEPH_CONF="${URL_ELEMENTS[j++]}"
# Remove leading '/'
SOURCE="${SOURCE#/}"
if [ -n "$USER" ]; then
DST_HOST="$USER@$DST_HOST"
fi
if [ -n "$CEPH_USER" ]; then
RBD="$RBD --id '$(esc_sq "${CEPH_USER}")'"
fi
if [ -n "$CEPH_KEY" ]; then
RBD="$RBD --keyfile '$(esc_sq "${CEPH_KEY}")'"
fi
if [ -n "$CEPH_CONF" ]; then
RBD="$RBD --conf '$(esc_sq "${CEPH_CONF}")'"
fi
echo "ssh '$(esc_sq "$DST_HOST")' \"$RBD export '$(esc_sq "$SOURCE")' -\""
}
function get_vitastor_cmd
{
local i j URL_ELEMENTS
FROM="$1"
URL_RB="$DRIVER_PATH/url.rb"
while IFS= read -r -d '' element; do
URL_ELEMENTS[i++]="$element"
done < <($URL_RB "$FROM" \
USER \
HOST \
SOURCE \
PARAM_DS \
PARAM_VITASTOR_CONF)
USER="${URL_ELEMENTS[j++]}"
DST_HOST="${URL_ELEMENTS[j++]}"
SOURCE="${URL_ELEMENTS[j++]}"
DS="${URL_ELEMENTS[j++]}"
VITASTOR_CONF="${URL_ELEMENTS[j++]}"
# Remove leading '/'
SOURCE="${SOURCE#/}"
if [ -n "$USER" ]; then
DST_HOST="$USER@$DST_HOST"
fi
local CLI
CLI="vitastor-cli"
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path '$(esc_sq "${VITASTOR_CONF}")'"
fi
echo "ssh '$(esc_sq "$DST_HOST")' \"$CLI dd iimg='$(esc_sq "$SOURCE")'\""
}
# Compare 2 version strings using sort -V
# Usage:
# verlte "3.2.9" "3.4.0"
function verlte() {
[ "$1" = "`echo -e "$1\n$2" | sort -V | head -n1`" ]
}
# Returns curl retry options based on its version
function curl_retry_args {
[ "$NO_RETRY" = "yes" ] && return
RETRY_ARGS="--retry 3 --retry-delay 3"
CURL_VER=`curl --version | grep -o 'curl [0-9\.]*' | awk '{print $2}'`
# To retry also on conn-reset-by-peer fresh curl is needed
if verlte "7.71.0" "$CURL_VER" && [ -z ${MAX_SIZE} ] ; then
RETRY_ARGS+=" --retry-all-errors"
fi
echo $RETRY_ARGS
}
TEMP=`getopt -o m:s:l:c:no -l md5:,sha1:,limit:,max-size:,nodecomp,noretry -- "$@"`
if [ $? != 0 ] ; then
echo "Arguments error" >&2
exit -1
fi
eval set -- "$TEMP"
while true; do
case "$1" in
-m|--md5)
HASH_TYPE=md5
HASH=$2
shift 2
;;
-s|--sha1)
HASH_TYPE=sha1
HASH=$2
shift 2
;;
-n|--nodecomp)
export NO_DECOMPRESS="yes"
shift
;;
-l|--limit)
export LIMIT_RATE="$2"
shift 2
;;
-c|--max-size)
export MAX_SIZE="$2"
shift 2
;;
-o|--noretry)
export NO_RETRY="yes"
shift
;;
--)
shift
break
;;
*)
shift
;;
esac
done
FROM="$1"
TO="$2"
if [ -n "${HASH_TYPE}" -a -n "${MAX_SIZE}" ]; then
echo "Hash check not supported for partial downloads" >&2
exit -1
else
# File used by the hasher function to store the resulting hash
export HASH_FILE="/tmp/downloader.hash.$$"
fi
GLOBAL_CURL_ARGS="--fail -sS -k -L $(curl_retry_args)"
case "$FROM" in
http://*|https://*)
# -k so it does not check the certificate
# -L to follow redirects
# -sS to hide output except on failure
# --limit_rate to limit the bw
curl_args="$GLOBAL_CURL_ARGS '$(esc_sq "${FROM}")'"
if [ -n "$LIMIT_RATE" ]; then
curl_args="--limit-rate $LIMIT_RATE $curl_args"
fi
command="curl $curl_args"
;;
ssh://*)
# pseudo-url for ssh transfers ssh://user@host:path
# -l to limit the bw
ssh_src=${FROM#ssh://}
ssh_arg=(${ssh_src/:/ })
rmt_cmd="\"cat '$(esc_sq "${ssh_arg[1]}")'\""
command="ssh ${ssh_arg[0]} $rmt_cmd"
;;
s3://*)
# Read s3 environment
s3_env
if [ -z "$S3_ACCESS_KEY_ID" -o -z "$S3_SECRET_ACCESS_KEY" ]; then
echo "S3_ACCESS_KEY_ID and S3_SECRET_ACCESS_KEY are required" >&2
exit -1
fi
curl_args=""
if [[ "$S3_AWS" =~ (no|NO) ]]; then
curl_args="$(s3_curl_args "$FROM")"
else
curl_args="$(s3_curl_args_aws "$FROM")"
fi
command="curl $GLOBAL_CURL_ARGS $curl_args"
;;
rbd://*)
command="$(get_rbd_cmd "$FROM")"
;;
vitastor://*)
command="$(get_vitastor_cmd "$FROM")"
;;
vcenter://*)
command="$VAR_LOCATION/remotes/datastore/vcenter_downloader.rb '$(esc_sq "$FROM")'"
;;
lxd://*)
file_type="application/octet-stream"
command="$VAR_LOCATION/remotes/datastore/lxd_downloader.sh \"$FROM\""
;;
restic://*)
eval `$VAR_LOCATION/remotes/datastore/restic_downloader.rb "$FROM" | grep -e '^command=' -e '^clean_command='`
;;
rsync://*)
eval `$VAR_LOCATION/remotes/datastore/rsync_downloader.rb "$FROM" | grep -e '^command=' -e '^clean_command='`
;;
*)
if [ ! -r $FROM ]; then
echo "Cannot read from $FROM" >&2
exit -1
fi
command="cat '$(esc_sq "$FROM")'"
;;
esac
[ -z "$file_type" ] && file_type=$(get_type "$command")
decompressor=$(get_decompressor "$file_type")
if [ -z "${MAX_SIZE}" ]; then
eval "$command" | \
tee >( hasher $HASH_TYPE) | \
decompress "$decompressor" "$TO"
if [ "$?" != "0" -o "$PIPESTATUS" != "0" ]; then
echo "Error copying" >&2
exit -1
fi
else
# Order of the 'head' command is here on purpose:
# 1. We want to download more bytes than needed to get a requested
# number of bytes on the output. Decompressor may need more
# data to decompress the stream.
# 2. Decompressor command is also misused to detect SIGPIPE error.
eval "$command" | \
decompress "$decompressor" "$TO" 2>/dev/null | \
head -c "${MAX_SIZE}"
# Following table shows exit codes of each command
# in the pipe for various scenarios:
#
# ----------------------------------------------------
# | $COMMAND | TYPE | PIPESTATUS | BEHAVIOUR
# ----------------------------------------------------
# | cat | partial | 141 141 0 | OK
# | cat | full | 0 0 0 | OK
# | cat | error | 1 0 0 | fail
# | curl | partial | 23 141 0 | OK
# | curl | full | 0 0 0 | OK
# | curl | error | 22 0 0 | fail
# | ssh | partial | 255 141 0 | OK
# | ssh | full | 0 0 0 | OK
# | ssh | error ssh | 255 0 0 | fail
# | ssh | error ssh cat | 1 0 0 | fail
if [ \( "${PIPESTATUS[0]}" != '0' -a "${PIPESTATUS[1]}" = '0' \) \
-o \( "${PIPESTATUS[1]}" != '0' -a "${PIPESTATUS[1]}" != '141' \) \
-o \( "${PIPESTATUS[2]}" != "0" \) ];
then
echo "Error copying" >&2
exit -1
fi
fi
if [ -n "$HASH_TYPE" ]; then
HASH_RESULT=$( cat $HASH_FILE)
rm $HASH_FILE
if [ "$HASH_RESULT" != "$HASH" ]; then
echo "Hash does not match" >&2
exit -1
fi
fi
# Unarchive only if the destination is filesystem
if [ "$TO" != "-" ]; then
unarchive "$TO"
fi
# Perform any clean operation
if [ -n "${clean_command}" ]; then
eval "$clean_command"
fi


@ -1,60 +0,0 @@
diff --git /var/lib/one/remotes/datastore/downloader.sh /var/lib/one/remotes/datastore/downloader.sh
index 9b75d8ee4b..09d2a5d41d 100755
--- /var/lib/one/remotes/datastore/downloader.sh
+++ /var/lib/one/remotes/datastore/downloader.sh
@@ -295,6 +295,45 @@ function get_rbd_cmd
echo "ssh '$(esc_sq "$DST_HOST")' \"$RBD export '$(esc_sq "$SOURCE")' -\""
}
+function get_vitastor_cmd
+{
+ local i j URL_ELEMENTS
+
+ FROM="$1"
+
+ URL_RB="$DRIVER_PATH/url.rb"
+
+ while IFS= read -r -d '' element; do
+ URL_ELEMENTS[i++]="$element"
+ done < <($URL_RB "$FROM" \
+ USER \
+ HOST \
+ SOURCE \
+ PARAM_DS \
+ PARAM_VITASTOR_CONF)
+
+ USER="${URL_ELEMENTS[j++]}"
+ DST_HOST="${URL_ELEMENTS[j++]}"
+ SOURCE="${URL_ELEMENTS[j++]}"
+ DS="${URL_ELEMENTS[j++]}"
+ VITASTOR_CONF="${URL_ELEMENTS[j++]}"
+
+ # Remove leading '/'
+ SOURCE="${SOURCE#/}"
+
+ if [ -n "$USER" ]; then
+ DST_HOST="$USER@$DST_HOST"
+ fi
+
+ local CLI
+ CLI="vitastor-cli"
+ if [ -n "$VITASTOR_CONF" ]; then
+ CLI="$CLI --config_path '$(esc_sq "${VITASTOR_CONF}")'"
+ fi
+
+ echo "ssh '$(esc_sq "$DST_HOST")' \"$CLI dd iimg='$(esc_sq "$SOURCE")'\""
+}
+
# Compare 2 version strings using sort -V
# Usage:
# verlte "3.2.9" "3.4.0"
@@ -424,6 +463,9 @@ s3://*)
rbd://*)
command="$(get_rbd_cmd "$FROM")"
;;
+vitastor://*)
+ command="$(get_vitastor_cmd "$FROM")"
+ ;;
vcenter://*)
command="$VAR_LOCATION/remotes/datastore/vcenter_downloader.rb '$(esc_sq "$FROM")'"
;;


@ -1,114 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# This script is used to export an image to qcow2 file
# ------------ Set up the environment to source common tools ------------
if [ -z "${ONE_LOCATION}" ]; then
LIB_LOCATION=/usr/lib/one
else
LIB_LOCATION=$ONE_LOCATION/lib
fi
. $LIB_LOCATION/sh/scripts_common.sh
DRIVER_PATH=$(dirname $0)
source ${DRIVER_PATH}/../libfs.sh
# -------- Get rm and datastore arguments from OpenNebula core ------------
DRV_ACTION=`cat -`
ID=$1
XPATH="${DRIVER_PATH}/../xpath.rb -b $DRV_ACTION"
unset i XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH \
/DS_DRIVER_ACTION_DATA/IMAGE/SOURCE \
/DS_DRIVER_ACTION_DATA/IMAGE/SIZE \
/DS_DRIVER_ACTION_DATA/IMAGE/TEMPLATE/MD5 \
/DS_DRIVER_ACTION_DATA/IMAGE/TEMPLATE/SHA1 \
/DS_DRIVER_ACTION_DATA/IMAGE/TEMPLATE/FORMAT \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/BRIDGE_LIST \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/VITASTOR_CONF)
unset i
SRC="${XPATH_ELEMENTS[i++]}"
SIZE="${XPATH_ELEMENTS[i++]}"
MD5="${XPATH_ELEMENTS[i++]}"
SHA1="${XPATH_ELEMENTS[i++]}"
FORMAT="${XPATH_ELEMENTS[i++]:-raw}"
BRIDGE_LIST="${XPATH_ELEMENTS[i++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[i++]}"
DST_HOST=`get_destination_host $ID`
if [ -z "$DST_HOST" ]; then
error_message "Datastore template missing 'BRIDGE_LIST' attribute."
exit -1
fi
IMPORT_SOURCE="vitastor://$DST_HOST/$SRC"
IS_JOIN="?"
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path $VITASTOR_CONF"
IMPORT_SOURCE="${IMPORT_SOURCE}${IS_JOIN}VITASTOR_CONF=${VITASTOR_CONF}"
fi
# FIXME: this is inefficient - it pipes the image twice...
INFO_SCRIPT=$(cat <<EOF
if [ -z "$MD5" ]; then
CHECKSUM=\$(
$CLI dd iimg=${SRC} | ${MD5SUM} | cut -f1 -d' '
ps=\$PIPESTATUS
if [ "\$ps" != "0" ]; then
exit \$ps
fi
)
status=\$?
[ "\$status" != "0" ] && exit \$status
else
CHECKSUM="$MD5"
fi
if [ -z "\$CHECKSUM" ]; then
exit 1
fi
cat <<EOT
<MD5><![CDATA[\$CHECKSUM]]></MD5>
<SIZE><![CDATA[$SIZE]]></SIZE>
<FORMAT><![CDATA[${FORMAT}]]></FORMAT>
EOT
EOF
)
INFO=$(ssh_monitor_and_log "$DST_HOST" "$INFO_SCRIPT" "Image info script" 2>&1)
INFO_STATUS=$?
if [ "$INFO_STATUS" != "0" ]; then
echo "$INFO"
exit $INFO_STATUS
fi
cat <<EOF
<IMPORT_INFO>
<IMPORT_SOURCE><![CDATA[$IMPORT_SOURCE]]></IMPORT_SOURCE>
$INFO
<DISPOSE>NO</DISPOSE>
</IMPORT_INFO>"
EOF


@ -1,124 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# This script is used to create a VM image (SRC) of size (SIZE) and formatted as (FS)
# -------- Set up the environment to source common tools & conf ------------
if [ -z "${ONE_LOCATION}" ]; then
LIB_LOCATION=/usr/lib/one
else
LIB_LOCATION=$ONE_LOCATION/lib
fi
. $LIB_LOCATION/sh/scripts_common.sh
DRIVER_PATH=$(dirname $0)
source ${DRIVER_PATH}/../libfs.sh
source ${DRIVER_PATH}/../../etc/datastore/datastore.conf
# -------- Get mkfs and datastore arguments from OpenNebula core ------------
DRV_ACTION=`cat -`
ID=$1
XPATH="${DRIVER_PATH}/../xpath.rb -b $DRV_ACTION"
unset i XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH \
/DS_DRIVER_ACTION_DATA/DATASTORE/BASE_PATH \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/RESTRICTED_DIRS \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/SAFE_DIRS \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/BRIDGE_LIST \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/POOL_NAME \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/IMAGE_PREFIX \
/DS_DRIVER_ACTION_DATA/IMAGE/FORMAT \
/DS_DRIVER_ACTION_DATA/IMAGE/SIZE \
/DS_DRIVER_ACTION_DATA/IMAGE/FS \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/VITASTOR_CONF)
unset i
BASE_PATH="${XPATH_ELEMENTS[i++]}"
RESTRICTED_DIRS="${XPATH_ELEMENTS[i++]}"
SAFE_DIRS="${XPATH_ELEMENTS[i++]}"
BRIDGE_LIST="${XPATH_ELEMENTS[i++]}"
POOL_NAME="${XPATH_ELEMENTS[i++]}"
IMAGE_PREFIX="${XPATH_ELEMENTS[i++]:-one}"
FORMAT="${XPATH_ELEMENTS[i++]}"
SIZE="${XPATH_ELEMENTS[i++]}"
FS="${XPATH_ELEMENTS[i++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[i++]}"
DST_HOST=`get_destination_host $ID`
if [ -z "$DST_HOST" ]; then
error_message "Datastore template missing 'BRIDGE_LIST' attribute."
exit -1
fi
CLI=
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
if [ -n "$POOL_NAME" ]; then
CLI="$CLI --pool ${POOL_NAME}"
fi
set_up_datastore "$BASE_PATH" "$RESTRICTED_DIRS" "$SAFE_DIRS"
IMAGE_NAME="${IMAGE_PREFIX}-${ID}"
# ------------ Image to save_as disk, no need to create a new image ------------
if [ "$FORMAT" = "save_as" ]; then
echo "$IMAGE_NAME"
exit 0
fi
# ------------ Create the image in the repository ------------
# FIXME: Duplicate code with tm/vitastor/mkimage
MKIMAGE_CMD=$(cat <<EOF
set -e -o pipefail
export PATH=/usr/sbin:/sbin:\$PATH
vitastor-cli $CLI create --pool "${POOL_NAME}" "$IMAGE_NAME" --size "${SIZE}M"
EOF
)
if [ -n "$FS" -o "$FORMAT" = "swap" ]; then
MKFS_CMD=`mkfs_command '$NBD' raw "$SIZE" "$SUPPORTED_FS" "$FS" "$FS_OPTS" | grep -v $QEMU_IMG`
fi
MKIMAGE_CMD=$(cat <<EOF
set -e -o pipefail
export PATH=/usr/sbin:/sbin:\$PATH
vitastor-cli $CLI create --pool "${POOL_NAME}" "$IMAGE_NAME" --size "${SIZE}M"
EOF
)
if [ ! -z $FS ]; then
set -e -o pipefail
IMAGE_HASH=`generate_image_hash`
FS_OPTS=$(eval $(echo "echo \$FS_OPTS_$FS"))
MKFS_CMD=`mkfs_command '$NBD' raw "$SIZE" "$SUPPORTED_FS" "$FS" "$FS_OPTS" | grep -v $QEMU_IMG`
MKIMAGE_CMD=$(cat <<EOF
$MKIMAGE_CMD
NBD=\$(sudo vitastor-nbd $CLI map --image "$IMAGE_NAME")
trap "sudo vitastor-nbd $CLI unmap \$NBD" EXIT
$MKFS_CMD
EOF
)
fi
ssh_exec_and_log "$DST_HOST" "$MKIMAGE_CMD" "Error registering $IMAGE_NAME in $DST_HOST"
echo "$IMAGE_NAME"

View File

@ -1,64 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# This script is used to monitor the free and used space of a datastore
# -------- Set up the environment to source common tools & conf ------------
if [ -z "${ONE_LOCATION}" ]; then
LIB_LOCATION=/usr/lib/one
else
LIB_LOCATION=$ONE_LOCATION/lib
fi
. $LIB_LOCATION/sh/scripts_common.sh
DRIVER_PATH=$(dirname $0)
source ${DRIVER_PATH}/../../datastore/libfs.sh
# -------- Get datastore arguments from OpenNebula core ------------
DRV_ACTION=`cat -`
ID=$1
XPATH="${DRIVER_PATH}/../../datastore/xpath.rb -b $DRV_ACTION"
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/BRIDGE_LIST \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/POOL_NAME \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/VITASTOR_CONF)
BRIDGE_LIST="${XPATH_ELEMENTS[j++]}"
POOL_NAME="${XPATH_ELEMENTS[j++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[j++]}"
HOST=`get_destination_host`
if [ -z "$HOST" ]; then
error_message "Datastore template missing 'BRIDGE_LIST' attribute."
exit -1
fi
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
# ------------ Compute datastore usage -------------
MONITOR_SCRIPT=$(cat <<EOF
$CLI df --json | jq -r '.[] | select(.name == "${POOL_NAME}") |
"TOTAL_MB="+(.total_raw/.raw_to_usable/1024/1024 | tostring)+
"\nUSED_MB="+(.used_raw/.raw_to_usable/1024/1024 | tostring)+
"\nFREE_MB="+(.max_available/1024/1024 | tostring)'
EOF
)
ssh_monitor_and_log $HOST "$MONITOR_SCRIPT 2>&1" "Error monitoring ${POOL_NAME} in $HOST"
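As a rough worked example of the arithmetic above (all numbers are made up): with total_raw = 3298534883328 bytes (3 TiB of raw space) and raw_to_usable = 3 (triple replication), the script reports TOTAL_MB = 3298534883328 / 3 / 1024 / 1024 = 1048576, i.e. 1 TiB of usable capacity, while FREE_MB is taken directly from max_available, which is already a usable-space figure. The same pipeline can be run by hand on any host with vitastor-cli installed, e.g. (the pool name is illustrative):
vitastor-cli df --json | jq '.[] | select(.name == "test1")'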

View File

@ -1,73 +0,0 @@
diff --git /etc/one/oned.conf /etc/one/oned.conf
index be02d646a8..27f876ec36 100644
--- /etc/one/oned.conf
+++ /etc/one/oned.conf
@@ -481,7 +481,7 @@ VM_MAD = [
NAME = "kvm",
SUNSTONE_NAME = "KVM",
EXECUTABLE = "one_vmm_exec",
- ARGUMENTS = "-t 15 -r 0 kvm -p",
+ ARGUMENTS = "-t 15 -r 0 kvm -p -l deploy=deploy.vitastor",
DEFAULT = "vmm_exec/vmm_exec_kvm.conf",
TYPE = "kvm",
KEEP_SNAPSHOTS = "yes",
@@ -592,7 +592,7 @@ VM_MAD = [
TM_MAD = [
EXECUTABLE = "one_tm",
- ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt"
+ ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,vitastor,dev,vcenter,iscsi_libvirt"
]
#*******************************************************************************
@@ -612,7 +612,7 @@ TM_MAD = [
DATASTORE_MAD = [
EXECUTABLE = "one_datastore",
- ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
+ ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,vitastor,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,vitastor,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
]
#*******************************************************************************
@@ -1050,6 +1050,9 @@ INHERIT_DATASTORE_ATTR = "VCENTER_DS_IMAGE_DIR"
INHERIT_DATASTORE_ATTR = "VCENTER_DS_VOLATILE_DIR"
INHERIT_DATASTORE_ATTR = "VCENTER_INSTANCE_ID"
+INHERIT_DATASTORE_ATTR = "VITASTOR_CONF"
+INHERIT_DATASTORE_ATTR = "IMAGE_PREFIX"
+
INHERIT_IMAGE_ATTR = "DISK_TYPE"
INHERIT_IMAGE_ATTR = "VCENTER_ADAPTER_TYPE"
INHERIT_IMAGE_ATTR = "VCENTER_DISK_TYPE"
@@ -1180,6 +1183,14 @@ TM_MAD_CONF = [
CLONE_TARGET_SHARED = "SELF", DISK_TYPE_SHARED = "RBD"
]
+TM_MAD_CONF = [
+ NAME = "vitastor", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "YES",
+ DS_MIGRATE = "NO", DRIVER = "raw", ALLOW_ORPHANS="format",
+ TM_MAD_SYSTEM = "ssh,shared", LN_TARGET_SSH = "SYSTEM", CLONE_TARGET_SSH = "SYSTEM",
+ DISK_TYPE_SSH = "FILE", LN_TARGET_SHARED = "NONE",
+ CLONE_TARGET_SHARED = "SELF", DISK_TYPE_SHARED = "FILE"
+]
+
TM_MAD_CONF = [
NAME = "iscsi_libvirt", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "YES",
DS_MIGRATE = "NO", DRIVER = "raw"
@@ -1219,9 +1230,16 @@ DS_MAD_CONF = [
NAME = "ceph",
REQUIRED_ATTRS = "DISK_TYPE,BRIDGE_LIST",
PERSISTENT_ONLY = "NO",
MARKETPLACE_ACTIONS = "export"
]
+DS_MAD_CONF = [
+ NAME = "vitastor",
+ REQUIRED_ATTRS = "DISK_TYPE,BRIDGE_LIST",
+ PERSISTENT_ONLY = "NO",
+ MARKETPLACE_ACTIONS = "export"
+]
+
DS_MAD_CONF = [
NAME = "dev", REQUIRED_ATTRS = "DISK_TYPE", PERSISTENT_ONLY = "YES"
]
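This hunk ships as a patch with the plugin and is presumably applied by its install.sh during setup; to apply it by hand instead, a sketch (assuming the patch above is saved as oned.conf.diff and oned.conf lives at its stock path) is:
patch --dry-run --backup /etc/one/oned.conf < oned.conf.diff
patch --backup /etc/one/oned.conf < oned.conf.diff
systemctl restart opennebula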

View File

@ -1,63 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# This script is used to remove a VM image from the image repository
# ------------ Set up the environment to source common tools ------------
if [ -z "${ONE_LOCATION}" ]; then
LIB_LOCATION=/usr/lib/one
else
LIB_LOCATION=$ONE_LOCATION/lib
fi
. $LIB_LOCATION/sh/scripts_common.sh
DRIVER_PATH=$(dirname $0)
source ${DRIVER_PATH}/../libfs.sh
# -------- Get rm and datastore arguments from OpenNebula core ------------
DRV_ACTION=`cat -`
ID=$1
XPATH="${DRIVER_PATH}/../xpath.rb -b $DRV_ACTION"
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH \
/DS_DRIVER_ACTION_DATA/IMAGE/SOURCE \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/BRIDGE_LIST \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/VITASTOR_CONF)
IMAGE_NAME="${XPATH_ELEMENTS[j++]}"
BRIDGE_LIST="${XPATH_ELEMENTS[j++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[j++]}"
DST_HOST=`get_destination_host $ID`
if [ -z "$DST_HOST" ]; then
error_message "Datastore template missing 'BRIDGE_LIST' attribute."
exit -1
fi
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
# -------- Remove Image from Datastore ------------
log "Removing $IMAGE_NAME from the image repository in $DST_HOST"
DELETE_CMD=$(cat <<EOF
$CLI rm $IMAGE_NAME
EOF
)
ssh_exec_and_log "$DST_HOST" "$DELETE_CMD" "Error deleting $IMAGE_NAME in $DST_HOST"

View File

@ -1,64 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# This script is used to delete a snapshot of an image
# -------- Set up the environment to source common tools & conf ------------
if [ -z "${ONE_LOCATION}" ]; then
LIB_LOCATION=/usr/lib/one
else
LIB_LOCATION=$ONE_LOCATION/lib
fi
. $LIB_LOCATION/sh/scripts_common.sh
DRIVER_PATH=$(dirname $0)
source ${DRIVER_PATH}/../libfs.sh
# -------- Get image and datastore arguments from OpenNebula core ------------
DRV_ACTION=`cat -`
ID=$1
XPATH="${DRIVER_PATH}/../xpath.rb -b $DRV_ACTION"
unset i XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/BRIDGE_LIST \
/DS_DRIVER_ACTION_DATA/IMAGE/SOURCE \
/DS_DRIVER_ACTION_DATA/IMAGE/TARGET_SNAPSHOT \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/VITASTOR_CONF)
unset i
BRIDGE_LIST="${XPATH_ELEMENTS[i++]}"
IMAGE_NAME="${XPATH_ELEMENTS[i++]}"
SNAP_ID="${XPATH_ELEMENTS[i++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[i++]}"
DST_HOST=`get_destination_host $ID`
if [ -z "$DST_HOST" ]; then
error_message "Datastore template missing 'BRIDGE_LIST' attribute."
exit -1
fi
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
SNAP_DELETE_CMD=$(cat <<EOF
$CLI rm ${IMAGE_NAME}@${SNAP_ID}
EOF
)
ssh_exec_and_log "$DST_HOST" "$SNAP_DELETE_CMD" "Error deleting snapshot $IMAGE_NAME-$SNAP_ID@$SNAP_ID"

View File

@ -1,69 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# This script is used to flatten a snapshot of a persistent image
# -------- Set up the environment to source common tools & conf ------------
if [ -z "${ONE_LOCATION}" ]; then
LIB_LOCATION=/usr/lib/one
else
LIB_LOCATION=$ONE_LOCATION/lib
fi
. $LIB_LOCATION/sh/scripts_common.sh
DRIVER_PATH=$(dirname $0)
source ${DRIVER_PATH}/../libfs.sh
# -------- Get image and datastore arguments from OpenNebula core ------------
DRV_ACTION=`cat -`
ID=$1
XPATH="${DRIVER_PATH}/../xpath.rb -b $DRV_ACTION"
unset i XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/BRIDGE_LIST \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/POOL_NAME \
/DS_DRIVER_ACTION_DATA/IMAGE/SOURCE \
/DS_DRIVER_ACTION_DATA/IMAGE/TARGET_SNAPSHOT \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/VITASTOR_CONF)
unset i
BRIDGE_LIST="${XPATH_ELEMENTS[i++]}"
POOL_NAME="${XPATH_ELEMENTS[i++]}"
IMAGE_NAME="${XPATH_ELEMENTS[i++]}"
SNAP_ID="${XPATH_ELEMENTS[i++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[i++]}"
DST_HOST=`get_destination_host $ID`
if [ -z "$DST_HOST" ]; then
error_message "Datastore template missing 'BRIDGE_LIST' attribute."
exit -1
fi
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
SNAP_FLATTEN_CMD=$(cat <<EOF
set -e
$CLI flatten "$IMAGE_NAME@$SNAP_ID"
$CLI modify "$IMAGE_NAME@$SNAP_ID" --rename "$IMAGE_NAME"
$CLI rm --matching "$IMAGE_NAME@*"
EOF
)
ssh_exec_and_log "$DST_HOST" "$SNAP_FLATTEN_CMD" "Error flattening snapshot $SNAP_ID for $IMAGE_NAME"

View File

@ -1,72 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# This script is used to revert a snapshot of an image
# -------- Set up the environment to source common tools & conf ------------
if [ -z "${ONE_LOCATION}" ]; then
LIB_LOCATION=/usr/lib/one
else
LIB_LOCATION=$ONE_LOCATION/lib
fi
. $LIB_LOCATION/sh/scripts_common.sh
DRIVER_PATH=$(dirname $0)
source ${DRIVER_PATH}/../libfs.sh
# -------- Get image and datastore arguments from OpenNebula core ------------
DRV_ACTION=`cat -`
ID=$1
XPATH="${DRIVER_PATH}/../xpath.rb -b $DRV_ACTION"
unset i XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/BRIDGE_LIST \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/POOL_NAME \
/DS_DRIVER_ACTION_DATA/IMAGE/SOURCE \
/DS_DRIVER_ACTION_DATA/IMAGE/TARGET_SNAPSHOT \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/VITASTOR_CONF)
unset i
BRIDGE_LIST="${XPATH_ELEMENTS[i++]}"
POOL_NAME="${XPATH_ELEMENTS[i++]}"
IMAGE_NAME="${XPATH_ELEMENTS[i++]}"
SNAP_ID="${XPATH_ELEMENTS[i++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[i++]}"
DST_HOST=`get_destination_host $ID`
if [ -z "$DST_HOST" ]; then
error_message "Datastore template missing 'BRIDGE_LIST' attribute."
exit -1
fi
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
if [ -n "$POOL_NAME" ]; then
CLI="$CLI --pool ${POOL_NAME}"
fi
SNAP_REVERT_CMD=$(cat <<EOF
$CLI rm ${IMAGE_NAME}.flatten || true
$CLI create --pool "${POOL_NAME}" --parent ${IMAGE_NAME}@${SNAP_ID} ${IMAGE_NAME}.flatten
$CLI rm ${IMAGE_NAME} || true
$CLI modify ${IMAGE_NAME}.flatten --rename ${IMAGE_NAME}
EOF
)
ssh_exec_and_log "$DST_HOST" "$SNAP_REVERT_CMD" "Error reverting snapshot $SNAP_ID for $IMAGE_NAME"

View File

@ -1 +0,0 @@
../ceph/stat

View File

@ -1,12 +0,0 @@
diff --git /etc/one/vmm_exec/vmm_execrc /etc/one/vmm_exec/vmm_execrc
index e210526e63..cb51d3b5e8 100644
--- /etc/one/vmm_exec/vmm_execrc
+++ /etc/one/vmm_exec/vmm_execrc
@@ -1,6 +1,6 @@
# Space separated list of VMM-TM pairs that support live disk snapshots. VMM
# and TM must be separated by '-'
-LIVE_DISK_SNAPSHOTS="kvm-qcow2 kvm-shared kvm-ceph kvm-ssh qemu-qcow2 qemu-shared qemu-ceph qemu-ssh"
+LIVE_DISK_SNAPSHOTS="kvm-qcow2 kvm-shared kvm-ceph kvm-vitastor kvm-ssh qemu-qcow2 qemu-shared qemu-ceph qemu-ssh"
# Space separated list VNM_MAD-ACTION pairs that run locally
VNMAD_LOCAL_ACTIONS="elastic-post elastic-clean"

View File

@ -1,97 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# clone fe:SOURCE host:remote_system_ds/disk.i size
# - fe is the front-end hostname
# - SOURCE is the path of the disk image in the form DS_BASE_PATH/disk
# - host is the target host to deploy the VM
# - remote_system_ds is the path for the system datastore in the host
SRC=$1
DST=$2
VM_ID=$3
DS_ID=$4
#--------------------------------------------------------------------------------
if [ -z "${ONE_LOCATION}" ]; then
TMCOMMON=/var/lib/one/remotes/tm/tm_common.sh
LIB_LOCATION=/usr/lib/one
else
TMCOMMON=$ONE_LOCATION/var/remotes/tm/tm_common.sh
LIB_LOCATION=$ONE_LOCATION/lib
fi
DRIVER_PATH=$(dirname $0)
source $TMCOMMON
#-------------------------------------------------------------------------------
# Compute the destination image name
#-------------------------------------------------------------------------------
DST_HOST=`arg_host $DST`
SRC_PATH=`arg_path $SRC`
DST_PATH=`arg_path $DST`
DST_DIR=`dirname $DST_PATH`
DISK_ID=$(echo $DST|awk -F. '{print $NF}')
VM_DST="${SRC_PATH}-${VM_ID}-${DISK_ID}"
DST_DS_ID=`echo $DST | sed s#//*#/#g | awk -F/ '{print $(NF-2)}'`
#-------------------------------------------------------------------------------
# Get Image information
#-------------------------------------------------------------------------------
XPATH="${DRIVER_PATH}/../../datastore/xpath.rb --stdin"
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(onevm show -x $VM_ID | $XPATH \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/VITASTOR_CONF \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/SIZE)
VITASTOR_CONF="${XPATH_ELEMENTS[j++]}"
SIZE="${XPATH_ELEMENTS[j++]}"
#-------------------------------------------------------------------------------
# Get Datastore information
#-------------------------------------------------------------------------------
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(onedatastore show -x $DST_DS_ID | $XPATH \
/DATASTORE/TEMPLATE/POOL_NAME)
POOL_NAME="${XPATH_ELEMENTS[j++]}"
disable_local_monitoring $DST_HOST $DST_DIR
#-------------------------------------------------------------------------------
# Clone the image
#-------------------------------------------------------------------------------
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
if [ -n "$POOL_NAME" ]; then
CLI="$CLI --pool ${POOL_NAME}"
fi
CLONE_CMD=$(cat <<EOF
$CLI create --parent $SRC_PATH --size ${SIZE}M $VM_DST
EOF
)
ssh_exec_and_log "$DST_HOST" "$CLONE_CMD" "Error cloning $SRC_PATH to $VM_DST in $DST_HOST"
exit 0
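To make the naming convention concrete (all values are hypothetical): cloning registered image one-7 as disk 0 of VM 42 with SIZE=10240 makes the bridge host run roughly
vitastor-cli create --parent one-7 --size 10240M one-7-42-0
so the VM-private copy one-7-42-0 is a copy-on-write child of the original image.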

View File

@ -1 +0,0 @@
../ceph/context

View File

@ -1,113 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# cpds host:remote_system_ds/disk.i fe:SOURCE snapid vmid dsid
# - fe is the front-end hostname
# - SOURCE is the path of the disk image in the form DS_BASE_PATH/disk
# - host is the target host to deploy the VM
# - remote_system_ds is the path for the system datastore in the host
# - snapid is the snapshot id. "-1" for none
SRC=$1
DST=$2
SNAP_ID=$3
VM_ID=$4
DS_ID=$5
#--------------------------------------------------------------------------------
if [ -z "${ONE_LOCATION}" ]; then
TMCOMMON=/var/lib/one/remotes/tm/tm_common.sh
LIB_LOCATION=/usr/lib/one
else
TMCOMMON=$ONE_LOCATION/var/remotes/tm/tm_common.sh
LIB_LOCATION=$ONE_LOCATION/lib
fi
DRIVER_PATH=$(dirname $0)
source $TMCOMMON
source ${DRIVER_PATH}/../../datastore/libfs.sh
source ${DRIVER_PATH}/../../etc/vmm/kvm/kvmrc
#-------------------------------------------------------------------------------
# Set dst path and dir
#-------------------------------------------------------------------------------
SRC_HOST=`arg_host $SRC`
SRC_PATH=`arg_path $SRC`
#-------------------------------------------------------------------------------
# Get Image information
#-------------------------------------------------------------------------------
DISK_ID=$(echo "$SRC_PATH" | $AWK -F. '{print $NF}')
XPATH="${DRIVER_PATH}/../../datastore/xpath.rb --stdin"
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(onevm show -x $VM_ID | $XPATH \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/SOURCE \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/CLONE \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/VITASTOR_CONF \
/VM/LCM_STATE)
SRC_IMAGE="${XPATH_ELEMENTS[j++]}"
CLONE="${XPATH_ELEMENTS[j++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[j++]}"
LCM_STATE="${XPATH_ELEMENTS[j++]}"
#-------------------------------------------------------------------------------
# Get Datastore information
#-------------------------------------------------------------------------------
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(onedatastore show -x $DS_ID | $XPATH \
/DATASTORE/TEMPLATE/POOL_NAME \
/DATASTORE/TEMPLATE/BRIDGE_LIST)
POOL_NAME="${XPATH_ELEMENTS[j++]}"
BRIDGE_LIST="${XPATH_ELEMENTS[j++]}"
#-------------------------------------------------------------------------------
# Copy Image back to the datastore
#-------------------------------------------------------------------------------
if [ "$CLONE" = "YES" ]; then
SRC_IMAGE="${SRC_IMAGE}-${VM_ID}-${DISK_ID}"
fi
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
if [ -n "$POOL_NAME" ]; then
CLI="$CLI --pool ${POOL_NAME}"
fi
# Undeployed VM state, do not use front-end, choose host from bridge_list
if [ "$LCM_STATE" = '67' ] || [ "$LCM_STATE" = '68' ]; then
if [ -n "$BRIDGE_LIST" ]; then
SRC_HOST=`get_destination_host`
fi
fi
if [ "$SNAP_ID" != "-1" ]; then
SRC_IMAGE=$SRC_IMAGE@$SNAP_ID
fi
COPY_CMD=$(cat <<EOF
$CLI dd iimg=$SRC_IMAGE oimg=$DST
EOF
)
ssh_exec_and_log "$SRC_HOST" "$COPY_CMD" "Error cloning $SRC_IMAGE to $DST in $SRC_HOST"

View File

@ -1,139 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# DELETE <host:remote_system_ds/disk.i|host:remote_system_ds/>
# - host is the target host to deploy the VM
# - remote_system_ds is the path for the system datastore in the host
DST=$1
VM_ID=$2
DS_ID=$3
#--------------------------------------------------------------------------------
if [ -z "${ONE_LOCATION}" ]; then
TMCOMMON=/var/lib/one/remotes/tm/tm_common.sh
LIB_LOCATION=/usr/lib/one
else
TMCOMMON=$ONE_LOCATION/var/remotes/tm/tm_common.sh
LIB_LOCATION=$ONE_LOCATION/lib
fi
DRIVER_PATH=$(dirname $0)
source $TMCOMMON
source ${DRIVER_PATH}/../../datastore/libfs.sh
#-------------------------------------------------------------------------------
# Process destination
#-------------------------------------------------------------------------------
DST_PATH=`arg_path $DST`
DST_HOST=`arg_host $DST`
XPATH="${DRIVER_PATH}/../../datastore/xpath.rb --stdin"
#-------------------------------------------------------------------------------
# Delete and exit if directory
#-------------------------------------------------------------------------------
if [ `is_disk $DST_PATH` -eq 0 ]; then
# Directory: delete checkpoint and directory
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(onedatastore show -x $DS_ID | $XPATH \
/DATASTORE/TEMPLATE/SOURCE \
/DATASTORE/TEMPLATE/CLONE \
/DATASTORE/TEMPLATE/VITASTOR_CONF \
/DATASTORE/TEMPLATE/IMAGE_PREFIX \
/DATASTORE/TEMPLATE/POOL_NAME)
SRC="${XPATH_ELEMENTS[j++]}"
CLONE="${XPATH_ELEMENTS[j++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[j++]}"
IMAGE_PREFIX="${XPATH_ELEMENTS[j++]:-one}"
POOL_NAME="${XPATH_ELEMENTS[j++]}"
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
SRC_CHECKPOINT="${IMAGE_PREFIX}-sys-${VM_ID}-checkpoint"
ssh_exec_and_log "$DST_HOST" "$CLI rm $SRC_CHECKPOINT 2>/dev/null || exit 0" \
"Error deleting $SRC_CHECKPOINT in $DST_HOST"
log "Deleting $DST_PATH"
ssh_exec_and_log "$DST_HOST" "rm -rf $DST_PATH" "Error deleting $DST_PATH"
exit 0
fi
#-------------------------------------------------------------------------------
# Get Image information
#-------------------------------------------------------------------------------
DISK_ID=$(echo "$DST_PATH" | $AWK -F. '{print $NF}')
# Reads the disk parameters -- taken from image datastore
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(onevm show -x $VM_ID | $XPATH \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/SOURCE \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/CLONE \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/VITASTOR_CONF \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/IMAGE_PREFIX \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/POOL_NAME)
SRC="${XPATH_ELEMENTS[j++]}"
CLONE="${XPATH_ELEMENTS[j++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[j++]}"
IMAGE_PREFIX="${XPATH_ELEMENTS[j++]:-one}"
POOL_NAME="${XPATH_ELEMENTS[j++]}"
if is_undeployed "$VM_ID" "$DST_HOST"; then
# get BRIDGE_LIST from datastore
XPATH="${DRIVER_PATH}/../../datastore/xpath.rb --stdin"
IFS= read -r -d '' BRIDGE_LIST < <(onedatastore show -x "$DS_ID" \
| $XPATH /DATASTORE/TEMPLATE/BRIDGE_LIST )
if [ -n "$BRIDGE_LIST" ]; then
DST_HOST=$(get_destination_host)
fi
fi
# No need to delete not cloned images
if [ "$CLONE" = "NO" ]; then
exit 0
fi
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
if [ -n "$SRC" ]; then
# cloned, so the name will be "one-<imageid>-<vmid>-<diskid>"
SRC_IMAGE="${SRC}-${VM_ID}-${DISK_ID}"
else
# volatile
SRC_IMAGE="${IMAGE_PREFIX}-sys-${VM_ID}-${DISK_ID}"
fi
# Delete the image
log "Deleting $DST_PATH"
DELETE_CMD=$(cat <<EOF
$CLI rm $SRC_IMAGE
EOF
)
ssh_exec_and_log "$DST_HOST" "$DELETE_CMD" "Error deleting $SRC_IMAGE in $DST_HOST"

View File

@ -1 +0,0 @@
../ceph/failmigrate

View File

@ -1,16 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# <CLONE|LN>(.tm_mad_system) tm_mad fe:SOURCE host:remote_system_ds/disk.i vmid dsid
# LN = Attach disk to a VM (Vitastor doesn't need to do anything in this case)
SRC=$1
DST=$2
VM_ID=$3
DS_ID=$4
exit 0

View File

@ -1,120 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# mkimage size format host:remote_system_ds/disk.i vmid dsid
# - size in MB of the image
# - format for the image
# - host is the target host to deploy the VM
# - remote_system_ds is the path for the system datastore in the host
# - vmid is the id of the VM
# - dsid is the target datastore (0 is the system datastore)
SIZE=$1
FORMAT=$2
DST=$3
VMID=$4
DSID=$5
#-------------------------------------------------------------------------------
if [ -z "${ONE_LOCATION}" ]; then
TMCOMMON=/var/lib/one/remotes/tm/tm_common.sh
LIB_LOCATION=/usr/lib/one
else
TMCOMMON=$ONE_LOCATION/var/remotes/tm/tm_common.sh
LIB_LOCATION=$ONE_LOCATION/lib
fi
DRIVER_PATH=$(dirname $0)
source $TMCOMMON
source ${DRIVER_PATH}/../../etc/datastore/datastore.conf
source ${DRIVER_PATH}/../../datastore/libfs.sh
#-------------------------------------------------------------------------------
# Set dst path and dir
#-------------------------------------------------------------------------------
DST_PATH=`arg_path $DST`
DST_HOST=`arg_host $DST`
DST_DIR=`dirname $DST_PATH`
DISK_ID=$(echo $DST|awk -F. '{print $NF}')
#-------------------------------------------------------------------------------
# Get Image information
#-------------------------------------------------------------------------------
XPATH="${DRIVER_PATH}/../../datastore/xpath.rb --stdin"
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(onevm show -x $VMID | $XPATH \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/VITASTOR_CONF \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/POOL_NAME \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/IMAGE_PREFIX \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/FS)
VITASTOR_CONF="${XPATH_ELEMENTS[j++]}"
POOL_NAME="${XPATH_ELEMENTS[j++]}"
IMAGE_PREFIX="${XPATH_ELEMENTS[j++]:-one}"
FS="${XPATH_ELEMENTS[j++]}"
CLI=
QEMU_ARG=""
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
QEMU_ARG=":config_path=${VITASTOR_CONF}"
fi
IMAGE_NAME="${IMAGE_PREFIX}-sys-${VMID}-${DISK_ID}"
ssh_make_path $DST_HOST $DST_DIR
set -e -o pipefail
# if the user requested swap or specified an FS, map the new Vitastor volume
# over NBD and format it in place after creating it
FS_OPTS=$(eval $(echo "echo \$FS_OPTS_$FS"))
MKIMAGE_CMD=$(cat <<EOF
set -e -o pipefail
export PATH=/usr/sbin:/sbin:\$PATH
vitastor-cli $CLI create --pool "${POOL_NAME}" "$IMAGE_NAME" --size "${SIZE}M"
EOF
)
if [ -n "$FS" -o "$FORMAT" = "swap" ]; then
MKFS_CMD=`mkfs_command '$NBD' raw "$SIZE" "$SUPPORTED_FS" "$FS" "$FS_OPTS" | grep -v $QEMU_IMG`
MKIMAGE_CMD=$(cat <<EOF
$MKIMAGE_CMD
NBD=\$(sudo vitastor-nbd $CLI map --image "$IMAGE_NAME")
trap "sudo vitastor-nbd $CLI unmap \$NBD" EXIT
$MKFS_CMD
EOF
)
fi
DELIMAGE_CMD=$(cat <<EOF
vitastor-cli $CLI rm "$IMAGE_NAME"
EOF
)
log "Making volatile disk of ${SIZE}M at $DST"
ssh_exec_and_log_no_error "$DST_HOST" "$MKIMAGE_CMD" "Error creating volatile disk.$DISK_ID ($IMAGE_NAME) in $DST_HOST in pool $POOL_NAME."
rc=$?
if [ $rc != 0 ]; then
ssh_exec_and_log_no_error "$DST_HOST" "$DELIMAGE_CMD" "Error removing image"
fi
exit $rc

View File

@ -1 +0,0 @@
../ceph/mkswap

View File

@ -1 +0,0 @@
../../datastore/vitastor/monitor

View File

@ -1 +0,0 @@
../ceph/mv

View File

@ -1,15 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# mvds host:remote_system_ds/disk.i fe:SOURCE vmid dsid
# - fe is the front-end hostname
# - SOURCE is the path of the disk image in the form DS_BASE_PATH/disk
# - host is the target host to deploy the VM
# - remote_system_ds is the path for the system datastore in the host
# - vmid is the id of the VM
# - dsid is the target datastore (0 is the system datastore)
exit 0

View File

@ -1 +0,0 @@
postbackup_live

View File

@ -1 +0,0 @@
../ceph/postbackup_live

View File

@ -1 +0,0 @@
../ceph/postmigrate

View File

@ -1,152 +0,0 @@
#!/usr/bin/env ruby
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
ONE_LOCATION = ENV['ONE_LOCATION']
LIVE = ENV['LIVE']
if !ONE_LOCATION
RUBY_LIB_LOCATION = '/usr/lib/one/ruby'
GEMS_LOCATION = '/usr/share/one/gems'
VMDIR = '/var/lib/one'
CONFIG_FILE = '/var/lib/one/config'
else
RUBY_LIB_LOCATION = ONE_LOCATION + '/lib/ruby'
GEMS_LOCATION = ONE_LOCATION + '/share/gems'
VMDIR = ONE_LOCATION + '/var'
CONFIG_FILE = ONE_LOCATION + '/var/config'
end
# %%RUBYGEMS_SETUP_BEGIN%%
if File.directory?(GEMS_LOCATION)
real_gems_path = File.realpath(GEMS_LOCATION)
if !defined?(Gem) || Gem.path != [real_gems_path]
$LOAD_PATH.reject! {|l| l =~ /vendor_ruby/ }
# Suppress warnings from Rubygems
# https://github.com/OpenNebula/one/issues/5379
begin
verb = $VERBOSE
$VERBOSE = nil
require 'rubygems'
Gem.use_paths(real_gems_path)
ensure
$VERBOSE = verb
end
end
end
# %%RUBYGEMS_SETUP_END%%
$LOAD_PATH << RUBY_LIB_LOCATION
require 'rexml/document'
require 'base64'
require_relative '../lib/tm_action'
require_relative '../lib/kvm'
require_relative '../lib/datastore'
if LIVE
# TODO: fsfreeze for each hypervisor based on VM_MAD
include TransferManager::KVM
end
#-------------------------------------------------------------------------------
# BACKUP tm_mad host:remote_dir DISK_ID:...:DISK_ID deploy_id bjid vmid dsid
#-------------------------------------------------------------------------------
TransferManager::Datastore.load_env
vm_xml = STDIN.read
dir = ARGV[0].split ':'
disks = ARGV[1].split ':'
deploy_id = ARGV[2]
_bjid = ARGV[3]
vmid = ARGV[4]
_dsid = ARGV[5]
rhost = dir[0]
rdir = dir[1]
xml_doc = REXML::Document.new(vm_xml)
vm = xml_doc.root
ds = TransferManager::Datastore.from_vm_backup_ds(:vm_xml => vm_xml)
base_path = ENV['BACKUP_BASE_PATH']
bck_dir = if base_path
"#{base_path}/#{vmid}/backup"
else
"#{rdir}/backup"
end
snap_cmd = ''
expo_cmd = ''
clup_cmd = ''
vm.elements.each 'TEMPLATE/DISK' do |d|
did = d.elements['DISK_ID'].text
next unless disks.include? did
src = d.elements['SOURCE'].text
clon = d.elements['CLONE'].text
src_image = if clon == 'NO' then src else "#{src}-#{vmid}-#{did}" end
cmd = 'vitastor-cli'
qemu_arg = ''
if d.elements['VITASTOR_CONF']
cmd = cmd + ' --config_path ' + d.elements['VITASTOR_CONF'].text
qemu_arg += 'config_path='+d.elements['VITASTOR_CONF'].text+':'
end
end
draw = "#{bck_dir}/disk.#{did}.raw"
ddst = "#{bck_dir}/disk.#{did}.0"
expo_cmd << ds.cmd_confinement("qemu-img convert -m 4 -O qcow2 'vitastor:#{qemu_arg}image=#{src_image}' #{ddst}\n", rdir)
clup_cmd << "rm -f #{draw}\n"
rescue StandardError => e
STDERR.puts "Missing configuration attributes in DISK: #{e.message}"
exit(1)
end
if LIVE
freeze, thaw = fsfreeze(vm, deploy_id)
else
freeze = thaw = ''
end
script = <<~EOS
set -ex -o pipefail
# Prepare backup folder
[ -d #{bck_dir} ] && rm -rf #{bck_dir}
mkdir -p #{bck_dir}
echo "#{Base64.encode64(vm_xml)}" > #{bck_dir}/vm.xml
#{freeze}
#{snap_cmd}
#{thaw}
#{expo_cmd}
#{clup_cmd}
EOS
rc = TransferManager::Action.ssh('prebackup_live',
:host => rhost,
:cmds => script,
:nostdout => false,
:nostderr => false
)
if rc.code != 0
STDERR.puts "Error preparing disk files: #{rc.stdout} #{rc.stderr}"
end
exit(rc.code)

View File

@ -1,8 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
export LIVE=1
`dirname $0`/prebackup "$@"

View File

@ -1 +0,0 @@
../ceph/premigrate

View File

@ -1,81 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# resize image size vmid
SRC=$1
SIZE=$2
VM_ID=$3
#--------------------------------------------------------------------------------
if [ -z "${ONE_LOCATION}" ]; then
TMCOMMON=/var/lib/one/remotes/tm/tm_common.sh
LIB_LOCATION=/usr/lib/one
else
TMCOMMON=$ONE_LOCATION/var/remotes/tm/tm_common.sh
LIB_LOCATION=$ONE_LOCATION/lib
fi
DRIVER_PATH=$(dirname $0)
source $TMCOMMON
#-------------------------------------------------------------------------------
# Set dst path and dir
#-------------------------------------------------------------------------------
SRC_HOST=`arg_host $SRC`
SRC_PATH=`arg_path $SRC`
#-------------------------------------------------------------------------------
# Get Image information
#-------------------------------------------------------------------------------
DISK_ID=$(echo "$SRC_PATH" | $AWK -F. '{print $NF}')
XPATH="${DRIVER_PATH}/../../datastore/xpath.rb --stdin"
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(onevm show -x $VM_ID | $XPATH \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/SOURCE \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/VITASTOR_CONF \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/IMAGE_PREFIX \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/PERSISTENT)
SRC_IMAGE="${XPATH_ELEMENTS[j++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[j++]}"
IMAGE_PREFIX="${XPATH_ELEMENTS[j++]:-one}"
PERSISTENT="${XPATH_ELEMENTS[j++]}"
if [ -n "${SRC_IMAGE}" ]; then
if [ "${PERSISTENT}" != 'YES' ]; then
SRC_IMAGE="${SRC_IMAGE}-${VM_ID}-${DISK_ID}"
fi
else
SRC_IMAGE="${IMAGE_PREFIX}-sys-${VM_ID}-${DISK_ID}"
fi
#-------------------------------------------------------------------------------
# Resize disk
#-------------------------------------------------------------------------------
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
RESIZE_CMD=$(cat <<EOF
$CLI modify --resize ${SIZE}M "$SRC_IMAGE"
EOF
)
ssh_exec_and_log "$SRC_HOST" "$RESIZE_CMD" "Error resizing disk $SRC_IMAGE"
exit 0

View File

@ -1,201 +0,0 @@
#!/usr/bin/env ruby
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
ONE_LOCATION = ENV['ONE_LOCATION']
if !ONE_LOCATION
RUBY_LIB_LOCATION = '/usr/lib/one/ruby'
GEMS_LOCATION = '/usr/share/one/gems'
VMDIR = '/var/lib/one'
CONFIG_FILE = '/var/lib/one/config'
else
RUBY_LIB_LOCATION = ONE_LOCATION + '/lib/ruby'
GEMS_LOCATION = ONE_LOCATION + '/share/gems'
VMDIR = ONE_LOCATION + '/var'
CONFIG_FILE = ONE_LOCATION + '/var/config'
end
# %%RUBYGEMS_SETUP_BEGIN%%
if File.directory?(GEMS_LOCATION)
real_gems_path = File.realpath(GEMS_LOCATION)
if !defined?(Gem) || Gem.path != [real_gems_path]
$LOAD_PATH.reject! {|l| l =~ /vendor_ruby/ }
# Suppress warnings from Rubygems
# https://github.com/OpenNebula/one/issues/5379
begin
verb = $VERBOSE
$VERBOSE = nil
require 'rubygems'
Gem.use_paths(real_gems_path)
ensure
$VERBOSE = verb
end
end
end
# %%RUBYGEMS_SETUP_END%%
$LOAD_PATH << RUBY_LIB_LOCATION
require 'rexml/document'
require 'json'
require 'securerandom'
require_relative '../lib/tm_action'
require_relative '../lib/datastore'
def get_vitastor_disks(vm_xml)
vm_xml = REXML::Document.new(vm_xml) if vm_xml.is_a?(String)
vm = vm_xml.root
vmid = vm.elements['VMID'].text
indexed_disks = []
vm.elements.each('DISK[TM_MAD="vitastor"]') do |d|
disk = new(vmid, d)
indexed_disks[disk.id] = disk
end
indexed_disks
end
#-------------------------------------------------------------------------------
# RESTORE vm_id img_id inc_id disk_id
#-------------------------------------------------------------------------------
_dir = ARGV[0].split ':'
vm_id = ARGV[1]
bk_img_id = ARGV[2].to_i
inc_id = ARGV[3]
disk_id = ARGV[4].to_i
begin
action = TransferManager::Action.new(:action_name => 'restore',
:vm_id => vm_id)
# --------------------------------------------------------------------------
# Image & Datastore information
# --------------------------------------------------------------------------
bk_img = OpenNebula::Image.new_with_id(bk_img_id, action.one)
rc = bk_img.info
raise rc.message.to_s if OpenNebula.is_error?(rc)
bk_ds = TransferManager::Datastore.from_image_ds(:image => bk_img,
:client => action.one)
# --------------------------------------------------------------------------
# Backup information
# sample output: {"0":"rsync://100//0:3ffce7/var/lib/one/datastores/100/1/3ffce7/disk.0.0"}
# --------------------------------------------------------------------------
xml_data = <<~EOS
#{action.vm.to_xml}
#{bk_img.to_xml}
EOS
rc = bk_ds.action("ls -i #{inc_id}", xml_data)
raise 'cannot list backup contents' unless rc.code == 0
disk_urls = JSON.parse(rc.stdout)
disk_urls = disk_urls.select {|id, _url| id.to_i == disk_id } if disk_id != -1
# --------------------------------------------------------------------------
# Restore disk_urls in Host VM folder
# --------------------------------------------------------------------------
vitastor_disks = get_vitastor_disks(action.vm.template_xml)
success_disks = []
info = {}
disk_urls.each do |id, url|
vitastor_disk = vitastor_disks[id.to_i]
randsuffix = SecureRandom.hex(5)
vitastor_one_ds = OpenNebula::Datastore.new_with_id(
action.vm["/VM/TEMPLATE/DISK[DISK_ID = #{id}]/DATASTORE_ID"].to_i, action.one
)
vitastor_ds = TransferManager::Datastore.new(:ds => vitastor_one_ds, :client => action.one)
src_image = vitastor_disk.elements['SOURCE'].text
disk_id = vitastor_disk.elements['DISK_ID'].text
if vitastor_disk.elements['CLONE'].text == 'YES'
src_image += '-'+vm_id+'-'+disk_id
end
cli = 'vitastor-cli'
config_path = vitastor_disk.elements['VITASTOR_CONF']
qemu_args = ''
if config_path
cli += ' --config_path "'+config_path.text+'"'
qemu_args += ':config_path='+config_path.text
end
info[vitastor_disk] = {
:br => vitastor_ds.pick_bridge,
:bak => "#{src_image}.backup.#{randsuffix}",
:old => "#{src_image}.old.#{randsuffix}",
:cli => cli,
:img => src_image,
}
upload_vitastor = <<~EOS
set -e
tmpimg="$(mktemp -t disk#{id}.XXXX)"
#{__dir__}/../../datastore/downloader.sh --nodecomp #{url} $tmpimg
#{cli} create -s $(qemu-img info --output json $tmpimg | jq -r '.["virtual-size"]') #{info[vitastor_disk][:bak]}
qemu-img convert -m 4 -O raw $tmpimg "vitastor:image=#{info[vitastor_disk][:bak]}#{qemu_args}"
rm -f $tmpimg
EOS
rc = action.ssh(:host => info[vitastor_disk][:br],
:cmds => upload_vitastor,
:forward => false,
:nostdout => false,
:nostderr => false)
break if rc.code != 0
success_disks << vitastor_disk
end
# Rollback and raise error if it was unable to backup all disks
if success_disks.length != disk_urls.length
success_disks.each do |vitastor_disk|
cleanup = <<~EOS
#{info[vitastor_disk][:cli]} rm #{info[vitastor_disk][:bak]}
EOS
action.ssh(:host => info[vitastor_disk][:br],
:cmds => cleanup,
:forward => false,
:nostdout => false,
:nostderr => false)
end
raise "error uploading backup disk to Vitastor (#{success_disks.length}/#{disk_urls.length})"
end
# --------------------------------------------------------------------------
# Replace VM disk_urls with backup copies (~prolog)
# --------------------------------------------------------------------------
success_disks.each do |vitastor_disk|
move = <<~EOS
set -e
#{info[vitastor_disk][:cli]} mv #{info[vitastor_disk][:img]} #{info[vitastor_disk][:old]}
#{info[vitastor_disk][:cli]} mv #{info[vitastor_disk][:bak]} #{info[vitastor_disk][:img]}
#{info[vitastor_disk][:cli]} rm --matching "#{info[vitastor_disk][:img]}@*"
#{info[vitastor_disk][:cli]} rm #{info[vitastor_disk][:old]}
EOS
rc = action.ssh(:host => info[vitastor_disk][:br],
:cmds => move,
:forward => false,
:nostdout => false,
:nostderr => false)
warn 'cannot restore disk backup' if rc.code != 0
end
rescue StandardError => e
STDERR.puts "Error restoring VM disks: #{e.message}"
exit(1)
end

View File

@ -1,78 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# snap_create host:parent_image snap_id vmid ds_id
SRC=$1
SNAP_ID=$2
VM_ID=$3
DS_ID=$4
#--------------------------------------------------------------------------------
if [ -z "${ONE_LOCATION}" ]; then
TMCOMMON=/var/lib/one/remotes/tm/tm_common.sh
LIB_LOCATION=/usr/lib/one
else
TMCOMMON=$ONE_LOCATION/var/remotes/tm/tm_common.sh
LIB_LOCATION=$ONE_LOCATION/lib
fi
DRIVER_PATH=$(dirname $0)
source $TMCOMMON
#-------------------------------------------------------------------------------
# Set dst path and dir
#-------------------------------------------------------------------------------
SRC_HOST=`arg_host $SRC`
SRC_PATH=`arg_path $SRC`
#-------------------------------------------------------------------------------
# Get Image information
#-------------------------------------------------------------------------------
DISK_ID=$(echo "$SRC_PATH" | $AWK -F. '{print $NF}')
XPATH="${DRIVER_PATH}/../../datastore/xpath.rb --stdin"
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(onevm show -x $VM_ID | $XPATH \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/SOURCE \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/CLONE \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/VITASTOR_CONF \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/TYPE )
SRC_IMAGE="${XPATH_ELEMENTS[j++]}"
CLONE="${XPATH_ELEMENTS[j++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[j++]}"
TYPE="${XPATH_ELEMENTS[j++]}"
if [ "$CLONE" = "YES" ]; then
SRC_IMAGE="${SRC_IMAGE}-${VM_ID}-${DISK_ID}"
fi
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
#-------------------------------------------------------------------------------
# Create snapshots
#-------------------------------------------------------------------------------
SNAP_CREATE_CMD=$(cat <<EOF
$CLI snap-create "$SRC_IMAGE@$SNAP_ID"
EOF
)
ssh_exec_and_log "$SRC_HOST" "$SNAP_CREATE_CMD" "Error creating snapshot $SRC_IMAGE@$SNAP_ID"
exit 0

View File

@ -1 +0,0 @@
snap_create

View File

@ -1,75 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# snap_delete host:parent_image snap_id vmid ds_id
SRC=$1
SNAP_ID=$2
VM_ID=$3
DS_ID=$4
# FIXME: copypaste below, down to "delete snapshot"
#--------------------------------------------------------------------------------
if [ -z "${ONE_LOCATION}" ]; then
TMCOMMON=/var/lib/one/remotes/tm/tm_common.sh
LIB_LOCATION=/usr/lib/one
else
TMCOMMON=$ONE_LOCATION/var/remotes/tm/tm_common.sh
LIB_LOCATION=$ONE_LOCATION/lib
fi
DRIVER_PATH=$(dirname $0)
source $TMCOMMON
#-------------------------------------------------------------------------------
# Set dst path and dir
#-------------------------------------------------------------------------------
SRC_HOST=`arg_host $SRC`
SRC_PATH=`arg_path $SRC`
#-------------------------------------------------------------------------------
# Get Image information
#-------------------------------------------------------------------------------
DISK_ID=$(echo "$SRC_PATH" | $AWK -F. '{print $NF}')
XPATH="${DRIVER_PATH}/../../datastore/xpath.rb --stdin"
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(onevm show -x $VM_ID | $XPATH \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/SOURCE \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/CLONE \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/VITASTOR_CONF )
SRC_IMAGE="${XPATH_ELEMENTS[j++]}"
CLONE="${XPATH_ELEMENTS[j++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[j++]}"
if [ "$CLONE" = "YES" ]; then
SRC_IMAGE="${SRC_IMAGE}-${VM_ID}-${DISK_ID}"
fi
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
#-------------------------------------------------------------------------------
# Delete snapshot
#-------------------------------------------------------------------------------
SNAP_DELETE_CMD=$(cat <<EOF
$CLI rm "$SRC_IMAGE@$SNAP_ID"
EOF
)
ssh_exec_and_log "$SRC_HOST" "$SNAP_DELETE_CMD" "Error deleting snapshot $SRC_IMAGE@$SNAP_ID"

View File

@ -1,79 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
# snap_revert host:parent_image snap_id vmid ds_id
SRC=$1
SNAP_ID=$2
VM_ID=$3
DS_ID=$4
#--------------------------------------------------------------------------------
if [ -z "${ONE_LOCATION}" ]; then
TMCOMMON=/var/lib/one/remotes/tm/tm_common.sh
LIB_LOCATION=/usr/lib/one
else
TMCOMMON=$ONE_LOCATION/var/remotes/tm/tm_common.sh
LIB_LOCATION=$ONE_LOCATION/lib
fi
DRIVER_PATH=$(dirname $0)
source $TMCOMMON
#-------------------------------------------------------------------------------
# Set dst path and dir
#-------------------------------------------------------------------------------
SRC_HOST=`arg_host $SRC`
SRC_PATH=`arg_path $SRC`
#-------------------------------------------------------------------------------
# Get Image information
#-------------------------------------------------------------------------------
DISK_ID=$(echo "$SRC_PATH" | $AWK -F. '{print $NF}')
XPATH="${DRIVER_PATH}/../../datastore/xpath.rb --stdin"
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(onevm show -x $VM_ID | $XPATH \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/SOURCE \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/CLONE \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/VITASTOR_CONF \
/VM/TEMPLATE/DISK[DISK_ID=$DISK_ID]/TYPE )
SRC_IMAGE="${XPATH_ELEMENTS[j++]}"
CLONE="${XPATH_ELEMENTS[j++]}"
VITASTOR_CONF="${XPATH_ELEMENTS[j++]}"
TYPE="${XPATH_ELEMENTS[j++]}"
if [ "$CLONE" = "YES" ]; then
SRC_IMAGE="${SRC_IMAGE}-${VM_ID}-${DISK_ID}"
fi
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
#-------------------------------------------------------------------------------
# Revert to snapshot (== remove current image and recreate it as a clone)
#-------------------------------------------------------------------------------
SNAP_REVERT_CMD=$(cat <<EOF
set -e
$CLI ls --json "$SRC_IMAGE@$SNAP_ID" | jq -s -e '[ .[][] | select(.name == "$SRC_IMAGE@$SNAP_ID") ] | length > 0'
$CLI rm "$SRC_IMAGE" || true
$CLI create --parent "$SRC_IMAGE@$SNAP_ID" "$SRC_IMAGE"
EOF
)
ssh_exec_and_log "$SRC_HOST" "$SNAP_REVERT_CMD" "Error reverting snapshot $SNAP_ID for $SRC_IMAGE"

View File

@ -1,15 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
DRIVER_PATH=$(dirname $0)
DEP_FILE=$1
DEP_FILE_LOCATION=$(dirname $DEP_FILE)
cat > $DEP_FILE
python3 $DRIVER_PATH/deploy_vitastor.py $DEP_FILE $DEP_FILE_LOCATION/vm.xml
cat $DEP_FILE | $DRIVER_PATH/deploy $@

View File

@ -1,58 +0,0 @@
#!/usr/bin/env python3
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
import base64
from sys import argv, stderr
from xml.etree import ElementTree as ET
dep_file = argv[1]
with open(dep_file, 'rb') as fd:
dep_txt = base64.b64decode(fd.read())
dep = ET.fromstring(dep_txt)
vm_file = argv[2]
with open(vm_file, 'rb') as fd:
vm_txt = base64.b64decode(fd.read())
vm = ET.fromstring(vm_txt)
ET.register_namespace('qemu', 'http://libvirt.org/schemas/domain/qemu/1.0')
ET.register_namespace('one', 'http://opennebula.org/xmlns/libvirt/1.0')
vm_id = vm.find('./ID').text
context_disk_id = vm.find('./TEMPLATE/CONTEXT/DISK_ID').text
changed = 0
txt = lambda x: '' if x is None else x.text
for disk in dep.findall('./devices/disk[@type="file"]'):
try:
disk_id = disk.find('./source').attrib['file'].split('.')[-1]
vm_disk = vm.find('./TEMPLATE/DISK[DISK_ID="{}"]'.format(disk_id))
if vm_disk is None:
continue
tm_mad = txt(vm_disk.find('./TM_MAD'))
if tm_mad != 'vitastor':
continue
src_image = txt(vm_disk.find('./SOURCE'))
clone = txt(vm_disk.find('./CLONE'))
vitastor_conf = txt(vm_disk.find('./VITASTOR_CONF'))
if clone == "YES":
src_image += "-"+vm_id+"-"+disk_id
# modify
changed = 1
disk.attrib['type'] = 'network'
disk.remove(disk.find('./source'))
src = ET.SubElement(disk, 'source')
src.attrib['protocol'] = 'vitastor'
src.attrib['name'] = src_image
if vitastor_conf:
# path to config should be added to /etc/apparmor.d/local/abstractions/libvirt-qemu
config = ET.SubElement(src, 'config')
config.text = vitastor_conf
except Exception as e:
print("Error: {}".format(e), file=stderr)
if changed:
ET.ElementTree(dep).write(dep_file)

View File

@ -1,39 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
DRIVER_PATH=$(dirname $0)
source $DRIVER_PATH/../../etc/vmm/kvm/kvmrc
source $DRIVER_PATH/../../scripts_common.sh
FILE=$1
HOST=$2
DEPLOY_ID=$3
VM_ID=$4
DS_ID=$5
XPATH="${DRIVER_PATH}/../../datastore/xpath.rb --stdin"
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(cat | $XPATH \
/VMM_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/VITASTOR_CONF \
/VMM_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/IMAGE_PREFIX)
VITASTOR_CONF="${XPATH_ELEMENTS[j++]}"
IMAGE_PREFIX="${XPATH_ELEMENTS[j++]:-one}"
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path $VITASTOR_CONF"
fi
SRC_IMAGE="${IMAGE_PREFIX}-sys-${VM_ID}-checkpoint"
exec_and_log "$CLI dd iimg=$SRC_IMAGE of=$FILE" "Error exporting checkpoint into from $SRC_IMAGE to $FILE"
exec_and_log "$CLI rm $SRC_IMAGE" "Error removing checkpoint $SRC_IMAGE"
"$DRIVER_PATH"/restore $@

View File

@ -1,48 +0,0 @@
#!/bin/bash
# Vitastor OpenNebula driver
# Copyright (c) Vitaliy Filippov, 2024+
# License: Apache-2.0 http://www.apache.org/licenses/LICENSE-2.0
DRIVER_PATH=$(dirname $0)
source $DRIVER_PATH/../../etc/vmm/kvm/kvmrc
source $DRIVER_PATH/../../scripts_common.sh
DEPLOY_ID=$1
FILE=$2
VM_ID=$4
DS_ID=$5
rm -f "$FILE"
"$DRIVER_PATH"/save $@
XPATH="${DRIVER_PATH}/../../datastore/xpath.rb --stdin"
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(cat | $XPATH \
/VMM_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/VITASTOR_CONF \
/VMM_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/IMAGE_PREFIX \
/VMM_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/POOL_NAME)
VITASTOR_CONF="${XPATH_ELEMENTS[j++]}"
IMAGE_PREFIX="${XPATH_ELEMENTS[j++]:-one}"
POOL_NAME="${XPATH_ELEMENTS[j++]}"
CLI=vitastor-cli
if [ -n "$VITASTOR_CONF" ]; then
CLI="$CLI --config_path ${VITASTOR_CONF}"
fi
if [ -n "$POOL_NAME" ]; then
CLI="$CLI --pool ${POOL_NAME}"
fi
DST_IMAGE="${IMAGE_PREFIX}-sys-${VM_ID}-checkpoint"
exec_and_log "$CLI dd if=$FILE oimg=$DST_IMAGE conv=trunc" "Error importing checkpoint into $DST_IMAGE"
exec_and_log "$RM -f $FILE" "Error removing checkpoint ($FILE)"
exit 0

View File

@ -1 +0,0 @@
oneadmin ALL=(ALL) NOPASSWD: /usr/bin/vitastor-nbd

View File

@ -1,10 +0,0 @@
NAME = "Vitastor Images"
DS_MAD = "vitastor"
TM_MAD = "vitastor"
TYPE = "IMAGE_DS"
DISK_TYPE = "file"
BRIDGE_LIST = "opennebula1"
POOL_NAME = "test1"
IMAGE_PREFIX = "one"
STAGING_DIR = "/var/tmp"
VITASTOR_CONF = "/etc/vitastor/vitastor.conf"

View File

@ -1,7 +0,0 @@
NAME = "Vitastor System"
TM_MAD = "vitastor"
TYPE = "SYSTEM_DS"
BRIDGE_LIST = "opennebula1"
POOL_NAME = "test1"
IMAGE_PREFIX = "one"
VITASTOR_CONF = "/etc/vitastor/vitastor.conf"
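A minimal usage sketch (the host, pool and file names above are only examples): save the two templates as vitastor-images.ds and vitastor-system.ds on the front-end and register them with
onedatastore create vitastor-images.ds
onedatastore create vitastor-system.ds
onedatastore list
after which images uploaded to the new IMAGE_DS datastore are stored as Vitastor volumes named with the configured IMAGE_PREFIX.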

View File

@ -43,5 +43,5 @@ RUN set -e; \
rpmbuild -ba vitastor.spec; \ rpmbuild -ba vitastor.spec; \
mkdir -p /root/packages/vitastor-el7; \ mkdir -p /root/packages/vitastor-el7; \
rm -rf /root/packages/vitastor-el7/*; \ rm -rf /root/packages/vitastor-el7/*; \
cp ~/rpmbuild/RPMS/*/*vitastor* /root/packages/vitastor-el7/; \ cp ~/rpmbuild/RPMS/*/vitastor* /root/packages/vitastor-el7/; \
cp ~/rpmbuild/SRPMS/vitastor* /root/packages/vitastor-el7/ cp ~/rpmbuild/SRPMS/vitastor* /root/packages/vitastor-el7/

View File

@ -89,20 +89,6 @@ Requires: fio = 3.7-1.el7
Vitastor fio drivers for benchmarking. Vitastor fio drivers for benchmarking.
%package -n vitastor-opennebula
Summary: Vitastor for OpenNebula
Group: Development/Libraries
Requires: vitastor-client
Requires: jq
Requires: python3-lxml
Requires: patch
Requires: qemu-kvm-block-vitastor
%description -n vitastor-opennebula
Vitastor storage plugin for OpenNebula.
%prep %prep
%setup -q %setup -q
@ -127,11 +113,6 @@ mkdir -p %buildroot/lib/systemd/system
cp mon/scripts/vitastor.target mon/scripts/vitastor-mon.service mon/scripts/vitastor-osd@.service %buildroot/lib/systemd/system cp mon/scripts/vitastor.target mon/scripts/vitastor-mon.service mon/scripts/vitastor-osd@.service %buildroot/lib/systemd/system
mkdir -p %buildroot/lib/udev/rules.d mkdir -p %buildroot/lib/udev/rules.d
cp mon/scripts/90-vitastor.rules %buildroot/lib/udev/rules.d cp mon/scripts/90-vitastor.rules %buildroot/lib/udev/rules.d
mkdir -p %buildroot/var/lib/one
cp -r opennebula/remotes %buildroot/var/lib/one
cp opennebula/install.sh %buildroot/var/lib/one/remotes/datastore/vitastor/
mkdir -p %buildroot/etc/
cp -r opennebula/sudoers.d %buildroot/etc/
%files %files
@ -192,14 +173,4 @@ chown vitastor:vitastor /var/lib/vitastor
%_libdir/libfio_vitastor_sec.so %_libdir/libfio_vitastor_sec.so
%files -n vitastor-opennebula
/var/lib/one
/etc/sudoers.d/opennebula-vitastor
%triggerin -n vitastor-opennebula -- opennebula
[ $2 = 0 ] || exit 0
/var/lib/one/remotes/datastore/vitastor/install.sh
%changelog %changelog

View File

@ -42,5 +42,5 @@ RUN set -e; \
rpmbuild -ba vitastor.spec; \ rpmbuild -ba vitastor.spec; \
mkdir -p /root/packages/vitastor-el8; \ mkdir -p /root/packages/vitastor-el8; \
rm -rf /root/packages/vitastor-el8/*; \ rm -rf /root/packages/vitastor-el8/*; \
cp ~/rpmbuild/RPMS/*/*vitastor* /root/packages/vitastor-el8/; \ cp ~/rpmbuild/RPMS/*/vitastor* /root/packages/vitastor-el8/; \
cp ~/rpmbuild/SRPMS/vitastor* /root/packages/vitastor-el8/ cp ~/rpmbuild/SRPMS/vitastor* /root/packages/vitastor-el8/

View File

@ -87,20 +87,6 @@ Requires: fio = 3.7-3.el8
Vitastor fio drivers for benchmarking. Vitastor fio drivers for benchmarking.
%package -n vitastor-opennebula
Summary: Vitastor for OpenNebula
Group: Development/Libraries
Requires: vitastor-client
Requires: jq
Requires: python3-lxml
Requires: patch
Requires: qemu-kvm-block-vitastor
%description -n vitastor-opennebula
Vitastor storage plugin for OpenNebula.
%prep %prep
%setup -q %setup -q
@ -124,11 +110,6 @@ mkdir -p %buildroot/lib/systemd/system
cp mon/scripts/vitastor.target mon/scripts/vitastor-mon.service mon/scripts/vitastor-osd@.service %buildroot/lib/systemd/system cp mon/scripts/vitastor.target mon/scripts/vitastor-mon.service mon/scripts/vitastor-osd@.service %buildroot/lib/systemd/system
mkdir -p %buildroot/lib/udev/rules.d mkdir -p %buildroot/lib/udev/rules.d
cp mon/scripts/90-vitastor.rules %buildroot/lib/udev/rules.d cp mon/scripts/90-vitastor.rules %buildroot/lib/udev/rules.d
mkdir -p %buildroot/var/lib/one
cp -r opennebula/remotes %buildroot/var/lib/one
cp opennebula/install.sh %buildroot/var/lib/one/remotes/datastore/vitastor/
mkdir -p %buildroot/etc/
cp -r opennebula/sudoers.d %buildroot/etc/
%files %files
@ -189,14 +170,4 @@ chown vitastor:vitastor /var/lib/vitastor
%_libdir/libfio_vitastor_sec.so %_libdir/libfio_vitastor_sec.so
%files -n vitastor-opennebula
/var/lib/one
/etc/sudoers.d/opennebula-vitastor
%triggerin -n vitastor-opennebula -- opennebula
[ $2 = 0 ] || exit 0
/var/lib/one/remotes/datastore/vitastor/install.sh
%changelog %changelog

View File

@ -25,5 +25,5 @@ RUN set -e; \
rpmbuild -ba vitastor.spec; \ rpmbuild -ba vitastor.spec; \
mkdir -p /root/packages/vitastor-el9; \ mkdir -p /root/packages/vitastor-el9; \
rm -rf /root/packages/vitastor-el9/*; \ rm -rf /root/packages/vitastor-el9/*; \
cp ~/rpmbuild/RPMS/*/*vitastor* /root/packages/vitastor-el9/; \ cp ~/rpmbuild/RPMS/*/vitastor* /root/packages/vitastor-el9/; \
cp ~/rpmbuild/SRPMS/vitastor* /root/packages/vitastor-el9/ cp ~/rpmbuild/SRPMS/vitastor* /root/packages/vitastor-el9/

View File

@ -81,20 +81,6 @@ Requires: fio = 3.27-8.el9
Vitastor fio drivers for benchmarking. Vitastor fio drivers for benchmarking.
%package -n vitastor-opennebula
Summary: Vitastor for OpenNebula
Group: Development/Libraries
Requires: vitastor-client
Requires: jq
Requires: python3-lxml
Requires: patch
Requires: qemu-kvm-block-vitastor
%description -n vitastor-opennebula
Vitastor storage plugin for OpenNebula.
%prep %prep
%setup -q %setup -q
@ -117,11 +103,6 @@ mkdir -p %buildroot/lib/systemd/system
cp mon/scripts/vitastor.target mon/scripts/vitastor-mon.service mon/scripts/vitastor-osd@.service %buildroot/lib/systemd/system cp mon/scripts/vitastor.target mon/scripts/vitastor-mon.service mon/scripts/vitastor-osd@.service %buildroot/lib/systemd/system
mkdir -p %buildroot/lib/udev/rules.d mkdir -p %buildroot/lib/udev/rules.d
cp mon/scripts/90-vitastor.rules %buildroot/lib/udev/rules.d cp mon/scripts/90-vitastor.rules %buildroot/lib/udev/rules.d
mkdir -p %buildroot/var/lib/one
cp -r opennebula/remotes %buildroot/var/lib/one
cp opennebula/install.sh %buildroot/var/lib/one/remotes/datastore/vitastor/
mkdir -p %buildroot/etc/
cp -r opennebula/sudoers.d %buildroot/etc/
%files %files
@ -182,14 +163,4 @@ chown vitastor:vitastor /var/lib/vitastor
%_libdir/libfio_vitastor_sec.so %_libdir/libfio_vitastor_sec.so
%files -n vitastor-opennebula
/var/lib/one
/etc/sudoers.d/opennebula-vitastor
%triggerin -n vitastor-opennebula -- opennebula
[ $2 = 0 ] || exit 0
/var/lib/one/remotes/datastore/vitastor/install.sh
%changelog %changelog

View File

@ -58,7 +58,6 @@ cluster_client_t::cluster_client_t(ring_loop_t *ringloop, timerfd_manager_t *tfd
st_cli.on_reload_hook = [this]() { st_cli.load_global_config(); }; st_cli.on_reload_hook = [this]() { st_cli.load_global_config(); };
st_cli.parse_config(config); st_cli.parse_config(config);
st_cli.infinite_start = false;
st_cli.load_global_config(); st_cli.load_global_config();
scrap_buffer_size = SCRAP_BUFFER_SIZE; scrap_buffer_size = SCRAP_BUFFER_SIZE;
@@ -1077,7 +1076,7 @@ bool cluster_client_t::try_send(cluster_op_t *op, int i)
pool_cfg.scheme == POOL_SCHEME_REPLICATED ? 1 : pool_cfg.pg_size-pool_cfg.parity_chunks pool_cfg.scheme == POOL_SCHEME_REPLICATED ? 1 : pool_cfg.pg_size-pool_cfg.parity_chunks
); );
uint64_t meta_rev = 0; uint64_t meta_rev = 0;
if (op->opcode != OSD_OP_READ_BITMAP && op->opcode != OSD_OP_DELETE) if (op->opcode != OSD_OP_READ_BITMAP && op->opcode != OSD_OP_READ_CHAIN_BITMAP && op->opcode != OSD_OP_DELETE)
{ {
auto ino_it = st_cli.inode_config.find(op->inode); auto ino_it = st_cli.inode_config.find(op->inode);
if (ino_it != st_cli.inode_config.end()) if (ino_it != st_cli.inode_config.end())

View File

@ -121,7 +121,6 @@ void etcd_state_client_t::etcd_call(std::string api, json11::Json payload, int t
"Connection: keep-alive\r\n" "Connection: keep-alive\r\n"
"Keep-Alive: timeout="+std::to_string(etcd_keepalive_timeout)+"\r\n" "Keep-Alive: timeout="+std::to_string(etcd_keepalive_timeout)+"\r\n"
"\r\n"+req; "\r\n"+req;
retries--;
auto cb = [this, api, payload, timeout, retries, interval, callback, auto cb = [this, api, payload, timeout, retries, interval, callback,
cur_addr = selected_etcd_address](const http_response_t *response) cur_addr = selected_etcd_address](const http_response_t *response)
{ {
@@ -145,11 +144,11 @@ void etcd_state_client_t::etcd_call(std::string api, json11::Json payload, int t
{ {
tfd->set_timer(interval, false, [this, api, payload, timeout, retries, interval, callback](int) tfd->set_timer(interval, false, [this, api, payload, timeout, retries, interval, callback](int)
{ {
etcd_call(api, payload, timeout, retries, interval, callback); etcd_call(api, payload, timeout, retries-1, interval, callback);
}); });
} }
else else
etcd_call(api, payload, timeout, retries, interval, callback); etcd_call(api, payload, timeout, retries-1, interval, callback);
} }
else else
callback(err, data); callback(err, data);
@@ -559,22 +558,15 @@ void etcd_state_client_t::load_global_config()
{ {
etcd_call("/kv/range", json11::Json::object { etcd_call("/kv/range", json11::Json::object {
{ "key", base64_encode(etcd_prefix+"/config/global") } { "key", base64_encode(etcd_prefix+"/config/global") }
}, etcd_quick_timeout, max_etcd_attempts, 0, [this](std::string err, json11::Json data) }, etcd_slow_timeout, max_etcd_attempts, 0, [this](std::string err, json11::Json data)
{ {
if (err != "") if (err != "")
{ {
fprintf(stderr, "Error reading configuration from etcd: %s\n", err.c_str()); fprintf(stderr, "Error reading OSD configuration from etcd: %s\n", err.c_str());
if (infinite_start)
{
tfd->set_timer(etcd_slow_timeout, false, [this](int timer_id) tfd->set_timer(etcd_slow_timeout, false, [this](int timer_id)
{ {
load_global_config(); load_global_config();
}); });
}
else
{
exit(1);
}
return; return;
} }
json11::Json::object global_config; json11::Json::object global_config;
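A note on the retry change above: when the retry counter is captured by value into the completion lambda, decrementing it before the lambda is created (the removed `retries--`) and passing `retries-1` at each recursive call are two ways to express the same countdown, but the second keeps the remaining-attempts arithmetic visible at the call site. A minimal self-contained sketch of that second pattern, with a made-up `try_request`/`request_with_retries` pair standing in for the real etcd_call and timer machinery:

#include <cstdio>
#include <functional>

// Hypothetical stand-in for a request that may fail; not part of Vitastor.
static bool try_request(int attempt)
{
    printf("attempt %d\n", attempt);
    return attempt >= 3; // pretend the 3rd attempt succeeds
}

// Retry by passing the decremented counter into the next call,
// mirroring the etcd_call(..., retries-1, ...) style above.
static void request_with_retries(int attempt, int retries, std::function<void(bool)> done)
{
    if (try_request(attempt))
        done(true);
    else if (retries > 0)
        request_with_retries(attempt+1, retries-1, done);
    else
        done(false);
}

int main()
{
    request_with_retries(1, 4, [](bool ok) { printf(ok ? "ok\n" : "failed\n"); });
}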

View File

@@ -106,7 +106,6 @@ public:
int max_etcd_attempts = 5; int max_etcd_attempts = 5;
int etcd_quick_timeout = 1000; int etcd_quick_timeout = 1000;
int etcd_slow_timeout = 5000; int etcd_slow_timeout = 5000;
bool infinite_start = true;
uint64_t global_block_size = DEFAULT_BLOCK_SIZE; uint64_t global_block_size = DEFAULT_BLOCK_SIZE;
uint32_t global_bitmap_granularity = DEFAULT_BITMAP_GRANULARITY; uint32_t global_bitmap_granularity = DEFAULT_BITMAP_GRANULARITY;
uint32_t global_immediate_commit = IMMEDIATE_NONE; uint32_t global_immediate_commit = IMMEDIATE_NONE;

View File

@@ -626,6 +626,7 @@ help:
} }
else else
{ {
printf("%d %d\n", r, errno);
perror("run_nbd"); perror("run_nbd");
exit(1); exit(1);
} }

View File

@@ -11,7 +11,6 @@ add_library(vitastor_cli STATIC
cli_fix.cpp cli_fix.cpp
cli_ls.cpp cli_ls.cpp
cli_create.cpp cli_create.cpp
cli_dd.cpp
cli_modify.cpp cli_modify.cpp
cli_modify_osd.cpp cli_modify_osd.cpp
cli_osd_tree.cpp cli_osd_tree.cpp
@@ -19,7 +18,6 @@ add_library(vitastor_cli STATIC
cli_flatten.cpp cli_flatten.cpp
cli_merge.cpp cli_merge.cpp
cli_rm_data.cpp cli_rm_data.cpp
cli_rm_wildcard.cpp
cli_rm.cpp cli_rm.cpp
cli_rm_osd.cpp cli_rm_osd.cpp
cli_pool_cfg.cpp cli_pool_cfg.cpp

View File

@@ -30,7 +30,6 @@ static const char* help_text =
"\n" "\n"
"vitastor-cli ls [-l] [-p POOL] [--sort FIELD] [-r] [-n N] [<glob> ...]\n" "vitastor-cli ls [-l] [-p POOL] [--sort FIELD] [-r] [-n N] [<glob> ...]\n"
" List images (only matching <glob> patterns if passed).\n" " List images (only matching <glob> patterns if passed).\n"
" --exact Do not match glob patterns as names, select only exact name matches.\n"
" -p|--pool POOL Filter images by pool ID or name\n" " -p|--pool POOL Filter images by pool ID or name\n"
" -l|--long Also report allocated size and I/O statistics\n" " -l|--long Also report allocated size and I/O statistics\n"
" --del Also include delete operation statistics\n" " --del Also include delete operation statistics\n"
@@ -54,44 +53,15 @@ static const char* help_text =
" -f|--force Proceed with shrinking or setting readwrite flag even if the image has children.\n" " -f|--force Proceed with shrinking or setting readwrite flag even if the image has children.\n"
" --down-ok Proceed with shrinking even if some data will be left on unavailable OSDs.\n" " --down-ok Proceed with shrinking even if some data will be left on unavailable OSDs.\n"
"\n" "\n"
"vitastor-cli rm <from> [<to>]\n" "vitastor-cli rm <from> [<to>] [--writers-stopped] [--down-ok]\n"
"vitastor-cli rm (--exact|--matching) <glob> ...\n" " Remove <from> or all layers between <from> and <to> (<to> must be a child of <from>),\n"
" Remove layer(s) and rebase all their children accordingly.\n" " rebasing all their children accordingly. --writers-stopped allows merging to be a bit\n"
" In the first form, remove <from> or layers between <from> and its child <to>.\n" " more effective in case of a single 'slim' read-write child and 'fat' removed parent:\n"
" In the second form, remove all images with exact or pattern-matched names.\n" " the child is merged into parent and parent is renamed to child in that case.\n"
" --writers-stopped allows optimised removal in case of a single 'slim' read-write\n" " In other cases parent layers are always merged into children.\n"
" child and 'fat' removed parent: the child is merged into parent and parent is renamed\n" " Other options:\n"
" to child in that case. In other cases parent layers are always merged into children.\n"
" --exact Remove multiple images with names matching given glob patterns.\n"
" --matching Remove multiple images with given names\n"
" --writers-stopped Allow renaming inodes over their read/write children.\n"
" --down-ok Continue deletion/merging even if some data will be left on unavailable OSDs.\n" " --down-ok Continue deletion/merging even if some data will be left on unavailable OSDs.\n"
"\n" "\n"
"vitastor-cli dd [iimg=<image> | if=<file>] [oimg=<image> | of=<file>] [bs=1M]\n"
" [count=N] [seek/oseek=N] [skip/iseek=M] [iodepth=N] [status=progress]\n"
" [conv=nocreat,noerror,nofsync,trunc,nosparse] [iflag=direct] [oflag=direct,append]\n"
" Copy data between Vitastor images, files and pipes.\n"
" Options can be specified in classic dd style (key=value) or like usual (--key value).\n"
" iimg=<image> Copy from Vitastor image <image>\n"
" if=<file> Copy from file <file>\n"
" oimg=<image> Copy to Vitastor image <image>\n"
" of=<file> Copy to file <file>\n"
" bs=1M Set copy block size\n"
" count=N Copy only N input blocks. If N ends in B it counts bytes, not blocks\n"
" seek/oseek=N Skip N output blocks. If N ends in B it counts bytes, not blocks\n"
" skip/iseek=N Skip N input blocks. If N ends in B it counts bytes, not blocks\n"
" iodepth=N Send N reads or writes in parallel (default 4)\n"
" status=LEVEL The LEVEL of information to print to stderr: none/noxfer/progress\n"
" size=N Specify size for the created output file/image (defaults to input size)\n"
" iflag=direct For files only: use direct I/O\n"
" oflag=direct For files only: use direct I/O\n"
" oflag=append For files only: append to output file\n"
" conv=nocreat Do not create output file/image\n"
" conv=trunc For files only: truncate output file\n"
" conv=noerror Continue read after errors\n"
" conv=nofsync Do not call fsync before finishing (default behaviour is fsync)\n"
" conv=nosparse Write all output blocks including all-zero blocks\n"
"\n"
"vitastor-cli flatten <layer>\n" "vitastor-cli flatten <layer>\n"
" Flatten a layer, i.e. merge data and detach it from parents.\n" " Flatten a layer, i.e. merge data and detach it from parents.\n"
"\n" "\n"
@@ -281,7 +251,6 @@ static json11::Json::object parse_args(int narg, const char *args[])
!strcmp(opt, "down-ok") || !strcmp(opt, "down_ok") || !strcmp(opt, "down-ok") || !strcmp(opt, "down_ok") ||
!strcmp(opt, "dry-run") || !strcmp(opt, "dry_run") || !strcmp(opt, "dry-run") || !strcmp(opt, "dry_run") ||
!strcmp(opt, "help") || !strcmp(opt, "all") || !strcmp(opt, "help") || !strcmp(opt, "all") ||
!strcmp(opt, "exact") || !strcmp(opt, "matching") ||
!strcmp(opt, "writers-stopped") || !strcmp(opt, "writers_stopped")) !strcmp(opt, "writers-stopped") || !strcmp(opt, "writers_stopped"))
{ {
cfg[opt] = "1"; cfg[opt] = "1";
@@ -413,20 +382,6 @@ static int run(cli_tool_t *p, json11::Json::object cfg)
} }
action_cb = p->start_flatten(cfg); action_cb = p->start_flatten(cfg);
} }
else if (cmd[0] == "dd")
{
// Read or write to/from cluster
for (int i = 0; i < cmd.size(); i++)
{
auto arg = cmd[i].string_value();
ssize_t p = arg.find("=");
if (p != std::string::npos)
{
cfg[arg.substr(0, p)] = arg.substr(p+1);
}
}
action_cb = p->start_dd(cfg);
}
else if (cmd[0] == "rm") else if (cmd[0] == "rm")
{ {
// Remove multiple snapshots and rebase their children // Remove multiple snapshots and rebase their children
@@ -530,6 +485,8 @@ static int run(cli_tool_t *p, json11::Json::object cfg)
p->ringloop = new ring_loop_t(RINGLOOP_DEFAULT_SIZE); p->ringloop = new ring_loop_t(RINGLOOP_DEFAULT_SIZE);
p->epmgr = new epoll_manager_t(p->ringloop); p->epmgr = new epoll_manager_t(p->ringloop);
p->cli = new cluster_client_t(p->ringloop, p->epmgr->tfd, cfg_j); p->cli = new cluster_client_t(p->ringloop, p->epmgr->tfd, cfg_j);
// Smaller timeout by default for more interactiveness
p->cli->st_cli.etcd_slow_timeout = p->cli->st_cli.etcd_quick_timeout;
p->loop_and_wait(action_cb, [&](const cli_result_t & r) p->loop_and_wait(action_cb, [&](const cli_result_t & r)
{ {
result = r; result = r;
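For reference, the removed `dd` branch above folds classic dd-style `key=value` arguments into the JSON config before starting the copy. A standalone sketch of that splitting step (a simplified illustration using std::map, not the actual cli.cpp code):

#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Split "bs=1M"-style tokens into key/value pairs; tokens without '=' are ignored here.
static std::map<std::string, std::string> parse_dd_style(const std::vector<std::string> & args)
{
    std::map<std::string, std::string> cfg;
    for (auto & arg: args)
    {
        size_t p = arg.find('=');
        if (p != std::string::npos)
            cfg[arg.substr(0, p)] = arg.substr(p+1);
    }
    return cfg;
}

int main()
{
    auto cfg = parse_dd_style({ "iimg=testimg", "bs=1M", "count=128" });
    for (auto & kv: cfg)
        printf("%s = %s\n", kv.first.c_str(), kv.second.c_str());
}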

View File

@@ -75,9 +75,7 @@ public:
std::function<bool(cli_result_t &)> start_rm(json11::Json); std::function<bool(cli_result_t &)> start_rm(json11::Json);
std::function<bool(cli_result_t &)> start_rm_data(json11::Json); std::function<bool(cli_result_t &)> start_rm_data(json11::Json);
std::function<bool(cli_result_t &)> start_rm_osd(json11::Json); std::function<bool(cli_result_t &)> start_rm_osd(json11::Json);
std::function<bool(cli_result_t &)> start_rm_wildcard(json11::Json);
std::function<bool(cli_result_t &)> start_status(json11::Json); std::function<bool(cli_result_t &)> start_status(json11::Json);
std::function<bool(cli_result_t &)> start_dd(json11::Json);
// Should be called like loop_and_wait(start_status(), <completion callback>) // Should be called like loop_and_wait(start_status(), <completion callback>)
void loop_and_wait(std::function<bool(cli_result_t &)> loop_cb, std::function<void(const cli_result_t &)> complete_cb); void loop_and_wait(std::function<bool(cli_result_t &)> loop_cb, std::function<void(const cli_result_t &)> complete_cb);

View File

@@ -26,7 +26,7 @@ struct image_creator_t
std::string new_pool_name; std::string new_pool_name;
std::string image_name, new_snap, new_parent; std::string image_name, new_snap, new_parent;
json11::Json new_meta; json11::Json new_meta;
uint64_t size = 0; uint64_t size;
bool force = false; bool force = false;
bool force_size = false; bool force_size = false;
@@ -554,10 +554,10 @@ std::function<bool(cli_result_t &)> cli_tool_t::start_create(json11::Json cfg)
image_creator->new_snap = cfg["snapshot"].string_value(); image_creator->new_snap = cfg["snapshot"].string_value();
} }
image_creator->new_parent = cfg["parent"].string_value(); image_creator->new_parent = cfg["parent"].string_value();
if (!cfg["size"].is_null()) if (cfg["size"].string_value() != "")
{ {
bool ok; bool ok;
image_creator->size = parse_size(cfg["size"].as_string(), &ok); image_creator->size = parse_size(cfg["size"].string_value(), &ok);
if (!ok) if (!ok)
{ {
return [size = cfg["size"].string_value()](cli_result_t & result) return [size = cfg["size"].string_value()](cli_result_t & result)

View File

@@ -1,968 +0,0 @@
// Copyright (c) Vitaliy Filippov, 2019+
// License: VNPL-1.1 (see README.md for details)
#include "cli.h"
#include "cluster_client.h"
#include "str_util.h"
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <algorithm>
// Copy data between Vitastor images, files and pipes
// A showpiece implementation of dd :-) with iodepth, asynchrony, pipe support and so on
struct dd_buf_t
{
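// offset is the position of this block in the copied stream, max is the allocated buffer size;
// len counts bytes filled so far on the read side and is reused to count bytes flushed on the write side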
void *buf = NULL;
uint64_t offset = 0, len = 0, max = 0;
dd_buf_t(uint64_t offset, uint64_t max)
{
this->offset = offset;
this->max = max;
this->buf = malloc_or_die(max);
}
~dd_buf_t()
{
free(this->buf);
this->buf = NULL;
}
};
struct dd_in_info_t
{
// in
std::string iimg, ifile;
bool in_direct = false;
bool detect_size = true;
// out
cli_result_t result;
inode_watch_t *iwatch = NULL;
int ifd = -1;
uint64_t in_size = 0;
uint32_t in_granularity = 1;
bool in_seekable = false;
void open_input(cli_tool_t *parent)
{
in_seekable = true;
if (iimg != "")
{
iwatch = parent->cli->st_cli.watch_inode(iimg);
if (!iwatch->cfg.num)
{
result = (cli_result_t){ .err = ENOENT, .text = "Image "+iimg+" does not exist" };
parent->cli->st_cli.close_watch(iwatch);
iwatch = NULL;
return;
}
auto pool_it = parent->cli->st_cli.pool_config.find(INODE_POOL(iwatch->cfg.num));
if (pool_it == parent->cli->st_cli.pool_config.end())
{
result = (cli_result_t){ .err = ENOENT, .text = "Pool of image "+iimg+" does not exist" };
parent->cli->st_cli.close_watch(iwatch);
iwatch = NULL;
return;
}
in_granularity = pool_it->second.bitmap_granularity;
if (detect_size)
{
in_size = iwatch->cfg.size;
}
}
else if (ifile != "")
{
ifd = open(ifile.c_str(), (in_direct ? O_DIRECT : 0) | O_RDONLY);
if (ifd < 0)
{
result = (cli_result_t){ .err = errno, .text = "Failed to open "+ifile+": "+std::string(strerror(errno)) };
return;
}
if (detect_size)
{
struct stat st;
if (fstat(ifd, &st) < 0)
{
result = (cli_result_t){ .err = errno, .text = "Failed to stat "+ifile+": "+std::string(strerror(errno)) };
close(ifd);
ifd = -1;
return;
}
if (S_ISREG(st.st_mode))
{
in_size = st.st_size;
}
else if (S_ISBLK(st.st_mode))
{
if (ioctl(ifd, BLKGETSIZE64, &in_size) < 0)
{
result = (cli_result_t){ .err = errno, .text = "Failed to get "+ifile+" size: "+std::string(strerror(errno)) };
close(ifd);
ifd = -1;
return;
}
}
}
if (in_direct)
{
in_granularity = 512;
}
if (lseek(ifd, 1, SEEK_SET) == (off_t)-1)
{
in_seekable = false;
}
else
{
lseek(ifd, 0, SEEK_SET);
}
}
else
{
ifd = 0;
in_seekable = false;
}
}
void close_input(cli_tool_t *parent)
{
if (iimg != "")
{
parent->cli->st_cli.close_watch(iwatch);
iwatch = NULL;
}
else if (ifile != "")
{
close(ifd);
ifd = -1;
}
}
};
struct dd_out_info_t
{
std::string oimg, ofile;
std::string out_pool;
bool out_direct = false;
bool out_create = true;
bool out_trunc = false;
bool out_append = false;
bool end_fsync = true;
uint64_t out_size = 0;
cli_result_t result;
bool old_progress = false;
inode_watch_t *owatch = NULL;
int ofd = -1;
uint32_t out_granularity = 1;
bool out_seekable = false;
std::function<bool(cli_result_t &)> sub_cb;
pool_config_t *find_pool(cli_tool_t *parent, const std::string & name)
{
if (name == "" && parent->cli->st_cli.pool_config.size() == 1)
{
return &parent->cli->st_cli.pool_config.begin()->second;
}
for (auto & pp: parent->cli->st_cli.pool_config)
{
if (pp.second.name == name)
{
return &pp.second;
}
}
return NULL;
}
bool open_output(cli_tool_t *parent, int & state, int base_state)
{
if (state == base_state)
goto resume_1;
else if (state == base_state+1)
goto resume_2;
if (oimg != "")
{
out_seekable = true;
owatch = parent->cli->st_cli.watch_inode(oimg);
if (owatch->cfg.num)
{
auto pool_it = parent->cli->st_cli.pool_config.find(INODE_POOL(owatch->cfg.num));
if (pool_it == parent->cli->st_cli.pool_config.end())
{
result = (cli_result_t){ .err = ENOENT, .text = "Pool of image "+oimg+" does not exist" };
parent->cli->st_cli.close_watch(owatch);
owatch = NULL;
return true;
}
out_granularity = pool_it->second.bitmap_granularity;
}
else
{
auto pool_cfg = find_pool(parent, out_pool);
if (pool_cfg)
{
out_granularity = pool_cfg->bitmap_granularity;
}
else
{
result = (cli_result_t){ .err = ENOENT, .text = "Pool to create output image "+oimg+" is not specified" };
parent->cli->st_cli.close_watch(owatch);
owatch = NULL;
return true;
}
}
if (out_size % 4096)
{
out_size += (4096 - (out_size % 4096));
}
old_progress = parent->progress;
if (!owatch->cfg.num)
{
if (!out_create)
{
result = (cli_result_t){ .err = ENOENT, .text = "Image "+oimg+" does not exist" };
parent->cli->st_cli.close_watch(owatch);
owatch = NULL;
return true;
}
if (!out_size)
{
result = (cli_result_t){ .err = ENOENT, .text = "Input size is unknown, specify size to create output image "+oimg };
parent->cli->st_cli.close_watch(owatch);
owatch = NULL;
return true;
}
// Create output image
sub_cb = parent->start_create(json11::Json::object {
{ "image", oimg },
{ "pool", out_pool },
{ "size", out_size },
});
}
else if (owatch->cfg.size < out_size || out_trunc)
{
if (!out_size)
{
result = (cli_result_t){ .err = ENOENT, .text = "Input size is unknown, specify size to truncate output image" };
parent->cli->st_cli.close_watch(owatch);
owatch = NULL;
return true;
}
// Resize output image
parent->progress = false;
sub_cb = parent->start_modify(json11::Json::object {
{ "image", oimg },
{ "resize", out_size },
});
}
else
{
// ok
return true;
}
// Wait for sub-command
resume_1:
while (!sub_cb(result))
{
state = base_state;
return false;
}
parent->progress = old_progress;
sub_cb = NULL;
if (result.err)
{
parent->cli->st_cli.close_watch(owatch);
owatch = NULL;
return true;
}
// Wait until output image actually appears
resume_2:
while (!owatch->cfg.num)
{
state = base_state+1;
return false;
}
}
else if (ofile != "")
{
ofd = open(ofile.c_str(), (out_direct ? O_DIRECT : 0) | O_RDWR | (out_append ? O_APPEND : 0) | (out_create ? O_CREAT : 0), 0666);
if (ofd < 0)
{
result = (cli_result_t){ .err = errno, .text = "Failed to open "+ofile+": "+std::string(strerror(errno)) };
return true;
}
if (out_trunc && ftruncate(ofd, out_size) < 0)
{
result = (cli_result_t){ .err = errno, .text = "Failed to truncate "+ofile+": "+std::string(strerror(errno)) };
return true;
}
if (out_direct)
{
out_granularity = 512;
}
out_seekable = !out_append;
}
else
{
ofd = 1;
out_seekable = false;
}
return true;
}
bool fsync_output(cli_tool_t *parent, int & state, int base_state)
{
if (state == base_state)
goto resume_1;
if (oimg != "")
{
{
cluster_op_t *sync_op = new cluster_op_t;
sync_op->opcode = OSD_OP_SYNC;
parent->waiting++;
sync_op->callback = [this, parent](cluster_op_t *sync_op)
{
parent->waiting--;
delete sync_op;
parent->ringloop->wakeup();
};
parent->cli->execute(sync_op);
}
resume_1:
if (parent->waiting > 0)
{
state = base_state;
return false;
}
}
else
{
int res = fsync(ofd);
if (res < 0)
{
result = (cli_result_t){ .err = errno, .text = "Failed to fsync "+ofile+": "+std::string(strerror(errno)) };
}
}
return true;
}
void close_output(cli_tool_t *parent)
{
if (oimg != "")
{
parent->cli->st_cli.close_watch(owatch);
owatch = NULL;
}
else
{
if (ofile != "")
close(ofd);
ofd = -1;
}
}
};
struct cli_dd_t
{
cli_tool_t *parent;
dd_in_info_t iinfo;
dd_out_info_t oinfo;
uint64_t blocksize = 0, bytelimit = 0, iseek = 0, oseek = 0, iodepth = 0;
bool end_status = true, ignore_errors = false;
bool write_zero = false;
uint64_t in_iodepth = 0, out_iodepth = 0;
uint64_t read_offset = 0, read_end = 0;
std::vector<dd_buf_t*> read_buffers, short_reads, short_writes;
std::vector<uint8_t> zero_buf;
bool in_eof = false;
uint64_t written_size = 0;
uint64_t written_progress = 0;
timespec tv_begin = {}, tv_progress = {};
int state = 0;
int copy_error = 0;
int in_waiting = 0, out_waiting = 0;
cli_result_t result;
bool is_done()
{
return state == 100;
}
int skip_read(int fd, uint64_t to_skip)
{
void *buf = malloc_or_die(blocksize);
while (to_skip > 0)
{
auto res = read(fd, buf, blocksize < to_skip ? blocksize : to_skip);
if (res <= 0)
{
int err = res == 0 ? EPIPE : errno;
free(buf);
return -err;
}
to_skip -= res;
}
free(buf);
return 0;
}
uint64_t round_up(uint64_t n, uint64_t align)
{
return (n % align) ? (n + align - (n % align)) : n;
}
void vitastor_read_bitmap(dd_buf_t *cur_read)
{
cluster_op_t *read_op = new cluster_op_t;
read_op->opcode = OSD_OP_READ_CHAIN_BITMAP;
read_op->inode = iinfo.iwatch->cfg.num;
// FIXME: Support unaligned read?
read_op->offset = cur_read->offset + iseek;
read_op->len = round_up(round_up(cur_read->max, iinfo.in_granularity), oinfo.out_granularity);
in_waiting++;
read_op->callback = [this, cur_read](cluster_op_t *read_op)
{
in_waiting--;
if (read_op->retval < 0)
{
fprintf(
stderr, "Failed to read bitmap for %lu bytes from image %s at offset %lu: %s (code %d)\n",
read_op->len, iinfo.iimg.c_str(), read_op->offset,
strerror(read_op->retval < 0 ? -read_op->retval : EIO), read_op->retval
);
if (!ignore_errors)
{
copy_error = read_op->retval < 0 ? -read_op->retval : EIO;
}
delete cur_read;
}
else if (!is_zero(read_op->bitmap_buf, read_op->len/iinfo.in_granularity/8))
{
vitastor_read(cur_read);
}
else
{
delete cur_read;
}
delete read_op;
parent->ringloop->wakeup();
};
parent->cli->execute(read_op);
}
void vitastor_read(dd_buf_t *cur_read)
{
cluster_op_t *read_op = new cluster_op_t;
read_op->opcode = OSD_OP_READ;
read_op->inode = iinfo.iwatch->cfg.num;
// FIXME: Support unaligned read?
read_op->offset = cur_read->offset + iseek;
read_op->len = round_up(round_up(cur_read->max, iinfo.in_granularity), oinfo.out_granularity);
read_op->iov.push_back(cur_read->buf, cur_read->max);
if (cur_read->max < read_op->len)
{
// Zero pad
read_op->iov.push_back(zero_buf.data(), read_op->len - cur_read->max);
}
in_waiting++;
read_op->callback = [this, cur_read](cluster_op_t *read_op)
{
in_waiting--;
if (read_op->retval != read_op->len)
{
fprintf(
stderr, "Failed to read %lu bytes from image %s at offset %lu: %s (code %d)\n",
read_op->len, iinfo.iimg.c_str(), read_op->offset,
strerror(read_op->retval < 0 ? -read_op->retval : EIO), read_op->retval
);
if (!ignore_errors)
{
copy_error = read_op->retval < 0 ? -read_op->retval : EIO;
}
delete cur_read;
}
else
{
cur_read->len = cur_read->max;
add_finished_read(cur_read);
}
delete read_op;
parent->ringloop->wakeup();
};
parent->cli->execute(read_op);
}
bool add_read_op()
{
if (iinfo.iwatch)
{
dd_buf_t *cur_read = new dd_buf_t(read_offset, read_offset + blocksize > read_end ? read_end - read_offset : blocksize);
read_offset += cur_read->max;
in_eof = read_offset >= read_end;
cur_read->len = cur_read->max;
if (!write_zero)
{
vitastor_read_bitmap(cur_read);
}
else
{
vitastor_read(cur_read);
}
}
else
{
io_uring_sqe *sqe = parent->ringloop->get_sqe();
if (!sqe)
{
return false;
}
dd_buf_t *cur_read;
if (short_reads.size())
{
cur_read = short_reads[0];
short_reads.erase(short_reads.begin(), short_reads.begin()+1);
// reset eof flag
if (!short_reads.size() && read_offset >= read_end)
in_eof = true;
}
else
{
cur_read = new dd_buf_t(read_offset, iinfo.in_seekable && read_offset + blocksize > read_end ? read_end-read_offset : blocksize);
read_offset += cur_read->max;
if (read_offset >= read_end)
in_eof = true;
}
ring_data_t *data = ((ring_data_t*)sqe->user_data);
data->iov = (iovec){ cur_read->buf + cur_read->len, cur_read->max - cur_read->len };
my_uring_prep_readv(sqe, iinfo.ifd, &data->iov, 1, iinfo.in_seekable ? iseek + cur_read->offset + cur_read->len : -1);
in_waiting++;
data->callback = [this, cur_read](ring_data_t *data)
{
in_waiting--;
if (data->res < 0)
{
fprintf(
stderr, "Failed to read %lu bytes from %s at offset %lu: %s (code %d)\n",
data->iov.iov_len, iinfo.ifile == "" ? "stdin" : iinfo.ifile.c_str(), cur_read->offset,
strerror(-data->res), data->res
);
if (!ignore_errors)
{
copy_error = -data->res;
}
}
else if (data->res == 0)
{
in_eof = true;
}
if (data->res <= 0)
{
if (cur_read->len > 0)
add_finished_read(cur_read);
else
delete cur_read;
}
else
{
cur_read->len += data->res;
if (cur_read->len < cur_read->max)
{
// short read, retry
short_reads.push_back(cur_read);
// reset eof flag to signal that there's still something to read
in_eof = false;
}
else
{
add_finished_read(cur_read);
}
}
parent->ringloop->wakeup();
};
}
return true;
}
void add_finished_read(dd_buf_t *cur_read)
{
if (!write_zero && is_zero(cur_read->buf, cur_read->max))
{
// do not write all-zero buffer
delete cur_read;
return;
}
auto it = std::lower_bound(read_buffers.begin(), read_buffers.end(), cur_read, [](dd_buf_t *item, dd_buf_t *ref)
{
return item->offset < ref->offset;
});
read_buffers.insert(it, cur_read);
}
bool add_write_op()
{
dd_buf_t *cur_read;
if (short_writes.size())
{
cur_read = short_writes[0];
short_writes.erase(short_writes.begin(), short_writes.begin()+1);
}
else
{
cur_read = read_buffers[0];
if (!oinfo.out_seekable && cur_read->offset > written_size)
{
// can't write - input buffers are out of order
return false;
}
cur_read->max = cur_read->len;
cur_read->len = 0;
read_buffers.erase(read_buffers.begin(), read_buffers.begin()+1);
}
if (oinfo.owatch)
{
cluster_op_t *write_op = new cluster_op_t;
write_op->opcode = OSD_OP_WRITE;
write_op->inode = oinfo.owatch->cfg.num;
// FIXME: Support unaligned write?
write_op->offset = cur_read->offset + oseek;
write_op->len = round_up(cur_read->max, oinfo.out_granularity);
write_op->iov.push_back(cur_read->buf, cur_read->max);
if (cur_read->max < write_op->len)
{
// Zero pad
write_op->iov.push_back(zero_buf.data(), write_op->len - cur_read->max);
}
out_waiting++;
write_op->callback = [this, cur_read](cluster_op_t *write_op)
{
out_waiting--;
if (write_op->retval != write_op->len)
{
fprintf(
stderr, "Failed to write %lu bytes to image %s at offset %lu: %s (code %d)\n",
write_op->len, oinfo.oimg.c_str(), write_op->offset,
strerror(write_op->retval < 0 ? -write_op->retval : EIO), write_op->retval
);
if (!ignore_errors)
{
copy_error = write_op->retval < 0 ? -write_op->retval : EIO;
}
}
else
{
written_size += write_op->len;
}
delete cur_read;
delete write_op;
parent->ringloop->wakeup();
};
parent->cli->execute(write_op);
}
else
{
io_uring_sqe *sqe = parent->ringloop->get_sqe();
if (!sqe)
{
return false;
}
ring_data_t *data = ((ring_data_t*)sqe->user_data);
data->iov = (iovec){ .iov_base = cur_read->buf+cur_read->len, .iov_len = cur_read->max-cur_read->len };
my_uring_prep_writev(sqe, oinfo.ofd, &data->iov, 1, oinfo.out_seekable ? cur_read->offset+cur_read->len+oseek : -1);
out_waiting++;
data->callback = [this, cur_read](ring_data_t *data)
{
out_waiting--;
if (data->res < 0)
{
fprintf(
stderr, "Failed to write %lu bytes to %s at offset %lu: %s (code %d)\n",
data->iov.iov_len, oinfo.ofile == "" ? "stdout" : oinfo.ofile.c_str(),
oinfo.out_seekable ? cur_read->offset+cur_read->len+oseek : 0,
strerror(-data->res), data->res
);
if (!ignore_errors)
{
copy_error = -data->res;
}
delete cur_read;
}
else
{
written_size += data->res;
cur_read->len += data->res;
if (cur_read->len < cur_read->max)
short_writes.push_back(cur_read);
else
delete cur_read;
}
parent->ringloop->wakeup();
};
}
return true;
}
void print_progress(bool end)
{
if (!parent->progress && (!end || !end_status && !parent->json_output))
{
return;
}
timespec tv_now;
clock_gettime(CLOCK_REALTIME, &tv_now);
double sec_delta = ((tv_now.tv_sec - tv_progress.tv_sec) + (double)(tv_now.tv_nsec - tv_progress.tv_nsec)/1000000000.0);
if (sec_delta < 1 && !end)
{
return;
}
double sec_total = ((tv_now.tv_sec - tv_begin.tv_sec) + (double)(tv_now.tv_nsec - tv_begin.tv_nsec)/1000000000.0);
uint64_t delta = written_size-written_progress;
tv_progress = tv_now;
written_progress = written_size;
if (end)
{
char buf[256];
snprintf(
buf, sizeof(buf), "%lu bytes (%s) copied, %.1f s, %sB/s",
written_size, format_size(written_size).c_str(), sec_total,
format_size((uint64_t)(written_size/sec_total), true).c_str()
);
if (parent->json_output)
{
if (parent->progress)
fprintf(stderr, "\n");
result.text = buf;
result.data = json11::Json::object {
{ "copied", written_size },
{ "seconds", sec_total },
};
}
else
{
fprintf(stderr, (parent->progress ? ("\r%s\033[K\n") : ("%s\n")), buf);
}
}
else
{
fprintf(
stderr, "\r%lu bytes (%s) copied, %.1f s, %sB/s, avg %sB/s\033[K",
written_size, format_size(written_size).c_str(), sec_total,
format_size((uint64_t)(delta/sec_delta), true).c_str(),
format_size((uint64_t)(written_size/sec_total), true).c_str()
);
}
}
void loop()
{
if (state == 1)
goto resume_1;
else if (state == 2)
goto resume_2;
else if (state == 3)
goto resume_3;
else if (state == 4)
goto resume_4;
if ((oinfo.oimg != "" && oinfo.ofile != "") || (iinfo.iimg != "" && iinfo.ifile != ""))
{
result = (cli_result_t){ .err = EINVAL, .text = "Image and file can't be specified at the same time" };
state = 100;
return;
}
if ((iinfo.iimg != "" ? "i"+iinfo.iimg : "f"+iinfo.ifile) == (oinfo.oimg != "" ? "i"+oinfo.oimg : "f"+oinfo.ofile))
{
result = (cli_result_t){ .err = EINVAL, .text = "Input and output image/file can't be equal" };
state = 100;
return;
}
zero_buf.resize(blocksize);
// Open input and output
iinfo.open_input(parent);
if (iinfo.result.err)
{
result = iinfo.result;
state = 100;
return;
}
if (iinfo.iwatch && ((iseek % iinfo.in_granularity) || (blocksize % iinfo.in_granularity)))
{
iinfo.close_input(parent);
result = (cli_result_t){ .err = EINVAL, .text = "Unaligned read from Vitastor is not supported" };
state = 100;
return;
}
if (!oinfo.out_size)
{
oinfo.out_size = oseek + (iinfo.in_seekable && (!bytelimit || iinfo.in_size-iseek < bytelimit) ? iinfo.in_size-iseek : bytelimit);
}
resume_1:
resume_2:
if (!oinfo.open_output(parent, state, 1))
{
return;
}
if (oinfo.result.err)
{
iinfo.close_input(parent);
result = oinfo.result;
state = 100;
return;
}
if (oinfo.owatch && ((oseek % oinfo.out_granularity) || (blocksize % oinfo.out_granularity)))
{
result = (cli_result_t){ .err = EINVAL, .text = "Unaligned write to Vitastor is not supported" };
goto close_end;
}
// Copy data
if (iinfo.in_seekable && iseek >= iinfo.in_size)
{
result = (cli_result_t){ .err = EINVAL, .text = "Input seek position is beyond end of input" };
goto close_end;
}
if (!iinfo.iwatch && !iinfo.in_seekable && iseek)
{
// Read and ignore some data from input
int res = skip_read(iinfo.ifd, iseek);
if (res < 0)
{
result = (cli_result_t){ .err = -res, .text = "Failed to skip "+std::to_string(iseek)+" input bytes: "+std::string(strerror(-res)) };
goto close_end;
}
}
in_iodepth = iinfo.in_seekable ? iodepth : 1;
out_iodepth = oinfo.out_seekable ? iodepth : 1;
write_zero = write_zero || !oinfo.out_seekable;
oinfo.end_fsync = oinfo.end_fsync && oinfo.out_seekable;
read_offset = 0;
read_end = iinfo.in_seekable ? iinfo.in_size-iseek : 0;
if (bytelimit && (!read_end || read_end > bytelimit))
read_end = bytelimit;
clock_gettime(CLOCK_REALTIME, &tv_begin);
tv_progress = tv_begin;
resume_3:
while ((ignore_errors || !copy_error) && (!in_eof || read_buffers.size() || in_waiting > 0 || out_waiting > 0))
{
print_progress(false);
while ((ignore_errors || !copy_error) &&
(!in_eof && in_waiting < in_iodepth && read_buffers.size() < out_iodepth ||
read_buffers.size() && out_waiting < out_iodepth))
{
if (!in_eof && in_waiting < in_iodepth && read_buffers.size() < out_iodepth)
{
if (!add_read_op())
{
break;
}
}
if (read_buffers.size() && out_waiting < out_iodepth)
{
if (!add_write_op())
{
break;
}
}
}
if (in_waiting > 0 || out_waiting > 0)
{
state = 3;
return;
}
}
if (oinfo.end_fsync)
{
resume_4:
if (!oinfo.fsync_output(parent, state, 4))
{
return;
}
}
print_progress(true);
close_end:
oinfo.close_output(parent);
iinfo.close_input(parent);
// Done
result.err = copy_error;
state = 100;
}
};
// parse <n>B or <n> blocks of size `bs`
static uint64_t parse_blocks(json11::Json v, uint64_t bs, uint64_t def)
{
uint64_t res;
if (!v.is_string() && !v.is_number() ||
v.is_string() && v.string_value() == "" ||
v.is_number() && !v.uint64_value())
return def;
auto num = v.uint64_value();
if (num)
return num * bs;
auto s = v.string_value();
if (s != "" && (s[s.size()-1] == 'b' || s[s.size()-1] == 'B'))
res = stoull_full(s.substr(0, s.size()-1));
else
res = parse_size(s);
return res;
}
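// Worked example (illustration, not from the original source): with bs=1M a plain numeric
// count of 128 yields 128*1M = 134217728 bytes, count=4096B yields exactly 4096 bytes via
// stoull of the prefix, and any other string goes through parse_size(), e.g. count=16M -> 16777216.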
std::function<bool(cli_result_t &)> cli_tool_t::start_dd(json11::Json cfg)
{
auto dd = new cli_dd_t();
dd->parent = this;
dd->iinfo.iimg = cfg["iimg"].string_value();
dd->oinfo.oimg = cfg["oimg"].string_value();
dd->iinfo.ifile = cfg["if"].string_value();
dd->oinfo.ofile = cfg["of"].string_value();
dd->blocksize = parse_size(cfg["bs"].string_value());
if (!dd->blocksize)
dd->blocksize = 1048576;
dd->bytelimit = parse_blocks(cfg["count"], dd->blocksize, 0);
dd->oseek = parse_blocks(cfg["oseek"], dd->blocksize, 0);
if (!dd->oseek)
dd->oseek = parse_blocks(cfg["seek"], dd->blocksize, 0);
dd->iseek = parse_blocks(cfg["oseek"], dd->blocksize, 0);
if (!dd->iseek)
dd->iseek = parse_blocks(cfg["skip"], dd->blocksize, 0);
dd->iodepth = cfg["iodepth"].uint64_value();
if (!dd->iodepth)
dd->iodepth = 4;
if (cfg["status"] == "none")
dd->end_status = false;
else if (cfg["status"] == "progress")
progress = true;
dd->iinfo.detect_size = cfg["size"].is_null();
dd->oinfo.out_size = parse_size(cfg["size"].as_string());
std::vector<std::string> conv = explode(",", cfg["conv"].string_value(), true);
if (std::find(conv.begin(), conv.end(), "nofsync") != conv.end())
dd->oinfo.end_fsync = false;
if (std::find(conv.begin(), conv.end(), "trunc") != conv.end())
dd->oinfo.out_trunc = true;
if (std::find(conv.begin(), conv.end(), "nocreat") != conv.end())
dd->oinfo.out_create = false;
if (std::find(conv.begin(), conv.end(), "noerror") != conv.end())
dd->ignore_errors = true;
if (std::find(conv.begin(), conv.end(), "nosparse") != conv.end())
dd->write_zero = true;
conv = explode(",", cfg["iflag"].string_value(), true);
if (std::find(conv.begin(), conv.end(), "direct") != conv.end())
dd->iinfo.in_direct = true;
conv = explode(",", cfg["oflag"].string_value(), true);
if (std::find(conv.begin(), conv.end(), "direct") != conv.end())
dd->oinfo.out_direct = true;
if (std::find(conv.begin(), conv.end(), "append") != conv.end())
dd->oinfo.out_append = true;
return [dd](cli_result_t & result)
{
dd->loop();
if (dd->is_done())
{
result = dd->result;
delete dd;
return true;
}
return false;
};
}

View File

@@ -18,7 +18,6 @@ struct image_lister_t
std::string sort_field; std::string sort_field;
std::set<std::string> only_names; std::set<std::string> only_names;
bool reverse = false; bool reverse = false;
bool exact = false;
int max_count = 0; int max_count = 0;
bool show_stats = false, show_delete = false; bool show_stats = false, show_delete = false;
@@ -209,9 +208,9 @@ resume_1:
} }
else else
{ {
for (auto & glob: only_names) for (auto glob: only_names)
{ {
if (exact ? (kv.second["name"].string_value() == glob) : stupid_glob(kv.second["name"].string_value(), glob)) if (stupid_glob(kv.second["name"].string_value(), glob))
{ {
list.push_back(kv.second); list.push_back(kv.second);
break; break;
@@ -539,7 +538,6 @@ std::function<bool(cli_result_t &)> cli_tool_t::start_ls(json11::Json cfg)
{ {
auto lister = new image_lister_t(); auto lister = new image_lister_t();
lister->parent = this; lister->parent = this;
lister->exact = cfg["exact"].bool_value();
lister->list_pool_id = cfg["pool"].uint64_value(); lister->list_pool_id = cfg["pool"].uint64_value();
lister->list_pool_name = lister->list_pool_id ? "" : cfg["pool"].as_string(); lister->list_pool_name = lister->list_pool_id ? "" : cfg["pool"].as_string();
lister->show_stats = cfg["long"].bool_value(); lister->show_stats = cfg["long"].bool_value();

View File

@@ -70,21 +70,6 @@ struct pool_creator_t
state = 100; state = 100;
return; return;
} }
// Validate pool name
for (auto & pp: parent->cli->st_cli.pool_config)
{
if (pp.second.name == cfg["name"].string_value())
{
result = (cli_result_t){
.err = EAGAIN,
.text = "Pool "+cfg["name"].string_value()+" already exists",
};
state = 100;
return;
}
}
state = 1; state = 1;
resume_1: resume_1:
// If not forced, check that we have enough osds for pg_size // If not forced, check that we have enough osds for pg_size
@@ -132,7 +117,7 @@ resume_2:
} } } }
}); });
} }
parent->etcd_txn(json11::Json::object{ { "success", osd_configs } }); parent->etcd_txn(json11::Json::object { { "success", osd_configs, }, });
} }
state = 3; state = 3;
@@ -171,7 +156,7 @@ resume_3:
}); });
} }
parent->etcd_txn(json11::Json::object{ { "success", osd_stats } }); parent->etcd_txn(json11::Json::object { { "success", osd_stats, }, });
} }
state = 4; state = 4;
@@ -193,7 +178,6 @@ resume_4:
auto kv = parent->cli->st_cli.parse_etcd_kv(ocr["response_range"]["kvs"][0]); auto kv = parent->cli->st_cli.parse_etcd_kv(ocr["response_range"]["kvs"][0]);
osd_stats.push_back(kv.value); osd_stats.push_back(kv.value);
} }
guess_block_size(osd_stats);
state_node_tree = filter_state_node_tree_by_stats(state_node_tree, osd_stats); state_node_tree = filter_state_node_tree_by_stats(state_node_tree, osd_stats);
} }
@@ -338,7 +322,7 @@ resume_8:
if (!create_check.passed) if (!create_check.passed)
{ {
result = (cli_result_t){ result = (cli_result_t) {
.err = EAGAIN, .err = EAGAIN,
.text = "Pool "+cfg["name"].string_value()+" was created, but failed to become active." .text = "Pool "+cfg["name"].string_value()+" was created, but failed to become active."
" This may indicate that cluster state has changed while the pool was being created." " This may indicate that cluster state has changed while the pool was being created."
@@ -456,80 +440,6 @@ resume_8:
return json11::Json::object { { "osds", accepted_osds }, { "nodes", accepted_nodes } }; return json11::Json::object { { "osds", accepted_osds }, { "nodes", accepted_nodes } };
} }
// Autodetect block size for the pool if not specified
void guess_block_size(std::vector<json11::Json> & osd_stats)
{
json11::Json::object upd;
if (!cfg["block_size"].uint64_value())
{
uint64_t osd_bs = 0;
for (auto & os: osd_stats)
{
if (!os["data_block_size"].is_null())
{
if (osd_bs == 0)
osd_bs = os["data_block_size"].uint64_value();
else if (osd_bs != os["data_block_size"].uint64_value())
osd_bs = UINT32_MAX;
}
}
if (osd_bs && osd_bs != UINT32_MAX && osd_bs != parent->cli->st_cli.global_block_size)
{
fprintf(stderr, "Auto-selecting block_size=%s because all pool OSDs use it\n", format_size(osd_bs).c_str());
upd["block_size"] = osd_bs;
}
}
if (!cfg["bitmap_granularity"].uint64_value())
{
uint64_t osd_bg = 0;
for (auto & os: osd_stats)
{
if (!os["bitmap_granularity"].is_null())
{
if (osd_bg == 0)
osd_bg = os["bitmap_granularity"].uint64_value();
else if (osd_bg != os["bitmap_granularity"].uint64_value())
osd_bg = UINT32_MAX;
}
}
if (osd_bg && osd_bg != UINT32_MAX && osd_bg != parent->cli->st_cli.global_bitmap_granularity)
{
fprintf(stderr, "Auto-selecting bitmap_granularity=%s because all pool OSDs use it\n", format_size(osd_bg).c_str());
upd["bitmap_granularity"] = osd_bg;
}
}
if (cfg["immediate_commit"].is_null())
{
uint32_t osd_imm = UINT32_MAX;
for (auto & os: osd_stats)
{
if (!os["immediate_commit"].is_null())
{
uint32_t imm = etcd_state_client_t::parse_immediate_commit(os["immediate_commit"].string_value(), IMMEDIATE_NONE);
if (osd_imm == UINT32_MAX)
osd_imm = imm;
else if (osd_imm != imm)
osd_imm = UINT32_MAX-1;
}
}
if (osd_imm < UINT32_MAX-1 && osd_imm != parent->cli->st_cli.global_immediate_commit)
{
const char *imm_str = osd_imm == IMMEDIATE_NONE ? "none" : (osd_imm == IMMEDIATE_ALL ? "all" : "small");
fprintf(stderr, "Auto-selecting immediate_commit=%s because all pool OSDs use it\n", imm_str);
upd["immediate_commit"] = imm_str;
}
}
if (upd.size())
{
json11::Json::object cfg_obj = cfg.object_items();
for (auto & kv: upd)
{
cfg_obj[kv.first] = kv.second;
}
cfg = cfg_obj;
}
}
// Returns new state_node_tree based on given state_node_tree with osds // Returns new state_node_tree based on given state_node_tree with osds
// filtered out by stats parameters (block_size, bitmap_granularity) in // filtered out by stats parameters (block_size, bitmap_granularity) in
// given osd_stats and current pool config. // given osd_stats and current pool config.
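The removed guess_block_size() above applies the same reduction to each parameter: ignore OSDs that report nothing, take the first reported value, and fall back to the global default as soon as two OSDs disagree. A standalone sketch of that consensus pattern (illustration only; the names and the UINT64_MAX conflict sentinel are not from the original code):

#include <cstdint>
#include <cstdio>
#include <vector>

// Returns 0 if no OSD reported a value, UINT64_MAX if OSDs disagree,
// otherwise the single value all OSDs agree on.
static uint64_t consensus_value(const std::vector<uint64_t> & reported)
{
    uint64_t agreed = 0;
    for (auto v: reported)
    {
        if (!v)
            continue;          // treat 0 as "not reported"
        if (!agreed)
            agreed = v;        // first reported value
        else if (agreed != v)
            return UINT64_MAX; // conflict, keep the global default
    }
    return agreed;
}

int main()
{
    printf("%lu\n", consensus_value({ 131072, 131072, 131072 })); // 131072: auto-select it
    printf("%lu\n", consensus_value({ 131072, 4096 }));           // UINT64_MAX: disagreement
}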

View File

@@ -689,10 +689,6 @@ resume_100:
std::function<bool(cli_result_t &)> cli_tool_t::start_rm(json11::Json cfg) std::function<bool(cli_result_t &)> cli_tool_t::start_rm(json11::Json cfg)
{ {
if (cfg["exact"].bool_value() || cfg["matching"].bool_value())
{
return start_rm_wildcard(cfg);
}
auto snap_remover = new snap_remover_t(); auto snap_remover = new snap_remover_t();
snap_remover->parent = this; snap_remover->parent = this;
snap_remover->from_name = cfg["from"].string_value(); snap_remover->from_name = cfg["from"].string_value();

View File

@@ -1,208 +0,0 @@
// Copyright (c) Vitaliy Filippov, 2019+
// License: VNPL-1.1 (see README.md for details)
#include <fcntl.h>
#include <algorithm>
#include "cli.h"
#include "cluster_client.h"
#include "str_util.h"
struct inode_rev_t
{
inode_t inode_num;
uint64_t meta_rev;
};
// Remove multiple images in correct order
struct wildcard_remover_t
{
cli_tool_t *parent;
json11::Json cfg;
std::vector<std::string> globs;
bool exact = false;
json11::Json::array deleted_ids, deleted_images, rebased_images;
std::map<inode_t, inode_t> chains; // child => parent pairs
std::vector<std::vector<inode_rev_t>> versioned_chains;
json11::Json::object sub_cfg;
size_t i = 0;
int state = 0;
std::function<bool(cli_result_t &)> sub_cb;
cli_result_t result;
bool is_done()
{
return state == 100;
}
void join_chains()
{
bool changed = true;
while (changed)
{
changed = false;
auto ino_it = chains.begin();
while (ino_it != chains.end())
{
auto child_id = ino_it->first;
auto parent_id = ino_it->second;
auto & parent_cfg = parent->cli->st_cli.inode_config.at(parent_id);
if (parent_cfg.parent_id)
{
auto chain_it = chains.find(parent_cfg.parent_id);
if (chain_it != chains.end())
{
changed = true;
ino_it->second = chain_it->second;
chains.erase(chain_it);
}
}
ino_it = chains.upper_bound(child_id);
}
}
// Remember metadata modification revisions to check for parallel changes
versioned_chains.clear();
for (auto cp: chains)
{
auto child_id = cp.first;
auto parent_id = cp.second;
std::vector<inode_rev_t> ver_chain;
do
{
auto & inode_cfg = parent->cli->st_cli.inode_config.at(child_id);
ver_chain.push_back((inode_rev_t){ .inode_num = child_id, .meta_rev = inode_cfg.mod_revision });
child_id = inode_cfg.parent_id;
} while (child_id && child_id != parent_id);
versioned_chains.push_back(std::move(ver_chain));
}
// Sort chains based on parent inode rank to first delete child-most layers
std::map<inode_t, uint64_t> ranks;
for (auto cp: chains)
{
auto parent_id = cp.second, cur_id = parent_id;
uint64_t rank = 0;
do
{
rank++;
cur_id = parent->cli->st_cli.inode_config.at(cur_id).parent_id;
} while (cur_id && cur_id != parent_id);
ranks[parent_id] = rank;
}
std::sort(versioned_chains.begin(), versioned_chains.end(), [&](const std::vector<inode_rev_t> & a, const std::vector<inode_rev_t> & b)
{
return ranks[a.back().inode_num] > ranks[b.back().inode_num];
});
}
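// Worked example (illustration only): if a snapshot chain A <- B <- C (A is the base) matches
// the globs in full, chains starts as {A:A, B:B, C:C} and the loop above merges it into the
// single entry {C:A}, i.e. one chain covering the whole matched range from child-most C down to A.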
void loop()
{
if (state == 0)
goto resume_0;
if (state == 1)
goto resume_1;
else if (state == 100)
goto resume_100;
resume_0:
state = 0;
chains.clear();
// Select images to delete
for (auto & ic: parent->cli->st_cli.inode_config)
{
for (auto & glob: globs)
{
if (exact ? (ic.second.name == glob) : stupid_glob(ic.second.name, glob))
{
chains[ic.first] = ic.first;
break;
}
}
}
// Arrange them into chains
join_chains();
// Delete each chain
i = 0;
while (i < versioned_chains.size())
{
// Check for parallel changes
for (auto & irev: versioned_chains[i])
{
auto inode_it = parent->cli->st_cli.inode_config.find(irev.inode_num);
if (inode_it == parent->cli->st_cli.inode_config.end() ||
inode_it->second.mod_revision > irev.meta_rev)
{
if (inode_it != parent->cli->st_cli.inode_config.end())
fprintf(stderr, "Warning: image %s modified by someone else during deletion, restarting wildcard deletion\n", inode_it->second.name.c_str());
else
fprintf(stderr, "Warning: inode %lx modified by someone else during deletion, retrying wildcard deletion\n", irev.inode_num);
goto resume_0;
}
}
// Delete
{
auto from_cfg = parent->cli->st_cli.inode_config.at(versioned_chains[i].back().inode_num);
auto to_cfg = parent->cli->st_cli.inode_config.at(versioned_chains[i].front().inode_num);
sub_cfg = cfg.object_items();
sub_cfg.erase("globs");
sub_cfg.erase("exact");
sub_cfg["from"] = from_cfg.name;
sub_cfg["to"] = to_cfg.name;
sub_cb = parent->start_rm(sub_cfg);
}
resume_1:
while (!sub_cb(result))
{
state = 1;
return;
}
sub_cb = NULL;
i++;
merge_result();
if (result.err)
{
break;
}
}
state = 100;
result.data = json11::Json::object{
{ "deleted_ids", deleted_ids },
{ "deleted_images", deleted_images },
{ "rebased_images", rebased_images },
};
resume_100:
// Done
return;
}
void merge_result()
{
for (auto & item: result.data["deleted_ids"].array_items())
deleted_ids.push_back(item);
for (auto & item: result.data["deleted_images"].array_items())
deleted_images.push_back(item == result.data["renamed_to"] ? result.data["renamed_from"] : item);
for (auto & item: result.data["rebased_images"].array_items())
rebased_images.push_back(item);
}
};
std::function<bool(cli_result_t &)> cli_tool_t::start_rm_wildcard(json11::Json cfg)
{
auto wildcard_remover = new wildcard_remover_t();
wildcard_remover->parent = this;
wildcard_remover->cfg = cfg;
for (auto & glob: cfg["globs"].array_items())
wildcard_remover->globs.push_back(glob.string_value());
wildcard_remover->exact = cfg["exact"].bool_value();
return [wildcard_remover](cli_result_t & result)
{
wildcard_remover->loop();
if (wildcard_remover->is_done())
{
result = wildcard_remover->result;
delete wildcard_remover;
return true;
}
return false;
};
}

View File

@@ -671,6 +671,17 @@ void kv_db_t::stop_writing_new(uint64_t offset)
} }
} }
static bool is_zero(void *buf, int size)
{
assert(!(size % 8));
size /= 8;
uint64_t *ptr = (uint64_t*)buf;
for (int i = 0; i < size; i++)
if (ptr[i])
return false;
return true;
}
// Find approximate index size // Find approximate index size
// Phase 1: try 2^i-1 for i=0,1,2,... * ino_block_size // Phase 1: try 2^i-1 for i=0,1,2,... * ino_block_size
// Phase 2: binary search between 2^(N-1)-1 and 2^N-1 * ino_block_size // Phase 2: binary search between 2^(N-1)-1 and 2^N-1 * ino_block_size
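The index size probing described in the comment above is a doubling search followed by a binary search, which finds an unknown monotonic bound in O(log n) probes. A self-contained sketch of the general technique (an illustration using plain powers of two rather than the 2^i-1 sizes used by kv_db):

#include <cstdint>
#include <cstdio>
#include <functional>

// Find the smallest n for which fits(n) is true, assuming fits is monotonic.
static uint64_t find_bound(std::function<bool(uint64_t)> fits)
{
    // Phase 1: exponential growth until the predicate holds
    uint64_t hi = 1;
    while (!fits(hi))
        hi *= 2;
    // Phase 2: binary search in (hi/2, hi]
    uint64_t lo = hi/2 + 1;
    while (lo < hi)
    {
        uint64_t mid = lo + (hi-lo)/2;
        if (fits(mid))
            hi = mid;
        else
            lo = mid+1;
    }
    return hi;
}

int main()
{
    uint64_t answer = 937;
    printf("%lu\n", find_bound([&](uint64_t n) { return n >= answer; })); // prints 937
}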

View File

@@ -336,8 +336,8 @@ std::vector<osd_chain_read_t> osd_t::collect_chained_read_requests(osd_op_t *cur
for (int chain_pos = 0; chain_pos < op_data->chain_size; chain_pos++) for (int chain_pos = 0; chain_pos < op_data->chain_size; chain_pos++)
{ {
uint8_t *part_bitmap = ((uint8_t*)op_data->snapshot_bitmaps) + chain_pos*stripe_count*clean_entry_bitmap_size; uint8_t *part_bitmap = ((uint8_t*)op_data->snapshot_bitmaps) + chain_pos*stripe_count*clean_entry_bitmap_size;
int start = !cur_op->req.rw.len ? 0 : (cur_op->req.rw.offset - op_data->oid.stripe)/bs_bitmap_granularity; int start = (cur_op->req.rw.offset - op_data->oid.stripe)/bs_bitmap_granularity;
int end = !cur_op->req.rw.len ? op_data->pg_data_size*clean_entry_bitmap_size : start + cur_op->req.rw.len/bs_bitmap_granularity; int end = start + cur_op->req.rw.len/bs_bitmap_granularity;
// Skip unneeded part in the beginning // Skip unneeded part in the beginning
while (start < end && ( while (start < end && (
((global_bitmap[start>>3] >> (start&7)) & 1) || ((global_bitmap[start>>3] >> (start&7)) & 1) ||

View File

@@ -485,21 +485,3 @@ std::string format_datetime(uint64_t unixtime)
int len = strftime(buf, 128, "%Y-%m-%d %H:%M:%S", &lt); int len = strftime(buf, 128, "%Y-%m-%d %H:%M:%S", &lt);
return std::string(buf, len); return std::string(buf, len);
} }
bool is_zero(void *buf, size_t size)
{
size_t i = 0;
while (i + 8 <= size)
{
if (*(uint64_t*)((uint8_t*)buf + i))
return false;
i += 8;
}
while (i < size)
{
if (*((uint8_t*)buf + i))
return false;
i++;
}
return true;
}

View File

@@ -31,4 +31,3 @@ std::string auto_addslashes(const std::string & str, const char *toescape = "\\\
std::string addslashes(const std::string & str, const char *toescape = "\\\""); std::string addslashes(const std::string & str, const char *toescape = "\\\"");
std::string realpath_str(std::string path, bool nofail = true); std::string realpath_str(std::string path, bool nofail = true);
std::string format_datetime(uint64_t unixtime); std::string format_datetime(uint64_t unixtime);
bool is_zero(void *buf, size_t size);

View File

@@ -46,8 +46,6 @@ IMMEDIATE_COMMIT=1 ./test_rebalance_verify.sh
SCHEME=ec ./test_rebalance_verify.sh SCHEME=ec ./test_rebalance_verify.sh
SCHEME=ec IMMEDIATE_COMMIT=1 ./test_rebalance_verify.sh SCHEME=ec IMMEDIATE_COMMIT=1 ./test_rebalance_verify.sh
./test_dd.sh
./test_root_node.sh ./test_root_node.sh
./test_switch_primary.sh ./test_switch_primary.sh

View File

@@ -1,20 +0,0 @@
#!/bin/bash -ex
. `dirname $0`/run_3osds.sh
# pipe in - pipe out
dd if=/dev/urandom of=./testdata/testfile bs=1M count=128
build/src/cmd/vitastor-cli --etcd_address $ETCD_URL dd oimg=testimg iodepth=4 bs=1M count=128 < ./testdata/testfile
build/src/cmd/vitastor-cli --etcd_address $ETCD_URL dd iimg=testimg iodepth=4 bs=1M count=128 > ./testdata/testfile1
diff ./testdata/testfile ./testdata/testfile1
rm ./testdata/testfile1
# snapshot
dd if=/dev/urandom of=./testdata/over bs=1M count=4
dd if=./testdata/over of=./testdata/testfile bs=1M seek=17 conv=notrunc
build/src/cmd/vitastor-cli --etcd_address $ETCD_URL snap-create testimg@snap1
build/src/cmd/vitastor-cli --etcd_address $ETCD_URL dd iodepth=4 if=./testdata/over oimg=testimg bs=1M seek=17
build/src/cmd/vitastor-cli --etcd_address $ETCD_URL dd iodepth=4 iimg=testimg of=./testdata/testfile1
diff ./testdata/testfile ./testdata/testfile1
format_green OK