Patch cinder driver: create snapshot from volume & create image #8
Fix create snapshot from volume: vol_name is the ID of the new volume, not the ID of the snapshot's parent volume. That's why the _get_name function can't find any valid volume.
Add a failure case when creating an image, to avoid an infinite loop.
@@ -456,3 +461,3 @@
         snap_name = utils.convert_str(snapshot.name)
-        snap = self._get_image(vol_name+'@'+snap_name)
+        snap = self._get_image("volume-"+snapshot.volume_id+'@'+snap_name)
Doesn't it give the same result?
No, it's not the same. For example:
volume layer 1 (ID1) --> snapshot layer 1 (ID2) --> volume layer 2 (ID3)
I want to create a volume from the snapshot, whose name is volume-ID1@snapshot-ID2. But the problem is that vol_name has the value volume-ID3, not volume-ID1, so cinder will try to find volume-ID3@snapshot-ID2, which does not exist.
So I use snapshot.volume_id, which has the value ID1, and add the prefix volume-, and cinder is then able to find the snapshot.
Ok, I see
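To make the naming rule concrete, here is a minimal standalone sketch (hypothetical IDs; not the driver's actual code, which uses self._get_image as shown in the diff above):

# A snapshot image is stored as "<parent volume name>@<snapshot name>", so the
# lookup has to use the snapshot's parent volume ID, not the new volume's name.
def snapshot_image_name(parent_volume_id, snapshot_name):
    # e.g. parent_volume_id = "ID1", snapshot_name = "snapshot-ID2"
    # -> "volume-ID1@snapshot-ID2"
    return "volume-" + parent_volume_id + "@" + snapshot_name

# The old code effectively looked up "volume-ID3@snapshot-ID2" (built from the new
# volume's name), which does not exist; the fix looks up "volume-ID1@snapshot-ID2".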
@@ -525,3 +530,3 @@
         args = [
             'vitastor-cli', 'rm-data', '--pool', str(kv['value']['pool_id']),
-            '--inode', str(kv['value']['id']), '--progress', '0',
+            '--inode', str(kv['value']['id']), '--iodepth', '4', '--progress', '0',
4 is the default :-), why did you want to change it?
My CentOS cluster can't delete a large volume with a higher iodepth (it causes OSDs to restart constantly, as I mentioned before).
After switching to Debian, this issue is gone, so you can ignore this :v
There was a bug in inode removal which I finally fixed in 0.7.1 - it was listing too many PGs at once regardless of iodepth and parallel_osds setting :-) so maybe now your problem won't reproduce anyway...
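For reference, the hunk above only adds an explicit --iodepth argument to the rm-data command the driver already runs; with hypothetical pool and inode values the resulting call is roughly:

# Hedged illustration of the rm-data invocation; the pool and inode IDs are made up,
# the real driver takes them from the etcd keys it iterates over.
import subprocess

args = [
    'vitastor-cli', 'rm-data', '--pool', '1',
    '--inode', '42', '--iodepth', '4', '--progress', '0',
]
subprocess.check_call(args)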
@@ -585,2 +590,4 @@
                 **cfg, 'name': vol_name,
             }) } },
+        ], 'failure': [
+            { 'request_put': { 'key': 'index/maxid/'+pool_s, 'value': image_id } },
It's definitely not good. The idea is:
What if I put an inode using etcdctl with id = 1, while maxid at that time was still 0?
You can see the conflict here: maxid = 0, but an inode with ID 1 already exists.
-> OpenStack Cinder can't create a new volume.
Ok, so you want to fix the case where some inodes are created by hand and index/maxid is absent or incorrect?
I think in this case you should do a request_range on failure, check if the inode that you tried to create actually exists, and then scan available inode numbers if it does...
Or maybe I'll implement it in vitastor-cli create and then I'll just rework the cinder driver to use vitastor-cli instead of direct etcd communication...
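As a rough illustration of the request_range approach suggested above (not the driver's actual code): the list_keys/read_key helpers are hypothetical stand-ins for the driver's own etcd requests, and the config/inode/<pool>/<id> key layout is an assumption; only index/maxid/<pool> appears in the diff above.

# Hedged sketch of the recovery path described above: after a failed "create inode"
# transaction, re-read state from etcd and pick the next free inode number.
def find_free_inode_id(list_keys, read_key, pool_s):
    # list_keys(prefix) -> iterable of etcd keys under prefix (hypothetical helper)
    # read_key(key)     -> value of a single etcd key, or None  (hypothetical helper)
    # Collect the IDs of all inode configurations that actually exist in this pool,
    # including any added by hand with etcdctl (assumed key layout).
    used_ids = set(int(k.rsplit('/', 1)[-1]) for k in list_keys('config/inode/' + pool_s + '/'))
    # index/maxid may be absent or stale, which is exactly the conflict described above.
    stored_maxid = int(read_key('index/maxid/' + pool_s) or 0)
    # The next ID must be above both the stored counter and every existing inode.
    return max(stored_maxid, max(used_ids, default=0)) + 1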
Hi, sorry, I missed your PR. :-)
Ok, I merged only the first change, with the volume-from-snapshot creation fix.
The case with a missing /index/maxid was already handled in vitastor-cli create, but it had a bug which I just fixed in master.
Pull request closed