pve-qemu/debian/patches/pve/0008-PVE-Up-glusterfs-allow...

From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Wolfgang Bumiller <w.bumiller@proxmox.com>
Date: Mon, 6 Apr 2020 12:16:38 +0200
Subject: [PATCH] PVE: [Up] glusterfs: allow partial reads

This should deal with QEMU bug #1644754 until upstream
decides which way to go. The general direction seems to be
away from sector-based block APIs; with that in mind, and
compared to other network block backends (e.g. NFS),
treating partial reads as errors doesn't seem to make much
sense.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
 block/gluster.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/block/gluster.c b/block/gluster.c
index 93da76bc31..1079b6186b 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -57,6 +57,7 @@ typedef struct GlusterAIOCB {
     int ret;
     Coroutine *coroutine;
     AioContext *aio_context;
+    bool is_write;
 } GlusterAIOCB;

 typedef struct BDRVGlusterState {
@@ -752,8 +753,10 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret,
         acb->ret = 0; /* Success */
     } else if (ret < 0) {
         acb->ret = -errno; /* Read/Write failed */
+    } else if (acb->is_write) {
+        acb->ret = -EIO; /* Partial write - fail it */
     } else {
-        acb->ret = -EIO; /* Partial read/write - fail it */
+        acb->ret = 0; /* Success */
     }

     aio_co_schedule(acb->aio_context, acb->coroutine);
@@ -1022,6 +1025,7 @@ static coroutine_fn int qemu_gluster_co_pwrite_zeroes(BlockDriverState *bs,
     acb.ret = 0;
     acb.coroutine = qemu_coroutine_self();
     acb.aio_context = bdrv_get_aio_context(bs);
+    acb.is_write = true;

     ret = glfs_zerofill_async(s->fd, offset, bytes, gluster_finish_aiocb, &acb);
     if (ret < 0) {
@@ -1203,9 +1207,11 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
     acb.aio_context = bdrv_get_aio_context(bs);

     if (write) {
+        acb.is_write = true;
         ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
                                  gluster_finish_aiocb, &acb);
     } else {
+        acb.is_write = false;
         ret = glfs_preadv_async(s->fd, qiov->iov, qiov->niov, offset, 0,
                                 gluster_finish_aiocb, &acb);
     }
@@ -1269,6 +1275,7 @@ static coroutine_fn int qemu_gluster_co_flush_to_disk(BlockDriverState *bs)
     acb.ret = 0;
     acb.coroutine = qemu_coroutine_self();
     acb.aio_context = bdrv_get_aio_context(bs);
+    acb.is_write = true;

     ret = glfs_fsync_async(s->fd, gluster_finish_aiocb, &acb);
     if (ret < 0) {
@@ -1317,6 +1324,7 @@ static coroutine_fn int qemu_gluster_co_pdiscard(BlockDriverState *bs,
     acb.ret = 0;
     acb.coroutine = qemu_coroutine_self();
     acb.aio_context = bdrv_get_aio_context(bs);
+    acb.is_write = true;

     ret = glfs_discard_async(s->fd, offset, bytes, gluster_finish_aiocb, &acb);
     if (ret < 0) {
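
For reference, the effect of the patch on the completion path can be read in isolation as the short standalone sketch below. It is a hypothetical condensation, not part of the patch: struct fake_acb and finish_logic() stand in for GlusterAIOCB and gluster_finish_aiocb(), and the enclosing "!ret || ret == acb->size" check is recalled from upstream block/gluster.c rather than shown in the hunks above. A complete transfer or a negative result is handled as before; a short write still fails with -EIO, while a short read now counts as success.

/*
 * Hypothetical standalone sketch (not part of the patch): reproduces only
 * the post-patch decision table of gluster_finish_aiocb().
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/types.h>

struct fake_acb {
    ssize_t size;     /* bytes requested */
    int ret;          /* result reported back to the waiting coroutine */
    bool is_write;    /* set by the submit paths patched above */
};

static void finish_logic(struct fake_acb *acb, ssize_t ret)
{
    if (!ret || ret == acb->size) {
        acb->ret = 0;        /* complete transfer: success */
    } else if (ret < 0) {
        acb->ret = -errno;   /* read/write failed */
    } else if (acb->is_write) {
        acb->ret = -EIO;     /* partial write: still an error */
    } else {
        acb->ret = 0;        /* partial read: allowed after this patch */
    }
}

int main(void)
{
    struct fake_acb rd = { .size = 4096, .ret = 0, .is_write = false };
    struct fake_acb wr = { .size = 4096, .ret = 0, .is_write = true };

    finish_logic(&rd, 2048);  /* short read  */
    finish_logic(&wr, 2048);  /* short write */

    printf("short read  -> %d\n", rd.ret);  /* 0 */
    printf("short write -> %d\n", wr.ret);  /* -EIO */
    return 0;
}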