pve-qemu/debian/patches/pve/0013-PVE-virtio-balloon-imp...


From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Wolfgang Bumiller <w.bumiller@proxmox.com>
Date: Mon, 6 Apr 2020 12:16:43 +0200
Subject: [PATCH] PVE: virtio-balloon: improve query-balloon
Actually provide memory information via the query-balloon
command.
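
For illustration, assuming a guest that reports all optional stats, a
query-balloon round trip could then look roughly like the following
(all values are made-up examples):

  -> { "execute": "query-balloon" }
  <- { "return": { "actual": 2147483648, "max_mem": 4294967296,
                   "total_mem": 4160749568, "free_mem": 3221225472,
                   "mem_swapped_in": 0, "mem_swapped_out": 0,
                   "major_page_faults": 216, "minor_page_faults": 130406,
                   "last_update": 1586168980 } }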
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
[FE: add BalloonInfo to member name exceptions list
rebase for 8.0 - moved to hw/core/machine-hmp-cmds.c]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
hw/core/machine-hmp-cmds.c | 30 +++++++++++++++++++++++++++++-
hw/virtio/virtio-balloon.c | 33 +++++++++++++++++++++++++++++++--
qapi/machine.json | 22 +++++++++++++++++++++-
qapi/pragma.json | 1 +
4 files changed, 82 insertions(+), 4 deletions(-)
diff --git a/hw/core/machine-hmp-cmds.c b/hw/core/machine-hmp-cmds.c
index c3e55ef9e9..0e32e6201f 100644
--- a/hw/core/machine-hmp-cmds.c
+++ b/hw/core/machine-hmp-cmds.c
@@ -169,7 +169,35 @@ void hmp_info_balloon(Monitor *mon, const QDict *qdict)
return;
}
- monitor_printf(mon, "balloon: actual=%" PRId64 "\n", info->actual >> 20);
+ monitor_printf(mon, "balloon: actual=%" PRId64, info->actual >> 20);
+ monitor_printf(mon, " max_mem=%" PRId64, info->max_mem >> 20);
+ if (info->has_total_mem) {
+ monitor_printf(mon, " total_mem=%" PRId64, info->total_mem >> 20);
+ }
+ if (info->has_free_mem) {
+ monitor_printf(mon, " free_mem=%" PRId64, info->free_mem >> 20);
+ }
+
+ if (info->has_mem_swapped_in) {
+ monitor_printf(mon, " mem_swapped_in=%" PRId64, info->mem_swapped_in);
+ }
+ if (info->has_mem_swapped_out) {
+ monitor_printf(mon, " mem_swapped_out=%" PRId64, info->mem_swapped_out);
+ }
+ if (info->has_major_page_faults) {
+ monitor_printf(mon, " major_page_faults=%" PRId64,
+ info->major_page_faults);
+ }
+ if (info->has_minor_page_faults) {
+ monitor_printf(mon, " minor_page_faults=%" PRId64,
+ info->minor_page_faults);
+ }
+ if (info->has_last_update) {
+ monitor_printf(mon, " last_update=%" PRId64,
+ info->last_update);
+ }
+
+ monitor_printf(mon, "\n");
qapi_free_BalloonInfo(info);
}
diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
index d004cf29d2..2660ed520b 100644
--- a/hw/virtio/virtio-balloon.c
+++ b/hw/virtio/virtio-balloon.c
@@ -782,8 +782,37 @@ static uint64_t virtio_balloon_get_features(VirtIODevice *vdev, uint64_t f,
static void virtio_balloon_stat(void *opaque, BalloonInfo *info)
{
VirtIOBalloon *dev = opaque;
- info->actual = get_current_ram_size() - ((uint64_t) dev->actual <<
- VIRTIO_BALLOON_PFN_SHIFT);
+ ram_addr_t ram_size = get_current_ram_size();
+ info->actual = ram_size - ((uint64_t) dev->actual <<
+ VIRTIO_BALLOON_PFN_SHIFT);
+
+ info->max_mem = ram_size;
+
+ if (!(balloon_stats_enabled(dev) && balloon_stats_supported(dev) &&
+ dev->stats_last_update)) {
+ return;
+ }
+
+ info->last_update = dev->stats_last_update;
+ info->has_last_update = true;
+
+ info->mem_swapped_in = dev->stats[VIRTIO_BALLOON_S_SWAP_IN];
+ info->has_mem_swapped_in = info->mem_swapped_in >= 0 ? true : false;
+
+ info->mem_swapped_out = dev->stats[VIRTIO_BALLOON_S_SWAP_OUT];
+ info->has_mem_swapped_out = info->mem_swapped_out >= 0 ? true : false;
+
+ info->major_page_faults = dev->stats[VIRTIO_BALLOON_S_MAJFLT];
+ info->has_major_page_faults = info->major_page_faults >= 0 ? true : false;
+
+ info->minor_page_faults = dev->stats[VIRTIO_BALLOON_S_MINFLT];
+ info->has_minor_page_faults = info->minor_page_faults >= 0 ? true : false;
+
+ info->free_mem = dev->stats[VIRTIO_BALLOON_S_MEMFREE];
+ info->has_free_mem = info->free_mem >= 0 ? true : false;
+
+ info->total_mem = dev->stats[VIRTIO_BALLOON_S_MEMTOT];
+ info->has_total_mem = info->total_mem >= 0 ? true : false;
}
static void virtio_balloon_to_target(void *opaque, ram_addr_t target)
diff --git a/qapi/machine.json b/qapi/machine.json
index a08b6576ca..5c9a4d55f4 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -1063,9 +1063,29 @@
# @actual: the logical size of the VM in bytes Formula used:
# logical_vm_size = vm_ram_size - balloon_size
#
+# @last_update: time when stats got updated from guest
+#
+# @mem_swapped_in: number of pages swapped in within the guest
+#
+# @mem_swapped_out: number of pages swapped out within the guest
+#
+# @major_page_faults: number of major page faults within the guest
+#
+# @minor_page_faults: number of minor page faults within the guest
+#
+# @free_mem: amount of memory (in bytes) free in the guest
+#
+# @total_mem: amount of memory (in bytes) visible to the guest
+#
+# @max_mem: amount of memory (in bytes) assigned to the guest
+#
# Since: 0.14
##
-{ 'struct': 'BalloonInfo', 'data': {'actual': 'int' } }
+{ 'struct': 'BalloonInfo',
+ 'data': {'actual': 'int', '*last_update': 'int', '*mem_swapped_in': 'int',
+ '*mem_swapped_out': 'int', '*major_page_faults': 'int',
+ '*minor_page_faults': 'int', '*free_mem': 'int',
+ '*total_mem': 'int', 'max_mem': 'int' } }
##
# @query-balloon:
diff --git a/qapi/pragma.json b/qapi/pragma.json
index 7f810b0e97..325e684411 100644
--- a/qapi/pragma.json
+++ b/qapi/pragma.json
@@ -35,6 +35,7 @@
'member-name-exceptions': [ # visible in:
'ACPISlotType', # query-acpi-ospm-status
'AcpiTableOptions', # -acpitable
+ 'BalloonInfo', # query-balloon
'BlkdebugEvent', # blockdev-add, -blockdev
'BlkdebugSetStateOptions', # blockdev-add, -blockdev
'BlockDeviceInfo', # query-block