Commit Graph

98213 Commits (410840cdb1342751f58a3521f48d5a9faf694c3b)

Author SHA1 Message Date
Bin Meng 0370f239ad tests/unit: Update test-io-channel-socket.c for Windows
Change to dynamically include the test cases by checking AF_UNIX
availability with a new helper, socket_check_afunix_support().
With this change, testing on a Windows host is covered as well.

Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220802075200.907360-5-bmeng.cn@gmail.com>
2022-09-02 15:54:47 +04:00
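
A minimal sketch, not the QEMU helper itself, of how such a runtime
AF_UNIX probe can be written: try to create an AF_UNIX socket and treat
failure as lack of support. The function name and error handling below
are assumptions.

    /*
     * Illustrative runtime probe for AF_UNIX support (not the QEMU
     * implementation).  On Windows, link against ws2_32.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #ifdef _WIN32
    #include <winsock2.h>
    #else
    #include <sys/socket.h>
    #include <unistd.h>
    #endif

    static bool check_afunix_support(void)
    {
        bool supported;
    #ifdef _WIN32
        /* Before Windows 10 build 17063 this typically fails with
         * WSAEAFNOSUPPORT. */
        SOCKET s = socket(AF_UNIX, SOCK_STREAM, 0);

        supported = (s != INVALID_SOCKET);
        if (supported) {
            closesocket(s);
        }
    #else
        int s = socket(AF_UNIX, SOCK_STREAM, 0);

        supported = (s >= 0);
        if (supported) {
            close(s);
        }
    #endif
        return supported;
    }

    int main(void)
    {
    #ifdef _WIN32
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);
    #endif
        printf("AF_UNIX %s\n",
               check_afunix_support() ? "available" : "unavailable");
        return 0;
    }
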
Bin Meng 120fa5e0e6 chardev/char-socket: Update AF_UNIX for Windows
Now that AF_UNIX has come to Windows, update the existing logic in
qemu_chr_compute_filename() and qmp_chardev_open_socket() for Windows.

Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220802075200.907360-4-bmeng.cn@gmail.com>
2022-09-02 15:54:46 +04:00
Bin Meng d409373b9d util/qemu-sockets: Enable unix socket support on Windows
Support for unix sockets has existed in both BSD and Linux for the
longest time, but not on Windows. Since Windows 10 build 17063 [1],
native unix socket support has come to Windows. Starting with this
build, two Win32 processes can use the AF_UNIX address family over
the Winsock API to communicate with each other.

[1] https://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/

Signed-off-by: Xuzhou Cheng <xuzhou.cheng@windriver.com>
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220802075200.907360-3-bmeng.cn@gmail.com>
2022-09-02 15:54:46 +04:00
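
For reference, a minimal Windows-only sketch of an AF_UNIX listener
over Winsock, in the spirit of the blog post above. The socket path is
a placeholder and error handling is compressed; this is not the
qemu-sockets code.

    /* Build on Windows 10 build 17063 or later; link against ws2_32. */
    #ifdef _WIN32
    #include <winsock2.h>
    #include <windows.h>
    #include <afunix.h>      /* struct sockaddr_un / SOCKADDR_UN */
    #include <stdio.h>
    #include <string.h>

    #define SOCK_PATH "C:\\Temp\\example.sock"   /* placeholder path */

    int main(void)
    {
        WSADATA wsa;
        SOCKADDR_UN addr;
        SOCKET srv;

        WSAStartup(MAKEWORD(2, 2), &wsa);

        srv = socket(AF_UNIX, SOCK_STREAM, 0);
        if (srv == INVALID_SOCKET) {
            fprintf(stderr, "AF_UNIX not supported: %d\n", WSAGetLastError());
            return 1;
        }

        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);
        DeleteFileA(SOCK_PATH);        /* remove a stale socket file, if any */

        if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) == SOCKET_ERROR ||
            listen(srv, 1) == SOCKET_ERROR) {
            fprintf(stderr, "bind/listen failed: %d\n", WSAGetLastError());
            return 1;
        }

        printf("listening on %s\n", SOCK_PATH);
        closesocket(srv);
        WSACleanup();
        return 0;
    }
    #else
    int main(void) { return 0; }       /* Windows-only example */
    #endif
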
Zheyu Ma 36a894aeb6 net: tulip: Restrict DMA engine to memories
The DMA engine is started by I/O access and then itself accesses the
I/O registers, triggering a reentrancy bug.

The following log can reveal it:
==5637==ERROR: AddressSanitizer: stack-overflow
    #0 0x5595435f6078 in tulip_xmit_list_update qemu/hw/net/tulip.c:673
    #1 0x5595435f204a in tulip_write qemu/hw/net/tulip.c:805:13
    #2 0x559544637f86 in memory_region_write_accessor qemu/softmmu/memory.c:492:5
    #3 0x5595446379fa in access_with_adjusted_size qemu/softmmu/memory.c:554:18
    #4 0x5595446372fa in memory_region_dispatch_write qemu/softmmu/memory.c
    #5 0x55954468b74c in flatview_write_continue qemu/softmmu/physmem.c:2825:23
    #6 0x559544683662 in flatview_write qemu/softmmu/physmem.c:2867:12
    #7 0x5595446833f3 in address_space_write qemu/softmmu/physmem.c:2963:18
    #8 0x5595435fb082 in dma_memory_rw_relaxed qemu/include/sysemu/dma.h:87:12
    #9 0x5595435fb082 in dma_memory_rw qemu/include/sysemu/dma.h:130:12
    #10 0x5595435fb082 in dma_memory_write qemu/include/sysemu/dma.h:171:12
    #11 0x5595435fb082 in stl_le_dma qemu/include/sysemu/dma.h:272:1
    #12 0x5595435fb082 in stl_le_pci_dma qemu/include/hw/pci/pci.h:910:1
    #13 0x5595435fb082 in tulip_desc_write qemu/hw/net/tulip.c:101:9
    #14 0x5595435f7e3d in tulip_xmit_list_update qemu/hw/net/tulip.c:706:9
    #15 0x5595435f204a in tulip_write qemu/hw/net/tulip.c:805:13

Fix this bug by restricting the DMA engine to memory regions.

Signed-off-by: Zheyu Ma <zheyuma97@gmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
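
A standalone model of the idea behind the fix (not the tulip code
itself): the DMA path refuses any target outside guest RAM, so a
descriptor pointing back into the device's own MMIO window can no
longer re-enter the register handlers. All names and address ranges
below are invented.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define RAM_BASE   0x40000000u
    #define RAM_SIZE   0x00010000u
    #define MMIO_BASE  0xfe000000u     /* where the device registers live */

    static uint8_t guest_ram[RAM_SIZE];

    static bool addr_is_ram(uint32_t addr, uint32_t len)
    {
        return addr >= RAM_BASE &&
               addr - RAM_BASE < RAM_SIZE &&
               len <= RAM_SIZE - (addr - RAM_BASE);
    }

    /* DMA write that only accepts targets inside the RAM window. */
    static bool dma_write(uint32_t addr, const void *buf, uint32_t len)
    {
        if (!addr_is_ram(addr, len)) {
            fprintf(stderr, "dma_write: rejecting non-RAM target 0x%08x\n",
                    addr);
            return false;
        }
        memcpy(guest_ram + (addr - RAM_BASE), buf, len);
        return true;
    }

    int main(void)
    {
        uint32_t status = 0x1;

        /* A normal descriptor write-back into RAM succeeds... */
        dma_write(RAM_BASE + 0x1000, &status, sizeof(status));
        /* ...while one pointing back at the MMIO window is refused. */
        dma_write(MMIO_BASE + 0x10, &status, sizeof(status));
        return 0;
    }
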
Zhang Chen 3772cf0d1b net/colo.c: Fix the pointer issue reported by Coverity.
When virtio-net-pci is enabled, guest network packets carry a
vnet_hdr. In COLO mode, the primary VM's network packets may be
redirected to another VM, so filter-redirect needs to enable the
vnet_hdr flag at the same time for COLO-proxy to parse the original
network packet correctly. With any misconfiguration here, the
vnet_hdr_len used to parse the packet is wrong and data+offset will
point to the wrong place.

Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez 0e3fdcffea vdpa: Delete CVQ migration blocker
We can restore the device state in the destination via CVQ now. Remove
the migration blocker.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez dd036d8d27 vdpa: Add virtio-net mac address via CVQ at start
This is needed so the destination vdpa device sees the same state as
the guest set in the source.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
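
For illustration, the control-queue command involved follows the
virtio-net spec layout: a virtio_net_ctrl_hdr with class
VIRTIO_NET_CTRL_MAC and command VIRTIO_NET_CTRL_MAC_ADDR_SET, followed
by the 6-byte MAC, and answered by a one-byte ack. The sketch below
only shows how the "out" buffer is laid out, not how vhost-vdpa queues
it at start.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define VIRTIO_NET_CTRL_MAC           1
    #define VIRTIO_NET_CTRL_MAC_ADDR_SET  1

    struct virtio_net_ctrl_hdr {
        uint8_t class;
        uint8_t cmd;
    };

    /* Build the out buffer: control header followed by the 6-byte MAC. */
    static size_t build_mac_set_cmd(uint8_t *out, size_t out_len,
                                    const uint8_t mac[6])
    {
        struct virtio_net_ctrl_hdr hdr = {
            .class = VIRTIO_NET_CTRL_MAC,
            .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
        };

        if (out_len < sizeof(hdr) + 6) {
            return 0;
        }
        memcpy(out, &hdr, sizeof(hdr));
        memcpy(out + sizeof(hdr), mac, 6);
        return sizeof(hdr) + 6;    /* the device answers with a 1-byte ack */
    }

    int main(void)
    {
        const uint8_t mac[6] = { 0x52, 0x54, 0x00, 0x12, 0x34, 0x56 };
        uint8_t buf[64];

        printf("CVQ out buffer is %zu bytes\n",
               build_mac_set_cmd(buf, sizeof(buf), mac));
        return 0;
    }
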
Eugenio Pérez 539573c317 vhost_net: add NetClientState->load() callback
It allows per-net-client operations right after the device's
successful start, in particular to load the device status.

Vhost-vdpa net will use it to add the CVQ buffers to restore the device
status.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
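
A generic sketch of the callback shape being added, with invented
struct and function names (not the QEMU NetClientInfo definitions): an
optional load() hook invoked right after a client's successful start.

    #include <stdio.h>

    struct net_client;

    struct net_client_ops {
        int (*start)(struct net_client *nc);
        int (*load)(struct net_client *nc);  /* optional: restore state */
    };

    struct net_client {
        const char *name;
        const struct net_client_ops *ops;
    };

    static int start_net_clients(struct net_client **clients, int n)
    {
        for (int i = 0; i < n; i++) {
            struct net_client *nc = clients[i];

            if (nc->ops->start && nc->ops->start(nc) < 0) {
                return -1;
            }
            /* Right after a successful start, let the client load state. */
            if (nc->ops->load && nc->ops->load(nc) < 0) {
                return -1;
            }
        }
        return 0;
    }

    static int cvq_load(struct net_client *nc)
    {
        printf("%s: queueing CVQ commands to restore state\n", nc->name);
        return 0;
    }

    static const struct net_client_ops vdpa_cvq_ops = { .load = cvq_load };

    int main(void)
    {
        struct net_client cvq = { .name = "vhost-vdpa-cvq",
                                  .ops = &vdpa_cvq_ops };
        struct net_client *clients[] = { &cvq };

        return start_net_clients(clients, 1) ? 1 : 0;
    }
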
Eugenio Pérez be4278b65f vdpa: extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail
So we can reuse it to inject state messages.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
--
v7:
* Remove double free error

v6:
* Do not assume the 'in' buffer sent to the device is sizeof(virtio_net_ctrl_ack)

v5:
* Do not use an artificial !NULL VirtQueueElement
* Use only out size instead of iovec dev_buffers for these functions.

Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez 7a7f87e94c vdpa: Move command buffers map to start of net device
As this series will reuse them to restore the device state at the end
of a migration (or at device start), allocate them only once at device
start so we don't duplicate their map and unmap.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez f8972b56ee vdpa: add net_vhost_vdpa_cvq_info NetClientInfo
Next patches will add a new info callback to restore NIC status through
CVQ. Since only the CVQ vhost device is needed, create it with a new
NetClientInfo.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez c5e5269d8a vhost_net: Add NetClientInfo stop callback
Used by the backend to perform actions after the device is stopped.

In particular, vdpa net uses it to unmap the CVQ buffers from the
device, undoing the actions performed in prepare().

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez eb92b75380 vhost_net: Add NetClientInfo start callback
This is used by the backend to perform actions before the device is
started.

In particular, vdpa net uses it to map CVQ buffers to the device, so
it can send control commands using them.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez d368c0b052 vhost: Do not depend on !NULL VirtQueueElement on vhost_svq_flush
Since QEMU will be able to inject new elements on the CVQ to restore
the state, we must not depend on a VirtQueueElement to know whether a
new element has been used by the device or not. Instead, check whether
there are new elements using only the used idx in vhost_svq_flush.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
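
The bookkeeping this relies on is the standard split-ring one: compare
a shadow last_used_idx against the index the device publishes, with no
per-entry VirtQueueElement needed. A simplified standalone sketch, not
the SVQ code:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define QUEUE_SIZE 256u

    struct used_elem {
        uint32_t id;
        uint32_t len;
    };

    struct used_ring {
        uint16_t flags;
        uint16_t idx;                  /* written by the device */
        struct used_elem ring[QUEUE_SIZE];
    };

    struct svq {
        struct used_ring used;
        uint16_t last_used_idx;        /* shadow index kept by the driver */
    };

    static bool svq_more_used(const struct svq *svq)
    {
        /* Real code reads the device-written idx atomically/acquire. */
        return svq->last_used_idx != svq->used.idx;
    }

    static void svq_flush(struct svq *svq)
    {
        while (svq_more_used(svq)) {
            const struct used_elem *e =
                &svq->used.ring[svq->last_used_idx % QUEUE_SIZE];

            printf("completed descriptor id %u, len %u\n", e->id, e->len);
            svq->last_used_idx++;
        }
    }

    int main(void)
    {
        struct svq svq = { .last_used_idx = 0 };

        /* Pretend the device completed two buffers. */
        svq.used.ring[0] = (struct used_elem){ .id = 0, .len = 64 };
        svq.used.ring[1] = (struct used_elem){ .id = 3, .len = 0 };
        svq.used.idx = 2;

        svq_flush(&svq);
        return 0;
    }
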
Eugenio Pérez 9e193cec5d vhost: Delete useless read memory barrier
As discussed in a previous series [1], this memory barrier is useless
given the atomic read of the used idx in vhost_svq_more_used. Delete it.

[1] https://lists.nongnu.org/archive/html/qemu-devel/2022-07/msg02616.html

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez 86f5f2546f vhost: use SVQ element ndescs instead of opaque data for desc validation
Since we're going to allow SVQ to add elements without the guest's
knowledge and without its own VirtQueueElement, it's easier to check
whether an element is a valid head by checking something other than
the VirtQueueElement.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez 9c2ab2f1ec vhost: stop transfer elem ownership in vhost_handle_guest_kick
It was easier to allow vhost_svq_add to handle the memory. Now that we
will allow qemu to add elements to an SVQ without the guest's knowledge,
it's better to handle it in the caller.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez 8b6d6119ad vdpa: Use ring hwaddr at vhost_vdpa_svq_unmap_ring
Reduce code duplication.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez 8b64e48642 vhost: Always store new kick fd on vhost_svq_set_svq_kick_fd
Because of this, we can unbind a file descriptor twice if
vhost_svq_set_svq_kick_fd is called twice. Since it comes from vhost
and not from SVQ, that file descriptor could be something other than
the guest's vhost notifier.

Likewise, the same can happen if a guest starts and stops the device
multiple times.

Reported-by: Lei Yang <leiyang@redhat.com>
Fixes: dff4426fa6 ("vhost: Add Shadow VirtQueue kick forwarding capabilities")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez 5b590f51b9 vdpa: Make SVQ vring unmapping return void
Nothing actually reads the return value, but an error in cleaning some
entries could cause device stop to abort, making a restart impossible.
Better to ignore the return value explicitly.

Reported-by: Lei Yang <leiyang@redhat.com>
Fixes: 34e3c94eda ("vdpa: Add custom IOTLB translations to SVQ")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez b37c12be96 vdpa: Remove SVQ vring from iova_tree at shutdown
Although the device will be reset before usage, the right thing to do is
to clean it.

Reported-by: Lei Yang <leiyang@redhat.com>
Fixes: 34e3c94eda ("vdpa: Add custom IOTLB translations to SVQ")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez 69292a8e40 util: accept iova_tree_remove_parameter by value
It's convenient to call iova_tree_remove with a map returned from
iova_tree_find or iova_tree_find_iova. With the current code this is
not possible, since we would free it and then try to search for it
again.

Fix it by accepting the map by value, forcing a copy of the argument.
Not applying a Fixes tag, since there is no such use at the moment.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
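
A generic illustration of the problem and the by-value fix, using an
invented list-backed tree rather than the real iova_tree API: remove()
frees the stored entry, so a pointer returned by find() would dangle,
while a by-value copy of the map stays valid for the whole call.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        unsigned long iova;
        unsigned long size;
    } DMAMap;

    typedef struct Node {
        DMAMap map;
        struct Node *next;
    } Node;

    typedef struct {
        Node *head;
    } Tree;

    static const DMAMap *tree_find(const Tree *t, unsigned long iova)
    {
        for (Node *n = t->head; n; n = n->next) {
            if (iova >= n->map.iova && iova < n->map.iova + n->map.size) {
                return &n->map;
            }
        }
        return NULL;
    }

    /* By value: safe even when the argument came from tree_find(). */
    static void tree_remove(Tree *t, DMAMap map)
    {
        for (Node **p = &t->head; *p; p = &(*p)->next) {
            if ((*p)->map.iova == map.iova) {
                Node *victim = *p;

                *p = victim->next;
                free(victim);   /* frees the storage a pointer would alias */
                return;
            }
        }
    }

    int main(void)
    {
        Tree t = { .head = calloc(1, sizeof(Node)) };
        const DMAMap *found;

        t.head->map = (DMAMap){ .iova = 0x1000, .size = 0x1000 };

        found = tree_find(&t, 0x1800);
        if (found) {
            tree_remove(&t, *found);   /* copied before the node is freed */
        }
        printf("tree is %s\n", t.head ? "non-empty" : "empty");
        return 0;
    }
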
Eugenio Pérez 7dab70bec3 vdpa: do not save failed dma maps in SVQ iova tree
If a map fails for whatever reason, it must not be saved in the tree.
Otherwise, qemu will try to unmap it in cleanup, leading to more errors.

Fixes: 34e3c94eda ("vdpa: Add custom IOTLB translations to SVQ")
Reported-by: Lei Yang <leiyang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Eugenio Pérez 10dab9f263 vdpa: Skip the maps not in the iova tree
The next patch will skip registering in the iova tree the dma maps
that the vdpa device rejects. We need to account for that here, or we
cause a SIGSEGV when accessing the result.

Reported-by: Lei Yang <leiyang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-09-02 10:22:39 +08:00
Stefan Hajnoczi c125b55207 Fix avr_cpu_tlb_fill use of probe argument
Fix skip instructions being separated from the next insn (#1118)

Merge tag 'pull-avr-20220901' of https://gitlab.com/rth7680/qemu into staging

Fix avr_cpu_tlb_fill use of probe argument
Fix skip instructions being separated from the next insn (#1118)

# -----BEGIN PGP SIGNATURE-----
#
# iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmMQRs4dHHJpY2hhcmQu
# aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV+7cAgAtlUxw9kNnIdrz1HG
# mkXO1kOfj0si8OHeAddy221lOL7zUm/Tw6vOdqxBsUjzkERLTNC6MhtVu6s3msyP
# Yi+Hh1lC9tk+YTYNnIeMqgEQYno3RFGAIaDHHRGQn8ha9PWWr0yGGaWTOZjm3Idf
# QYvFxiKfgTOEVekP4GYwkMsM02ItHu0hLLUUryKrQrCISNYzkF7AEtPxfxG4eDIr
# kN0QQndN5pfhRWnV6cvo6VVmAGz70YfKnlJgAFveeCZETYNpHP1npcsc4uj52JGk
# o0jxUSbZEzIbqLWSHqxa3KXydx/070sh0qmTmCzJSU7hOfmYpBHnT4ApHkijrIGI
# 3lrrJw==
# =5lX1
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 01 Sep 2022 01:44:46 EDT
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* tag 'pull-avr-20220901' of https://gitlab.com/rth7680/qemu:
  target/avr: Disable interrupts when env->skip set
  target/avr: Only execute one interrupt at a time
  target/avr: Call avr_cpu_do_interrupt directly
  target/avr: Support probe argument to tlb_fill

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-09-01 16:26:45 -04:00
Paul Brook a64fc26919 target/i386: AVX+AES helpers prep
Make the AES vector helpers AVX ready

No functional changes to existing helpers

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-22-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook 5a09df21f7 target/i386: AVX pclmulqdq prep
Make the pclmulqdq helper AVX ready

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-21-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook 0e29cea589 target/i386: Rewrite blendv helpers
Rewrite the blendv helpers so that they can easily be extended to support
the AVX encodings, which make all 4 arguments explicit.

No functional changes to the existing helpers

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-20-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook fd17264ad1 target/i386: Misc AVX helper prep
Fix up various vector helpers that either trivially extend to 256 bit
or don't have 256 bit variants.

No functional changes to existing helpers

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-19-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook 6567ffb4f2 target/i386: Destructive FP helpers for AVX
Prepare the horizontal arithmetic vector helpers for AVX.
These currently use a dummy Reg typed variable to store the result then
assign the whole register.  This will cause 128 bit operations to corrupt
the upper half of the register, so replace it with explicit temporaries
and element assignments.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-18-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
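
The pattern being replaced can be shown with a standalone model (not
the QEMU helpers): when the register is wider than the operation,
assigning a whole dummy register clobbers the untouched upper lanes,
while explicit temporaries with element-wise stores leave them alone.

    #include <stdint.h>
    #include <stdio.h>

    #define LANES 8            /* pretend: 8-lane register, op uses 4 */

    typedef struct { int32_t w[LANES]; } Reg;

    /* Old style: dummy Reg temporary, then whole-register assignment. */
    static void haddd_whole_reg(Reg *d, const Reg *s)
    {
        Reg tmp = { { 0 } };           /* upper lanes silently become 0 */

        tmp.w[0] = d->w[0] + d->w[1];
        tmp.w[1] = d->w[2] + d->w[3];
        tmp.w[2] = s->w[0] + s->w[1];
        tmp.w[3] = s->w[2] + s->w[3];
        *d = tmp;                      /* clobbers d->w[4..7] */
    }

    /* New style: explicit temporaries, element-wise stores only. */
    static void haddd_elementwise(Reg *d, const Reg *s)
    {
        int32_t t0 = d->w[0] + d->w[1];
        int32_t t1 = d->w[2] + d->w[3];
        int32_t t2 = s->w[0] + s->w[1];
        int32_t t3 = s->w[2] + s->w[3];

        d->w[0] = t0;
        d->w[1] = t1;
        d->w[2] = t2;
        d->w[3] = t3;                  /* d->w[4..7] left untouched */
    }

    int main(void)
    {
        Reg a = { { 1, 2, 3, 4, 100, 100, 100, 100 } };
        Reg b = { { 5, 6, 7, 8, 0, 0, 0, 0 } };
        Reg c = a;

        haddd_whole_reg(&a, &b);
        haddd_elementwise(&c, &b);
        printf("upper lane after whole-reg: %d, element-wise: %d\n",
               a.w[4], c.w[4]);        /* 0 vs. 100 */
        return 0;
    }
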
Paul Brook 6f218d6e99 target/i386: Dot product AVX helper prep
Make the dpps and dppd helpers AVX-ready

I can't see any obvious reason why dppd shouldn't work on 256 bit ymm
registers, but both AMD and Intel agree that it's xmm only.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-17-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook cbf4ad5498 target/i386: reimplement AVX comparison helpers
AVX includes an additional set of comparison predicates, some of which
our softfloat implementation does not expose as separate functions.
Rewrite the helpers in terms of floatN_compare for future extensibility.

Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-24-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook 3403cafeee target/i386: Floating point arithmetic helper AVX prep
Prepare the "easy" floating point vector helpers for AVX

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-16-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook d45b0de63d target/i386: Destructive vector helpers for AVX
These helpers need to take special care to avoid overwriting source values
before the whole result has been calculated.  Currently they use a dummy
Reg typed variable to store the result then assign the whole register.
This will cause 128 bit operations to corrupt the upper half of the register,
so replace it with explicit temporaries and element assignments.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-14-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook e894bae8cb target/i386: Misc integer AVX helper prep
More preparatory work for AVX support in various integer vector helpers

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-13-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook ee04a3c86d target/i386: Rewrite simple integer vector helpers
Rewrite the "simple" vector integer helpers in preperation for AVX support.

While the current code is able to use the same prototype for unary
(a = F(b)) and binary (a = F(b, c)) operations, future changes will cause
them to diverge.

No functional changes to existing helpers

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-12-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook 18592d2ec2 target/i386: Rewrite vector shift helper
Rewrite the vector shift helpers in preparation for AVX support (3 operand
form and 256 bit vectors).

For now keep the existing two operand interface.

No functional changes to existing helpers.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-11-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paolo Bonzini 25bdec79c6 target/i386: rewrite destructive 3DNow operations
Remove use of the MOVE macro, since it will be purged from
MMX/SSE as well.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook 71964f1b69 target/i386: Add CHECK_NO_VEX
Reject invalid VEX encodings on MMX instructions.

Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-7-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paolo Bonzini 7f32690243 target/i386: do not cast gen_helper_* function pointers
Use a union to store the various possible kinds of function pointers, and
access the correct one based on the flags.

SSEOpHelper_table6 and SSEOpHelper_table7 right now only have one case,
but this would change with AVX's 3- and 4-argument operations.  Use
unions there too, to keep the code more similar for the three tables.

Extracted from a patch by Paul Brook <paul@nowt.org>.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
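
A generic sketch of the table shape this describes, with invented
names: a union holds the differently typed helper pointers and a flags
word says which member is valid, so no function-pointer casts are
needed.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint8_t b[16]; } Reg;

    enum {
        OPF_IMM = 1 << 0,      /* row uses the op2i member (immediate) */
    };

    typedef struct {
        uint32_t flags;
        union {
            void (*op2)(Reg *d, const Reg *s);
            void (*op2i)(Reg *d, const Reg *s, uint32_t imm);
        } fn;
    } OpEntry;

    static void op_plain(Reg *d, const Reg *s)
    {
        d->b[0] = s->b[0];
    }

    static void op_imm(Reg *d, const Reg *s, uint32_t imm)
    {
        d->b[0] = s->b[0] + imm;
    }

    static const OpEntry table[] = {
        { .flags = 0,       .fn.op2  = op_plain },
        { .flags = OPF_IMM, .fn.op2i = op_imm   },
    };

    static void dispatch(const OpEntry *e, Reg *d, const Reg *s, uint32_t imm)
    {
        /* Pick the union member that matches the flags; never cast. */
        if (e->flags & OPF_IMM) {
            e->fn.op2i(d, s, imm);
        } else {
            e->fn.op2(d, s);
        }
    }

    int main(void)
    {
        Reg d = { { 0 } }, s = { { 7 } };

        dispatch(&table[0], &d, &s, 0);
        dispatch(&table[1], &d, &s, 3);
        printf("result %u\n", d.b[0]);     /* 7 + 3 = 10 */
        return 0;
    }
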
Paolo Bonzini ce4fa29f94 target/i386: Add size suffix to vector FP helpers
For AVX we're going to need both 128 bit (xmm) and 256 bit (ymm) variants of
floating point helpers. Add the register type suffix to the existing
*PS and *PD helpers (SS and SD variants are only valid on 128 bit vectors).

No functional changes.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-15-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paolo Bonzini d7a851f89a target/i386: isolate MMX code more
Extracted from a patch by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paolo Bonzini 2607e76ffd target/i386: check SSE table flags instead of hardcoding opcodes
Put more flags to work to avoid hardcoding lists of opcodes.  The op7 case
for SSE_OPF_CMP is included for homogeneity and because AVX needs it, but
it is never used by SSE or MMX.

Extracted from a patch by Paul Brook <paul@nowt.org>.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook 622ef8f291 target/i386: Move 3DNOW decoder
Handle 3DNOW instructions early to avoid complicating the MMX/SSE logic.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-25-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook 491f0f1962 target/i386: Rework sse_op_table6/7
Add a flags field to each row in sse_op_table6 and sse_op_table7.

Initially this is only used as a replacement for the magic SSE41_SPECIAL
pointer.  The other flags are mostly relevant for the AVX implementation
but can be applied to SSE as well.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-6-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook f2dbc28947 target/i386: Rework sse_op_table1
Add a flags field to each row in sse_op_table1.

Initially this is only used as a replacement for the magic
SSE_SPECIAL and SSE_DUMMY pointers; the other flags are mostly
relevant for the AVX implementation but can be applied to SSE as well.

Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-5-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
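
The reworked row shape can be sketched generically (invented names and
flags, not the actual sse_op_table1): instead of smuggling magic
sentinel pointers such as SSE_SPECIAL through the helper slot, each row
carries an explicit flags word next to the pointer.

    #include <stdint.h>
    #include <stdio.h>

    typedef void (*SSEFunc)(void *d, void *s);

    enum {
        OPF_SPECIAL = 1 << 0,   /* decoded by hand, no generic helper */
        OPF_MMX     = 1 << 1,   /* example of an extra per-row flag   */
    };

    typedef struct {
        uint32_t flags;
        SSEFunc fn;
    } OpRow;

    static void helper_paddb(void *d, void *s)
    {
        (void)d;
        (void)s;
    }

    static const OpRow op_table[256] = {
        [0x0f] = { .flags = OPF_SPECIAL },              /* was a sentinel */
        [0xfc] = { .flags = OPF_MMX, .fn = helper_paddb },
    };

    static void decode(uint8_t op)
    {
        const OpRow *row = &op_table[op];

        if (row->flags & OPF_SPECIAL) {
            printf("0x%02x: hand-decoded\n", op);
            return;
        }
        if (row->fn) {
            row->fn(NULL, NULL);
            printf("0x%02x: helper dispatched (flags 0x%x)\n",
                   op, row->flags);
        }
    }

    int main(void)
    {
        decode(0x0f);
        decode(0xfc);
        return 0;
    }
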
Paul Brook 36fc7ee299 target/i386: Add ZMM_OFFSET macro
Add a convenience macro to get the address of an xmm_regs element within
CPUX86State.

This was originally going to be the basis of an implementation that broke
operations into 128 bit chunks. I scrapped that idea, so this is now a purely
cosmetic change. But I think a worthwhile one - it reduces the number of
function calls that need to be split over multiple lines.

No functional changes.

Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-9-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
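
One plausible shape for such a macro, shown here against a toy CPU
state struct rather than the real CPUX86State: offsetof() of the
requested xmm_regs element, which a TCG-style load/store then combines
with the env pointer.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint64_t q[8]; } ZMMReg;

    typedef struct {
        uint64_t regs[16];             /* toy layout, not CPUX86State */
        ZMMReg xmm_regs[32];
    } ToyCPUState;

    /* Byte offset of vector register 'reg' inside the CPU state. */
    #define ZMM_OFFSET(reg) offsetof(ToyCPUState, xmm_regs[reg])

    int main(void)
    {
        printf("xmm_regs[0] at %zu, xmm_regs[3] at %zu\n",
               ZMM_OFFSET(0), ZMM_OFFSET(3));
        return 0;
    }
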
Paolo Bonzini da1a7edb5d target/i386: formatting fixes
Extracted from a patch by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paolo Bonzini 3dd116e32e target/i386: do not use MOVL to move data between SSE registers
Write down explicitly the load/store sequence.

Extracted from a patch by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
Paul Brook 91117bc546 tests/tcg: i386: add SSE tests
Tests for correct operation of most x86-64 SSE instructions.
It should cover all combinations of overlapping register and memory
operands on a set of random-ish data.

Results are bit-identical to an Intel i5-8500, with the exception of
the RCPSS and RSQRT approximations where the real CPU gives less accurate
results (the Intel spec allows relative errors up to 1.5 * 2^-12).

Signed-off-by: Paul Brook <paul@nowt.org>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20220424220204.2493824-42-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-01 20:16:33 +02:00
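
The approach can be illustrated with a tiny standalone harness (not the
actual tests/tcg code): fill the operands with random-ish data from a
fixed-seed generator, run one SSE instruction via intrinsics, and
compare the result bit-for-bit against a plain scalar reference. Build
with -msse2.

    #include <emmintrin.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t lcg_state = 0x12345678u;

    static uint32_t lcg(void)
    {
        lcg_state = lcg_state * 1664525u + 1013904223u;
        return lcg_state;
    }

    int main(void)
    {
        uint32_t a[4], b[4], got[4], want[4];
        __m128i va, vb;

        for (int i = 0; i < 4; i++) {
            a[i] = lcg();
            b[i] = lcg();
            want[i] = a[i] + b[i];     /* scalar reference for PADDD */
        }

        va = _mm_loadu_si128((const __m128i *)a);
        vb = _mm_loadu_si128((const __m128i *)b);
        _mm_storeu_si128((__m128i *)got, _mm_add_epi32(va, vb));

        if (memcmp(got, want, sizeof(got)) != 0) {
            printf("PADDD: mismatch\n");
            return 1;
        }
        printf("PADDD: bit-identical to the scalar reference\n");
        return 0;
    }
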