| author | Tomáš Mózes <tomas.mozes@gmail.com> | 2024-11-22 16:39:33 +0100 |
|---|---|---|
| committer | Tomáš Mózes <tomas.mozes@gmail.com> | 2024-11-22 16:39:33 +0100 |
| commit | 783ab7a536155a1f34168145bd94a14bd54532f1 (patch) | |
| tree | 2c024e431cb954a9b7ba1d24ab458c7c7086dada | |
| parent | Xen 4.18.4-pre-patchset-0 (diff) | |
| download | xen-upstream-patches-783ab7a536155a1f34168145bd94a14bd54532f1.tar.gz xen-upstream-patches-783ab7a536155a1f34168145bd94a14bd54532f1.tar.bz2 xen-upstream-patches-783ab7a536155a1f34168145bd94a14bd54532f1.zip | |
Xen 4.18.4-pre-patchset-1
Signed-off-by: Tomáš Mózes <tomas.mozes@gmail.com>
57 files changed, 3008 insertions, 52 deletions
diff --git a/0001-automation-update-tests-to-use-Debian-Bookworm.patch b/0001-automation-update-tests-to-use-Debian-Bookworm.patch index b3657e5..5ff1843 100644 --- a/0001-automation-update-tests-to-use-Debian-Bookworm.patch +++ b/0001-automation-update-tests-to-use-Debian-Bookworm.patch @@ -1,7 +1,7 @@ From 868a0985bf10b9c6f6139471c292f1232ee847aa Mon Sep 17 00:00:00 2001 From: Roger Pau Monne <roger.pau@citrix.com> Date: Tue, 21 Nov 2023 17:03:56 +0100 -Subject: [PATCH 01/25] automation: update tests to use Debian Bookworm +Subject: [PATCH 01/56] automation: update tests to use Debian Bookworm MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -98,5 +98,5 @@ index 61e642cce0..4d190777e1 100644 - ./automation/scripts/qemu-smoke-x86-64.sh pvh 2>&1 | tee ${LOGFILE} needs: -- -2.46.1 +2.47.0 diff --git a/0002-automation-disable-Yocto-jobs.patch b/0002-automation-disable-Yocto-jobs.patch index a8f4062..94034a5 100644 --- a/0002-automation-disable-Yocto-jobs.patch +++ b/0002-automation-disable-Yocto-jobs.patch @@ -1,7 +1,7 @@ From 9530cae01ad3fc44509d1cb1d348f406c12de961 Mon Sep 17 00:00:00 2001 From: Stefano Stabellini <stefano.stabellini@amd.com> Date: Fri, 9 Aug 2024 23:59:18 -0700 -Subject: [PATCH 02/25] automation: disable Yocto jobs +Subject: [PATCH 02/56] automation: disable Yocto jobs The Yocto jobs take a long time to run. We are changing Gitlab ARM64 runners and the new runners might not be able to finish the Yocto jobs @@ -44,5 +44,5 @@ index 32af30cced..84e9dde25a 100644 # Cppcheck analysis jobs -- -2.46.1 +2.47.0 diff --git a/0003-automation-use-expect-to-run-QEMU.patch b/0003-automation-use-expect-to-run-QEMU.patch index 3a65ea0..56fee14 100644 --- a/0003-automation-use-expect-to-run-QEMU.patch +++ b/0003-automation-use-expect-to-run-QEMU.patch @@ -1,7 +1,7 @@ From 781c25126117f664b3ac42643b832d9ff98cc03a Mon Sep 17 00:00:00 2001 From: Stefano Stabellini <stefano.stabellini@amd.com> Date: Wed, 14 Aug 2024 17:49:51 -0700 -Subject: [PATCH 03/25] automation: use expect to run QEMU +Subject: [PATCH 03/56] automation: use expect to run QEMU Use expect to invoke QEMU so that we can terminate the test as soon as we get the right string in the output instead of waiting until the @@ -271,5 +271,5 @@ index 3ec9cf74e1..51807c3cd4 100755 -exit 0 +./automation/scripts/qemu-key.exp -- -2.46.1 +2.47.0 diff --git a/0004-x86-vLAPIC-prevent-undue-recursion-of-vlapic_error.patch b/0004-x86-vLAPIC-prevent-undue-recursion-of-vlapic_error.patch index 7911056..77c0578 100644 --- a/0004-x86-vLAPIC-prevent-undue-recursion-of-vlapic_error.patch +++ b/0004-x86-vLAPIC-prevent-undue-recursion-of-vlapic_error.patch @@ -1,7 +1,7 @@ From 641b8f2a924b86ab086878b5baaf2d50ba3658f1 Mon Sep 17 00:00:00 2001 From: Jan Beulich <jbeulich@suse.com> Date: Tue, 24 Sep 2024 14:49:18 +0200 -Subject: [PATCH 04/25] x86/vLAPIC: prevent undue recursion of vlapic_error() +Subject: [PATCH 04/56] x86/vLAPIC: prevent undue recursion of vlapic_error() With the error vector set to an illegal value, the function invoking vlapic_set_irq() would bring execution back here, with the non-recursive @@ -53,5 +53,5 @@ index ba569043ea..70431ba438 100644 } spin_unlock_irqrestore(&vlapic->esr_lock, flags); -- -2.46.1 +2.47.0 diff --git a/0005-update-Xen-version-to-4.18.4-pre.patch b/0005-update-Xen-version-to-4.18.4-pre.patch index dd6539a..7908e3c 100644 --- a/0005-update-Xen-version-to-4.18.4-pre.patch +++ b/0005-update-Xen-version-to-4.18.4-pre.patch @@ -1,7 +1,7 @@ From 
5210dc1c303dcd36ee59ad43325f615cc1e78231 Mon Sep 17 00:00:00 2001 From: Jan Beulich <jbeulich@suse.com> Date: Tue, 24 Sep 2024 14:50:34 +0200 -Subject: [PATCH 05/25] update Xen version to 4.18.4-pre +Subject: [PATCH 05/56] update Xen version to 4.18.4-pre --- xen/Makefile | 2 +- @@ -21,5 +21,5 @@ index 56000ae82c..68b14fb356 100644 -include xen-version -- -2.46.1 +2.47.0 diff --git a/0006-x86-hvm-Fix-Misra-Rule-19.1-regression.patch b/0006-x86-hvm-Fix-Misra-Rule-19.1-regression.patch index fb8bd56..abf3685 100644 --- a/0006-x86-hvm-Fix-Misra-Rule-19.1-regression.patch +++ b/0006-x86-hvm-Fix-Misra-Rule-19.1-regression.patch @@ -1,7 +1,7 @@ From 6c2827e1330ecf37756391f2e080494e9b0076d4 Mon Sep 17 00:00:00 2001 From: Andrew Cooper <andrew.cooper3@citrix.com> Date: Tue, 24 Sep 2024 14:51:24 +0200 -Subject: [PATCH 06/25] x86/hvm: Fix Misra Rule 19.1 regression +Subject: [PATCH 06/56] x86/hvm: Fix Misra Rule 19.1 regression Despite noticing an impending Rule 19.1 violation, the adjustment made (the uint32_t cast) wasn't sufficient to avoid it. Try again. @@ -50,5 +50,5 @@ index e5fa682f85..fd390cefe1 100644 #ifndef NDEBUG -- -2.46.1 +2.47.0 diff --git a/0007-Arm-correct-FIXADDR_TOP.patch b/0007-Arm-correct-FIXADDR_TOP.patch index d467932..0c6e3a0 100644 --- a/0007-Arm-correct-FIXADDR_TOP.patch +++ b/0007-Arm-correct-FIXADDR_TOP.patch @@ -1,7 +1,7 @@ From 87d2cdd51327ab001d3cb68a714260f54bafba41 Mon Sep 17 00:00:00 2001 From: Jan Beulich <jbeulich@suse.com> Date: Tue, 24 Sep 2024 14:52:15 +0200 -Subject: [PATCH 07/25] Arm: correct FIXADDR_TOP +Subject: [PATCH 07/56] Arm: correct FIXADDR_TOP While reviewing a RISC-V patch cloning the Arm code, I noticed an off-by-1 here: FIX_PMAP_{BEGIN,END} being an inclusive range and @@ -54,5 +54,5 @@ index c34cc94c90..1ff67ff2b5 100644 static lpae_t *xen_map_table(mfn_t mfn) -- -2.46.1 +2.47.0 diff --git a/0008-xl-fix-incorrect-output-in-help-command.patch b/0008-xl-fix-incorrect-output-in-help-command.patch index 5d8d54d..6586c39 100644 --- a/0008-xl-fix-incorrect-output-in-help-command.patch +++ b/0008-xl-fix-incorrect-output-in-help-command.patch @@ -1,7 +1,7 @@ From 0d5f15e6face071c628bd569957d11ced887b42f Mon Sep 17 00:00:00 2001 From: "John E. 
Krokes" <mag@netherworld.org> Date: Tue, 24 Sep 2024 14:52:42 +0200 -Subject: [PATCH 08/25] xl: fix incorrect output in "help" command +Subject: [PATCH 08/56] xl: fix incorrect output in "help" command In "xl help", the output includes this line: @@ -32,5 +32,5 @@ index 62bdb2aeaa..5843590794 100644 }, { "vsnd-detach", -- -2.46.1 +2.47.0 diff --git a/0009-x86-pv-Introduce-x86_merge_dr6-and-fix-do_debug.patch b/0009-x86-pv-Introduce-x86_merge_dr6-and-fix-do_debug.patch index 855d952..bc5e9b0 100644 --- a/0009-x86-pv-Introduce-x86_merge_dr6-and-fix-do_debug.patch +++ b/0009-x86-pv-Introduce-x86_merge_dr6-and-fix-do_debug.patch @@ -1,7 +1,7 @@ From d32c77f471fb8400b6512c171a14cdd58f04f0a3 Mon Sep 17 00:00:00 2001 From: Andrew Cooper <andrew.cooper3@citrix.com> Date: Tue, 24 Sep 2024 14:53:22 +0200 -Subject: [PATCH 09/25] x86/pv: Introduce x86_merge_dr6() and fix do_debug() +Subject: [PATCH 09/56] x86/pv: Introduce x86_merge_dr6() and fix do_debug() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -136,5 +136,5 @@ index 45e1b277ea..9d72ebce55 100644 if ( guest_kernel_mode(v, regs) && v->domain->debugger_attached ) { -- -2.46.1 +2.47.0 diff --git a/0010-x86-pv-Fix-merging-of-new-status-bits-into-dr6.patch b/0010-x86-pv-Fix-merging-of-new-status-bits-into-dr6.patch index 13d3433..52aebc4 100644 --- a/0010-x86-pv-Fix-merging-of-new-status-bits-into-dr6.patch +++ b/0010-x86-pv-Fix-merging-of-new-status-bits-into-dr6.patch @@ -1,7 +1,7 @@ From cecee35dd426bb49daf0b58dcf6966024fdc0f0c Mon Sep 17 00:00:00 2001 From: Andrew Cooper <andrew.cooper3@citrix.com> Date: Tue, 24 Sep 2024 14:53:59 +0200 -Subject: [PATCH 10/25] x86/pv: Fix merging of new status bits into %dr6 +Subject: [PATCH 10/56] x86/pv: Fix merging of new status bits into %dr6 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -218,5 +218,5 @@ index 698750267a..e348e3c1d3 100644 /* -- -2.46.1 +2.47.0 diff --git a/0011-x86-pv-Address-Coverity-complaint-in-check_guest_io_.patch b/0011-x86-pv-Address-Coverity-complaint-in-check_guest_io_.patch index 3eae2c5..165dfe2 100644 --- a/0011-x86-pv-Address-Coverity-complaint-in-check_guest_io_.patch +++ b/0011-x86-pv-Address-Coverity-complaint-in-check_guest_io_.patch @@ -1,7 +1,7 @@ From 774d27c807dc5464a945a3242c5d1e8c6f723ab1 Mon Sep 17 00:00:00 2001 From: Andrew Cooper <andrew.cooper3@citrix.com> Date: Tue, 24 Sep 2024 14:54:35 +0200 -Subject: [PATCH 11/25] x86/pv: Address Coverity complaint in +Subject: [PATCH 11/56] x86/pv: Address Coverity complaint in check_guest_io_breakpoint() Commit 08aacc392d86 ("x86/emul: Fix misaligned IO breakpoint behaviour in PV @@ -108,5 +108,5 @@ index 15c83b9d23..b90f745c75 100644 if ( (start < (port + len)) && ((start + width) > port) ) match |= 1u << i; -- -2.46.1 +2.47.0 diff --git a/0012-x86emul-always-set-operand-size-for-AVX-VNNI-INT8-in.patch b/0012-x86emul-always-set-operand-size-for-AVX-VNNI-INT8-in.patch index c4cebee..2145505 100644 --- a/0012-x86emul-always-set-operand-size-for-AVX-VNNI-INT8-in.patch +++ b/0012-x86emul-always-set-operand-size-for-AVX-VNNI-INT8-in.patch @@ -1,7 +1,7 @@ From 1024fc729398131d62bec368553f6d69432c31cb Mon Sep 17 00:00:00 2001 From: Jan Beulich <jbeulich@suse.com> Date: Tue, 24 Sep 2024 14:55:11 +0200 -Subject: [PATCH 12/25] x86emul: always set operand size for AVX-VNNI-INT8 +Subject: [PATCH 12/56] x86emul: always set operand size for AVX-VNNI-INT8 insns Unlike for AVX-VNNI-INT16 I failed to notice that op_bytes may still be @@ -32,5 +32,5 
@@ index d6b60f0539..941941ef15 100644 case X86EMUL_OPC_VEX_66(0x0f38, 0x50): /* vpdpbusd [xy]mm/mem,[xy]mm,[xy]mm */ -- -2.46.1 +2.47.0 diff --git a/0013-x86emul-set-fake-operand-size-for-AVX512CD-broadcast.patch b/0013-x86emul-set-fake-operand-size-for-AVX512CD-broadcast.patch index a21ff9f..dac074b 100644 --- a/0013-x86emul-set-fake-operand-size-for-AVX512CD-broadcast.patch +++ b/0013-x86emul-set-fake-operand-size-for-AVX512CD-broadcast.patch @@ -1,7 +1,7 @@ From 092d673dcba9262ae3da0459d5e6aa4ddd68f966 Mon Sep 17 00:00:00 2001 From: Jan Beulich <jbeulich@suse.com> Date: Tue, 24 Sep 2024 14:55:48 +0200 -Subject: [PATCH 13/25] x86emul: set (fake) operand size for AVX512CD broadcast +Subject: [PATCH 13/56] x86emul: set (fake) operand size for AVX512CD broadcast insns Back at the time I failed to pay attention to op_bytes still being zero @@ -31,5 +31,5 @@ index 941941ef15..9d70de1eb4 100644 case X86EMUL_OPC_EVEX_66(0x0f38, 0xc4): /* vpconflict{d,q} [xyz]mm/mem,[xyz]mm{k} */ fault_suppression = false; -- -2.46.1 +2.47.0 diff --git a/0014-x86-x2APIC-correct-cluster-tracking-upon-CPUs-going-.patch b/0014-x86-x2APIC-correct-cluster-tracking-upon-CPUs-going-.patch index f6df29c..6144578 100644 --- a/0014-x86-x2APIC-correct-cluster-tracking-upon-CPUs-going-.patch +++ b/0014-x86-x2APIC-correct-cluster-tracking-upon-CPUs-going-.patch @@ -1,7 +1,7 @@ From f29c2fb064ef15b6a2530f1b2dd99c4be76a39af Mon Sep 17 00:00:00 2001 From: Jan Beulich <jbeulich@suse.com> Date: Tue, 24 Sep 2024 14:56:16 +0200 -Subject: [PATCH 14/25] x86/x2APIC: correct cluster tracking upon CPUs going +Subject: [PATCH 14/56] x86/x2APIC: correct cluster tracking upon CPUs going down for S3 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 @@ -48,5 +48,5 @@ index 371dd100c7..d531035fa4 100644 if ( per_cpu(cluster_cpus, cpu) ) { -- -2.46.1 +2.47.0 diff --git a/0015-x86-dom0-disable-SMAP-for-PV-domain-building-only.patch b/0015-x86-dom0-disable-SMAP-for-PV-domain-building-only.patch index ae7ea22..f82f9c6 100644 --- a/0015-x86-dom0-disable-SMAP-for-PV-domain-building-only.patch +++ b/0015-x86-dom0-disable-SMAP-for-PV-domain-building-only.patch @@ -1,7 +1,7 @@ From 4cb8c289873aafdba7086d1933665aaea83292ec Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com> Date: Tue, 24 Sep 2024 14:56:45 +0200 -Subject: [PATCH 15/25] x86/dom0: disable SMAP for PV domain building only +Subject: [PATCH 15/56] x86/dom0: disable SMAP for PV domain building only MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -140,5 +140,5 @@ index f2592c3dc9..18503300e7 100644 } -- -2.46.1 +2.47.0 diff --git a/0016-x86-HVM-correct-partial-HPET_STATUS-write-emulation.patch b/0016-x86-HVM-correct-partial-HPET_STATUS-write-emulation.patch index d7e00d6..01e462c 100644 --- a/0016-x86-HVM-correct-partial-HPET_STATUS-write-emulation.patch +++ b/0016-x86-HVM-correct-partial-HPET_STATUS-write-emulation.patch @@ -1,7 +1,7 @@ From 582a83da12bf0d8c6186aaf0aa11aa0b9850d0ad Mon Sep 17 00:00:00 2001 From: Jan Beulich <jbeulich@suse.com> Date: Tue, 24 Sep 2024 14:57:21 +0200 -Subject: [PATCH 16/25] x86/HVM: correct partial HPET_STATUS write emulation +Subject: [PATCH 16/56] x86/HVM: correct partial HPET_STATUS write emulation For partial writes the non-written parts of registers are folded into the full 64-bit value from what they're presently set to. 
That's wrong @@ -33,5 +33,5 @@ index 80f323316c..21b30d2900 100644 { bool active; -- -2.46.1 +2.47.0 diff --git a/0017-Arm64-adjust-__irq_to_desc-to-fix-build-with-gcc14.patch b/0017-Arm64-adjust-__irq_to_desc-to-fix-build-with-gcc14.patch index 275f1e3..5f3761a 100644 --- a/0017-Arm64-adjust-__irq_to_desc-to-fix-build-with-gcc14.patch +++ b/0017-Arm64-adjust-__irq_to_desc-to-fix-build-with-gcc14.patch @@ -1,7 +1,7 @@ From 133b92bf78c21f40c6a316fc000422a188c01a7a Mon Sep 17 00:00:00 2001 From: Jan Beulich <jbeulich@suse.com> Date: Tue, 24 Sep 2024 14:57:43 +0200 -Subject: [PATCH 17/25] Arm64: adjust __irq_to_desc() to fix build with gcc14 +Subject: [PATCH 17/56] Arm64: adjust __irq_to_desc() to fix build with gcc14 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -57,5 +57,5 @@ index ae69fb4aeb..014b2d7982 100644 if ( irq < NR_LOCAL_IRQS ) return &this_cpu(local_irq_desc)[irq]; -- -2.46.1 +2.47.0 diff --git a/0018-libxl-Fix-nul-termination-of-the-return-value-of-lib.patch b/0018-libxl-Fix-nul-termination-of-the-return-value-of-lib.patch index b420fdc..f8fb5b2 100644 --- a/0018-libxl-Fix-nul-termination-of-the-return-value-of-lib.patch +++ b/0018-libxl-Fix-nul-termination-of-the-return-value-of-lib.patch @@ -1,7 +1,7 @@ From e077d26621a31fb707c64d8251f5022991c979a9 Mon Sep 17 00:00:00 2001 From: Javi Merino <javi.merino@cloud.com> Date: Tue, 24 Sep 2024 14:58:13 +0200 -Subject: [PATCH 18/25] libxl: Fix nul-termination of the return value of +Subject: [PATCH 18/56] libxl: Fix nul-termination of the return value of libxl_xen_console_read_line() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 @@ -96,5 +96,5 @@ index d5732d1c37..e5477c7668 100644 unsigned int incremental; unsigned int index; -- -2.46.1 +2.47.0 diff --git a/0019-SUPPORT.md-split-XSM-from-Flask.patch b/0019-SUPPORT.md-split-XSM-from-Flask.patch index bdec878..90c9c72 100644 --- a/0019-SUPPORT.md-split-XSM-from-Flask.patch +++ b/0019-SUPPORT.md-split-XSM-from-Flask.patch @@ -1,7 +1,7 @@ From 37fcb4c206a47e6923f49207dabcde9829d1eb2e Mon Sep 17 00:00:00 2001 From: Jan Beulich <jbeulich@suse.com> Date: Tue, 24 Sep 2024 14:58:45 +0200 -Subject: [PATCH 19/25] SUPPORT.md: split XSM from Flask +Subject: [PATCH 19/56] SUPPORT.md: split XSM from Flask MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -62,5 +62,5 @@ index b4715a65b5..24157088d2 100644 ### x86/Nested PV -- -2.46.1 +2.47.0 diff --git a/0020-x86-fix-UP-build-with-gcc14.patch b/0020-x86-fix-UP-build-with-gcc14.patch index e635bbd..49611ca 100644 --- a/0020-x86-fix-UP-build-with-gcc14.patch +++ b/0020-x86-fix-UP-build-with-gcc14.patch @@ -1,7 +1,7 @@ From f562deb29bbccd6606b684105aa718ef263f274e Mon Sep 17 00:00:00 2001 From: Jan Beulich <jbeulich@suse.com> Date: Tue, 24 Sep 2024 14:58:58 +0200 -Subject: [PATCH 20/25] x86: fix UP build with gcc14 +Subject: [PATCH 20/56] x86: fix UP build with gcc14 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -59,5 +59,5 @@ index 4c54ecbc91..f7078130cd 100644 cpumask_set_cpu(cpu1, per_cpu(cpu_sibling_mask, cpu2)); cpumask_set_cpu(cpu2, per_cpu(cpu_sibling_mask, cpu1)); -- -2.46.1 +2.47.0 diff --git a/0021-x86emul-test-fix-build-with-gas-2.43.patch b/0021-x86emul-test-fix-build-with-gas-2.43.patch index 1b7e796..2b62d5b 100644 --- a/0021-x86emul-test-fix-build-with-gas-2.43.patch +++ b/0021-x86emul-test-fix-build-with-gas-2.43.patch @@ -1,7 +1,7 @@ From acab1a90f931debe3e13dc9dbe6eb11ec2bdf818 Mon Sep 17 
00:00:00 2001 From: Jan Beulich <jbeulich@suse.com> Date: Tue, 24 Sep 2024 14:59:22 +0200 -Subject: [PATCH 21/25] x86emul/test: fix build with gas 2.43 +Subject: [PATCH 21/56] x86emul/test: fix build with gas 2.43 Drop explicit {evex} pseudo-prefixes. New gas (validly) complains when they're used on things other than instructions. Our use was potentially @@ -82,5 +82,5 @@ index 263cea662d..d68a7364c2 100644 t_; \ }) -- -2.46.1 +2.47.0 diff --git a/0022-x86-HVM-properly-reject-indirect-VRAM-writes.patch b/0022-x86-HVM-properly-reject-indirect-VRAM-writes.patch index c9f221f..584b57b 100644 --- a/0022-x86-HVM-properly-reject-indirect-VRAM-writes.patch +++ b/0022-x86-HVM-properly-reject-indirect-VRAM-writes.patch @@ -1,7 +1,7 @@ From b7f66ed124985563c73dadeec84189c48870cd1a Mon Sep 17 00:00:00 2001 From: Jan Beulich <jbeulich@suse.com> Date: Tue, 24 Sep 2024 15:00:07 +0200 -Subject: [PATCH 22/25] x86/HVM: properly reject "indirect" VRAM writes +Subject: [PATCH 22/56] x86/HVM: properly reject "indirect" VRAM writes While ->count will only be different from 1 for "indirect" (data in guest memory) accesses, it being 1 does not exclude the request being an @@ -41,5 +41,5 @@ index 2586891863..6419211266 100644 * not active since we can assert, when in stdvga mode, that writes * to VRAM have no side effect and thus we can try to buffer them. -- -2.46.1 +2.47.0 diff --git a/0023-xen-x86-pvh-handle-ACPI-RSDT-table-in-PVH-Dom0-build.patch b/0023-xen-x86-pvh-handle-ACPI-RSDT-table-in-PVH-Dom0-build.patch index d24d6d1..3ad91c6 100644 --- a/0023-xen-x86-pvh-handle-ACPI-RSDT-table-in-PVH-Dom0-build.patch +++ b/0023-xen-x86-pvh-handle-ACPI-RSDT-table-in-PVH-Dom0-build.patch @@ -1,7 +1,7 @@ From b7e54ae8389dad2f0582d32edb667f6bdbf9df37 Mon Sep 17 00:00:00 2001 From: Stefano Stabellini <stefano.stabellini@amd.com> Date: Tue, 24 Sep 2024 15:00:29 +0200 -Subject: [PATCH 23/25] xen/x86/pvh: handle ACPI RSDT table in PVH Dom0 build +Subject: [PATCH 23/56] xen/x86/pvh: handle ACPI RSDT table in PVH Dom0 build MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -59,5 +59,5 @@ index c7d47d0d4c..411beb3f06 100644 xsdt->table_offset_entry[0] = madt_addr; -- -2.46.1 +2.47.0 diff --git a/0024-blkif-reconcile-protocol-specification-with-in-use-i.patch b/0024-blkif-reconcile-protocol-specification-with-in-use-i.patch index 375be4a..fd85158 100644 --- a/0024-blkif-reconcile-protocol-specification-with-in-use-i.patch +++ b/0024-blkif-reconcile-protocol-specification-with-in-use-i.patch @@ -1,7 +1,7 @@ From 834518a8d055149f250d191a3c50f96013756c01 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com> Date: Tue, 24 Sep 2024 15:00:55 +0200 -Subject: [PATCH 24/25] blkif: reconcile protocol specification with in-use +Subject: [PATCH 24/56] blkif: reconcile protocol specification with in-use implementations MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 @@ -179,5 +179,5 @@ index 22f1eef0c0..9b00d633d3 100644 uint8_t operation; /* BLKIF_OP_INDIRECT */ uint8_t indirect_op; /* BLKIF_OP_{READ/WRITE} */ -- -2.46.1 +2.47.0 diff --git a/0025-xen-ucode-Fix-buffer-under-run-when-parsing-AMD-cont.patch b/0025-xen-ucode-Fix-buffer-under-run-when-parsing-AMD-cont.patch index 006e48e..a2e4d5a 100644 --- a/0025-xen-ucode-Fix-buffer-under-run-when-parsing-AMD-cont.patch +++ b/0025-xen-ucode-Fix-buffer-under-run-when-parsing-AMD-cont.patch @@ -1,7 +1,7 @@ From 2c5f888204d988110fee9823b102f433c6212d9d Mon Sep 17 00:00:00 2001 From: Demi Marie Obenour 
<demi@invisiblethingslab.com> Date: Tue, 24 Sep 2024 15:01:15 +0200 -Subject: [PATCH 25/25] xen/ucode: Fix buffer under-run when parsing AMD +Subject: [PATCH 25/56] xen/ucode: Fix buffer under-run when parsing AMD containers The AMD container format has no formal spec. It is, at best, precision @@ -58,5 +58,5 @@ index d8f7646e88..dc735ee480 100644 printk(XENLOG_ERR "microcode: Bad equivalent cpu table\n"); error = -EINVAL; -- -2.46.1 +2.47.0 diff --git a/0026-xen-ucode-Make-Intel-s-microcode_sanity_check-strict.patch b/0026-xen-ucode-Make-Intel-s-microcode_sanity_check-strict.patch new file mode 100644 index 0000000..8bf513c --- /dev/null +++ b/0026-xen-ucode-Make-Intel-s-microcode_sanity_check-strict.patch @@ -0,0 +1,43 @@ +From a897560155a58b36bec721eb3b994a62a0432996 Mon Sep 17 00:00:00 2001 +From: Demi Marie Obenour <demi@invisiblethingslab.com> +Date: Tue, 29 Oct 2024 16:35:52 +0100 +Subject: [PATCH 26/56] xen/ucode: Make Intel's microcode_sanity_check() + stricter + +The SDM states that data size must be a multiple of 4, but Xen doesn't check +this propery. + +This is liable to cause a later failures, but should be checked explicitly. + +Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com> +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +Reviewed-by: Jan Beulich <jbeulich@suse.com> +master commit: 8752ad83e79754f8109457cff796e5f86f644348 +master date: 2024-09-24 18:57:38 +0100 +--- + xen/arch/x86/cpu/microcode/intel.c | 7 +++++-- + 1 file changed, 5 insertions(+), 2 deletions(-) + +diff --git a/xen/arch/x86/cpu/microcode/intel.c b/xen/arch/x86/cpu/microcode/intel.c +index a2d88e3ac0..bd15236709 100644 +--- a/xen/arch/x86/cpu/microcode/intel.c ++++ b/xen/arch/x86/cpu/microcode/intel.c +@@ -155,10 +155,13 @@ static int microcode_sanity_check(const struct microcode_patch *patch) + uint32_t sum; + + /* +- * Total size must be a multiple of 1024 bytes. Data size and the header +- * must fit within it. ++ * The SDM states: ++ * - Data size must be a multiple of 4. ++ * - Total size must be a multiple of 1024 bytes. Data size and the ++ * header must fit within it. + */ + if ( (total_size & 1023) || ++ (data_size & 3) || + data_size > (total_size - MC_HEADER_SIZE) ) + { + printk(XENLOG_WARNING "microcode: Bad size\n"); +-- +2.47.0 + diff --git a/0027-x86-PV-simplify-and-thus-correct-guest-accessor-func.patch b/0027-x86-PV-simplify-and-thus-correct-guest-accessor-func.patch new file mode 100644 index 0000000..0eb91e8 --- /dev/null +++ b/0027-x86-PV-simplify-and-thus-correct-guest-accessor-func.patch @@ -0,0 +1,201 @@ +From 0902958b51a6135ce43bee2c9eadd43f481e311d Mon Sep 17 00:00:00 2001 +From: Jan Beulich <jbeulich@suse.com> +Date: Tue, 29 Oct 2024 16:37:12 +0100 +Subject: [PATCH 27/56] x86/PV: simplify (and thus correct) guest accessor + functions + +Taking a fault on a non-byte-granular insn means that the "number of +bytes not handled" return value would need extra care in calculating, if +we want callers to be able to derive e.g. exception context (to be +injected to the guest) - CR2 for #PF in particular - from the value. To +simplify things rather than complicating them, reduce inline assembly to +just byte-granular string insns. On recent CPUs that's also supposed to +be more efficient anyway. + +For singular element accessors, however, alignment checks are added, +hence slightly complicating the code. Misaligned (user) buffer accesses +will now be forwarded to copy_{from,to}_guest_ll(). 
+ +Naturally copy_{from,to}_unsafe_ll() accessors end up being adjusted the +same way, as they're produced by mere re-processing of the same code. +Otoh copy_{from,to}_unsafe() aren't similarly adjusted, but have their +comments made match reality; down the road we may want to change their +return types, e.g. to bool. + +Fixes: 76974398a63c ("Added user-memory accessing functionality for x86_64") +Fixes: 7b8c36701d26 ("Introduce clear_user and clear_guest") +Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> +Signed-off-by: Jan Beulich <jbeulich@suse.com> +Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> +Tested-by: Andrew Cooper <andrew.cooper3@citrix.com> +master commit: 67a8e5721e1ea9c28526883036bf08fb2e8a8c9c +master date: 2024-10-01 09:44:55 +0200 +--- + xen/arch/x86/include/asm/uaccess.h | 12 +++--- + xen/arch/x86/usercopy.c | 66 ++++-------------------------- + 2 files changed, 14 insertions(+), 64 deletions(-) + +diff --git a/xen/arch/x86/include/asm/uaccess.h b/xen/arch/x86/include/asm/uaccess.h +index 74bb222c03..633eb79797 100644 +--- a/xen/arch/x86/include/asm/uaccess.h ++++ b/xen/arch/x86/include/asm/uaccess.h +@@ -251,7 +251,8 @@ do { \ + static always_inline unsigned long + __copy_to_guest_pv(void __user *to, const void *from, unsigned long n) + { +- if (__builtin_constant_p(n)) { ++ if ( __builtin_constant_p(n) && !((unsigned long)to & (n - 1)) ) ++ { + unsigned long ret; + + switch (n) { +@@ -291,7 +292,8 @@ __copy_to_guest_pv(void __user *to, const void *from, unsigned long n) + static always_inline unsigned long + __copy_from_guest_pv(void *to, const void __user *from, unsigned long n) + { +- if (__builtin_constant_p(n)) { ++ if ( __builtin_constant_p(n) && !((unsigned long)from & (n - 1)) ) ++ { + unsigned long ret; + + switch (n) { +@@ -321,8 +323,7 @@ __copy_from_guest_pv(void *to, const void __user *from, unsigned long n) + * + * Copy data from hypervisor space to a potentially unmapped area. + * +- * Returns number of bytes that could not be copied. +- * On success, this will be zero. ++ * Returns zero on success and non-zero if some bytes could not be copied. + */ + static always_inline unsigned int + copy_to_unsafe(void __user *to, const void *from, unsigned int n) +@@ -358,8 +359,7 @@ copy_to_unsafe(void __user *to, const void *from, unsigned int n) + * + * Copy data from a potentially unmapped area space to hypervisor space. + * +- * Returns number of bytes that could not be copied. +- * On success, this will be zero. ++ * Returns zero on success and non-zero if some bytes could not be copied. + * + * If some data could not be copied, this function will pad the copied + * data to the requested size using zero bytes. +diff --git a/xen/arch/x86/usercopy.c b/xen/arch/x86/usercopy.c +index b8c2d1cc0b..7ab2009efe 100644 +--- a/xen/arch/x86/usercopy.c ++++ b/xen/arch/x86/usercopy.c +@@ -16,42 +16,19 @@ + + unsigned int copy_to_guest_ll(void __user *to, const void *from, unsigned int n) + { +- unsigned dummy; ++ GUARD(unsigned dummy); + + stac(); + asm volatile ( + GUARD( + " guest_access_mask_ptr %[to], %q[scratch1], %q[scratch2]\n" + ) +- " cmp $"STR(2*BYTES_PER_LONG-1)", %[cnt]\n" +- " jbe 1f\n" +- " mov %k[to], %[cnt]\n" +- " neg %[cnt]\n" +- " and $"STR(BYTES_PER_LONG-1)", %[cnt]\n" +- " sub %[cnt], %[aux]\n" +- "4: rep movsb\n" /* make 'to' address aligned */ +- " mov %[aux], %[cnt]\n" +- " shr $"STR(LONG_BYTEORDER)", %[cnt]\n" +- " and $"STR(BYTES_PER_LONG-1)", %[aux]\n" +- " .align 2,0x90\n" +- "0: rep movs"__OS"\n" /* as many words as possible... 
*/ +- " mov %[aux],%[cnt]\n" +- "1: rep movsb\n" /* ...remainder copied as bytes */ ++ "1: rep movsb\n" + "2:\n" +- ".section .fixup,\"ax\"\n" +- "5: add %[aux], %[cnt]\n" +- " jmp 2b\n" +- "3: lea (%q[aux], %q[cnt], "STR(BYTES_PER_LONG)"), %[cnt]\n" +- " jmp 2b\n" +- ".previous\n" +- _ASM_EXTABLE(4b, 5b) +- _ASM_EXTABLE(0b, 3b) + _ASM_EXTABLE(1b, 2b) +- : [cnt] "+c" (n), [to] "+D" (to), [from] "+S" (from), +- [aux] "=&r" (dummy) ++ : [cnt] "+c" (n), [to] "+D" (to), [from] "+S" (from) + GUARD(, [scratch1] "=&r" (dummy), [scratch2] "=&r" (dummy)) +- : "[aux]" (n) +- : "memory" ); ++ :: "memory" ); + clac(); + + return n; +@@ -66,25 +43,9 @@ unsigned int copy_from_guest_ll(void *to, const void __user *from, unsigned int + GUARD( + " guest_access_mask_ptr %[from], %q[scratch1], %q[scratch2]\n" + ) +- " cmp $"STR(2*BYTES_PER_LONG-1)", %[cnt]\n" +- " jbe 1f\n" +- " mov %k[to], %[cnt]\n" +- " neg %[cnt]\n" +- " and $"STR(BYTES_PER_LONG-1)", %[cnt]\n" +- " sub %[cnt], %[aux]\n" +- "4: rep movsb\n" /* make 'to' address aligned */ +- " mov %[aux],%[cnt]\n" +- " shr $"STR(LONG_BYTEORDER)", %[cnt]\n" +- " and $"STR(BYTES_PER_LONG-1)", %[aux]\n" +- " .align 2,0x90\n" +- "0: rep movs"__OS"\n" /* as many words as possible... */ +- " mov %[aux], %[cnt]\n" +- "1: rep movsb\n" /* ...remainder copied as bytes */ ++ "1: rep movsb\n" + "2:\n" + ".section .fixup,\"ax\"\n" +- "5: add %[aux], %[cnt]\n" +- " jmp 6f\n" +- "3: lea (%q[aux], %q[cnt], "STR(BYTES_PER_LONG)"), %[cnt]\n" + "6: mov %[cnt], %k[from]\n" + " xchg %%eax, %[aux]\n" + " xor %%eax, %%eax\n" +@@ -93,14 +54,11 @@ unsigned int copy_from_guest_ll(void *to, const void __user *from, unsigned int + " mov %k[from], %[cnt]\n" + " jmp 2b\n" + ".previous\n" +- _ASM_EXTABLE(4b, 5b) +- _ASM_EXTABLE(0b, 3b) + _ASM_EXTABLE(1b, 6b) + : [cnt] "+c" (n), [to] "+D" (to), [from] "+S" (from), + [aux] "=&r" (dummy) + GUARD(, [scratch1] "=&r" (dummy), [scratch2] "=&r" (dummy)) +- : "[aux]" (n) +- : "memory" ); ++ :: "memory" ); + clac(); + + return n; +@@ -145,20 +103,12 @@ unsigned int clear_guest_pv(void __user *to, unsigned int n) + stac(); + asm volatile ( + " guest_access_mask_ptr %[to], %[scratch1], %[scratch2]\n" +- "0: rep stos"__OS"\n" +- " mov %[bytes], %[cnt]\n" + "1: rep stosb\n" + "2:\n" +- ".section .fixup,\"ax\"\n" +- "3: lea (%q[bytes], %q[longs], "STR(BYTES_PER_LONG)"), %[cnt]\n" +- " jmp 2b\n" +- ".previous\n" +- _ASM_EXTABLE(0b,3b) + _ASM_EXTABLE(1b,2b) +- : [cnt] "=&c" (n), [to] "+D" (to), [scratch1] "=&r" (dummy), ++ : [cnt] "+c" (n), [to] "+D" (to), [scratch1] "=&r" (dummy), + [scratch2] "=&r" (dummy) +- : [bytes] "r" (n & (BYTES_PER_LONG - 1)), +- [longs] "0" (n / BYTES_PER_LONG), "a" (0) ); ++ : "a" (0) ); + clac(); + } + +-- +2.47.0 + diff --git a/0028-x86-traps-Re-enable-interrupts-after-reading-cr2-in-.patch b/0028-x86-traps-Re-enable-interrupts-after-reading-cr2-in-.patch new file mode 100644 index 0000000..7b9294d --- /dev/null +++ b/0028-x86-traps-Re-enable-interrupts-after-reading-cr2-in-.patch @@ -0,0 +1,104 @@ +From a5823065558b98f2c8ae78dfa882f2293e1a8a2f Mon Sep 17 00:00:00 2001 +From: Alejandro Vallejo <alejandro.vallejo@cloud.com> +Date: Tue, 29 Oct 2024 16:37:32 +0100 +Subject: [PATCH 28/56] x86/traps: Re-enable interrupts after reading cr2 in + the #PF handler +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +Hitting a page fault clobbers %cr2, so if a page fault is handled while +handling a previous page fault then %cr2 will hold the address of the +latter fault rather than the 
former. In particular, if a debug key +handler happens to trigger during #PF and before %cr2 is read, and that +handler itself encounters a #PF, then %cr2 will be corrupt for the outer #PF +handler. + +This patch makes the page fault path delay re-enabling IRQs until %cr2 +has been read in order to ensure it stays consistent. + +A similar argument holds in additional cases, but they happen to be safe: + * %dr6 inside #DB: Safe because IST exceptions don't re-enable IRQs. + * MSR_XFD_ERR inside #NM: Safe because AMX isn't used in #NM handler. + +While in the area, remove redundant q suffix to a movq in entry.S and +the space after the comma. + +Fixes: a4cd20a19073 ("[XEN] 'd' key dumps both host and guest state.") +Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com> +Acked-by: Roger Pau Monné <roger.pau@citrix.com> +master commit: b06e76db7c35974f1b127762683e7852ca0c8e76 +master date: 2024-10-01 09:45:49 +0200 +--- + xen/arch/x86/traps.c | 8 ++++++++ + xen/arch/x86/x86_64/entry.S | 20 ++++++++++++++++---- + 2 files changed, 24 insertions(+), 4 deletions(-) + +diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c +index abd3019976..d702ffd38c 100644 +--- a/xen/arch/x86/traps.c ++++ b/xen/arch/x86/traps.c +@@ -1628,6 +1628,14 @@ void do_page_fault(struct cpu_user_regs *regs) + + addr = read_cr2(); + ++ /* ++ * Don't re-enable interrupts if we were running an IRQ-off region when ++ * we hit the page fault, or we'll break that code. ++ */ ++ ASSERT(!local_irq_is_enabled()); ++ if ( regs->flags & X86_EFLAGS_IF ) ++ local_irq_enable(); ++ + /* fixup_page_fault() might change regs->error_code, so cache it here. */ + error_code = regs->error_code; + +diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S +index d3def49ea3..df3f3b4ea7 100644 +--- a/xen/arch/x86/x86_64/entry.S ++++ b/xen/arch/x86/x86_64/entry.S +@@ -832,9 +832,9 @@ handle_exception_saved: + #elif !defined(CONFIG_PV) + ASSERT_CONTEXT_IS_XEN + #endif /* CONFIG_PV */ +- sti +-1: movq %rsp,%rdi +- movzbl UREGS_entry_vector(%rsp),%eax ++.Ldispatch_exceptions: ++ mov %rsp, %rdi ++ movzbl UREGS_entry_vector(%rsp), %eax + #ifdef CONFIG_PERF_COUNTERS + lea per_cpu__perfcounters(%rip), %rcx + add STACK_CPUINFO_FIELD(per_cpu_offset)(%r14), %rcx +@@ -854,7 +854,19 @@ handle_exception_saved: + jmp .L_exn_dispatch_done; \ + .L_ ## vec ## _done: + ++ /* ++ * IRQs kept off to derisk being hit by a nested interrupt before ++ * reading %cr2. Otherwise a page fault in the nested interrupt handler ++ * would corrupt %cr2. ++ */ + DISPATCH(X86_EXC_PF, do_page_fault) ++ ++ /* Only re-enable IRQs if they were active before taking the fault */ ++ testb $X86_EFLAGS_IF >> 8, UREGS_eflags + 1(%rsp) ++ jz 1f ++ sti ++1: ++ + DISPATCH(X86_EXC_GP, do_general_protection) + DISPATCH(X86_EXC_UD, do_invalid_op) + DISPATCH(X86_EXC_NM, do_device_not_available) +@@ -899,7 +911,7 @@ exception_with_ints_disabled: + movq %rsp,%rdi + call search_pre_exception_table + testq %rax,%rax # no fixup code for faulting EIP? 
+- jz 1b ++ jz .Ldispatch_exceptions + movq %rax,UREGS_rip(%rsp) # fixup regular stack + + #ifdef CONFIG_XEN_SHSTK +-- +2.47.0 + diff --git a/0029-x86-pv-Rework-guest_io_okay-to-return-X86EMUL_.patch b/0029-x86-pv-Rework-guest_io_okay-to-return-X86EMUL_.patch new file mode 100644 index 0000000..cdaa845 --- /dev/null +++ b/0029-x86-pv-Rework-guest_io_okay-to-return-X86EMUL_.patch @@ -0,0 +1,127 @@ +From 0f23a771b02bd07296d7f7be784ef5e1e4040800 Mon Sep 17 00:00:00 2001 +From: Andrew Cooper <andrew.cooper3@citrix.com> +Date: Tue, 29 Oct 2024 16:38:17 +0100 +Subject: [PATCH 29/56] x86/pv: Rework guest_io_okay() to return X86EMUL_* + +In order to fix a bug with guest_io_okay() (subsequent patch), rework +guest_io_okay() to take in an emulation context, and return X86EMUL_* rather +than a boolean. + +For the failing case, take the opportunity to inject #GP explicitly, rather +than returning X86EMUL_UNHANDLEABLE. There is a logical difference between +"we know what this is, and it's #GP", vs "we don't know what this is". + +There is no change in practice as emulation is the final step on general #GP +resolution, but returning X86EMUL_UNHANDLEABLE would be a latent bug if a +subsequent action were to appear. + +No practical change. + +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +Reviewed-by: Jan Beulich <jbeulich@suse.com> +master commit: 7429e1cc071b0e20ea9581da4893fb9b2f6d21d4 +master date: 2024-10-01 14:58:18 +0100 +--- + xen/arch/x86/pv/emul-priv-op.c | 36 ++++++++++++++++++++++------------ + 1 file changed, 23 insertions(+), 13 deletions(-) + +diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c +index b90f745c75..cc66ffbf8e 100644 +--- a/xen/arch/x86/pv/emul-priv-op.c ++++ b/xen/arch/x86/pv/emul-priv-op.c +@@ -156,14 +156,16 @@ static bool iopl_ok(const struct vcpu *v, const struct cpu_user_regs *regs) + } + + /* Has the guest requested sufficient permission for this I/O access? */ +-static bool guest_io_okay(unsigned int port, unsigned int bytes, +- struct vcpu *v, struct cpu_user_regs *regs) ++static int guest_io_okay(unsigned int port, unsigned int bytes, ++ struct x86_emulate_ctxt *ctxt) + { ++ const struct cpu_user_regs *regs = ctxt->regs; ++ struct vcpu *v = current; + /* If in user mode, switch to kernel mode just to read I/O bitmap. */ + const bool user_mode = !(v->arch.flags & TF_kernel_mode); + + if ( iopl_ok(v, regs) ) +- return true; ++ return X86EMUL_OKAY; + + if ( (port + bytes) <= v->arch.pv.iobmp_limit ) + { +@@ -190,10 +192,12 @@ static bool guest_io_okay(unsigned int port, unsigned int bytes, + toggle_guest_pt(v); + + if ( (x.mask & (((1 << bytes) - 1) << (port & 7))) == 0 ) +- return true; ++ return X86EMUL_OKAY; + } + +- return false; ++ x86_emul_hw_exception(X86_EXC_GP, 0, ctxt); ++ ++ return X86EMUL_EXCEPTION; + } + + /* Has the administrator granted sufficient permission for this I/O access? */ +@@ -353,12 +357,14 @@ static int cf_check read_io( + struct priv_op_ctxt *poc = container_of(ctxt, struct priv_op_ctxt, ctxt); + struct vcpu *curr = current; + struct domain *currd = current->domain; ++ int rc; + + /* INS must not come here. 
*/ + ASSERT((ctxt->opcode & ~9) == 0xe4); + +- if ( !guest_io_okay(port, bytes, curr, ctxt->regs) ) +- return X86EMUL_UNHANDLEABLE; ++ rc = guest_io_okay(port, bytes, ctxt); ++ if ( rc != X86EMUL_OKAY ) ++ return rc; + + poc->bpmatch = check_guest_io_breakpoint(curr, port, bytes); + +@@ -458,12 +464,14 @@ static int cf_check write_io( + struct priv_op_ctxt *poc = container_of(ctxt, struct priv_op_ctxt, ctxt); + struct vcpu *curr = current; + struct domain *currd = current->domain; ++ int rc; + + /* OUTS must not come here. */ + ASSERT((ctxt->opcode & ~9) == 0xe6); + +- if ( !guest_io_okay(port, bytes, curr, ctxt->regs) ) +- return X86EMUL_UNHANDLEABLE; ++ rc = guest_io_okay(port, bytes, ctxt); ++ if ( rc != X86EMUL_OKAY ) ++ return rc; + + poc->bpmatch = check_guest_io_breakpoint(curr, port, bytes); + +@@ -612,8 +620,9 @@ static int cf_check rep_ins( + + *reps = 0; + +- if ( !guest_io_okay(port, bytes_per_rep, curr, ctxt->regs) ) +- return X86EMUL_UNHANDLEABLE; ++ rc = guest_io_okay(port, bytes_per_rep, ctxt); ++ if ( rc != X86EMUL_OKAY ) ++ return rc; + + rc = read_segment(x86_seg_es, &sreg, ctxt); + if ( rc != X86EMUL_OKAY ) +@@ -678,8 +687,9 @@ static int cf_check rep_outs( + + *reps = 0; + +- if ( !guest_io_okay(port, bytes_per_rep, curr, ctxt->regs) ) +- return X86EMUL_UNHANDLEABLE; ++ rc = guest_io_okay(port, bytes_per_rep, ctxt); ++ if ( rc != X86EMUL_OKAY ) ++ return rc; + + rc = read_segment(seg, &sreg, ctxt); + if ( rc != X86EMUL_OKAY ) +-- +2.47.0 + diff --git a/0030-x86-pv-Handle-PF-correctly-when-reading-the-IO-permi.patch b/0030-x86-pv-Handle-PF-correctly-when-reading-the-IO-permi.patch new file mode 100644 index 0000000..9c9aa0b --- /dev/null +++ b/0030-x86-pv-Handle-PF-correctly-when-reading-the-IO-permi.patch @@ -0,0 +1,82 @@ +From 008808ac9523efcbdc514d8ae35b4db07bca16ec Mon Sep 17 00:00:00 2001 +From: Andrew Cooper <andrew.cooper3@citrix.com> +Date: Tue, 29 Oct 2024 16:38:29 +0100 +Subject: [PATCH 30/56] x86/pv: Handle #PF correctly when reading the IO + permission bitmap + +The switch statement in guest_io_okay() is a very expensive way of +pre-initialising x with ~0, and performing a partial read into it. + +However, the logic isn't correct either. + +In a real TSS, the CPU always reads two bytes (like here), and any TSS limit +violation turns silently into no-access. But, in-limit accesses trigger #PF +as usual. AMD document this property explicitly, and while Intel don't (so +far as I can tell), they do behave consistently with AMD. + +Switch from __copy_from_guest_offset() to __copy_from_guest_pv(), like +everything else in this file. This removes code generation setting up +copy_from_user_hvm() (in the likely path even), and safety LFENCEs from +evaluate_nospec(). + +Change the logic to raise #PF if __copy_from_guest_pv() fails, rather than +disallowing the IO port access. This brings the behaviour better in line with +normal x86. 
+ +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +Reviewed-by: Jan Beulich <jbeulich@suse.com> +master commit: 8a6c495d725408d333c1b47bb8af44615a5bfb18 +master date: 2024-10-01 14:58:18 +0100 +--- + xen/arch/x86/pv/emul-priv-op.c | 27 ++++++++++++--------------- + 1 file changed, 12 insertions(+), 15 deletions(-) + +diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c +index cc66ffbf8e..e35285d4ab 100644 +--- a/xen/arch/x86/pv/emul-priv-op.c ++++ b/xen/arch/x86/pv/emul-priv-op.c +@@ -169,29 +169,26 @@ static int guest_io_okay(unsigned int port, unsigned int bytes, + + if ( (port + bytes) <= v->arch.pv.iobmp_limit ) + { +- union { uint8_t bytes[2]; uint16_t mask; } x; ++ const void *__user addr = v->arch.pv.iobmp.p + (port >> 3); ++ uint16_t mask; ++ int rc; + +- /* +- * Grab permission bytes from guest space. Inaccessible bytes are +- * read as 0xff (no access allowed). +- */ ++ /* Grab permission bytes from guest space. */ + if ( user_mode ) + toggle_guest_pt(v); + +- switch ( __copy_from_guest_offset(x.bytes, v->arch.pv.iobmp, +- port>>3, 2) ) +- { +- default: x.bytes[0] = ~0; +- /* fallthrough */ +- case 1: x.bytes[1] = ~0; +- /* fallthrough */ +- case 0: break; +- } ++ rc = __copy_from_guest_pv(&mask, addr, 2); + + if ( user_mode ) + toggle_guest_pt(v); + +- if ( (x.mask & (((1 << bytes) - 1) << (port & 7))) == 0 ) ++ if ( rc ) ++ { ++ x86_emul_pagefault(0, (unsigned long)addr + bytes - rc, ctxt); ++ return X86EMUL_EXCEPTION; ++ } ++ ++ if ( (mask & (((1 << bytes) - 1) << (port & 7))) == 0 ) + return X86EMUL_OKAY; + } + +-- +2.47.0 + diff --git a/0031-x86-pv-Rename-pv.iobmp_limit-to-iobmp_nr-and-clarify.patch b/0031-x86-pv-Rename-pv.iobmp_limit-to-iobmp_nr-and-clarify.patch new file mode 100644 index 0000000..4297b5c --- /dev/null +++ b/0031-x86-pv-Rename-pv.iobmp_limit-to-iobmp_nr-and-clarify.patch @@ -0,0 +1,87 @@ +From 313ff5a2d5d24feb21cb98f5329d834e413446c4 Mon Sep 17 00:00:00 2001 +From: Andrew Cooper <andrew.cooper3@citrix.com> +Date: Tue, 29 Oct 2024 16:38:41 +0100 +Subject: [PATCH 31/56] x86/pv: Rename pv.iobmp_limit to iobmp_nr and clarify + behaviour + +Ever since it's introduction in commit 013351bd7ab3 ("Define new event-channel +and physdev hypercalls") in 2006, the public interface was named nr_ports +while the internal field was called iobmp_limit. + +Rename the internal field to iobmp_nr to match the public interface, and +clarify that, when nonzero, Xen will read 2 bytes. + +There isn't a perfect parallel with a real TSS, but iobmp_nr being 0 is the +paravirt "no IOPB" case, and it is important that no read occurs in this case. + +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +Reviewed-by: Jan Beulich <jbeulich@suse.com> +master commit: 633ee8b2df963f7e5cb8de1219c1a48bfb4447f6 +master date: 2024-10-01 14:58:18 +0100 +--- + xen/arch/x86/include/asm/domain.h | 2 +- + xen/arch/x86/physdev.c | 2 +- + xen/arch/x86/pv/emul-priv-op.c | 6 +++++- + xen/include/public/physdev.h | 3 +++ + 4 files changed, 10 insertions(+), 3 deletions(-) + +diff --git a/xen/arch/x86/include/asm/domain.h b/xen/arch/x86/include/asm/domain.h +index 53876472fe..0d2d2b6623 100644 +--- a/xen/arch/x86/include/asm/domain.h ++++ b/xen/arch/x86/include/asm/domain.h +@@ -574,7 +574,7 @@ struct pv_vcpu + + /* I/O-port access bitmap. */ + XEN_GUEST_HANDLE(uint8) iobmp; /* Guest kernel vaddr of the bitmap. */ +- unsigned int iobmp_limit; /* Number of ports represented in the bitmap. */ ++ unsigned int iobmp_nr; /* Number of ports represented in the bitmap. 
*/ + #define IOPL(val) MASK_INSR(val, X86_EFLAGS_IOPL) + unsigned int iopl; /* Current IOPL for this VCPU, shifted left by + * 12 to match the eflags register. */ +diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c +index 2f1d955a96..39967cf2e5 100644 +--- a/xen/arch/x86/physdev.c ++++ b/xen/arch/x86/physdev.c +@@ -433,7 +433,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg) + #else + guest_from_compat_handle(curr->arch.pv.iobmp, set_iobitmap.bitmap); + #endif +- curr->arch.pv.iobmp_limit = set_iobitmap.nr_ports; ++ curr->arch.pv.iobmp_nr = set_iobitmap.nr_ports; + break; + } + +diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c +index e35285d4ab..70150c2722 100644 +--- a/xen/arch/x86/pv/emul-priv-op.c ++++ b/xen/arch/x86/pv/emul-priv-op.c +@@ -167,7 +167,11 @@ static int guest_io_okay(unsigned int port, unsigned int bytes, + if ( iopl_ok(v, regs) ) + return X86EMUL_OKAY; + +- if ( (port + bytes) <= v->arch.pv.iobmp_limit ) ++ /* ++ * When @iobmp_nr is non-zero, Xen, like real CPUs and the TSS IOPB, ++ * always reads 2 bytes from @iobmp, which might be one byte @iobmp_nr. ++ */ ++ if ( (port + bytes) <= v->arch.pv.iobmp_nr ) + { + const void *__user addr = v->arch.pv.iobmp.p + (port >> 3); + uint16_t mask; +diff --git a/xen/include/public/physdev.h b/xen/include/public/physdev.h +index f0c0d4727c..d694104cd8 100644 +--- a/xen/include/public/physdev.h ++++ b/xen/include/public/physdev.h +@@ -87,6 +87,9 @@ DEFINE_XEN_GUEST_HANDLE(physdev_set_iopl_t); + /* + * Set the current VCPU's I/O-port permissions bitmap. + * @arg == pointer to physdev_set_iobitmap structure. ++ * ++ * When @nr_ports is non-zero, Xen, like real CPUs and the TSS IOPB, always ++ * reads 2 bytes from @bitmap, which might be one byte beyond @nr_ports. + */ + #define PHYSDEVOP_set_iobitmap 7 + struct physdev_set_iobitmap { +-- +2.47.0 + diff --git a/0032-stubdom-Fix-newlib-build-with-GCC-14.patch b/0032-stubdom-Fix-newlib-build-with-GCC-14.patch new file mode 100644 index 0000000..de7bc91 --- /dev/null +++ b/0032-stubdom-Fix-newlib-build-with-GCC-14.patch @@ -0,0 +1,58 @@ +From 706da365c23d5d93aef377f15002942faaf73f2e Mon Sep 17 00:00:00 2001 +From: Andrew Cooper <andrew.cooper3@citrix.com> +Date: Tue, 29 Oct 2024 16:39:22 +0100 +Subject: [PATCH 32/56] stubdom: Fix newlib build with GCC-14 + +Based on a fix from OpenSUSE, but adjusted to be Clang-compatible too. Pass +-Wno-implicit-function-declaration library-wide rather than using local GCC +pragmas. + +Fix of copy_past_newline() to avoid triggering -Wstrict-prototypes. 
+ +Link: https://build.opensuse.org/request/show/1178775 +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +Reviewed-by: Anthony PERARD <anthony.perard@vates.tech> +master commit: 444cb9350f2c1cc202b6b86176ddd8e57525e2d9 +master date: 2024-10-03 10:07:25 +0100 +--- + stubdom/Makefile | 2 ++ + stubdom/newlib-fix-copy_past_newline.patch | 10 ++++++++++ + 2 files changed, 12 insertions(+) + create mode 100644 stubdom/newlib-fix-copy_past_newline.patch + +diff --git a/stubdom/Makefile b/stubdom/Makefile +index 888fa20d72..52c345a940 100644 +--- a/stubdom/Makefile ++++ b/stubdom/Makefile +@@ -97,10 +97,12 @@ newlib-$(NEWLIB_VERSION): newlib-$(NEWLIB_VERSION).tar.gz + patch -d $@ -p1 < newlib-disable-texinfo.patch + patch -d $@ -p1 < newlib-cygmon-gmon.patch + patch -d $@ -p1 < newlib-makedoc.patch ++ patch -d $@ -p1 < newlib-fix-copy_past_newline.patch + find $@ -type f | xargs perl -i.bak \ + -pe 's/\b_(tzname|daylight|timezone)\b/$$1/g' + touch $@ + ++NEWLIB_CFLAGS += -Wno-implicit-function-declaration + NEWLIB_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libc.a + .PHONY: cross-newlib + cross-newlib: $(NEWLIB_STAMPFILE) +diff --git a/stubdom/newlib-fix-copy_past_newline.patch b/stubdom/newlib-fix-copy_past_newline.patch +new file mode 100644 +index 0000000000..f8452480bc +--- /dev/null ++++ b/stubdom/newlib-fix-copy_past_newline.patch +@@ -0,0 +1,10 @@ ++--- newlib-1.16.0/newlib/doc/makedoc.c.orig +++++ newlib-1.16.0/newlib/doc/makedoc.c ++@@ -798,6 +798,7 @@ DEFUN( iscommand,(ptr, idx), ++ } ++ ++ +++static unsigned int ++ DEFUN(copy_past_newline,(ptr, idx, dst), ++ string_type *ptr AND ++ unsigned int idx AND +-- +2.47.0 + diff --git a/0033-x86-dpci-do-not-leak-pending-interrupts-on-CPU-offli.patch b/0033-x86-dpci-do-not-leak-pending-interrupts-on-CPU-offli.patch new file mode 100644 index 0000000..779814f --- /dev/null +++ b/0033-x86-dpci-do-not-leak-pending-interrupts-on-CPU-offli.patch @@ -0,0 +1,75 @@ +From 9cf2b44c8eb506f72de34ce0e65751472740da78 Mon Sep 17 00:00:00 2001 +From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com> +Date: Tue, 29 Oct 2024 16:39:43 +0100 +Subject: [PATCH 33/56] x86/dpci: do not leak pending interrupts on CPU offline +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +The current dpci logic relies on a softirq being executed as a side effect of +the cpu_notifier_call_chain() call in the code path that offlines the target +CPU. However the call to cpu_notifier_call_chain() won't trigger any softirq +processing, and even if it did, such processing should be done after all +interrupts have been migrated off the current CPU, otherwise new pending dpci +interrupts could still appear. + +Currently the ASSERT() in the cpu callback notifier is fairly easy to trigger +by doing CPU offline from a PVH dom0. + +Solve this by instead moving out any dpci interrupts pending processing once +the CPU is dead. This might introduce more latency than attempting to drain +before the CPU is put offline, but it's less complex, and CPU online/offline is +not a common action. Any extra introduced latency should be tolerable. 
+ +Fixes: f6dd295381f4 ('dpci: replace tasklet with softirq') +Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> +Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> +master commit: 29555668b5725b9d5393b72bfe7ff9a3fa606714 +master date: 2024-10-07 11:10:21 +0200 +--- + xen/drivers/passthrough/x86/hvm.c | 20 ++++++++++++-------- + 1 file changed, 12 insertions(+), 8 deletions(-) + +diff --git a/xen/drivers/passthrough/x86/hvm.c b/xen/drivers/passthrough/x86/hvm.c +index 8175ba629a..f73292fd6c 100644 +--- a/xen/drivers/passthrough/x86/hvm.c ++++ b/xen/drivers/passthrough/x86/hvm.c +@@ -1105,23 +1105,27 @@ static int cf_check cpu_callback( + struct notifier_block *nfb, unsigned long action, void *hcpu) + { + unsigned int cpu = (unsigned long)hcpu; ++ unsigned long flags; + + switch ( action ) + { + case CPU_UP_PREPARE: + INIT_LIST_HEAD(&per_cpu(dpci_list, cpu)); + break; ++ + case CPU_UP_CANCELED: +- case CPU_DEAD: +- /* +- * On CPU_DYING this callback is called (on the CPU that is dying) +- * with an possible HVM_DPIC_SOFTIRQ pending - at which point we can +- * clear out any outstanding domains (by the virtue of the idle loop +- * calling the softirq later). In CPU_DEAD case the CPU is deaf and +- * there are no pending softirqs for us to handle so we can chill. +- */ + ASSERT(list_empty(&per_cpu(dpci_list, cpu))); + break; ++ ++ case CPU_DEAD: ++ if ( list_empty(&per_cpu(dpci_list, cpu)) ) ++ break; ++ /* Take whatever dpci interrupts are pending on the dead CPU. */ ++ local_irq_save(flags); ++ list_splice_init(&per_cpu(dpci_list, cpu), &this_cpu(dpci_list)); ++ local_irq_restore(flags); ++ raise_softirq(HVM_DPCI_SOFTIRQ); ++ break; + } + + return NOTIFY_DONE; +-- +2.47.0 + diff --git a/0034-ioreq-don-t-wrongly-claim-success-in-ioreq_send_buff.patch b/0034-ioreq-don-t-wrongly-claim-success-in-ioreq_send_buff.patch new file mode 100644 index 0000000..be440c9 --- /dev/null +++ b/0034-ioreq-don-t-wrongly-claim-success-in-ioreq_send_buff.patch @@ -0,0 +1,44 @@ +From ea63850c0a12c80bde4b76996ddf425acd5030a8 Mon Sep 17 00:00:00 2001 +From: Jan Beulich <jbeulich@suse.com> +Date: Tue, 29 Oct 2024 16:40:46 +0100 +Subject: [PATCH 34/56] ioreq: don't wrongly claim "success" in + ioreq_send_buffered() + +Returning a literal number is a bad idea anyway when all other returns +use IOREQ_STATUS_* values. The function is dead on Arm, and mapping to +X86EMUL_OKAY is surely wrong on x86. 
+ +Fixes: f6bf39f84f82 ("x86/hvm: add support for broadcast of buffered ioreqs...") +Signed-off-by: Jan Beulich <jbeulich@suse.com> +Reviewed-by: Julien Grall <jgrall@amazon.com> +master commit: 2e0b545b847df7d4feb07308d50bad708bd35a66 +master date: 2024-10-08 14:36:27 +0200 +--- + xen/common/ioreq.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c +index 62b907f4c4..1e8e5e885e 100644 +--- a/xen/common/ioreq.c ++++ b/xen/common/ioreq.c +@@ -1175,7 +1175,7 @@ static int ioreq_send_buffered(struct ioreq_server *s, ioreq_t *p) + return IOREQ_STATUS_UNHANDLED; + + /* +- * Return 0 for the cases we can't deal with: ++ * Return UNHANDLED for the cases we can't deal with: + * - 'addr' is only a 20-bit field, so we cannot address beyond 1MB + * - we cannot buffer accesses to guest memory buffers, as the guest + * may expect the memory buffer to be synchronously accessed +@@ -1183,7 +1183,7 @@ static int ioreq_send_buffered(struct ioreq_server *s, ioreq_t *p) + * support data_is_ptr we do not waste space for the count field either + */ + if ( (p->addr > 0xfffffUL) || p->data_is_ptr || (p->count != 1) ) +- return 0; ++ return IOREQ_STATUS_UNHANDLED; + + switch ( p->size ) + { +-- +2.47.0 + diff --git a/0035-x86-domctl-fix-maximum-number-of-MSRs-in-XEN_DOMCTL_.patch b/0035-x86-domctl-fix-maximum-number-of-MSRs-in-XEN_DOMCTL_.patch new file mode 100644 index 0000000..52ce59d --- /dev/null +++ b/0035-x86-domctl-fix-maximum-number-of-MSRs-in-XEN_DOMCTL_.patch @@ -0,0 +1,51 @@ +From 2f5fc982f5e7193e5e22baeaa23df3a2f4b1e399 Mon Sep 17 00:00:00 2001 +From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com> +Date: Tue, 29 Oct 2024 16:40:58 +0100 +Subject: [PATCH 35/56] x86/domctl: fix maximum number of MSRs in + XEN_DOMCTL_{get,set}_vcpu_msrs +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +Since the addition of the MSR_AMD64_DR{1-4}_ADDRESS_MASK MSRs to the +msrs_to_send array, the calculations for the maximum number of MSRs that +the hypercall can handle is off by 4. + +Remove the addition of 4 to the maximum number of MSRs that +XEN_DOMCTL_{set,get}_vcpu_msrs supports, as those are already part of the +array. + +A further adjustment could be to subtract 4 from the maximum size if the DBEXT +CPUID feature is not exposed to the guest, but guest_{rd,wr}msr() will already +perform that check when fetching or loading the MSRs. The maximum array is +used to indicate the caller of the buffer it needs to allocate in the get case, +and as an early input sanitation in the set case, using a buffer size slightly +lager than required is not an issue. + +Fixes: 86d47adcd3c4 ('x86/msr: Handle MSR_AMD64_DR{0-3}_ADDRESS_MASK in the new MSR infrastructure') +Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> +Reviewed-by: Jan Beulich <jbeulich@suse.com> +master commit: c95cd5f9c5a8c1c6ab1b0b366d829fa8561958fd +master date: 2024-10-08 14:37:53 +0200 +--- + xen/arch/x86/domctl.c | 4 ---- + 1 file changed, 4 deletions(-) + +diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c +index 1a8b4cff48..9bb90a83cf 100644 +--- a/xen/arch/x86/domctl.c ++++ b/xen/arch/x86/domctl.c +@@ -1110,10 +1110,6 @@ long arch_do_domctl( + !is_pv_domain(d) ) + break; + +- /* Count maximum number of optional msrs. 
*/ +- if ( boot_cpu_has(X86_FEATURE_DBEXT) ) +- nr_msrs += 4; +- + if ( domctl->cmd == XEN_DOMCTL_get_vcpu_msrs ) + { + ret = 0; copyback = true; +-- +2.47.0 + diff --git a/0036-xen-spinlock-Fix-UBSAN-load-of-address-with-insuffic.patch b/0036-xen-spinlock-Fix-UBSAN-load-of-address-with-insuffic.patch new file mode 100644 index 0000000..f051ced --- /dev/null +++ b/0036-xen-spinlock-Fix-UBSAN-load-of-address-with-insuffic.patch @@ -0,0 +1,67 @@ +From c2b8041904378ef5ecc8182fed4b904b1b30f021 Mon Sep 17 00:00:00 2001 +From: Andrew Cooper <andrew.cooper3@citrix.com> +Date: Tue, 29 Oct 2024 16:41:30 +0100 +Subject: [PATCH 36/56] xen/spinlock: Fix UBSAN "load of address with + insufficient space" in lock_prof_init() + +UBSAN complains: + + (XEN) ================================================================================ + (XEN) UBSAN: Undefined behaviour in common/spinlock.c:794:10 + (XEN) load of address ffff82d040ae24c8 with insufficient space + (XEN) for an object of type 'struct lock_profile *' + (XEN) ----[ Xen-4.20-unstable x86_64 debug=y ubsan=y Tainted: C ]---- + +This shows up with GCC-14, but not with GCC-12. I have not bisected further. + +Either way, the types for __lock_profile_{start,end} are incorrect. + +They are an array of struct lock_profile pointers. Correct the extern's +types, and adjust the loop to match. + +No practical change. + +Reported-by: Andreas Glashauser <ag@andreasglashauser.com> +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +Reviewed-by: Juergen Gross <jgross@suse.com> +master commit: 542ac112fc68c66cfafc577e252404c21da4f75b +master date: 2024-10-14 16:14:26 +0100 +--- + xen/common/spinlock.c | 8 ++++---- + 1 file changed, 4 insertions(+), 4 deletions(-) + +diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c +index 7f453234a9..4fc6f00177 100644 +--- a/xen/common/spinlock.c ++++ b/xen/common/spinlock.c +@@ -501,9 +501,6 @@ struct lock_profile_anc { + typedef void lock_profile_subfunc( + struct lock_profile *, int32_t, int32_t, void *); + +-extern struct lock_profile *__lock_profile_start; +-extern struct lock_profile *__lock_profile_end; +- + static s_time_t lock_profile_start; + static struct lock_profile_anc lock_profile_ancs[] = { + [LOCKPROF_TYPE_GLOBAL] = { .name = "Global" }, +@@ -659,13 +656,16 @@ void _lock_profile_deregister_struct( + spin_unlock(&lock_profile_lock); + } + ++extern struct lock_profile *__lock_profile_start[]; ++extern struct lock_profile *__lock_profile_end[]; ++ + static int __init cf_check lock_prof_init(void) + { + struct lock_profile **q; + + BUILD_BUG_ON(ARRAY_SIZE(lock_profile_ancs) != LOCKPROF_TYPE_N); + +- for ( q = &__lock_profile_start; q < &__lock_profile_end; q++ ) ++ for ( q = __lock_profile_start; q < __lock_profile_end; q++ ) + { + (*q)->next = lock_profile_glb_q.elem_q; + lock_profile_glb_q.elem_q = *q; +-- +2.47.0 + diff --git a/0037-iommu-amd-vi-do-not-error-if-device-referenced-in-IV.patch b/0037-iommu-amd-vi-do-not-error-if-device-referenced-in-IV.patch new file mode 100644 index 0000000..b09c6a3 --- /dev/null +++ b/0037-iommu-amd-vi-do-not-error-if-device-referenced-in-IV.patch @@ -0,0 +1,52 @@ +From b9bf85b5fd9106f4d9e27867ffd1d02bb3ff264b Mon Sep 17 00:00:00 2001 +From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com> +Date: Tue, 29 Oct 2024 16:41:42 +0100 +Subject: [PATCH 37/56] iommu/amd-vi: do not error if device referenced in IVMD + is not behind any IOMMU +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +IVMD table contains 
restrictions about memory which must be mandatory assigned +to devices (and which permissions it should use), or memory that should be +never accessible to devices. + +Some hardware however contains ranges in IVMD that reference devices outside of +the IVHD tables (in other words, devices not behind any IOMMU). Such mismatch +will cause Xen to fail in register_range_for_device(), ultimately leading to +the IOMMU being disabled, and Xen crashing as x2APIC support might be already +enabled and relying on the IOMMU functionality. + +Relax IVMD parsing: allow IVMD blocks to reference devices not assigned to any +IOMMU. It's impossible for Xen to fulfill the requirement in the IVMD block if +the device is not behind any IOMMU, but it's no worse than booting without +IOMMU support, and thus not parsing ACPI IVRS in the first place. + +Reported-by: Willi Junga <xenproject@ymy.be> +Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> +Acked-by: Jan Beulich <jbeulich@suse.com> +master commit: 2defb544900a11f93104ac68d2f8beba89d4bd02 +master date: 2024-10-15 14:23:59 +0200 +--- + xen/drivers/passthrough/amd/iommu_acpi.c | 5 +++-- + 1 file changed, 3 insertions(+), 2 deletions(-) + +diff --git a/xen/drivers/passthrough/amd/iommu_acpi.c b/xen/drivers/passthrough/amd/iommu_acpi.c +index 96d8879e7b..59d30a4a2c 100644 +--- a/xen/drivers/passthrough/amd/iommu_acpi.c ++++ b/xen/drivers/passthrough/amd/iommu_acpi.c +@@ -248,8 +248,9 @@ static int __init register_range_for_device( + iommu = find_iommu_for_device(seg, bdf); + if ( !iommu ) + { +- AMD_IOMMU_ERROR("IVMD: no IOMMU for Dev_Id %#x\n", bdf); +- return -ENODEV; ++ AMD_IOMMU_WARN("IVMD: no IOMMU for device %pp - ignoring constrain\n", ++ &PCI_SBDF(seg, bdf)); ++ return 0; + } + req = ivrs_mappings[bdf].dte_requestor_id; + +-- +2.47.0 + diff --git a/0038-x86-boot-Fix-microcode-module-handling-during-PVH-bo.patch b/0038-x86-boot-Fix-microcode-module-handling-during-PVH-bo.patch new file mode 100644 index 0000000..4edeed3 --- /dev/null +++ b/0038-x86-boot-Fix-microcode-module-handling-during-PVH-bo.patch @@ -0,0 +1,166 @@ +From 9043f31c4085c4f7db7b5fb0bdbf7a2eae0408ce Mon Sep 17 00:00:00 2001 +From: "Daniel P. Smith" <dpsmith@apertussolutions.com> +Date: Tue, 29 Oct 2024 16:42:16 +0100 +Subject: [PATCH 38/56] x86/boot: Fix microcode module handling during PVH boot +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +As detailed in commit 0fe607b2a144 ("x86/boot: Fix PVH boot during boot_info +transition period"), the use of __va(mbi->mods_addr) constitutes a +use-after-free on the PVH boot path. + +This pattern has been in use since before PVH support was added. Inside a PVH +VM, it will go unnoticed as long as the microcode container parser doesn't +choke on the random data it finds. + +The use within early_microcode_init() happens to be safe because it's prior to +move_xen(). microcode_init_cache() is after move_xen(), and therefore unsafe. + +Plumb the boot_info pointer down, replacing module_map and mbi. Importantly, +bi->mods[].mod is a safe way to access the module list during PVH boot. + +Note: microcode_scan_module() is still bogusly stashing a bootstrap_map()'d + pointer in ucode_blob.data, which constitutes a different + use-after-free, and only works in general because of a second bug. This + is unrelated to PVH, and needs untangling differently. + +Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com> +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +Reviewed-by: Daniel P. 
Smith <dpsmith@apertussolutions.com> +Acked-by: Roger Pau Monné <roger.pau@citrix.com> +master commit: 8ddf63a252a6eae6e619ba2df9ad6b6f82e660c1 +master date: 2024-10-23 18:14:24 +0100 +--- + xen/arch/x86/cpu/microcode/core.c | 21 +++++++++++---------- + xen/arch/x86/include/asm/microcode.h | 7 +++++-- + xen/arch/x86/setup.c | 4 ++-- + 3 files changed, 18 insertions(+), 14 deletions(-) + +diff --git a/xen/arch/x86/cpu/microcode/core.c b/xen/arch/x86/cpu/microcode/core.c +index 8a47f4471f..2ee0db5b21 100644 +--- a/xen/arch/x86/cpu/microcode/core.c ++++ b/xen/arch/x86/cpu/microcode/core.c +@@ -151,9 +151,9 @@ custom_param("ucode", parse_ucode); + + static void __init microcode_scan_module( + unsigned long *module_map, +- const multiboot_info_t *mbi) ++ const multiboot_info_t *mbi, ++ const module_t mod[]) + { +- module_t *mod = (module_t *)__va(mbi->mods_addr); + uint64_t *_blob_start; + unsigned long _blob_size; + struct cpio_data cd; +@@ -203,10 +203,9 @@ static void __init microcode_scan_module( + + static void __init microcode_grab_module( + unsigned long *module_map, +- const multiboot_info_t *mbi) ++ const multiboot_info_t *mbi, ++ const module_t mod[]) + { +- module_t *mod = (module_t *)__va(mbi->mods_addr); +- + if ( ucode_mod_idx < 0 ) + ucode_mod_idx += mbi->mods_count; + if ( ucode_mod_idx <= 0 || ucode_mod_idx >= mbi->mods_count || +@@ -215,7 +214,7 @@ static void __init microcode_grab_module( + ucode_mod = mod[ucode_mod_idx]; + scan: + if ( ucode_scan ) +- microcode_scan_module(module_map, mbi); ++ microcode_scan_module(module_map, mbi, mod); + } + + static struct microcode_ops __ro_after_init ucode_ops; +@@ -801,7 +800,8 @@ static int __init early_update_cache(const void *data, size_t len) + } + + int __init microcode_init_cache(unsigned long *module_map, +- const struct multiboot_info *mbi) ++ const struct multiboot_info *mbi, ++ const module_t mods[]) + { + int rc = 0; + +@@ -810,7 +810,7 @@ int __init microcode_init_cache(unsigned long *module_map, + + if ( ucode_scan ) + /* Need to rescan the modules because they might have been relocated */ +- microcode_scan_module(module_map, mbi); ++ microcode_scan_module(module_map, mbi, mods); + + if ( ucode_mod.mod_end ) + rc = early_update_cache(bootstrap_map(&ucode_mod), +@@ -857,7 +857,8 @@ static int __init early_microcode_update_cpu(void) + } + + int __init early_microcode_init(unsigned long *module_map, +- const struct multiboot_info *mbi) ++ const struct multiboot_info *mbi, ++ const module_t mods[]) + { + const struct cpuinfo_x86 *c = &boot_cpu_data; + int rc = 0; +@@ -906,7 +907,7 @@ int __init early_microcode_init(unsigned long *module_map, + return -ENODEV; + } + +- microcode_grab_module(module_map, mbi); ++ microcode_grab_module(module_map, mbi, mods); + + if ( ucode_mod.mod_end || ucode_blob.size ) + rc = early_microcode_update_cpu(); +diff --git a/xen/arch/x86/include/asm/microcode.h b/xen/arch/x86/include/asm/microcode.h +index 62ce3418f7..bfb1820d21 100644 +--- a/xen/arch/x86/include/asm/microcode.h ++++ b/xen/arch/x86/include/asm/microcode.h +@@ -3,6 +3,7 @@ + + #include <xen/types.h> + #include <xen/percpu.h> ++#include <xen/multiboot.h> + + #include <public/xen.h> + +@@ -24,9 +25,11 @@ DECLARE_PER_CPU(struct cpu_signature, cpu_sig); + void microcode_set_module(unsigned int idx); + int microcode_update(XEN_GUEST_HANDLE(const_void), unsigned long len); + int early_microcode_init(unsigned long *module_map, +- const struct multiboot_info *mbi); ++ const struct multiboot_info *mbi, ++ const module_t mods[]); + int 
microcode_init_cache(unsigned long *module_map, +- const struct multiboot_info *mbi); ++ const struct multiboot_info *mbi, ++ const module_t mods[]); + int microcode_update_one(void); + + #endif /* ASM_X86__MICROCODE_H */ +diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c +index 18503300e7..1d5d3f8a66 100644 +--- a/xen/arch/x86/setup.c ++++ b/xen/arch/x86/setup.c +@@ -1316,7 +1316,7 @@ void __init noreturn __start_xen(unsigned long mbi_p) + * TODO: load ucode earlier once multiboot modules become accessible + * at an earlier stage. + */ +- early_microcode_init(module_map, mbi); ++ early_microcode_init(module_map, mbi, mod); + + if ( xen_phys_start ) + { +@@ -1842,7 +1842,7 @@ void __init noreturn __start_xen(unsigned long mbi_p) + + timer_init(); + +- microcode_init_cache(module_map, mbi); /* Needs xmalloc() */ ++ microcode_init_cache(module_map, mbi, mod); /* Needs xmalloc() */ + + tsx_init(); /* Needs microcode. May change HLE/RTM feature bits. */ + +-- +2.47.0 + diff --git a/0039-x86-boot-Fix-XSM-module-handling-during-PVH-boot.patch b/0039-x86-boot-Fix-XSM-module-handling-during-PVH-boot.patch new file mode 100644 index 0000000..714db5a --- /dev/null +++ b/0039-x86-boot-Fix-XSM-module-handling-during-PVH-boot.patch @@ -0,0 +1,120 @@ +From 2b18f341cb5c66bbc3260a8e0dd9f42b2f58d78c Mon Sep 17 00:00:00 2001 +From: "Daniel P. Smith" <dpsmith@apertussolutions.com> +Date: Tue, 29 Oct 2024 16:42:29 +0100 +Subject: [PATCH 39/56] x86/boot: Fix XSM module handling during PVH boot +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +As detailed in commit 0fe607b2a144 ("x86/boot: Fix PVH boot during boot_info +transition period"), the use of __va(mbi->mods_addr) constitutes a +use-after-free on the PVH boot path. + +This pattern has been in use since before PVH support was added. This has +most likely gone unnoticed because no-one's tried using a detached Flask +policy in a PVH VM before. + +Plumb the boot_info pointer down, replacing module_map and mbi. Importantly, +bi->mods[].mod is a safe way to access the module list during PVH boot. + +As this is the final non-bi use of mbi in __start_xen(), make the pointer +unusable once bi has been established, to prevent new uses creeping back in. +This is a stopgap until mbi can be fully removed. + +Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com> +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +Reviewed-by: Daniel P. 
Smith <dpsmith@apertussolutions.com> +Acked-by: Roger Pau Monné <roger.pau@citrix.com> +master commit: 6cf0aaeb8df951fb34679f0408461a5c67cb02c6 +master date: 2024-10-23 18:14:24 +0100 +--- + xen/arch/x86/setup.c | 2 +- + xen/include/xsm/xsm.h | 7 +++++-- + xen/xsm/xsm_core.c | 7 ++++--- + xen/xsm/xsm_policy.c | 2 +- + 4 files changed, 11 insertions(+), 7 deletions(-) + +diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c +index 1d5d3f8a66..689f828d6a 100644 +--- a/xen/arch/x86/setup.c ++++ b/xen/arch/x86/setup.c +@@ -1771,7 +1771,7 @@ void __init noreturn __start_xen(unsigned long mbi_p) + mmio_ro_ranges = rangeset_new(NULL, "r/o mmio ranges", + RANGESETF_prettyprint_hex); + +- xsm_multiboot_init(module_map, mbi); ++ xsm_multiboot_init(module_map, mbi, mod); + + /* + * IOMMU-related ACPI table parsing may require some of the system domains +diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h +index 627c0d2731..5867ccceaf 100644 +--- a/xen/include/xsm/xsm.h ++++ b/xen/include/xsm/xsm.h +@@ -779,9 +779,11 @@ static inline int xsm_argo_send(const struct domain *d, const struct domain *t) + + #ifdef CONFIG_MULTIBOOT + int xsm_multiboot_init( +- unsigned long *module_map, const multiboot_info_t *mbi); ++ unsigned long *module_map, const multiboot_info_t *mbi, ++ const module_t mods[]); + int xsm_multiboot_policy_init( + unsigned long *module_map, const multiboot_info_t *mbi, ++ const module_t mods[], + void **policy_buffer, size_t *policy_size); + #endif + +@@ -829,7 +831,8 @@ static const inline struct xsm_ops *silo_init(void) + + #ifdef CONFIG_MULTIBOOT + static inline int xsm_multiboot_init ( +- unsigned long *module_map, const multiboot_info_t *mbi) ++ unsigned long *module_map, const multiboot_info_t *mbi, ++ const module_t mods[]) + { + return 0; + } +diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c +index eaa028109b..82b0d76d40 100644 +--- a/xen/xsm/xsm_core.c ++++ b/xen/xsm/xsm_core.c +@@ -140,7 +140,8 @@ static int __init xsm_core_init(const void *policy_buffer, size_t policy_size) + + #ifdef CONFIG_MULTIBOOT + int __init xsm_multiboot_init( +- unsigned long *module_map, const multiboot_info_t *mbi) ++ unsigned long *module_map, const multiboot_info_t *mbi, ++ const module_t mods[]) + { + int ret = 0; + void *policy_buffer = NULL; +@@ -150,8 +151,8 @@ int __init xsm_multiboot_init( + + if ( XSM_MAGIC ) + { +- ret = xsm_multiboot_policy_init(module_map, mbi, &policy_buffer, +- &policy_size); ++ ret = xsm_multiboot_policy_init(module_map, mbi, mods, ++ &policy_buffer, &policy_size); + if ( ret ) + { + bootstrap_map(NULL); +diff --git a/xen/xsm/xsm_policy.c b/xen/xsm/xsm_policy.c +index 8dafbc9381..9244a3612d 100644 +--- a/xen/xsm/xsm_policy.c ++++ b/xen/xsm/xsm_policy.c +@@ -32,10 +32,10 @@ + #ifdef CONFIG_MULTIBOOT + int __init xsm_multiboot_policy_init( + unsigned long *module_map, const multiboot_info_t *mbi, ++ const module_t mod[], + void **policy_buffer, size_t *policy_size) + { + int i; +- module_t *mod = (module_t *)__va(mbi->mods_addr); + int rc = 0; + u32 *_policy_start; + unsigned long _policy_len; +-- +2.47.0 + diff --git a/0040-Config-Update-MiniOS-revision.patch b/0040-Config-Update-MiniOS-revision.patch new file mode 100644 index 0000000..79b29b8 --- /dev/null +++ b/0040-Config-Update-MiniOS-revision.patch @@ -0,0 +1,28 @@ +From 3c81457aa3389b2d3dd453a6cdb15f2247c45d7f Mon Sep 17 00:00:00 2001 +From: Andrew Cooper <andrew.cooper3@citrix.com> +Date: Wed, 30 Oct 2024 18:00:22 +0000 +Subject: [PATCH 40/56] Config: Update MiniOS revision + +Commit ff13dabd3099 
("mman: correct m{,un}lock() definitions") + +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +--- + Config.mk | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/Config.mk b/Config.mk +index 477f287a6c..75b4aa0d84 100644 +--- a/Config.mk ++++ b/Config.mk +@@ -224,7 +224,7 @@ QEMU_UPSTREAM_URL ?= https://xenbits.xen.org/git-http/qemu-xen.git + QEMU_UPSTREAM_REVISION ?= qemu-xen-4.18.1 + + MINIOS_UPSTREAM_URL ?= https://xenbits.xen.org/git-http/mini-os.git +-MINIOS_UPSTREAM_REVISION ?= xen-RELEASE-4.18.1 ++MINIOS_UPSTREAM_REVISION ?= a400dd51706867565ed1382b23d3475bb30668c2 + + SEABIOS_UPSTREAM_URL ?= https://xenbits.xen.org/git-http/seabios.git + SEABIOS_UPSTREAM_REVISION ?= rel-1.16.2 +-- +2.47.0 + diff --git a/0041-CI-Mark-Archlinux-x86-as-allowing-failures.patch b/0041-CI-Mark-Archlinux-x86-as-allowing-failures.patch new file mode 100644 index 0000000..ee1fbf8 --- /dev/null +++ b/0041-CI-Mark-Archlinux-x86-as-allowing-failures.patch @@ -0,0 +1,38 @@ +From 243c61f3a309b8436fb9b19899105cdc5a7f5ec9 Mon Sep 17 00:00:00 2001 +From: Andrew Cooper <andrew.cooper3@citrix.com> +Date: Wed, 10 Jul 2024 13:38:52 +0100 +Subject: [PATCH 41/56] CI: Mark Archlinux/x86 as allowing failures + +Archlinux is a rolling distro. As a consequence, rebuilding the container +periodically changes the toolchain, and this affects all stable branches in +one go. + +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +Reviewed-by: Anthony PERARD <anthony.perard@vates.tech> +Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com> +(cherry picked from commit 5e1773dc863d6e1fb4c1398e380bdfc754342f7b) +--- + automation/gitlab-ci/build.yaml | 2 ++ + 1 file changed, 2 insertions(+) + +diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml +index 84e9dde25a..3f82e2109b 100644 +--- a/automation/gitlab-ci/build.yaml ++++ b/automation/gitlab-ci/build.yaml +@@ -624,11 +624,13 @@ archlinux-gcc: + extends: .gcc-x86-64-build + variables: + CONTAINER: archlinux:current ++ allow_failure: true + + archlinux-gcc-debug: + extends: .gcc-x86-64-build-debug + variables: + CONTAINER: archlinux:current ++ allow_failure: true + + centos-7-gcc: + extends: .gcc-x86-64-build +-- +2.47.0 + diff --git a/0042-Config-Fix-MiniOS-revision.patch b/0042-Config-Fix-MiniOS-revision.patch new file mode 100644 index 0000000..cbf0642 --- /dev/null +++ b/0042-Config-Fix-MiniOS-revision.patch @@ -0,0 +1,29 @@ +From e42f5e4b0cb759a5533b6f6befbf2249c3d1e940 Mon Sep 17 00:00:00 2001 +From: Andrew Cooper <andrew.cooper3@citrix.com> +Date: Mon, 11 Nov 2024 14:49:09 +0000 +Subject: [PATCH 42/56] Config: Fix MiniOS revision + +This is the 4.19 revision, not the 4.18 one. 
+ +Fixes: 3c81457aa338 ("Config: Update MiniOS revision") +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +--- + Config.mk | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/Config.mk b/Config.mk +index 75b4aa0d84..bd389cfb4d 100644 +--- a/Config.mk ++++ b/Config.mk +@@ -224,7 +224,7 @@ QEMU_UPSTREAM_URL ?= https://xenbits.xen.org/git-http/qemu-xen.git + QEMU_UPSTREAM_REVISION ?= qemu-xen-4.18.1 + + MINIOS_UPSTREAM_URL ?= https://xenbits.xen.org/git-http/mini-os.git +-MINIOS_UPSTREAM_REVISION ?= a400dd51706867565ed1382b23d3475bb30668c2 ++MINIOS_UPSTREAM_REVISION ?= ff13dabd3099687921145a5e3e960ba8337e7488 + + SEABIOS_UPSTREAM_URL ?= https://xenbits.xen.org/git-http/seabios.git + SEABIOS_UPSTREAM_REVISION ?= rel-1.16.2 +-- +2.47.0 + diff --git a/0043-CI-Resync-.cirrus.yml-for-FreeBSD-testing.patch b/0043-CI-Resync-.cirrus.yml-for-FreeBSD-testing.patch new file mode 100644 index 0000000..5512aed --- /dev/null +++ b/0043-CI-Resync-.cirrus.yml-for-FreeBSD-testing.patch @@ -0,0 +1,27 @@ +From 8623dfa12acb8036d908108740d3325d64e34cae Mon Sep 17 00:00:00 2001 +From: Andrew Cooper <andrew.cooper3@citrix.com> +Date: Mon, 11 Nov 2024 17:02:39 +0000 +Subject: [PATCH 43/56] CI: Resync .cirrus.yml for FreeBSD testing + +Includes: + commit ebb7c6b2faf2 ("cirrus-ci: update to FreeBSD 14.1 image") + +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +--- + .cirrus.yml | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/.cirrus.yml b/.cirrus.yml +index e961877881..0ec9586e2c 100644 +--- a/.cirrus.yml ++++ b/.cirrus.yml +@@ -23,5 +23,5 @@ task: + task: + name: 'FreeBSD 14' + freebsd_instance: +- image_family: freebsd-14-0-snap ++ image_family: freebsd-14-1 + << : *FREEBSD_TEMPLATE +-- +2.47.0 + diff --git a/0044-CI-Stop-building-QEMU-in-general.patch b/0044-CI-Stop-building-QEMU-in-general.patch new file mode 100644 index 0000000..9fde392 --- /dev/null +++ b/0044-CI-Stop-building-QEMU-in-general.patch @@ -0,0 +1,67 @@ +From eecb33b3bf114089557aadbbe3fe7d8f9687f3d5 Mon Sep 17 00:00:00 2001 +From: Andrew Cooper <andrew.cooper3@citrix.com> +Date: Sat, 13 Jul 2024 17:50:30 +0100 +Subject: [PATCH 44/56] CI: Stop building QEMU in general + +We spend an awful lot of CI time building QEMU, even though most changes don't +touch the subset of tools/libs/ used by QEMU. Some numbers taken at a time +when CI was otherwise quiet: + + With Without + Alpine: 13m38s 6m04s + Debian 12: 10m05s 8m10s + OpenSUSE Tumbleweed: 11m40s 7m54s + Ubuntu 24.04: 14m56s 8m06s + +which is a >50% improvement in wallclock time in some cases. + +The only build we have that needs QEMU is alpine-3.18-gcc-debug. This is the +build deployed and used by the QubesOS ADL-* and Zen3p-* jobs. + +Xilinx-x86_64 deploys it too, but is PVH-only and doesn't use QEMU. + +QEMU is also built by CirrusCI for FreeBSD (fully Clang/LLVM toolchain). + +This should help quite a lot with Gitlab CI capacity. 
+ +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> +(cherry picked from commit e305256e69b1c943db3ca20173da6ded3db2d252) +--- + automation/gitlab-ci/build.yaml | 1 + + automation/scripts/build | 7 ++----- + 2 files changed, 3 insertions(+), 5 deletions(-) + +diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml +index 3f82e2109b..201acbbffd 100644 +--- a/automation/gitlab-ci/build.yaml ++++ b/automation/gitlab-ci/build.yaml +@@ -357,6 +357,7 @@ alpine-3.18-gcc-debug: + extends: .gcc-x86-64-build-debug + variables: + CONTAINER: alpine:3.18 ++ BUILD_QEMU_XEN: y + + debian-stretch-gcc-debug: + extends: .gcc-x86-64-build-debug +diff --git a/automation/scripts/build b/automation/scripts/build +index b3c71fb6fb..b90a7ff980 100755 +--- a/automation/scripts/build ++++ b/automation/scripts/build +@@ -80,11 +80,8 @@ else + cfgargs+=("--with-extra-qemuu-configure-args=\"--disable-werror\"") + fi + +- # Qemu requires Python 3.5 or later, and ninja +- # and Clang 10 or later +- if ! type python3 || python3 -c "import sys; res = sys.version_info < (3, 5); exit(not(res))" \ +- || [[ "$cc_is_clang" == y && "$cc_ver" -lt 0x0a0000 ]] \ +- || ! type ninja; then ++ # QEMU is only for those who ask ++ if [[ "$BUILD_QEMU_XEN" != "y" ]]; then + cfgargs+=("--with-system-qemu=/bin/false") + fi + +-- +2.47.0 + diff --git a/0045-x86-HVM-drop-stdvga-s-cache-struct-member.patch b/0045-x86-HVM-drop-stdvga-s-cache-struct-member.patch new file mode 100644 index 0000000..f25ef15 --- /dev/null +++ b/0045-x86-HVM-drop-stdvga-s-cache-struct-member.patch @@ -0,0 +1,146 @@ +From c41c3d8c44ac72c63bf7c41a72436baca150f304 Mon Sep 17 00:00:00 2001 +From: Jan Beulich <jbeulich@suse.com> +Date: Tue, 12 Nov 2024 13:50:54 +0100 +Subject: [PATCH 45/56] x86/HVM: drop stdvga's "cache" struct member + +Since 68e1183411be ("libxc: introduce a xc_dom_arch for hvm-3.0-x86_32 +guests"), HVM guests are built using XEN_DOMCTL_sethvmcontext, which +ends up disabling stdvga caching because of arch_hvm_load() being +involved in the processing of the request. With that the field is +useless, and can be dropped. Drop the helper functions manipulating / +checking as well right away, but leave the use sites of +stdvga_cache_is_enabled() with the hard-coded result the function would +have produced, to aid validation of subsequent dropping of further code. + +This is part of XSA-463 / CVE-2024-45818 + +Signed-off-by: Jan Beulich <jbeulich@suse.com> +Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> +(cherry picked from commit 53b7246bdfb3c280adcdf714918e4decb7e108f4) +--- + xen/arch/x86/hvm/save.c | 3 --- + xen/arch/x86/hvm/stdvga.c | 44 +++---------------------------- + xen/arch/x86/include/asm/hvm/io.h | 7 ----- + 3 files changed, 3 insertions(+), 51 deletions(-) + +diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c +index 79713cd6ca..832a10f67e 100644 +--- a/xen/arch/x86/hvm/save.c ++++ b/xen/arch/x86/hvm/save.c +@@ -64,9 +64,6 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr) + /* Time when restore started */ + d->arch.hvm.sync_tsc = rdtsc(); + +- /* VGA state is not saved/restored, so we nobble the cache. 
*/ +- d->arch.hvm.stdvga.cache = STDVGA_CACHE_DISABLED; +- + return 0; + } + +diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c +index 6419211266..ac7056b00b 100644 +--- a/xen/arch/x86/hvm/stdvga.c ++++ b/xen/arch/x86/hvm/stdvga.c +@@ -100,37 +100,6 @@ static void vram_put(struct hvm_hw_stdvga *s, void *p) + unmap_domain_page(p); + } + +-static void stdvga_try_cache_enable(struct hvm_hw_stdvga *s) +-{ +- /* +- * Caching mode can only be enabled if the the cache has +- * never been used before. As soon as it is disabled, it will +- * become out-of-sync with the VGA device model and since no +- * mechanism exists to acquire current VRAM state from the +- * device model, re-enabling it would lead to stale data being +- * seen by the guest. +- */ +- if ( s->cache != STDVGA_CACHE_UNINITIALIZED ) +- return; +- +- gdprintk(XENLOG_INFO, "entering caching mode\n"); +- s->cache = STDVGA_CACHE_ENABLED; +-} +- +-static void stdvga_cache_disable(struct hvm_hw_stdvga *s) +-{ +- if ( s->cache != STDVGA_CACHE_ENABLED ) +- return; +- +- gdprintk(XENLOG_INFO, "leaving caching mode\n"); +- s->cache = STDVGA_CACHE_DISABLED; +-} +- +-static bool_t stdvga_cache_is_enabled(const struct hvm_hw_stdvga *s) +-{ +- return s->cache == STDVGA_CACHE_ENABLED; +-} +- + static int stdvga_outb(uint64_t addr, uint8_t val) + { + struct hvm_hw_stdvga *s = ¤t->domain->arch.hvm.stdvga; +@@ -170,7 +139,6 @@ static int stdvga_outb(uint64_t addr, uint8_t val) + if ( !prev_stdvga && s->stdvga ) + { + gdprintk(XENLOG_INFO, "entering stdvga mode\n"); +- stdvga_try_cache_enable(s); + } + else if ( prev_stdvga && !s->stdvga ) + { +@@ -468,7 +436,7 @@ static int cf_check stdvga_mem_write( + }; + struct ioreq_server *srv; + +- if ( !stdvga_cache_is_enabled(s) || !s->stdvga ) ++ if ( true || !s->stdvga ) + goto done; + + /* Intercept mmio write */ +@@ -536,18 +504,12 @@ static bool cf_check stdvga_mem_accept( + * We cannot return X86EMUL_UNHANDLEABLE on anything other then the + * first cycle of an I/O. So, since we cannot guarantee to always be + * able to send buffered writes, we have to reject any multi-cycle +- * or "indirect" I/O and, since we are rejecting an I/O, we must +- * invalidate the cache. +- * Single-cycle write transactions are accepted even if the cache is +- * not active since we can assert, when in stdvga mode, that writes +- * to VRAM have no side effect and thus we can try to buffer them. ++ * or "indirect" I/O. 
+ */ +- stdvga_cache_disable(s); +- + goto reject; + } + else if ( p->dir == IOREQ_READ && +- (!stdvga_cache_is_enabled(s) || !s->stdvga) ) ++ (true || !s->stdvga) ) + goto reject; + + /* s->lock intentionally held */ +diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h +index e5225e75ef..1abe1ab67b 100644 +--- a/xen/arch/x86/include/asm/hvm/io.h ++++ b/xen/arch/x86/include/asm/hvm/io.h +@@ -110,19 +110,12 @@ struct vpci_arch_msix_entry { + int pirq; + }; + +-enum stdvga_cache_state { +- STDVGA_CACHE_UNINITIALIZED, +- STDVGA_CACHE_ENABLED, +- STDVGA_CACHE_DISABLED +-}; +- + struct hvm_hw_stdvga { + uint8_t sr_index; + uint8_t sr[8]; + uint8_t gr_index; + uint8_t gr[9]; + bool_t stdvga; +- enum stdvga_cache_state cache; + uint32_t latch; + struct page_info *vram_page[64]; /* shadow of 0xa0000-0xaffff */ + spinlock_t lock; +-- +2.47.0 + diff --git a/0046-x86-HVM-drop-stdvga-s-stdvga-struct-member.patch b/0046-x86-HVM-drop-stdvga-s-stdvga-struct-member.patch new file mode 100644 index 0000000..1c1c08a --- /dev/null +++ b/0046-x86-HVM-drop-stdvga-s-stdvga-struct-member.patch @@ -0,0 +1,112 @@ +From 14efb1298fa684f48d568fd769c05fa4d2a3eaeb Mon Sep 17 00:00:00 2001 +From: Jan Beulich <jbeulich@suse.com> +Date: Tue, 12 Nov 2024 13:51:30 +0100 +Subject: [PATCH 46/56] x86/HVM: drop stdvga's "stdvga" struct member + +Two of its consumers are dead (in compile-time constant conditionals) +and the only remaining ones are merely controlling debug logging. Hence +the field is now pointless to set, which in particular allows to get rid +of the questionable conditional from which the field's value was +established (afaict 551ceee97513 ["x86, hvm: stdvga cache always on"] +had dropped too much of the earlier extra check that was there, and +quite likely further checks were missing). + +This is part of XSA-463 / CVE-2024-45818 + +Signed-off-by: Jan Beulich <jbeulich@suse.com> +Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> +(cherry picked from commit b740a9369e81bdda675a9780130ce2b9e75d4ec9) +--- + xen/arch/x86/hvm/stdvga.c | 30 +++++------------------------- + xen/arch/x86/include/asm/hvm/io.h | 1 - + 2 files changed, 5 insertions(+), 26 deletions(-) + +diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c +index ac7056b00b..76f0d83297 100644 +--- a/xen/arch/x86/hvm/stdvga.c ++++ b/xen/arch/x86/hvm/stdvga.c +@@ -103,7 +103,7 @@ static void vram_put(struct hvm_hw_stdvga *s, void *p) + static int stdvga_outb(uint64_t addr, uint8_t val) + { + struct hvm_hw_stdvga *s = ¤t->domain->arch.hvm.stdvga; +- int rc = 1, prev_stdvga = s->stdvga; ++ int rc = 1; + + switch ( addr ) + { +@@ -132,19 +132,6 @@ static int stdvga_outb(uint64_t addr, uint8_t val) + break; + } + +- /* When in standard vga mode, emulate here all writes to the vram buffer +- * so we can immediately satisfy reads without waiting for qemu. 
*/ +- s->stdvga = (s->sr[7] == 0x00); +- +- if ( !prev_stdvga && s->stdvga ) +- { +- gdprintk(XENLOG_INFO, "entering stdvga mode\n"); +- } +- else if ( prev_stdvga && !s->stdvga ) +- { +- gdprintk(XENLOG_INFO, "leaving stdvga mode\n"); +- } +- + return rc; + } + +@@ -425,7 +412,6 @@ static int cf_check stdvga_mem_write( + const struct hvm_io_handler *handler, uint64_t addr, uint32_t size, + uint64_t data) + { +- struct hvm_hw_stdvga *s = ¤t->domain->arch.hvm.stdvga; + ioreq_t p = { + .type = IOREQ_TYPE_COPY, + .addr = addr, +@@ -436,8 +422,7 @@ static int cf_check stdvga_mem_write( + }; + struct ioreq_server *srv; + +- if ( true || !s->stdvga ) +- goto done; ++ goto done; + + /* Intercept mmio write */ + switch ( size ) +@@ -498,19 +483,14 @@ static bool cf_check stdvga_mem_accept( + + spin_lock(&s->lock); + +- if ( p->dir == IOREQ_WRITE && (p->data_is_ptr || p->count != 1) ) ++ if ( p->dir != IOREQ_WRITE || p->data_is_ptr || p->count != 1 ) + { + /* +- * We cannot return X86EMUL_UNHANDLEABLE on anything other then the +- * first cycle of an I/O. So, since we cannot guarantee to always be +- * able to send buffered writes, we have to reject any multi-cycle +- * or "indirect" I/O. ++ * Only accept single direct writes, as that's the only thing we can ++ * accelerate using buffered ioreq handling. + */ + goto reject; + } +- else if ( p->dir == IOREQ_READ && +- (true || !s->stdvga) ) +- goto reject; + + /* s->lock intentionally held */ + return 1; +diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h +index 1abe1ab67b..28dbaf2e1b 100644 +--- a/xen/arch/x86/include/asm/hvm/io.h ++++ b/xen/arch/x86/include/asm/hvm/io.h +@@ -115,7 +115,6 @@ struct hvm_hw_stdvga { + uint8_t sr[8]; + uint8_t gr_index; + uint8_t gr[9]; +- bool_t stdvga; + uint32_t latch; + struct page_info *vram_page[64]; /* shadow of 0xa0000-0xaffff */ + spinlock_t lock; +-- +2.47.0 + diff --git a/0047-x86-HVM-remove-unused-MMIO-handling-code.patch b/0047-x86-HVM-remove-unused-MMIO-handling-code.patch new file mode 100644 index 0000000..10d5202 --- /dev/null +++ b/0047-x86-HVM-remove-unused-MMIO-handling-code.patch @@ -0,0 +1,392 @@ +From b2c7f59ae99ac8514f1e141da93bd7676460eb42 Mon Sep 17 00:00:00 2001 +From: Jan Beulich <jbeulich@suse.com> +Date: Tue, 12 Nov 2024 13:51:51 +0100 +Subject: [PATCH 47/56] x86/HVM: remove unused MMIO handling code + +All read accesses are rejected by the ->accept handler, while writes +bypass the bulk of the function body. Drop the dead code, leaving an +assertion in the read handler. + +A number of other static items (and a macro) are then unreferenced and +hence also need (want) dropping. The same applies to the "latch" field +of the state structure. 
+ +This is part of XSA-463 / CVE-2024-45818 + +Signed-off-by: Jan Beulich <jbeulich@suse.com> +Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> +(cherry picked from commit 89108547af1f230b72893b48351f9c1106189649) +--- + xen/arch/x86/hvm/stdvga.c | 317 +----------------------------- + xen/arch/x86/include/asm/hvm/io.h | 1 - + 2 files changed, 4 insertions(+), 314 deletions(-) + +diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c +index 76f0d83297..0f0bd10068 100644 +--- a/xen/arch/x86/hvm/stdvga.c ++++ b/xen/arch/x86/hvm/stdvga.c +@@ -37,26 +37,6 @@ + #define VGA_MEM_BASE 0xa0000 + #define VGA_MEM_SIZE 0x20000 + +-#define PAT(x) (x) +-static const uint32_t mask16[16] = { +- PAT(0x00000000), +- PAT(0x000000ff), +- PAT(0x0000ff00), +- PAT(0x0000ffff), +- PAT(0x00ff0000), +- PAT(0x00ff00ff), +- PAT(0x00ffff00), +- PAT(0x00ffffff), +- PAT(0xff000000), +- PAT(0xff0000ff), +- PAT(0xff00ff00), +- PAT(0xff00ffff), +- PAT(0xffff0000), +- PAT(0xffff00ff), +- PAT(0xffffff00), +- PAT(0xffffffff), +-}; +- + /* force some bits to zero */ + static const uint8_t sr_mask[8] = { + (uint8_t)~0xfc, +@@ -81,25 +61,6 @@ static const uint8_t gr_mask[9] = { + (uint8_t)~0x00, /* 0x08 */ + }; + +-static uint8_t *vram_getb(struct hvm_hw_stdvga *s, unsigned int a) +-{ +- struct page_info *pg = s->vram_page[(a >> 12) & 0x3f]; +- uint8_t *p = __map_domain_page(pg); +- return &p[a & 0xfff]; +-} +- +-static uint32_t *vram_getl(struct hvm_hw_stdvga *s, unsigned int a) +-{ +- struct page_info *pg = s->vram_page[(a >> 10) & 0x3f]; +- uint32_t *p = __map_domain_page(pg); +- return &p[a & 0x3ff]; +-} +- +-static void vram_put(struct hvm_hw_stdvga *s, void *p) +-{ +- unmap_domain_page(p); +-} +- + static int stdvga_outb(uint64_t addr, uint8_t val) + { + struct hvm_hw_stdvga *s = ¤t->domain->arch.hvm.stdvga; +@@ -168,244 +129,13 @@ static int cf_check stdvga_intercept_pio( + return X86EMUL_UNHANDLEABLE; /* propagate to external ioemu */ + } + +-static unsigned int stdvga_mem_offset( +- struct hvm_hw_stdvga *s, unsigned int mmio_addr) +-{ +- unsigned int memory_map_mode = (s->gr[6] >> 2) & 3; +- unsigned int offset = mmio_addr & 0x1ffff; +- +- switch ( memory_map_mode ) +- { +- case 0: +- break; +- case 1: +- if ( offset >= 0x10000 ) +- goto fail; +- offset += 0; /* assume bank_offset == 0; */ +- break; +- case 2: +- offset -= 0x10000; +- if ( offset >= 0x8000 ) +- goto fail; +- break; +- default: +- case 3: +- offset -= 0x18000; +- if ( offset >= 0x8000 ) +- goto fail; +- break; +- } +- +- return offset; +- +- fail: +- return ~0u; +-} +- +-#define GET_PLANE(data, p) (((data) >> ((p) * 8)) & 0xff) +- +-static uint8_t stdvga_mem_readb(uint64_t addr) +-{ +- struct hvm_hw_stdvga *s = ¤t->domain->arch.hvm.stdvga; +- int plane; +- uint32_t ret, *vram_l; +- uint8_t *vram_b; +- +- addr = stdvga_mem_offset(s, addr); +- if ( addr == ~0u ) +- return 0xff; +- +- if ( s->sr[4] & 0x08 ) +- { +- /* chain 4 mode : simplest access */ +- vram_b = vram_getb(s, addr); +- ret = *vram_b; +- vram_put(s, vram_b); +- } +- else if ( s->gr[5] & 0x10 ) +- { +- /* odd/even mode (aka text mode mapping) */ +- plane = (s->gr[4] & 2) | (addr & 1); +- vram_b = vram_getb(s, ((addr & ~1) << 1) | plane); +- ret = *vram_b; +- vram_put(s, vram_b); +- } +- else +- { +- /* standard VGA latched access */ +- vram_l = vram_getl(s, addr); +- s->latch = *vram_l; +- vram_put(s, vram_l); +- +- if ( !(s->gr[5] & 0x08) ) +- { +- /* read mode 0 */ +- plane = s->gr[4]; +- ret = GET_PLANE(s->latch, plane); +- } +- else +- { +- /* read mode 1 */ +- ret = 
(s->latch ^ mask16[s->gr[2]]) & mask16[s->gr[7]]; +- ret |= ret >> 16; +- ret |= ret >> 8; +- ret = (~ret) & 0xff; +- } +- } +- +- return ret; +-} +- + static int cf_check stdvga_mem_read( + const struct hvm_io_handler *handler, uint64_t addr, uint32_t size, + uint64_t *p_data) + { +- uint64_t data = ~0UL; +- +- switch ( size ) +- { +- case 1: +- data = stdvga_mem_readb(addr); +- break; +- +- case 2: +- data = stdvga_mem_readb(addr); +- data |= stdvga_mem_readb(addr + 1) << 8; +- break; +- +- case 4: +- data = stdvga_mem_readb(addr); +- data |= stdvga_mem_readb(addr + 1) << 8; +- data |= stdvga_mem_readb(addr + 2) << 16; +- data |= (uint32_t)stdvga_mem_readb(addr + 3) << 24; +- break; +- +- case 8: +- data = (uint64_t)(stdvga_mem_readb(addr)); +- data |= (uint64_t)(stdvga_mem_readb(addr + 1)) << 8; +- data |= (uint64_t)(stdvga_mem_readb(addr + 2)) << 16; +- data |= (uint64_t)(stdvga_mem_readb(addr + 3)) << 24; +- data |= (uint64_t)(stdvga_mem_readb(addr + 4)) << 32; +- data |= (uint64_t)(stdvga_mem_readb(addr + 5)) << 40; +- data |= (uint64_t)(stdvga_mem_readb(addr + 6)) << 48; +- data |= (uint64_t)(stdvga_mem_readb(addr + 7)) << 56; +- break; +- +- default: +- gdprintk(XENLOG_WARNING, "invalid io size: %u\n", size); +- break; +- } +- +- *p_data = data; +- return X86EMUL_OKAY; +-} +- +-static void stdvga_mem_writeb(uint64_t addr, uint32_t val) +-{ +- struct hvm_hw_stdvga *s = ¤t->domain->arch.hvm.stdvga; +- int plane, write_mode, b, func_select, mask; +- uint32_t write_mask, bit_mask, set_mask, *vram_l; +- uint8_t *vram_b; +- +- addr = stdvga_mem_offset(s, addr); +- if ( addr == ~0u ) +- return; +- +- if ( s->sr[4] & 0x08 ) +- { +- /* chain 4 mode : simplest access */ +- plane = addr & 3; +- mask = (1 << plane); +- if ( s->sr[2] & mask ) +- { +- vram_b = vram_getb(s, addr); +- *vram_b = val; +- vram_put(s, vram_b); +- } +- } +- else if ( s->gr[5] & 0x10 ) +- { +- /* odd/even mode (aka text mode mapping) */ +- plane = (s->gr[4] & 2) | (addr & 1); +- mask = (1 << plane); +- if ( s->sr[2] & mask ) +- { +- addr = ((addr & ~1) << 1) | plane; +- vram_b = vram_getb(s, addr); +- *vram_b = val; +- vram_put(s, vram_b); +- } +- } +- else +- { +- write_mode = s->gr[5] & 3; +- switch ( write_mode ) +- { +- default: +- case 0: +- /* rotate */ +- b = s->gr[3] & 7; +- val = ((val >> b) | (val << (8 - b))) & 0xff; +- val |= val << 8; +- val |= val << 16; +- +- /* apply set/reset mask */ +- set_mask = mask16[s->gr[1]]; +- val = (val & ~set_mask) | (mask16[s->gr[0]] & set_mask); +- bit_mask = s->gr[8]; +- break; +- case 1: +- val = s->latch; +- goto do_write; +- case 2: +- val = mask16[val & 0x0f]; +- bit_mask = s->gr[8]; +- break; +- case 3: +- /* rotate */ +- b = s->gr[3] & 7; +- val = (val >> b) | (val << (8 - b)); +- +- bit_mask = s->gr[8] & val; +- val = mask16[s->gr[0]]; +- break; +- } +- +- /* apply logical operation */ +- func_select = s->gr[3] >> 3; +- switch ( func_select ) +- { +- case 0: +- default: +- /* nothing to do */ +- break; +- case 1: +- /* and */ +- val &= s->latch; +- break; +- case 2: +- /* or */ +- val |= s->latch; +- break; +- case 3: +- /* xor */ +- val ^= s->latch; +- break; +- } +- +- /* apply bit mask */ +- bit_mask |= bit_mask << 8; +- bit_mask |= bit_mask << 16; +- val = (val & bit_mask) | (s->latch & ~bit_mask); +- +- do_write: +- /* mask data according to sr[2] */ +- mask = s->sr[2]; +- write_mask = mask16[mask]; +- vram_l = vram_getl(s, addr); +- *vram_l = (*vram_l & ~write_mask) | (val & write_mask); +- vram_put(s, vram_l); +- } ++ ASSERT_UNREACHABLE(); ++ *p_data = ~0; ++ 
return X86EMUL_UNHANDLEABLE; + } + + static int cf_check stdvga_mem_write( +@@ -420,47 +150,8 @@ static int cf_check stdvga_mem_write( + .dir = IOREQ_WRITE, + .data = data, + }; +- struct ioreq_server *srv; +- +- goto done; +- +- /* Intercept mmio write */ +- switch ( size ) +- { +- case 1: +- stdvga_mem_writeb(addr, (data >> 0) & 0xff); +- break; +- +- case 2: +- stdvga_mem_writeb(addr+0, (data >> 0) & 0xff); +- stdvga_mem_writeb(addr+1, (data >> 8) & 0xff); +- break; +- +- case 4: +- stdvga_mem_writeb(addr+0, (data >> 0) & 0xff); +- stdvga_mem_writeb(addr+1, (data >> 8) & 0xff); +- stdvga_mem_writeb(addr+2, (data >> 16) & 0xff); +- stdvga_mem_writeb(addr+3, (data >> 24) & 0xff); +- break; +- +- case 8: +- stdvga_mem_writeb(addr+0, (data >> 0) & 0xff); +- stdvga_mem_writeb(addr+1, (data >> 8) & 0xff); +- stdvga_mem_writeb(addr+2, (data >> 16) & 0xff); +- stdvga_mem_writeb(addr+3, (data >> 24) & 0xff); +- stdvga_mem_writeb(addr+4, (data >> 32) & 0xff); +- stdvga_mem_writeb(addr+5, (data >> 40) & 0xff); +- stdvga_mem_writeb(addr+6, (data >> 48) & 0xff); +- stdvga_mem_writeb(addr+7, (data >> 56) & 0xff); +- break; +- +- default: +- gdprintk(XENLOG_WARNING, "invalid io size: %u\n", size); +- break; +- } ++ struct ioreq_server *srv = ioreq_server_select(current->domain, &p); + +- done: +- srv = ioreq_server_select(current->domain, &p); + if ( !srv ) + return X86EMUL_UNHANDLEABLE; + +diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h +index 28dbaf2e1b..19ecf4fd78 100644 +--- a/xen/arch/x86/include/asm/hvm/io.h ++++ b/xen/arch/x86/include/asm/hvm/io.h +@@ -115,7 +115,6 @@ struct hvm_hw_stdvga { + uint8_t sr[8]; + uint8_t gr_index; + uint8_t gr[9]; +- uint32_t latch; + struct page_info *vram_page[64]; /* shadow of 0xa0000-0xaffff */ + spinlock_t lock; + }; +-- +2.47.0 + diff --git a/0048-x86-HVM-drop-stdvga-s-gr-struct-member.patch b/0048-x86-HVM-drop-stdvga-s-gr-struct-member.patch new file mode 100644 index 0000000..7f8196a --- /dev/null +++ b/0048-x86-HVM-drop-stdvga-s-gr-struct-member.patch @@ -0,0 +1,70 @@ +From 46755f06f9377c34bc036c3e3f92d555e894e53f Mon Sep 17 00:00:00 2001 +From: Jan Beulich <jbeulich@suse.com> +Date: Tue, 12 Nov 2024 13:52:08 +0100 +Subject: [PATCH 48/56] x86/HVM: drop stdvga's "gr[]" struct member + +No consumers are left, hence the producer and the array itself can also +go away. The static gr_mask[] is then orphaned and hence needs dropping, +too. 
+ +This is part of XSA-463 / CVE-2024-45818 + +Signed-off-by: Jan Beulich <jbeulich@suse.com> +Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> +(cherry picked from commit b16c0966a17f19c0e55ed0b9baa28191d2590178) +--- + xen/arch/x86/hvm/stdvga.c | 18 ------------------ + xen/arch/x86/include/asm/hvm/io.h | 1 - + 2 files changed, 19 deletions(-) + +diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c +index 0f0bd10068..fa25833caa 100644 +--- a/xen/arch/x86/hvm/stdvga.c ++++ b/xen/arch/x86/hvm/stdvga.c +@@ -49,18 +49,6 @@ static const uint8_t sr_mask[8] = { + (uint8_t)~0x00, + }; + +-static const uint8_t gr_mask[9] = { +- (uint8_t)~0xf0, /* 0x00 */ +- (uint8_t)~0xf0, /* 0x01 */ +- (uint8_t)~0xf0, /* 0x02 */ +- (uint8_t)~0xe0, /* 0x03 */ +- (uint8_t)~0xfc, /* 0x04 */ +- (uint8_t)~0x84, /* 0x05 */ +- (uint8_t)~0xf0, /* 0x06 */ +- (uint8_t)~0xf0, /* 0x07 */ +- (uint8_t)~0x00, /* 0x08 */ +-}; +- + static int stdvga_outb(uint64_t addr, uint8_t val) + { + struct hvm_hw_stdvga *s = ¤t->domain->arch.hvm.stdvga; +@@ -82,12 +70,6 @@ static int stdvga_outb(uint64_t addr, uint8_t val) + s->gr_index = val; + break; + +- case 0x3cf: /* graphics data register */ +- rc = (s->gr_index < sizeof(s->gr)); +- if ( rc ) +- s->gr[s->gr_index] = val & gr_mask[s->gr_index]; +- break; +- + default: + rc = 0; + break; +diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h +index 19ecf4fd78..6a34ea82f4 100644 +--- a/xen/arch/x86/include/asm/hvm/io.h ++++ b/xen/arch/x86/include/asm/hvm/io.h +@@ -114,7 +114,6 @@ struct hvm_hw_stdvga { + uint8_t sr_index; + uint8_t sr[8]; + uint8_t gr_index; +- uint8_t gr[9]; + struct page_info *vram_page[64]; /* shadow of 0xa0000-0xaffff */ + spinlock_t lock; + }; +-- +2.47.0 + diff --git a/0049-x86-HVM-drop-stdvga-s-sr-struct-member.patch b/0049-x86-HVM-drop-stdvga-s-sr-struct-member.patch new file mode 100644 index 0000000..0f496da --- /dev/null +++ b/0049-x86-HVM-drop-stdvga-s-sr-struct-member.patch @@ -0,0 +1,70 @@ +From efc71abfe609648c647113e82bbf68972b3be348 Mon Sep 17 00:00:00 2001 +From: Jan Beulich <jbeulich@suse.com> +Date: Tue, 12 Nov 2024 13:52:28 +0100 +Subject: [PATCH 49/56] x86/HVM: drop stdvga's "sr[]" struct member + +No consumers are left, hence the producer and the array itself can also +go away. The static sr_mask[] is then orphaned and hence needs dropping, +too. 
+ +This is part of XSA-463 / CVE-2024-45818 + +Signed-off-by: Jan Beulich <jbeulich@suse.com> +Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> +(cherry picked from commit 7aba44bdd78aedb97703811948c3b69ccff85032) +--- + xen/arch/x86/hvm/stdvga.c | 18 ------------------ + xen/arch/x86/include/asm/hvm/io.h | 1 - + 2 files changed, 19 deletions(-) + +diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c +index fa25833caa..5523a441dd 100644 +--- a/xen/arch/x86/hvm/stdvga.c ++++ b/xen/arch/x86/hvm/stdvga.c +@@ -37,18 +37,6 @@ + #define VGA_MEM_BASE 0xa0000 + #define VGA_MEM_SIZE 0x20000 + +-/* force some bits to zero */ +-static const uint8_t sr_mask[8] = { +- (uint8_t)~0xfc, +- (uint8_t)~0xc2, +- (uint8_t)~0xf0, +- (uint8_t)~0xc0, +- (uint8_t)~0xf1, +- (uint8_t)~0xff, +- (uint8_t)~0xff, +- (uint8_t)~0x00, +-}; +- + static int stdvga_outb(uint64_t addr, uint8_t val) + { + struct hvm_hw_stdvga *s = ¤t->domain->arch.hvm.stdvga; +@@ -60,12 +48,6 @@ static int stdvga_outb(uint64_t addr, uint8_t val) + s->sr_index = val; + break; + +- case 0x3c5: /* sequencer data register */ +- rc = (s->sr_index < sizeof(s->sr)); +- if ( rc ) +- s->sr[s->sr_index] = val & sr_mask[s->sr_index] ; +- break; +- + case 0x3ce: /* graphics address register */ + s->gr_index = val; + break; +diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h +index 6a34ea82f4..d8310f0fe4 100644 +--- a/xen/arch/x86/include/asm/hvm/io.h ++++ b/xen/arch/x86/include/asm/hvm/io.h +@@ -112,7 +112,6 @@ struct vpci_arch_msix_entry { + + struct hvm_hw_stdvga { + uint8_t sr_index; +- uint8_t sr[8]; + uint8_t gr_index; + struct page_info *vram_page[64]; /* shadow of 0xa0000-0xaffff */ + spinlock_t lock; +-- +2.47.0 + diff --git a/0050-x86-HVM-drop-stdvga-s-g-s-r_index-struct-members.patch b/0050-x86-HVM-drop-stdvga-s-g-s-r_index-struct-members.patch new file mode 100644 index 0000000..0bc8263 --- /dev/null +++ b/0050-x86-HVM-drop-stdvga-s-g-s-r_index-struct-members.patch @@ -0,0 +1,114 @@ +From 885570c94e6751cca1c92259411797a4cd1f4d71 Mon Sep 17 00:00:00 2001 +From: Jan Beulich <jbeulich@suse.com> +Date: Tue, 12 Nov 2024 13:52:46 +0100 +Subject: [PATCH 50/56] x86/HVM: drop stdvga's "{g,s}r_index" struct members + +No consumers are left, hence the producer and the fields themselves can +also go away. stdvga_outb() is then useless, rendering stdvga_out() +useless as well. Hence the entire I/O port intercept can go away. 
+ +This is part of XSA-463 / CVE-2024-45818 + +Signed-off-by: Jan Beulich <jbeulich@suse.com> +Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> +(cherry picked from commit 86c03372e107f5c18266a62281663861b1144929) +--- + xen/arch/x86/hvm/stdvga.c | 61 ------------------------------- + xen/arch/x86/include/asm/hvm/io.h | 2 - + 2 files changed, 63 deletions(-) + +diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c +index 5523a441dd..155a67a438 100644 +--- a/xen/arch/x86/hvm/stdvga.c ++++ b/xen/arch/x86/hvm/stdvga.c +@@ -37,62 +37,6 @@ + #define VGA_MEM_BASE 0xa0000 + #define VGA_MEM_SIZE 0x20000 + +-static int stdvga_outb(uint64_t addr, uint8_t val) +-{ +- struct hvm_hw_stdvga *s = ¤t->domain->arch.hvm.stdvga; +- int rc = 1; +- +- switch ( addr ) +- { +- case 0x3c4: /* sequencer address register */ +- s->sr_index = val; +- break; +- +- case 0x3ce: /* graphics address register */ +- s->gr_index = val; +- break; +- +- default: +- rc = 0; +- break; +- } +- +- return rc; +-} +- +-static void stdvga_out(uint32_t port, uint32_t bytes, uint32_t val) +-{ +- switch ( bytes ) +- { +- case 1: +- stdvga_outb(port, val); +- break; +- +- case 2: +- stdvga_outb(port + 0, val >> 0); +- stdvga_outb(port + 1, val >> 8); +- break; +- +- default: +- break; +- } +-} +- +-static int cf_check stdvga_intercept_pio( +- int dir, unsigned int port, unsigned int bytes, uint32_t *val) +-{ +- struct hvm_hw_stdvga *s = ¤t->domain->arch.hvm.stdvga; +- +- if ( dir == IOREQ_WRITE ) +- { +- spin_lock(&s->lock); +- stdvga_out(port, bytes, *val); +- spin_unlock(&s->lock); +- } +- +- return X86EMUL_UNHANDLEABLE; /* propagate to external ioemu */ +-} +- + static int cf_check stdvga_mem_read( + const struct hvm_io_handler *handler, uint64_t addr, uint32_t size, + uint64_t *p_data) +@@ -194,11 +138,6 @@ void stdvga_init(struct domain *d) + { + struct hvm_io_handler *handler; + +- /* Sequencer registers. */ +- register_portio_handler(d, 0x3c4, 2, stdvga_intercept_pio); +- /* Graphics registers. */ +- register_portio_handler(d, 0x3ce, 2, stdvga_intercept_pio); +- + /* VGA memory */ + handler = hvm_next_io_handler(d); + +diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h +index d8310f0fe4..ec55c93d2f 100644 +--- a/xen/arch/x86/include/asm/hvm/io.h ++++ b/xen/arch/x86/include/asm/hvm/io.h +@@ -111,8 +111,6 @@ struct vpci_arch_msix_entry { + }; + + struct hvm_hw_stdvga { +- uint8_t sr_index; +- uint8_t gr_index; + struct page_info *vram_page[64]; /* shadow of 0xa0000-0xaffff */ + spinlock_t lock; + }; +-- +2.47.0 + diff --git a/0051-x86-HVM-drop-stdvga-s-vram_page-struct-member.patch b/0051-x86-HVM-drop-stdvga-s-vram_page-struct-member.patch new file mode 100644 index 0000000..5550821 --- /dev/null +++ b/0051-x86-HVM-drop-stdvga-s-vram_page-struct-member.patch @@ -0,0 +1,124 @@ +From 4f8e6602bc45008712f9b7828fe0819d259a2472 Mon Sep 17 00:00:00 2001 +From: Jan Beulich <jbeulich@suse.com> +Date: Tue, 12 Nov 2024 13:53:03 +0100 +Subject: [PATCH 51/56] x86/HVM: drop stdvga's "vram_page[]" struct member + +No uses are left, hence its setup, teardown, and the field itself can +also go away. stdvga_deinit() is then empty and can be dropped as well. 
+ +This is part of XSA-463 / CVE-2024-45818 + +Signed-off-by: Jan Beulich <jbeulich@suse.com> +Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> +(cherry picked from commit 3beb4baf2a0a2eef40d39eb7e6eecbfd36da5d14) +--- + xen/arch/x86/hvm/hvm.c | 2 -- + xen/arch/x86/hvm/stdvga.c | 41 +++---------------------------- + xen/arch/x86/include/asm/hvm/io.h | 2 -- + 3 files changed, 4 insertions(+), 41 deletions(-) + +diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c +index 8f293f4feb..3662e23cb6 100644 +--- a/xen/arch/x86/hvm/hvm.c ++++ b/xen/arch/x86/hvm/hvm.c +@@ -685,7 +685,6 @@ int hvm_domain_initialise(struct domain *d, + return 0; + + fail2: +- stdvga_deinit(d); + vioapic_deinit(d); + fail1: + if ( is_hardware_domain(d) ) +@@ -748,7 +747,6 @@ void hvm_domain_destroy(struct domain *d) + if ( hvm_funcs.domain_destroy ) + alternative_vcall(hvm_funcs.domain_destroy, d); + +- stdvga_deinit(d); + vioapic_deinit(d); + + XFREE(d->arch.hvm.pl_time); +diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c +index 155a67a438..9f308fc896 100644 +--- a/xen/arch/x86/hvm/stdvga.c ++++ b/xen/arch/x86/hvm/stdvga.c +@@ -116,8 +116,7 @@ static const struct hvm_io_ops stdvga_mem_ops = { + void stdvga_init(struct domain *d) + { + struct hvm_hw_stdvga *s = &d->arch.hvm.stdvga; +- struct page_info *pg; +- unsigned int i; ++ struct hvm_io_handler *handler; + + if ( !has_vvga(d) ) + return; +@@ -125,47 +124,15 @@ void stdvga_init(struct domain *d) + memset(s, 0, sizeof(*s)); + spin_lock_init(&s->lock); + +- for ( i = 0; i != ARRAY_SIZE(s->vram_page); i++ ) ++ /* VGA memory */ ++ handler = hvm_next_io_handler(d); ++ if ( handler ) + { +- pg = alloc_domheap_page(d, MEMF_no_owner); +- if ( pg == NULL ) +- break; +- s->vram_page[i] = pg; +- clear_domain_page(page_to_mfn(pg)); +- } +- +- if ( i == ARRAY_SIZE(s->vram_page) ) +- { +- struct hvm_io_handler *handler; +- +- /* VGA memory */ +- handler = hvm_next_io_handler(d); +- +- if ( handler == NULL ) +- return; +- + handler->type = IOREQ_TYPE_COPY; + handler->ops = &stdvga_mem_ops; + } + } + +-void stdvga_deinit(struct domain *d) +-{ +- struct hvm_hw_stdvga *s = &d->arch.hvm.stdvga; +- int i; +- +- if ( !has_vvga(d) ) +- return; +- +- for ( i = 0; i != ARRAY_SIZE(s->vram_page); i++ ) +- { +- if ( s->vram_page[i] == NULL ) +- continue; +- free_domheap_page(s->vram_page[i]); +- s->vram_page[i] = NULL; +- } +-} +- + /* + * Local variables: + * mode: C +diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h +index ec55c93d2f..958077de81 100644 +--- a/xen/arch/x86/include/asm/hvm/io.h ++++ b/xen/arch/x86/include/asm/hvm/io.h +@@ -111,12 +111,10 @@ struct vpci_arch_msix_entry { + }; + + struct hvm_hw_stdvga { +- struct page_info *vram_page[64]; /* shadow of 0xa0000-0xaffff */ + spinlock_t lock; + }; + + void stdvga_init(struct domain *d); +-void stdvga_deinit(struct domain *d); + + extern void hvm_dpci_msi_eoi(struct domain *d, int vector); + +-- +2.47.0 + diff --git a/0052-x86-HVM-drop-stdvga-s-lock-struct-member.patch b/0052-x86-HVM-drop-stdvga-s-lock-struct-member.patch new file mode 100644 index 0000000..44502cb --- /dev/null +++ b/0052-x86-HVM-drop-stdvga-s-lock-struct-member.patch @@ -0,0 +1,119 @@ +From bc5ae1d254ef1da536127d2a232b6c21052f4d92 Mon Sep 17 00:00:00 2001 +From: Jan Beulich <jbeulich@suse.com> +Date: Tue, 12 Nov 2024 13:53:24 +0100 +Subject: [PATCH 52/56] x86/HVM: drop stdvga's "lock" struct member + +No state is left to protect. It being the last field, drop the struct +itself as well. 
Similarly for then ending up empty, drop the .complete +handler. + +This is part of XSA-463 / CVE-2024-45818 + +Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com> +Signed-off-by: Jan Beulich <jbeulich@suse.com> +Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> +(cherry picked from commit b180a50326c8a2c171f37c1940a0fbbdcad4be90) +--- + xen/arch/x86/hvm/stdvga.c | 30 ++------------------------- + xen/arch/x86/include/asm/hvm/domain.h | 1 - + xen/arch/x86/include/asm/hvm/io.h | 4 ---- + 3 files changed, 2 insertions(+), 33 deletions(-) + +diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c +index 9f308fc896..d38d30affb 100644 +--- a/xen/arch/x86/hvm/stdvga.c ++++ b/xen/arch/x86/hvm/stdvga.c +@@ -69,61 +69,35 @@ static int cf_check stdvga_mem_write( + static bool cf_check stdvga_mem_accept( + const struct hvm_io_handler *handler, const ioreq_t *p) + { +- struct hvm_hw_stdvga *s = ¤t->domain->arch.hvm.stdvga; +- +- /* +- * The range check must be done without taking the lock, to avoid +- * deadlock when hvm_mmio_internal() is called from +- * hvm_copy_to/from_guest_phys() in hvm_process_io_intercept(). +- */ + if ( (ioreq_mmio_first_byte(p) < VGA_MEM_BASE) || + (ioreq_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) ) + return 0; + +- spin_lock(&s->lock); +- + if ( p->dir != IOREQ_WRITE || p->data_is_ptr || p->count != 1 ) + { + /* + * Only accept single direct writes, as that's the only thing we can + * accelerate using buffered ioreq handling. + */ +- goto reject; ++ return false; + } + +- /* s->lock intentionally held */ +- return 1; +- +- reject: +- spin_unlock(&s->lock); +- return 0; +-} +- +-static void cf_check stdvga_mem_complete(const struct hvm_io_handler *handler) +-{ +- struct hvm_hw_stdvga *s = ¤t->domain->arch.hvm.stdvga; +- +- spin_unlock(&s->lock); ++ return true; + } + + static const struct hvm_io_ops stdvga_mem_ops = { + .accept = stdvga_mem_accept, + .read = stdvga_mem_read, + .write = stdvga_mem_write, +- .complete = stdvga_mem_complete + }; + + void stdvga_init(struct domain *d) + { +- struct hvm_hw_stdvga *s = &d->arch.hvm.stdvga; + struct hvm_io_handler *handler; + + if ( !has_vvga(d) ) + return; + +- memset(s, 0, sizeof(*s)); +- spin_lock_init(&s->lock); +- + /* VGA memory */ + handler = hvm_next_io_handler(d); + if ( handler ) +diff --git a/xen/arch/x86/include/asm/hvm/domain.h b/xen/arch/x86/include/asm/hvm/domain.h +index dd9d837e84..333501d5f2 100644 +--- a/xen/arch/x86/include/asm/hvm/domain.h ++++ b/xen/arch/x86/include/asm/hvm/domain.h +@@ -72,7 +72,6 @@ struct hvm_domain { + struct hvm_hw_vpic vpic[2]; /* 0=master; 1=slave */ + struct hvm_vioapic **vioapic; + unsigned int nr_vioapics; +- struct hvm_hw_stdvga stdvga; + + /* + * hvm_hw_pmtimer is a publicly-visible name. 
We will defer renaming +diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h +index 958077de81..d123e7c9ed 100644 +--- a/xen/arch/x86/include/asm/hvm/io.h ++++ b/xen/arch/x86/include/asm/hvm/io.h +@@ -110,10 +110,6 @@ struct vpci_arch_msix_entry { + int pirq; + }; + +-struct hvm_hw_stdvga { +- spinlock_t lock; +-}; +- + void stdvga_init(struct domain *d); + + extern void hvm_dpci_msi_eoi(struct domain *d, int vector); +-- +2.47.0 + diff --git a/0053-x86-hvm-Simplify-stdvga_mem_accept-further.patch b/0053-x86-hvm-Simplify-stdvga_mem_accept-further.patch new file mode 100644 index 0000000..5058b5c --- /dev/null +++ b/0053-x86-hvm-Simplify-stdvga_mem_accept-further.patch @@ -0,0 +1,94 @@ +From 20d34c1e82402061b4a0be1b9e504ae55abdc5b6 Mon Sep 17 00:00:00 2001 +From: Andrew Cooper <andrew.cooper3@citrix.com> +Date: Tue, 12 Nov 2024 13:53:40 +0100 +Subject: [PATCH 53/56] x86/hvm: Simplify stdvga_mem_accept() further + +stdvga_mem_accept() is called on almost all IO emulations, and the +overwhelming likely answer is to reject the ioreq. Simply rearranging the +expression yields an improvement: + + add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-57 (-57) + Function old new delta + stdvga_mem_accept 109 52 -57 + +which is best explained looking at the disassembly: + + Before: After: + f3 0f 1e fa endbr64 f3 0f 1e fa endbr64 + 0f b6 4e 1e movzbl 0x1e(%rsi),%ecx | 0f b6 46 1e movzbl 0x1e(%rsi),%eax + 48 8b 16 mov (%rsi),%rdx | 31 d2 xor %edx,%edx + f6 c1 40 test $0x40,%cl | a8 30 test $0x30,%al + 75 38 jne <stdvga_mem_accept+0x48> | 75 23 jne <stdvga_mem_accept+0x31> + 31 c0 xor %eax,%eax < + 48 81 fa ff ff 09 00 cmp $0x9ffff,%rdx < + 76 26 jbe <stdvga_mem_accept+0x41> < + 8b 46 14 mov 0x14(%rsi),%eax < + 8b 7e 10 mov 0x10(%rsi),%edi < + 48 0f af c7 imul %rdi,%rax < + 48 8d 54 02 ff lea -0x1(%rdx,%rax,1),%rdx < + 31 c0 xor %eax,%eax < + 48 81 fa ff ff 0b 00 cmp $0xbffff,%rdx < + 77 0c ja <stdvga_mem_accept+0x41> < + 83 e1 30 and $0x30,%ecx < + 75 07 jne <stdvga_mem_accept+0x41> < + 83 7e 10 01 cmpl $0x1,0x10(%rsi) 83 7e 10 01 cmpl $0x1,0x10(%rsi) + 0f 94 c0 sete %al | 75 1d jne <stdvga_mem_accept+0x31> + c3 ret | 48 8b 0e mov (%rsi),%rcx + 66 0f 1f 44 00 00 nopw 0x0(%rax,%rax,1) | 48 81 f9 ff ff 09 00 cmp $0x9ffff,%rcx + 8b 46 10 mov 0x10(%rsi),%eax | 76 11 jbe <stdvga_mem_accept+0x31> + 8b 7e 14 mov 0x14(%rsi),%edi | 8b 46 14 mov 0x14(%rsi),%eax + 49 89 d0 mov %rdx,%r8 | 48 8d 44 01 ff lea -0x1(%rcx,%rax,1),%rax + 48 83 e8 01 sub $0x1,%rax | 48 3d ff ff 0b 00 cmp $0xbffff,%rax + 48 8d 54 3a ff lea -0x1(%rdx,%rdi,1),%rdx | 0f 96 c2 setbe %dl + 48 0f af c7 imul %rdi,%rax | 89 d0 mov %edx,%eax + 49 29 c0 sub %rax,%r8 < + 31 c0 xor %eax,%eax < + 49 81 f8 ff ff 09 00 cmp $0x9ffff,%r8 < + 77 be ja <stdvga_mem_accept+0x2a> < + c3 ret c3 ret + +By moving the "p->count != 1" check ahead of the +ioreq_mmio_{first,last}_byte() calls, both multiplies disappear along with a +lot of surrounding logic. + +No functional change. 
+ +Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> +Reviewed-by: Jan Beulich <jbeulich@suse.com> +(cherry picked from commit 08ffd8705d36c7c445df3ecee8ad9b8f8d65fbe0) +--- + xen/arch/x86/hvm/stdvga.c | 16 ++++++---------- + 1 file changed, 6 insertions(+), 10 deletions(-) + +diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c +index d38d30affb..c3c43f59ee 100644 +--- a/xen/arch/x86/hvm/stdvga.c ++++ b/xen/arch/x86/hvm/stdvga.c +@@ -69,18 +69,14 @@ static int cf_check stdvga_mem_write( + static bool cf_check stdvga_mem_accept( + const struct hvm_io_handler *handler, const ioreq_t *p) + { +- if ( (ioreq_mmio_first_byte(p) < VGA_MEM_BASE) || ++ /* ++ * Only accept single direct writes, as that's the only thing we can ++ * accelerate using buffered ioreq handling. ++ */ ++ if ( p->dir != IOREQ_WRITE || p->data_is_ptr || p->count != 1 || ++ (ioreq_mmio_first_byte(p) < VGA_MEM_BASE) || + (ioreq_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) ) +- return 0; +- +- if ( p->dir != IOREQ_WRITE || p->data_is_ptr || p->count != 1 ) +- { +- /* +- * Only accept single direct writes, as that's the only thing we can +- * accelerate using buffered ioreq handling. +- */ + return false; +- } + + return true; + } +-- +2.47.0 + diff --git a/0054-libxl-Use-zero-ed-memory-for-PVH-acpi-tables.patch b/0054-libxl-Use-zero-ed-memory-for-PVH-acpi-tables.patch new file mode 100644 index 0000000..2d2b3ec --- /dev/null +++ b/0054-libxl-Use-zero-ed-memory-for-PVH-acpi-tables.patch @@ -0,0 +1,43 @@ +From 5f29c8c89afa7023d8d64a99be0d5b86e9299713 Mon Sep 17 00:00:00 2001 +From: Jason Andryuk <jason.andryuk@amd.com> +Date: Tue, 12 Nov 2024 13:54:00 +0100 +Subject: [PATCH 54/56] libxl: Use zero-ed memory for PVH acpi tables + +xl/libxl memory is leaking into a PVH guest through uninitialized +portions of the ACPI tables. + +Use libxl_zalloc() to obtain zero-ed memory to avoid this issue. + +This is XSA-464 / CVE-2024-45819. + +Signed-off-by: Jason Andryuk <jason.andryuk@amd.com> +Fixes: 14c0d328da2b ("libxl/acpi: Build ACPI tables for HVMlite guests") +Reviewed-by: Jan Beulich <jbeulich@suse.com> +master commit: 0bfe567b58f1182889dea9207103fc9d00baf414 +master date: 2024-11-12 13:32:45 +0100 +--- + tools/libs/light/libxl_x86_acpi.c | 7 ++++--- + 1 file changed, 4 insertions(+), 3 deletions(-) + +diff --git a/tools/libs/light/libxl_x86_acpi.c b/tools/libs/light/libxl_x86_acpi.c +index 5cf261bd67..2574ce2553 100644 +--- a/tools/libs/light/libxl_x86_acpi.c ++++ b/tools/libs/light/libxl_x86_acpi.c +@@ -176,10 +176,11 @@ int libxl__dom_load_acpi(libxl__gc *gc, + goto out; + } + +- config.rsdp = (unsigned long)libxl__malloc(gc, libxl_ctxt.page_size); +- config.infop = (unsigned long)libxl__malloc(gc, libxl_ctxt.page_size); ++ /* These are all copied into guest memory, so use zero-ed memory. 
*/ ++ config.rsdp = (unsigned long)libxl__zalloc(gc, libxl_ctxt.page_size); ++ config.infop = (unsigned long)libxl__zalloc(gc, libxl_ctxt.page_size); + /* Pages to hold ACPI tables */ +- libxl_ctxt.buf = libxl__malloc(gc, NUM_ACPI_PAGES * ++ libxl_ctxt.buf = libxl__zalloc(gc, NUM_ACPI_PAGES * + libxl_ctxt.page_size); + + /* +-- +2.47.0 + diff --git a/0055-x86-io-apic-fix-directed-EOI-when-using-AMD-Vi-inter.patch b/0055-x86-io-apic-fix-directed-EOI-when-using-AMD-Vi-inter.patch new file mode 100644 index 0000000..a0cd2e1 --- /dev/null +++ b/0055-x86-io-apic-fix-directed-EOI-when-using-AMD-Vi-inter.patch @@ -0,0 +1,160 @@ +From 193126757d0fd4f36b10894504e51863cab462f9 Mon Sep 17 00:00:00 2001 +From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com> +Date: Tue, 12 Nov 2024 13:54:41 +0100 +Subject: [PATCH 55/56] x86/io-apic: fix directed EOI when using AMD-Vi + interrupt remapping +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +When using AMD-Vi interrupt remapping the vector field in the IO-APIC RTE is +repurposed to contain part of the offset into the remapping table. Previous to +2ca9fbd739b8 Xen had logic so that the offset into the interrupt remapping +table would match the vector. Such logic was mandatory for end of interrupt to +work, since the vector field (even when not containing a vector) is used by the +IO-APIC to find for which pin the EOI must be performed. + +A simple solution would be to read the IO-APIC RTE each time an EOI is to be +performed, so the raw value of the vector field can be obtained. However +that's likely to perform poorly. Instead introduce a cache to store the +EOI handles when using interrupt remapping, so that the IO-APIC driver can +translate pins into EOI handles without having to read the IO-APIC RTE entry. +Note that to simplify the logic such cache is used unconditionally when +interrupt remapping is enabled, even if strictly it would only be required +for AMD-Vi. + +Reported-by: Willi Junga <xenproject@ymy.be> +Suggested-by: David Woodhouse <dwmw@amazon.co.uk> +Fixes: 2ca9fbd739b8 ('AMD IOMMU: allocate IRTE entries instead of using a static mapping') +Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> +Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> +Reviewed-by: Jan Beulich <jbeulich@suse.com> +master commit: 86001b3970fea4536048607ea6e12541736c48e1 +master date: 2024-11-05 10:36:53 +0000 +--- + xen/arch/x86/io_apic.c | 75 +++++++++++++++++++++++++++++++++++++++--- + 1 file changed, 70 insertions(+), 5 deletions(-) + +diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c +index f7591fd091..836af62061 100644 +--- a/xen/arch/x86/io_apic.c ++++ b/xen/arch/x86/io_apic.c +@@ -71,6 +71,24 @@ static int apic_pin_2_gsi_irq(int apic, int pin); + + static vmask_t *__read_mostly vector_map[MAX_IO_APICS]; + ++/* ++ * Store the EOI handle when using interrupt remapping. ++ * ++ * If using AMD-Vi interrupt remapping the IO-APIC redirection entry remapped ++ * format repurposes the vector field to store the offset into the Interrupt ++ * Remap table. This breaks directed EOI, as the CPU vector no longer matches ++ * the contents of the RTE vector field. Add a translation cache so that ++ * directed EOI uses the value in the RTE vector field when interrupt remapping ++ * is enabled.
++ * ++ * Intel VT-d Xen code still stores the CPU vector in the RTE vector field when ++ * using the remapped format, but use the translation cache uniformly in order ++ * to avoid extra logic to differentiate between VT-d and AMD-Vi. ++ * ++ * The matrix is accessed as [#io-apic][#pin]. ++ */ ++static uint8_t **__ro_after_init io_apic_pin_eoi; ++ + static void share_vector_maps(unsigned int src, unsigned int dst) + { + unsigned int pin; +@@ -273,6 +291,17 @@ void __ioapic_write_entry( + { + __io_apic_write(apic, 0x11 + 2 * pin, eu.w2); + __io_apic_write(apic, 0x10 + 2 * pin, eu.w1); ++ /* ++ * Might be called before io_apic_pin_eoi is allocated. Entry will be ++ * initialized to the RTE value once the cache is allocated. ++ * ++ * The vector field is only cached for raw RTE writes when using IR. ++ * In that case the vector field might have been repurposed to store ++ * something different than the CPU vector, and hence need to be cached ++ * for performing EOI. ++ */ ++ if ( io_apic_pin_eoi ) ++ io_apic_pin_eoi[apic][pin] = e.vector; + } + else + iommu_update_ire_from_apic(apic, pin, e.raw); +@@ -288,18 +317,36 @@ static void ioapic_write_entry( + spin_unlock_irqrestore(&ioapic_lock, flags); + } + +-/* EOI an IO-APIC entry. Vector may be -1, indicating that it should be ++/* ++ * EOI an IO-APIC entry. Vector may be -1, indicating that it should be + * worked out using the pin. This function expects that the ioapic_lock is + * being held, and interrupts are disabled (or there is a good reason not + * to), and that if both pin and vector are passed, that they refer to the +- * same redirection entry in the IO-APIC. */ ++ * same redirection entry in the IO-APIC. ++ * ++ * If using Interrupt Remapping the vector is always ignored because the RTE ++ * remapping format might have repurposed the vector field and a cached value ++ * of the EOI handle to use is obtained based on the provided apic and pin ++ * values. ++ */ + static void __io_apic_eoi(unsigned int apic, unsigned int vector, unsigned int pin) + { + /* Prefer the use of the EOI register if available */ + if ( ioapic_has_eoi_reg(apic) ) + { +- /* If vector is unknown, read it from the IO-APIC */ +- if ( vector == IRQ_VECTOR_UNASSIGNED ) ++ if ( io_apic_pin_eoi ) ++ /* ++ * If the EOI handle is cached use it. When using AMD-Vi IR the CPU ++ * vector no longer matches the vector field in the RTE, because ++ * the RTE remapping format repurposes the field. ++ * ++ * The value in the RTE vector field must always be used to signal ++ * which RTE to EOI, hence use the cached value which always ++ * mirrors the contents of the raw RTE vector field. 
++ */ ++ vector = io_apic_pin_eoi[apic][pin]; ++ else if ( vector == IRQ_VECTOR_UNASSIGNED ) ++ /* If vector is unknown, read it from the IO-APIC */ + vector = __ioapic_read_entry(apic, pin, true).vector; + + *(IO_APIC_BASE(apic)+16) = vector; +@@ -1298,12 +1345,30 @@ void __init enable_IO_APIC(void) + vector_map[apic] = vector_map[0]; + } + ++ if ( iommu_intremap != iommu_intremap_off ) ++ { ++ io_apic_pin_eoi = xmalloc_array(typeof(*io_apic_pin_eoi), nr_ioapics); ++ BUG_ON(!io_apic_pin_eoi); ++ } ++ + for(apic = 0; apic < nr_ioapics; apic++) { + int pin; +- /* See if any of the pins is in ExtINT mode */ ++ ++ if ( io_apic_pin_eoi ) ++ { ++ io_apic_pin_eoi[apic] = xmalloc_array(typeof(**io_apic_pin_eoi), ++ nr_ioapic_entries[apic]); ++ BUG_ON(!io_apic_pin_eoi[apic]); ++ } ++ ++ /* See if any of the pins is in ExtINT mode and cache EOI handle */ + for (pin = 0; pin < nr_ioapic_entries[apic]; pin++) { + struct IO_APIC_route_entry entry = ioapic_read_entry(apic, pin, false); + ++ if ( io_apic_pin_eoi ) ++ io_apic_pin_eoi[apic][pin] = ++ ioapic_read_entry(apic, pin, true).vector; ++ + /* If the interrupt line is enabled and in ExtInt mode + * I have found the pin where the i8259 is connected. + */ +-- +2.47.0 + diff --git a/0056-xen-x86-prevent-addition-of-.note.gnu.property-if-li.patch b/0056-xen-x86-prevent-addition-of-.note.gnu.property-if-li.patch new file mode 100644 index 0000000..49d840f --- /dev/null +++ b/0056-xen-x86-prevent-addition-of-.note.gnu.property-if-li.patch @@ -0,0 +1,46 @@ +From 1cbeb625a3551ad7e3184f9713875b584552df9b Mon Sep 17 00:00:00 2001 +From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com> +Date: Tue, 12 Nov 2024 13:54:56 +0100 +Subject: [PATCH 56/56] xen/x86: prevent addition of .note.gnu.property if + livepatch is enabled +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +GNU assembly that supports such feature will unconditionally add a +.note.gnu.property section to object files. The content of that section can +change depending on the generated instructions. The current logic in +livepatch-build-tools doesn't know how to deal with such section changing +as a result of applying a patch and rebuilding. + +Since .note.gnu.property is not consumed by the Xen build, suppress its +addition when livepatch support is enabled. + +Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> +Reviewed-by: Jan Beulich <jbeulich@suse.com> +master commit: 718400a54dcfcc8a11958a6d953168f50944f002 +master date: 2024-11-11 13:19:45 +0100 +--- + xen/arch/x86/arch.mk | 6 ++++++ + 1 file changed, 6 insertions(+) + +diff --git a/xen/arch/x86/arch.mk b/xen/arch/x86/arch.mk +index 751fd8d410..aa55f54f69 100644 +--- a/xen/arch/x86/arch.mk ++++ b/xen/arch/x86/arch.mk +@@ -46,6 +46,12 @@ CFLAGS-$(CONFIG_CC_IS_GCC) += -fno-jump-tables + CFLAGS-$(CONFIG_CC_IS_CLANG) += -mretpoline-external-thunk + endif + ++# Disable the addition of a .note.gnu.property section to object files when ++# livepatch support is enabled. The contents of that section can change ++# depending on the instructions used, and livepatch-build-tools doesn't know ++# how to deal with such changes. 
++$(call cc-option-add,CFLAGS-$(CONFIG_LIVEPATCH),CC,-Wa$$(comma)-mx86-used-note=no) ++ + ifdef CONFIG_XEN_IBT + # Force -fno-jump-tables to work around + # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104816 +-- +2.47.0 + @@ -1,6 +1,6 @@ -Xen upstream patchset #0 for 4.18.4-pre +Xen upstream patchset #1 for 4.18.4-pre Containing patches from RELEASE-4.18.3 (6298a1d3e864d2e5a68e67034d689e3160f36987) to -staging-4.18 (2c5f888204d988110fee9823b102f433c6212d9d) +staging-4.18 (1cbeb625a3551ad7e3184f9713875b584552df9b) |
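
As a standalone illustration (not code from any patch above, and not Xen code): the EOI-handle cache introduced by 0055-x86-io-apic-fix-directed-EOI-when-using-AMD-Vi-inter.patch is, at its core, a per-IO-APIC, per-pin table holding the raw RTE vector field, filled when an RTE is written or first read and consulted on directed EOI instead of re-reading the RTE. The minimal sketch below models only that idea; NR_IOAPICS, NR_PINS, read_rte_vector() and directed_eoi() are made-up names, and plain calloc() stands in for Xen's xmalloc_array().

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_IOAPICS 2    /* hypothetical sizes, for illustration only */
#define NR_PINS    24

/* Per-(IO-APIC, pin) cache of the raw RTE vector field. */
static uint8_t *pin_eoi[NR_IOAPICS];

/* Stand-in for reading the raw vector field of an RTE. */
static uint8_t read_rte_vector(unsigned int apic, unsigned int pin)
{
    /* Pretend the remapped RTE holds an IRTE offset, not a CPU vector. */
    return (uint8_t)(apic * NR_PINS + pin);
}

static void cache_init(void)
{
    for (unsigned int apic = 0; apic < NR_IOAPICS; apic++)
    {
        pin_eoi[apic] = calloc(NR_PINS, sizeof(*pin_eoi[apic]));
        if (!pin_eoi[apic])
            exit(EXIT_FAILURE);
        for (unsigned int pin = 0; pin < NR_PINS; pin++)
            pin_eoi[apic][pin] = read_rte_vector(apic, pin);
    }
}

/* Directed EOI: signal the cached handle, never the CPU vector. */
static void directed_eoi(unsigned int apic, unsigned int pin)
{
    printf("EOI io-apic %u pin %u -> handle %#x\n",
           apic, pin, (unsigned int)pin_eoi[apic][pin]);
}

int main(void)
{
    cache_init();
    directed_eoi(0, 3);
    directed_eoi(1, 17);
    return 0;
}

The real patch additionally allocates the cache only when interrupt remapping is enabled and refreshes the cached value on every raw RTE write; the sketch leaves those parts out.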