GitHub messages for voidlinux
From: CMB <CMB@users.noreply.github.com>
To: ml@inbox.vuxu.org
Subject: Re: [PR PATCH] [Updated] xen: update to 4.14.2.
Date: Sun, 27 Jun 2021 03:14:10 +0200	[thread overview]
Message-ID: <20210627011410.9GkX_2v0mh27UYP_G5d6z0xC-kIIbjSKeZbjSBGwSvY@z> (raw)
In-Reply-To: <gh-mailinglist-notifications-41a7ca26-5023-4802-975b-f1789d68868e-void-packages-30599@inbox.vuxu.org>

[-- Attachment #1: Type: text/plain, Size: 1588 bytes --]

There is an updated pull request by CMB against master on the void-packages repository.

https://github.com/CMB/void-packages xen-update
https://github.com/void-linux/void-packages/pull/30599

xen: update to 4.14.2.
- xen: update to 4.14.2.

<!-- Mark items with [x] where applicable -->

#### General
- [ ] This is a new package and it conforms to the [quality requirements](https://github.com/void-linux/void-packages/blob/master/Manual.md#quality-requirements)

#### Have the results of the proposed changes been tested?
- [x] I use the packages affected by the proposed changes on a regular basis and confirm this PR works for me
- [ ] I generally don't use the affected packages but briefly tested this PR

<!--
If GitHub CI cannot be used to validate the build result (for example, if the
build is likely to take several hours), make sure to
[skip CI](https://github.com/void-linux/void-packages/blob/master/CONTRIBUTING.md#continuous-integration).
When skipping CI, uncomment and fill out the following section.
Note: for builds that are likely to complete in less than 2 hours, it is not
acceptable to skip CI.
-->
<!-- 
#### Does it build and run successfully? 
(Please choose at least one native build and, if supported, at least one cross build. More are better.)
- [x] I built this PR locally for my native architecture (x86_64-musl)
- [ ] I built this PR locally for these architectures (if supported. mark crossbuilds):
  - [ ] aarch64-musl
  - [ ] armv7l
  - [ ] armv6l-musl
-->


A patch file from https://github.com/void-linux/void-packages/pull/30599.patch is attached
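
To test the changes locally, the attached mbox applies with stock git
tooling -- a sketch, assuming a void-packages checkout and the
attachment saved under the name it carries below:

    git am github-pr-xen-update-30599.patch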

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: github-pr-xen-update-30599.patch --]
[-- Type: text/x-diff, Size: 44148 bytes --]

From eeb67eae85dbd875b62de7bb1ed5098445c95b7e Mon Sep 17 00:00:00 2001
From: Christopher Brannon <chris@the-brannons.com>
Date: Wed, 17 Feb 2021 19:09:50 -0800
Subject: [PATCH 1/2] xen: prevent unwanted service start spam, support
 oxenstored in service.

This should fix issue 18676 (xen services spamming the user when not
booted into Xen).

Make the xenstored implementation selectable.  There are two
implementations shipped by Xen and in our Xen package: the original
one written in C (the binary is just called xenstored) and a newer
one in OCaml (oxenstored).  oxenstored is -- in my experience -- more
reliable.  Make it configurable in ./conf, but leave the default
as-is.
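
A minimal sketch of that conf file (sourced by the run script from the
service directory, e.g. /etc/sv/xenstored/conf on a stock runit setup),
switching to oxenstored:

    # ./conf -- sourced by the xenstored run script when readable
    XENSTORED=oxenstored
    # Optional extra flags; ./run always appends --no-fork itself.
    #XENSTORED_ARGS=""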

Add a patch to fix failing to build from source on musl with newer
argp-standalone.
---
 srcpkgs/xen/files/xen.conf                |  1 +
 srcpkgs/xen/files/xen/run                 |  9 +++++----
 srcpkgs/xen/files/xenconsoled/check       |  6 ++++++
 srcpkgs/xen/files/xenconsoled/log/run     |  2 ++
 srcpkgs/xen/files/xenconsoled/run         |  9 ++++++---
 srcpkgs/xen/files/xenstored/check         |  7 +++++++
 srcpkgs/xen/files/xenstored/log/run       |  2 ++
 srcpkgs/xen/files/xenstored/run           | 13 +++++++++----
 srcpkgs/xen/patches/argp-standalone.patch | 15 +++++++++++++++
 srcpkgs/xen/template                      |  2 +-
 10 files changed, 54 insertions(+), 12 deletions(-)
 create mode 100755 srcpkgs/xen/files/xenconsoled/check
 create mode 100755 srcpkgs/xen/files/xenconsoled/log/run
 create mode 100755 srcpkgs/xen/files/xenstored/check
 create mode 100755 srcpkgs/xen/files/xenstored/log/run
 create mode 100644 srcpkgs/xen/patches/argp-standalone.patch

diff --git a/srcpkgs/xen/files/xen.conf b/srcpkgs/xen/files/xen.conf
index 4fbf6c96bc8e..95b777b20a1d 100644
--- a/srcpkgs/xen/files/xen.conf
+++ b/srcpkgs/xen/files/xen.conf
@@ -1,3 +1,4 @@
+xenfs
 xen-evtchn
 xen-gntdev
 xen-gntalloc
diff --git a/srcpkgs/xen/files/xen/run b/srcpkgs/xen/files/xen/run
index b35a945d1bec..e5e120f54dd6 100755
--- a/srcpkgs/xen/files/xen/run
+++ b/srcpkgs/xen/files/xen/run
@@ -1,5 +1,6 @@
-#!/bin/sh
-sv check xenconsoled >/dev/null || exit 1
-xenstore-write "/local/domain/0/domid" 0 || exit 1
-xenstore-write "/local/domain/0/name" "Domain-0" || exit 1
+#!/bin/sh -e
+sv check xenstored >/dev/null
+sv check xenconsoled >/dev/null
+xenstore-write "/local/domain/0/domid" 0
+xenstore-write "/local/domain/0/name" "Domain-0"
 exec chpst -b xen pause
diff --git a/srcpkgs/xen/files/xenconsoled/check b/srcpkgs/xen/files/xenconsoled/check
new file mode 100755
index 000000000000..7b7dac21bc57
--- /dev/null
+++ b/srcpkgs/xen/files/xenconsoled/check
@@ -0,0 +1,6 @@
+#!/bin/sh
+exec > /dev/null
+exec 2>&1
+
+# Exit 0 if the xenconsoled lock is taken (e.g., by a running xenconsoled).
+! chpst -L /run/xen/xenconsoled-running.lck true
diff --git a/srcpkgs/xen/files/xenconsoled/log/run b/srcpkgs/xen/files/xenconsoled/log/run
new file mode 100755
index 000000000000..da2b3ba7d791
--- /dev/null
+++ b/srcpkgs/xen/files/xenconsoled/log/run
@@ -0,0 +1,2 @@
+#!/bin/sh
+exec logger -p daemon.notice -t xenconsoled
diff --git a/srcpkgs/xen/files/xenconsoled/run b/srcpkgs/xen/files/xenconsoled/run
index bf13989cdb95..80430906cc2a 100755
--- a/srcpkgs/xen/files/xenconsoled/run
+++ b/srcpkgs/xen/files/xenconsoled/run
@@ -1,4 +1,7 @@
-#!/bin/sh
-sv check xenstored >/dev/null || exit 1
+#!/bin/sh -e
+exec 2>&1
+sv check xenstored >/dev/null
+
+# xenconsoled writes per-domU logs and hypervisor.log here:
 mkdir -p /var/log/xen/console
-exec xenconsoled -i --log=all
+exec chpst -L /run/xen/xenconsoled-running.lck xenconsoled -i --log=all
diff --git a/srcpkgs/xen/files/xenstored/check b/srcpkgs/xen/files/xenstored/check
new file mode 100755
index 000000000000..473379232a42
--- /dev/null
+++ b/srcpkgs/xen/files/xenstored/check
@@ -0,0 +1,7 @@
+#!/bin/sh
+exec > /dev/null
+exec 2>&1
+
+# If this is a dom0 and the root key in xenstore exists, then xenstored
+# must be running.
+grep -q control_d /proc/xen/capabilities && /usr/bin/xenstore-exists /
diff --git a/srcpkgs/xen/files/xenstored/log/run b/srcpkgs/xen/files/xenstored/log/run
new file mode 100755
index 000000000000..bab732f16ce6
--- /dev/null
+++ b/srcpkgs/xen/files/xenstored/log/run
@@ -0,0 +1,2 @@
+#!/bin/sh
+exec logger -p daemon.notice -t xenstored
diff --git a/srcpkgs/xen/files/xenstored/run b/srcpkgs/xen/files/xenstored/run
index f30d9adefaa4..e3171d2f0e35 100755
--- a/srcpkgs/xen/files/xenstored/run
+++ b/srcpkgs/xen/files/xenstored/run
@@ -1,7 +1,12 @@
-#!/bin/sh
+#!/bin/sh -e
+exec 2>&1
 [ ! -d /run/xen ] && mkdir -p /run/xen
-modprobe -q xen-evtchn xen-gnttalloc || exit 1
 mountpoint -q /proc/xen || mount -t xenfs xenfs /proc/xen
+grep -q control_d /proc/xen/capabilities
 mountpoint -q /var/lib/xenstored || mount -t tmpfs xenstored /var/lib/xenstored
-grep -q control_d /proc/xen/capabilities || exit 1
-exec xenstored --verbose --no-fork
+
+[ -r conf ] && . ./conf
+XENSTORED="${XENSTORED:-xenstored}"
+XENSTORED_ARGS="${XENSTORED_ARGS} --no-fork"
+
+exec "${XENSTORED}" ${XENSTORED_ARGS}
diff --git a/srcpkgs/xen/patches/argp-standalone.patch b/srcpkgs/xen/patches/argp-standalone.patch
new file mode 100644
index 000000000000..5ff66ce732f8
--- /dev/null
+++ b/srcpkgs/xen/patches/argp-standalone.patch
@@ -0,0 +1,15 @@
+Fix build with recent argp-standalone.
+I'll try and get this upstreamed so we don't have to carry it.
+
+diff -Nur xen-4.14.1.orig/tools/configure.ac xen-4.14.1/tools/configure.ac
+--- xen-4.14.1.orig/tools/configure.ac	2020-12-17 16:47:25.000000000 +0000
++++ xen-4.14.1/tools/configure.ac	2021-02-20 19:44:33.618002472 +0000
+@@ -426,7 +426,7 @@
+ AC_CHECK_LIB([iconv], [libiconv_open], [libiconv="y"], [libiconv="n"])
+ AC_SUBST(libiconv)
+ AC_CHECK_HEADER([argp.h], [
+-AC_CHECK_LIB([argp], [argp_usage], [argp_ldflags="-largp"])
++AC_CHECK_LIB([argp], [argp_parse], [argp_ldflags="-largp"])
+ ], [AC_MSG_ERROR([Could not find argp])])
+ AC_SUBST(argp_ldflags)
+ 
diff --git a/srcpkgs/xen/template b/srcpkgs/xen/template
index d916f29ac384..8a8bd995f532 100644
--- a/srcpkgs/xen/template
+++ b/srcpkgs/xen/template
@@ -1,7 +1,7 @@
 # Template file for 'xen'
 pkgname=xen
 version=4.14.1
-revision=2
+revision=3
 # grep -R IPXE_GIT_TAG src/xen-*/tools/firmware/etherboot
 _git_tag_ipxe=4bd064de239dab2426b31c9789a1f4d78087dc63
 # TODO: arm / aarch64

From a1a011be072ad3ef9a8157d1368e4b5d51311935 Mon Sep 17 00:00:00 2001
From: Christopher Brannon <chris@the-brannons.com>
Date: Wed, 28 Apr 2021 04:36:32 -0700
Subject: [PATCH 2/2] xen: update to 4.14.2.

And apply patches for XSAs.

Also orphan the package, since I haven't been a responsible maintainer
and I do not have a compelling reason to use it.
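
The XSA patches simply land in srcpkgs/xen/patches/, where xbps-src
picks them up automatically at build time.  A sketch of a local
verification build (hypothetical session, run from the top of a
void-packages checkout):

    ./xbps-src binary-bootstrap   # once, if no masterdir exists yet
    ./xbps-src pkg xen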
---
 srcpkgs/xen/patches/argp-standalone.patch |   6 +-
 srcpkgs/xen/patches/xsa360-4.14.patch     |  97 -------------
 srcpkgs/xen/patches/xsa373-4.14-1.patch   | 120 ++++++++++++++++
 srcpkgs/xen/patches/xsa373-4.14-2.patch   | 102 ++++++++++++++
 srcpkgs/xen/patches/xsa373-4.14-3.patch   | 163 ++++++++++++++++++++++
 srcpkgs/xen/patches/xsa373-4.14-4.patch   |  81 +++++++++++
 srcpkgs/xen/patches/xsa373-4.14-5.patch   | 143 +++++++++++++++++++
 srcpkgs/xen/patches/xsa375.patch          |  50 +++++++
 srcpkgs/xen/patches/xsa377.patch          |  27 ++++
 srcpkgs/xen/template                      |  16 ++-
 10 files changed, 698 insertions(+), 107 deletions(-)
 delete mode 100644 srcpkgs/xen/patches/xsa360-4.14.patch
 create mode 100644 srcpkgs/xen/patches/xsa373-4.14-1.patch
 create mode 100644 srcpkgs/xen/patches/xsa373-4.14-2.patch
 create mode 100644 srcpkgs/xen/patches/xsa373-4.14-3.patch
 create mode 100644 srcpkgs/xen/patches/xsa373-4.14-4.patch
 create mode 100644 srcpkgs/xen/patches/xsa373-4.14-5.patch
 create mode 100644 srcpkgs/xen/patches/xsa375.patch
 create mode 100644 srcpkgs/xen/patches/xsa377.patch

diff --git a/srcpkgs/xen/patches/argp-standalone.patch b/srcpkgs/xen/patches/argp-standalone.patch
index 5ff66ce732f8..5163771e9450 100644
--- a/srcpkgs/xen/patches/argp-standalone.patch
+++ b/srcpkgs/xen/patches/argp-standalone.patch
@@ -1,9 +1,9 @@
 Fix build with recent argp-standalone.
 I'll try and get this upstreamed so we don't have to carry it.
 
-diff -Nur xen-4.14.1.orig/tools/configure.ac xen-4.14.1/tools/configure.ac
---- xen-4.14.1.orig/tools/configure.ac	2020-12-17 16:47:25.000000000 +0000
-+++ xen-4.14.1/tools/configure.ac	2021-02-20 19:44:33.618002472 +0000
+diff -Nur xen-4.14.2.orig/tools/configure.ac xen-4.14.2/tools/configure.ac
+--- xen-4.14.2.orig/tools/configure.ac	2020-12-17 16:47:25.000000000 +0000
++++ xen-4.14.2/tools/configure.ac	2021-02-20 19:44:33.618002472 +0000
 @@ -426,7 +426,7 @@
  AC_CHECK_LIB([iconv], [libiconv_open], [libiconv="y"], [libiconv="n"])
  AC_SUBST(libiconv)
diff --git a/srcpkgs/xen/patches/xsa360-4.14.patch b/srcpkgs/xen/patches/xsa360-4.14.patch
deleted file mode 100644
index 1bc185b110dc..000000000000
--- a/srcpkgs/xen/patches/xsa360-4.14.patch
+++ /dev/null
@@ -1,97 +0,0 @@
-From: Roger Pau Monne <roger.pau@citrix.com>
-Subject: x86/dpci: do not remove pirqs from domain tree on unbind
-
-A fix for a previous issue removed the pirqs from the domain tree when
-they are unbound in order to prevent shared pirqs from triggering a
-BUG_ON in __pirq_guest_unbind if they are unbound multiple times. That
-caused free_domain_pirqs to no longer unmap the pirqs because they
-are gone from the domain pirq tree, thus leaving stale unbound pirqs
-after domain destruction if the domain had mapped dpci pirqs after
-shutdown.
-
-Take a different approach to fix the original issue, instead of
-removing the pirq from d->pirq_tree clear the flags of the dpci pirq
-struct to signal that the pirq is now unbound. This prevents calling
-pirq_guest_unbind multiple times for the same pirq without having to
-remove it from the domain pirq tree.
-
-This is XSA-360.
-
-Fixes: 5b58dad089 ('x86/pass-through: avoid double IRQ unbind during domain cleanup')
-Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
-Reviewed-by: Jan Beulich <jbeulich@suse.com>
-
---- a/xen/arch/x86/irq.c
-+++ b/xen/arch/x86/irq.c
-@@ -1331,7 +1331,7 @@ void (pirq_cleanup_check)(struct pirq *p
-     }
- 
-     if ( radix_tree_delete(&d->pirq_tree, pirq->pirq) != pirq )
--        BUG_ON(!d->is_dying);
-+        BUG();
- }
- 
- /* Flush all ready EOIs from the top of this CPU's pending-EOI stack. */
---- a/xen/drivers/passthrough/pci.c
-+++ b/xen/drivers/passthrough/pci.c
-@@ -862,6 +862,10 @@ static int pci_clean_dpci_irq(struct dom
- {
-     struct dev_intx_gsi_link *digl, *tmp;
- 
-+    if ( !pirq_dpci->flags )
-+        /* Already processed. */
-+        return 0;
-+
-     pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
- 
-     if ( pt_irq_need_timer(pirq_dpci->flags) )
-@@ -872,15 +876,10 @@ static int pci_clean_dpci_irq(struct dom
-         list_del(&digl->list);
-         xfree(digl);
-     }
-+    /* Note the pirq is now unbound. */
-+    pirq_dpci->flags = 0;
- 
--    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
--
--    if ( !pt_pirq_softirq_active(pirq_dpci) )
--        return 0;
--
--    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
--
--    return -ERESTART;
-+    return pt_pirq_softirq_active(pirq_dpci) ? -ERESTART : 0;
- }
- 
- static int pci_clean_dpci_irqs(struct domain *d)
-@@ -897,18 +896,8 @@ static int pci_clean_dpci_irqs(struct do
-     hvm_irq_dpci = domain_get_irq_dpci(d);
-     if ( hvm_irq_dpci != NULL )
-     {
--        int ret = 0;
--
--        if ( hvm_irq_dpci->pending_pirq_dpci )
--        {
--            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
--                 ret = -ERESTART;
--            else
--                 hvm_irq_dpci->pending_pirq_dpci = NULL;
--        }
-+        int ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
- 
--        if ( !ret )
--            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
-         if ( ret )
-         {
-             spin_unlock(&d->event_lock);
---- a/xen/include/asm-x86/hvm/irq.h
-+++ b/xen/include/asm-x86/hvm/irq.h
-@@ -160,8 +160,6 @@ struct hvm_irq_dpci {
-     DECLARE_BITMAP(isairq_map, NR_ISAIRQS);
-     /* Record of mapped Links */
-     uint8_t link_cnt[NR_LINK];
--    /* Clean up: Entry with a softirq invocation pending / in progress. */
--    struct hvm_pirq_dpci *pending_pirq_dpci;
- };
- 
- /* Machine IRQ to guest device/intx mapping. */
diff --git a/srcpkgs/xen/patches/xsa373-4.14-1.patch b/srcpkgs/xen/patches/xsa373-4.14-1.patch
new file mode 100644
index 000000000000..ee5229a11c42
--- /dev/null
+++ b/srcpkgs/xen/patches/xsa373-4.14-1.patch
@@ -0,0 +1,120 @@
+From: Jan Beulich <jbeulich@suse.com>
+Subject: VT-d: size qinval queue dynamically
+
+With the present synchronous model, we need two slots for every
+operation (the operation itself and a wait descriptor).  There can be
+one such pair of requests pending per CPU. To ensure that under all
+normal circumstances a slot is always available when one is requested,
+size the queue ring according to the number of present CPUs.
+
+This is part of XSA-373 / CVE-2021-28692.
+
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+Reviewed-by: Paul Durrant <paul@xen.org>
+
+--- a/xen/drivers/passthrough/vtd/iommu.h
++++ b/xen/drivers/passthrough/vtd/iommu.h
+@@ -450,17 +450,9 @@ struct qinval_entry {
+     }q;
+ };
+ 
+-/* Order of queue invalidation pages(max is 8) */
+-#define QINVAL_PAGE_ORDER   2
+-
+-#define QINVAL_ARCH_PAGE_ORDER  (QINVAL_PAGE_ORDER + PAGE_SHIFT_4K - PAGE_SHIFT)
+-#define QINVAL_ARCH_PAGE_NR     ( QINVAL_ARCH_PAGE_ORDER < 0 ?  \
+-                                1 :                             \
+-                                1 << QINVAL_ARCH_PAGE_ORDER )
+-
+ /* Each entry is 16 bytes, so 2^8 entries per page */
+ #define QINVAL_ENTRY_ORDER  ( PAGE_SHIFT - 4 )
+-#define QINVAL_ENTRY_NR     (1 << (QINVAL_PAGE_ORDER + 8))
++#define QINVAL_MAX_ENTRY_NR (1u << (7 + QINVAL_ENTRY_ORDER))
+ 
+ /* Status data flag */
+ #define QINVAL_STAT_INIT  0
+--- a/xen/drivers/passthrough/vtd/qinval.c
++++ b/xen/drivers/passthrough/vtd/qinval.c
+@@ -31,6 +31,9 @@
+ 
+ #define VTD_QI_TIMEOUT	1
+ 
++static unsigned int __read_mostly qi_pg_order;
++static unsigned int __read_mostly qi_entry_nr;
++
+ static int __must_check invalidate_sync(struct vtd_iommu *iommu);
+ 
+ static void print_qi_regs(struct vtd_iommu *iommu)
+@@ -55,7 +58,7 @@ static unsigned int qinval_next_index(st
+     tail >>= QINVAL_INDEX_SHIFT;
+ 
+     /* (tail+1 == head) indicates a full queue, wait for HW */
+-    while ( ( tail + 1 ) % QINVAL_ENTRY_NR ==
++    while ( ((tail + 1) & (qi_entry_nr - 1)) ==
+             ( dmar_readq(iommu->reg, DMAR_IQH_REG) >> QINVAL_INDEX_SHIFT ) )
+         cpu_relax();
+ 
+@@ -68,7 +71,7 @@ static void qinval_update_qtail(struct v
+ 
+     /* Need hold register lock when update tail */
+     ASSERT( spin_is_locked(&iommu->register_lock) );
+-    val = (index + 1) % QINVAL_ENTRY_NR;
++    val = (index + 1) & (qi_entry_nr - 1);
+     dmar_writeq(iommu->reg, DMAR_IQT_REG, (val << QINVAL_INDEX_SHIFT));
+ }
+ 
+@@ -403,8 +406,28 @@ int enable_qinval(struct vtd_iommu *iomm
+ 
+     if ( iommu->qinval_maddr == 0 )
+     {
+-        iommu->qinval_maddr = alloc_pgtable_maddr(QINVAL_ARCH_PAGE_NR,
+-                                                  iommu->node);
++        if ( !qi_entry_nr )
++        {
++            /*
++             * With the present synchronous model, we need two slots for every
++             * operation (the operation itself and a wait descriptor).  There
++             * can be one such pair of requests pending per CPU.  One extra
++             * entry is needed as the ring is considered full when there's
++             * only one entry left.
++             */
++            BUILD_BUG_ON(CONFIG_NR_CPUS * 2 >= QINVAL_MAX_ENTRY_NR);
++            qi_pg_order = get_order_from_bytes((num_present_cpus() * 2 + 1) <<
++                                               (PAGE_SHIFT -
++                                                QINVAL_ENTRY_ORDER));
++            qi_entry_nr = 1u << (qi_pg_order + QINVAL_ENTRY_ORDER);
++
++            dprintk(XENLOG_INFO VTDPREFIX,
++                    "QI: using %u-entry ring(s)\n", qi_entry_nr);
++        }
++
++        iommu->qinval_maddr =
++            alloc_pgtable_maddr(qi_entry_nr >> QINVAL_ENTRY_ORDER,
++                                iommu->node);
+         if ( iommu->qinval_maddr == 0 )
+         {
+             dprintk(XENLOG_WARNING VTDPREFIX,
+@@ -418,15 +441,16 @@ int enable_qinval(struct vtd_iommu *iomm
+ 
+     spin_lock_irqsave(&iommu->register_lock, flags);
+ 
+-    /* Setup Invalidation Queue Address(IQA) register with the
+-     * address of the page we just allocated.  QS field at
+-     * bits[2:0] to indicate size of queue is one 4KB page.
+-     * That's 256 entries.  Queued Head (IQH) and Queue Tail (IQT)
+-     * registers are automatically reset to 0 with write
+-     * to IQA register.
++    /*
++     * Setup Invalidation Queue Address (IQA) register with the address of the
++     * pages we just allocated.  The QS field at bits[2:0] indicates the size
++     * (page order) of the queue.
++     *
++     * Queued Head (IQH) and Queue Tail (IQT) registers are automatically
++     * reset to 0 with write to IQA register.
+      */
+     dmar_writeq(iommu->reg, DMAR_IQA_REG,
+-                iommu->qinval_maddr | QINVAL_PAGE_ORDER);
++                iommu->qinval_maddr | qi_pg_order);
+ 
+     dmar_writeq(iommu->reg, DMAR_IQT_REG, 0);
+ 
diff --git a/srcpkgs/xen/patches/xsa373-4.14-2.patch b/srcpkgs/xen/patches/xsa373-4.14-2.patch
new file mode 100644
index 000000000000..773cbfd555b7
--- /dev/null
+++ b/srcpkgs/xen/patches/xsa373-4.14-2.patch
@@ -0,0 +1,102 @@
+From: Jan Beulich <jbeulich@suse.com>
+Subject: AMD/IOMMU: size command buffer dynamically
+
+With the present synchronous model, we need two slots for every
+operation (the operation itself and a wait command).  There can be one
+such pair of commands pending per CPU. To ensure that under all normal
+circumstances a slot is always available when one is requested, size the
+command ring according to the number of present CPUs.
+
+This is part of XSA-373 / CVE-2021-28692.
+
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+Reviewed-by: Paul Durrant <paul@xen.org>
+
+--- a/xen/drivers/passthrough/amd/iommu-defs.h
++++ b/xen/drivers/passthrough/amd/iommu-defs.h
+@@ -20,9 +20,6 @@
+ #ifndef AMD_IOMMU_DEFS_H
+ #define AMD_IOMMU_DEFS_H
+ 
+-/* IOMMU Command Buffer entries: in power of 2 increments, minimum of 256 */
+-#define IOMMU_CMD_BUFFER_DEFAULT_ENTRIES	512
+-
+ /* IOMMU Event Log entries: in power of 2 increments, minimum of 256 */
+ #define IOMMU_EVENT_LOG_DEFAULT_ENTRIES     512
+ 
+@@ -164,8 +161,8 @@ struct amd_iommu_dte {
+ #define IOMMU_CMD_BUFFER_LENGTH_MASK		0x0F000000
+ #define IOMMU_CMD_BUFFER_LENGTH_SHIFT		24
+ 
+-#define IOMMU_CMD_BUFFER_ENTRY_SIZE			16
+-#define IOMMU_CMD_BUFFER_POWER_OF2_ENTRIES_PER_PAGE	8
++#define IOMMU_CMD_BUFFER_ENTRY_ORDER            4
++#define IOMMU_CMD_BUFFER_MAX_ENTRIES            (1u << 15)
+ 
+ #define IOMMU_CMD_OPCODE_MASK			0xF0000000
+ #define IOMMU_CMD_OPCODE_SHIFT			28
+--- a/xen/drivers/passthrough/amd/iommu_cmd.c
++++ b/xen/drivers/passthrough/amd/iommu_cmd.c
+@@ -24,7 +24,7 @@ static int queue_iommu_command(struct am
+ {
+     uint32_t tail, head;
+ 
+-    tail = iommu->cmd_buffer.tail + IOMMU_CMD_BUFFER_ENTRY_SIZE;
++    tail = iommu->cmd_buffer.tail + sizeof(cmd_entry_t);
+     if ( tail == iommu->cmd_buffer.size )
+         tail = 0;
+ 
+@@ -33,7 +33,7 @@ static int queue_iommu_command(struct am
+     if ( head != tail )
+     {
+         memcpy(iommu->cmd_buffer.buffer + iommu->cmd_buffer.tail,
+-               cmd, IOMMU_CMD_BUFFER_ENTRY_SIZE);
++               cmd, sizeof(cmd_entry_t));
+ 
+         iommu->cmd_buffer.tail = tail;
+         return 1;
+--- a/xen/drivers/passthrough/amd/iommu_init.c
++++ b/xen/drivers/passthrough/amd/iommu_init.c
+@@ -118,7 +118,7 @@ static void register_iommu_cmd_buffer_in
+     writel(entry, iommu->mmio_base + IOMMU_CMD_BUFFER_BASE_LOW_OFFSET);
+ 
+     power_of2_entries = get_order_from_bytes(iommu->cmd_buffer.size) +
+-        IOMMU_CMD_BUFFER_POWER_OF2_ENTRIES_PER_PAGE;
++        PAGE_SHIFT - IOMMU_CMD_BUFFER_ENTRY_ORDER;
+ 
+     entry = 0;
+     iommu_set_addr_hi_to_reg(&entry, addr_hi);
+@@ -1022,9 +1022,31 @@ static void *__init allocate_ring_buffer
+ static void * __init allocate_cmd_buffer(struct amd_iommu *iommu)
+ {
+     /* allocate 'command buffer' in power of 2 increments of 4K */
++    static unsigned int __read_mostly nr_ents;
++
++    if ( !nr_ents )
++    {
++        unsigned int order;
++
++        /*
++         * With the present synchronous model, we need two slots for every
++         * operation (the operation itself and a wait command).  There can be
++         * one such pair of requests pending per CPU.  One extra entry is
++         * needed as the ring is considered full when there's only one entry
++         * left.
++         */
++        BUILD_BUG_ON(CONFIG_NR_CPUS * 2 >= IOMMU_CMD_BUFFER_MAX_ENTRIES);
++        order = get_order_from_bytes((num_present_cpus() * 2 + 1) <<
++                                     IOMMU_CMD_BUFFER_ENTRY_ORDER);
++        nr_ents = 1u << (order + PAGE_SHIFT - IOMMU_CMD_BUFFER_ENTRY_ORDER);
++
++        AMD_IOMMU_DEBUG("using %u-entry cmd ring(s)\n", nr_ents);
++    }
++
++    BUILD_BUG_ON(sizeof(cmd_entry_t) != (1u << IOMMU_CMD_BUFFER_ENTRY_ORDER));
++
+     return allocate_ring_buffer(&iommu->cmd_buffer, sizeof(cmd_entry_t),
+-                                IOMMU_CMD_BUFFER_DEFAULT_ENTRIES,
+-                                "Command Buffer", false);
++                                nr_ents, "Command Buffer", false);
+ }
+ 
+ static void * __init allocate_event_log(struct amd_iommu *iommu)
diff --git a/srcpkgs/xen/patches/xsa373-4.14-3.patch b/srcpkgs/xen/patches/xsa373-4.14-3.patch
new file mode 100644
index 000000000000..fe345466fd3a
--- /dev/null
+++ b/srcpkgs/xen/patches/xsa373-4.14-3.patch
@@ -0,0 +1,163 @@
+From: Jan Beulich <jbeulich@suse.com>
+Subject: VT-d: eliminate flush related timeouts
+
+Leaving an in-progress operation pending when it appears to take too
+long is problematic: If e.g. a QI command completed later, the write to
+the "poll slot" may instead be understood to signal a subsequently
+started command's completion. Also our accounting of the timeout period
+was actually wrong: We included the time it took for the command to
+actually make it to the front of the queue, which could be heavily
+affected by guests other than the one for which the flush is being
+performed.
+
+Do away with all timeout detection on all flush related code paths.
+Log excessively long processing times (with a progressive threshold) to
+have some indication of problems in this area.
+
+Additionally log (once) if qinval_next_index() didn't immediately find
+an available slot. Together with the earlier change sizing the queue(s)
+dynamically, we should now have a guarantee that with our fully
+synchronous model any demand for slots can actually be satisfied.
+
+This is part of XSA-373 / CVE-2021-28692.
+
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+Reviewed-by: Paul Durrant <paul@xen.org>
+
+--- a/xen/drivers/passthrough/vtd/dmar.h
++++ b/xen/drivers/passthrough/vtd/dmar.h
+@@ -127,6 +127,34 @@ do {
+     }                                                           \
+ } while (0)
+ 
++#define IOMMU_FLUSH_WAIT(what, iommu, offset, op, cond, sts)       \
++do {                                                               \
++    static unsigned int __read_mostly threshold = 1;               \
++    s_time_t start = NOW();                                        \
++    s_time_t timeout = start + DMAR_OPERATION_TIMEOUT * threshold; \
++                                                                   \
++    for ( ; ; )                                                    \
++    {                                                              \
++        sts = op(iommu->reg, offset);                              \
++        if ( cond )                                                \
++            break;                                                 \
++        if ( timeout && NOW() > timeout )                          \
++        {                                                          \
++            threshold |= threshold << 1;                           \
++            printk(XENLOG_WARNING VTDPREFIX                        \
++                   " IOMMU#%u: %s flush taking too long\n",        \
++                   iommu->index, what);                            \
++            timeout = 0;                                           \
++        }                                                          \
++        cpu_relax();                                               \
++    }                                                              \
++                                                                   \
++    if ( !timeout )                                                \
++        printk(XENLOG_WARNING VTDPREFIX                            \
++               " IOMMU#%u: %s flush took %lums\n",                 \
++               iommu->index, what, (NOW() - start) / 10000000);    \
++} while ( false )
++
+ int vtd_hw_check(void);
+ void disable_pmr(struct vtd_iommu *iommu);
+ int is_igd_drhd(struct acpi_drhd_unit *drhd);
+--- a/xen/drivers/passthrough/vtd/iommu.c
++++ b/xen/drivers/passthrough/vtd/iommu.c
+@@ -326,8 +326,8 @@ static void iommu_flush_write_buffer(str
+     dmar_writel(iommu->reg, DMAR_GCMD_REG, val | DMA_GCMD_WBF);
+ 
+     /* Make sure hardware complete it */
+-    IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, dmar_readl,
+-                  !(val & DMA_GSTS_WBFS), val);
++    IOMMU_FLUSH_WAIT("write buffer", iommu, DMAR_GSTS_REG, dmar_readl,
++                     !(val & DMA_GSTS_WBFS), val);
+ 
+     spin_unlock_irqrestore(&iommu->register_lock, flags);
+ }
+@@ -376,8 +376,8 @@ int vtd_flush_context_reg(struct vtd_iom
+     dmar_writeq(iommu->reg, DMAR_CCMD_REG, val);
+ 
+     /* Make sure hardware complete it */
+-    IOMMU_WAIT_OP(iommu, DMAR_CCMD_REG, dmar_readq,
+-                  !(val & DMA_CCMD_ICC), val);
++    IOMMU_FLUSH_WAIT("context", iommu, DMAR_CCMD_REG, dmar_readq,
++                     !(val & DMA_CCMD_ICC), val);
+ 
+     spin_unlock_irqrestore(&iommu->register_lock, flags);
+     /* flush context entry will implicitly flush write buffer */
+@@ -454,8 +454,8 @@ int vtd_flush_iotlb_reg(struct vtd_iommu
+     dmar_writeq(iommu->reg, tlb_offset + 8, val);
+ 
+     /* Make sure hardware complete it */
+-    IOMMU_WAIT_OP(iommu, (tlb_offset + 8), dmar_readq,
+-                  !(val & DMA_TLB_IVT), val);
++    IOMMU_FLUSH_WAIT("iotlb", iommu, (tlb_offset + 8), dmar_readq,
++                     !(val & DMA_TLB_IVT), val);
+     spin_unlock_irqrestore(&iommu->register_lock, flags);
+ 
+     /* check IOTLB invalidation granularity */
+--- a/xen/drivers/passthrough/vtd/qinval.c
++++ b/xen/drivers/passthrough/vtd/qinval.c
+@@ -29,8 +29,6 @@
+ #include "extern.h"
+ #include "../ats.h"
+ 
+-#define VTD_QI_TIMEOUT	1
+-
+ static unsigned int __read_mostly qi_pg_order;
+ static unsigned int __read_mostly qi_entry_nr;
+ 
+@@ -60,7 +58,11 @@ static unsigned int qinval_next_index(st
+     /* (tail+1 == head) indicates a full queue, wait for HW */
+     while ( ((tail + 1) & (qi_entry_nr - 1)) ==
+             ( dmar_readq(iommu->reg, DMAR_IQH_REG) >> QINVAL_INDEX_SHIFT ) )
++    {
++        printk_once(XENLOG_ERR VTDPREFIX " IOMMU#%u: no QI slot available\n",
++                    iommu->index);
+         cpu_relax();
++    }
+ 
+     return tail;
+ }
+@@ -180,23 +182,32 @@ static int __must_check queue_invalidate
+     /* Now we don't support interrupt method */
+     if ( sw )
+     {
+-        s_time_t timeout;
+-
+-        /* In case all wait descriptor writes to same addr with same data */
+-        timeout = NOW() + MILLISECS(flush_dev_iotlb ?
+-                                    iommu_dev_iotlb_timeout : VTD_QI_TIMEOUT);
++        static unsigned int __read_mostly threshold = 1;
++        s_time_t start = NOW();
++        s_time_t timeout = start + (flush_dev_iotlb
++                                    ? iommu_dev_iotlb_timeout
++                                    : 100) * MILLISECS(threshold);
+ 
+         while ( ACCESS_ONCE(*this_poll_slot) != QINVAL_STAT_DONE )
+         {
+-            if ( NOW() > timeout )
++            if ( timeout && NOW() > timeout )
+             {
+-                print_qi_regs(iommu);
++                threshold |= threshold << 1;
+                 printk(XENLOG_WARNING VTDPREFIX
+-                       " Queue invalidate wait descriptor timed out\n");
+-                return -ETIMEDOUT;
++                       " IOMMU#%u: QI%s wait descriptor taking too long\n",
++                       iommu->index, flush_dev_iotlb ? " dev" : "");
++                print_qi_regs(iommu);
++                timeout = 0;
+             }
+             cpu_relax();
+         }
++
++        if ( !timeout )
++            printk(XENLOG_WARNING VTDPREFIX
++                   " IOMMU#%u: QI%s wait descriptor took %lums\n",
++                   iommu->index, flush_dev_iotlb ? " dev" : "",
++                   (NOW() - start) / 10000000);
++
+         return 0;
+     }
+ 
diff --git a/srcpkgs/xen/patches/xsa373-4.14-4.patch b/srcpkgs/xen/patches/xsa373-4.14-4.patch
new file mode 100644
index 000000000000..a1f186b25e6b
--- /dev/null
+++ b/srcpkgs/xen/patches/xsa373-4.14-4.patch
@@ -0,0 +1,81 @@
+From: Jan Beulich <jbeulich@suse.com>
+Subject: AMD/IOMMU: wait for command slot to be available
+
+No caller cared about send_iommu_command() indicating unavailability of
+a slot. Hence if a sufficient number prior commands timed out, we did
+blindly assume that the requested command was submitted to the IOMMU
+when really it wasn't. This could mean both a hanging system (waiting
+for a command to complete that was never seen by the IOMMU) or blindly
+propagating success back to callers, making them believe they're fine
+to e.g. free previously unmapped pages.
+
+Fold the three involved functions into one, add spin waiting for an
+available slot along the lines of VT-d's qinval_next_index(), and as a
+consequence drop all error indicator return types/values.
+
+This is part of XSA-373 / CVE-2021-28692.
+
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+Reviewed-by: Paul Durrant <paul@xen.org>
+
+--- a/xen/drivers/passthrough/amd/iommu_cmd.c
++++ b/xen/drivers/passthrough/amd/iommu_cmd.c
+@@ -20,43 +20,32 @@
+ #include "iommu.h"
+ #include "../ats.h"
+ 
+-static int queue_iommu_command(struct amd_iommu *iommu, u32 cmd[])
++static void send_iommu_command(struct amd_iommu *iommu,
++                               const uint32_t cmd[4])
+ {
+-    uint32_t tail, head;
++    uint32_t tail;
+ 
+     tail = iommu->cmd_buffer.tail + sizeof(cmd_entry_t);
+     if ( tail == iommu->cmd_buffer.size )
+         tail = 0;
+ 
+-    head = readl(iommu->mmio_base +
+-                 IOMMU_CMD_BUFFER_HEAD_OFFSET) & IOMMU_RING_BUFFER_PTR_MASK;
+-    if ( head != tail )
++    while ( tail == (readl(iommu->mmio_base +
++                           IOMMU_CMD_BUFFER_HEAD_OFFSET) &
++                     IOMMU_RING_BUFFER_PTR_MASK) )
+     {
+-        memcpy(iommu->cmd_buffer.buffer + iommu->cmd_buffer.tail,
+-               cmd, sizeof(cmd_entry_t));
+-
+-        iommu->cmd_buffer.tail = tail;
+-        return 1;
++        printk_once(XENLOG_ERR
++                    "AMD IOMMU %04x:%02x:%02x.%u: no cmd slot available\n",
++                    iommu->seg, PCI_BUS(iommu->bdf),
++                    PCI_SLOT(iommu->bdf), PCI_FUNC(iommu->bdf));
++        cpu_relax();
+     }
+ 
+-    return 0;
+-}
+-
+-static void commit_iommu_command_buffer(struct amd_iommu *iommu)
+-{
+-    writel(iommu->cmd_buffer.tail,
+-           iommu->mmio_base + IOMMU_CMD_BUFFER_TAIL_OFFSET);
+-}
++    memcpy(iommu->cmd_buffer.buffer + iommu->cmd_buffer.tail,
++           cmd, sizeof(cmd_entry_t));
+ 
+-static int send_iommu_command(struct amd_iommu *iommu, u32 cmd[])
+-{
+-    if ( queue_iommu_command(iommu, cmd) )
+-    {
+-        commit_iommu_command_buffer(iommu);
+-        return 1;
+-    }
++    iommu->cmd_buffer.tail = tail;
+ 
+-    return 0;
++    writel(tail, iommu->mmio_base + IOMMU_CMD_BUFFER_TAIL_OFFSET);
+ }
+ 
+ static void flush_command_buffer(struct amd_iommu *iommu)
diff --git a/srcpkgs/xen/patches/xsa373-4.14-5.patch b/srcpkgs/xen/patches/xsa373-4.14-5.patch
new file mode 100644
index 000000000000..01556a87f188
--- /dev/null
+++ b/srcpkgs/xen/patches/xsa373-4.14-5.patch
@@ -0,0 +1,143 @@
+From: Jan Beulich <jbeulich@suse.com>
+Subject: AMD/IOMMU: drop command completion timeout
+
+First and foremost - such timeouts were not signaled to callers, making
+them believe they're fine to e.g. free previously unmapped pages.
+
+Mirror VT-d's behavior: A fixed number of loop iterations is not a
+suitable way to detect timeouts in an environment (CPU and bus speeds)
+independent manner anyway. Furthermore, leaving an in-progress operation
+pending when it appears to take too long is problematic: If a command
+completed later, the signaling of its completion may instead be
+understood to signal a subsequently started command's completion.
+
+Log excessively long processing times (with a progressive threshold) to
+have some indication of problems in this area. Allow callers to specify
+a non-default timeout bias for this logging, using the same values as
+VT-d does, which in particular means a (by default) much larger value
+for device IO TLB invalidation.
+
+This is part of XSA-373 / CVE-2021-28692.
+
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+Reviewed-by: Paul Durrant <paul@xen.org>
+
+--- a/xen/drivers/passthrough/amd/iommu_cmd.c
++++ b/xen/drivers/passthrough/amd/iommu_cmd.c
+@@ -48,10 +48,12 @@ static void send_iommu_command(struct am
+     writel(tail, iommu->mmio_base + IOMMU_CMD_BUFFER_TAIL_OFFSET);
+ }
+ 
+-static void flush_command_buffer(struct amd_iommu *iommu)
++static void flush_command_buffer(struct amd_iommu *iommu,
++                                 unsigned int timeout_base)
+ {
+-    unsigned int cmd[4], status, loop_count;
+-    bool comp_wait;
++    uint32_t cmd[4];
++    s_time_t start, timeout;
++    static unsigned int __read_mostly threshold = 1;
+ 
+     /* RW1C 'ComWaitInt' in status register */
+     writel(IOMMU_STATUS_COMP_WAIT_INT,
+@@ -67,22 +69,31 @@ static void flush_command_buffer(struct
+                          IOMMU_COMP_WAIT_I_FLAG_SHIFT, &cmd[0]);
+     send_iommu_command(iommu, cmd);
+ 
+-    /* Make loop_count long enough for polling completion wait bit */
+-    loop_count = 1000;
+-    do {
+-        status = readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
+-        comp_wait = status & IOMMU_STATUS_COMP_WAIT_INT;
+-        --loop_count;
+-    } while ( !comp_wait && loop_count );
+-
+-    if ( comp_wait )
++    start = NOW();
++    timeout = start + (timeout_base ?: 100) * MILLISECS(threshold);
++    while ( !(readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET) &
++              IOMMU_STATUS_COMP_WAIT_INT) )
+     {
+-        /* RW1C 'ComWaitInt' in status register */
+-        writel(IOMMU_STATUS_COMP_WAIT_INT,
+-               iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
+-        return;
++        if ( timeout && NOW() > timeout )
++        {
++            threshold |= threshold << 1;
++            printk(XENLOG_WARNING
++                   "AMD IOMMU %04x:%02x:%02x.%u: %scompletion wait taking too long\n",
++                   iommu->seg, PCI_BUS(iommu->bdf),
++                   PCI_SLOT(iommu->bdf), PCI_FUNC(iommu->bdf),
++                   timeout_base ? "iotlb " : "");
++            timeout = 0;
++        }
++        cpu_relax();
+     }
+-    AMD_IOMMU_DEBUG("Warning: ComWaitInt bit did not assert!\n");
++
++    if ( !timeout )
++        printk(XENLOG_WARNING
++               "AMD IOMMU %04x:%02x:%02x.%u: %scompletion wait took %lums\n",
++               iommu->seg, PCI_BUS(iommu->bdf),
++               PCI_SLOT(iommu->bdf), PCI_FUNC(iommu->bdf),
++               timeout_base ? "iotlb " : "",
++               (NOW() - start) / 10000000);
+ }
+ 
+ /* Build low level iommu command messages */
+@@ -294,7 +305,7 @@ void amd_iommu_flush_iotlb(u8 devfn, con
+     /* send INVALIDATE_IOTLB_PAGES command */
+     spin_lock_irqsave(&iommu->lock, flags);
+     invalidate_iotlb_pages(iommu, maxpend, 0, queueid, daddr, req_id, order);
+-    flush_command_buffer(iommu);
++    flush_command_buffer(iommu, iommu_dev_iotlb_timeout);
+     spin_unlock_irqrestore(&iommu->lock, flags);
+ }
+ 
+@@ -331,7 +342,7 @@ static void _amd_iommu_flush_pages(struc
+     {
+         spin_lock_irqsave(&iommu->lock, flags);
+         invalidate_iommu_pages(iommu, daddr, dom_id, order);
+-        flush_command_buffer(iommu);
++        flush_command_buffer(iommu, 0);
+         spin_unlock_irqrestore(&iommu->lock, flags);
+     }
+ 
+@@ -355,7 +366,7 @@ void amd_iommu_flush_device(struct amd_i
+     ASSERT( spin_is_locked(&iommu->lock) );
+ 
+     invalidate_dev_table_entry(iommu, bdf);
+-    flush_command_buffer(iommu);
++    flush_command_buffer(iommu, 0);
+ }
+ 
+ void amd_iommu_flush_intremap(struct amd_iommu *iommu, uint16_t bdf)
+@@ -363,7 +374,7 @@ void amd_iommu_flush_intremap(struct amd
+     ASSERT( spin_is_locked(&iommu->lock) );
+ 
+     invalidate_interrupt_table(iommu, bdf);
+-    flush_command_buffer(iommu);
++    flush_command_buffer(iommu, 0);
+ }
+ 
+ void amd_iommu_flush_all_caches(struct amd_iommu *iommu)
+@@ -371,7 +382,7 @@ void amd_iommu_flush_all_caches(struct a
+     ASSERT( spin_is_locked(&iommu->lock) );
+ 
+     invalidate_iommu_all(iommu);
+-    flush_command_buffer(iommu);
++    flush_command_buffer(iommu, 0);
+ }
+ 
+ void amd_iommu_send_guest_cmd(struct amd_iommu *iommu, u32 cmd[])
+@@ -381,7 +392,8 @@ void amd_iommu_send_guest_cmd(struct amd
+     spin_lock_irqsave(&iommu->lock, flags);
+ 
+     send_iommu_command(iommu, cmd);
+-    flush_command_buffer(iommu);
++    /* TBD: Timeout selection may require peeking into cmd[]. */
++    flush_command_buffer(iommu, 0);
+ 
+     spin_unlock_irqrestore(&iommu->lock, flags);
+ }
diff --git a/srcpkgs/xen/patches/xsa375.patch b/srcpkgs/xen/patches/xsa375.patch
new file mode 100644
index 000000000000..aa2e5ad4674f
--- /dev/null
+++ b/srcpkgs/xen/patches/xsa375.patch
@@ -0,0 +1,50 @@
+From: Andrew Cooper <andrew.cooper3@citrix.com>
+Subject: x86/spec-ctrl: Protect against Speculative Code Store Bypass
+
+Modern x86 processors have far-better-than-architecturally-guaranteed self
+modifying code detection.  Typically, when a write hits an instruction in
+flight, a Machine Clear occurs to flush stale content in the frontend and
+backend.
+
+For self modifying code, before a write which hits an instruction in flight
+retires, the frontend can speculatively decode and execute the old instruction
+stream.  Speculation of this form can suffer from type confusion in registers,
+and potentially leak data.
+
+Furthermore, updates are typically byte-wise, rather than atomic.  Depending
+on timing, speculation can race ahead multiple times between individual
+writes, and execute the transiently-malformed instruction stream.
+
+Xen has stubs which are used in certain cases for emulation purposes.  Inhibit
+speculation between updating the stub and executing it.
+
+This is XSA-375 / CVE-2021-0089.
+
+Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
+Reviewed-by: Jan Beulich <jbeulich@suse.com>
+
+diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
+index 8889509d2a..11467a1e3a 100644
+--- a/xen/arch/x86/pv/emul-priv-op.c
++++ b/xen/arch/x86/pv/emul-priv-op.c
+@@ -138,6 +138,8 @@ static io_emul_stub_t *io_emul_stub_setup(struct priv_op_ctxt *ctxt, u8 opcode,
+     /* Runtime confirmation that we haven't clobbered an adjacent stub. */
+     BUG_ON(STUB_BUF_SIZE / 2 < (p - ctxt->io_emul_stub));
+ 
++    block_speculation(); /* SCSB */
++
+     /* Handy function-typed pointer to the stub. */
+     return (void *)stub_va;
+ 
+diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
+index c25d88d0d8..f42ff2a837 100644
+--- a/xen/arch/x86/x86_emulate/x86_emulate.c
++++ b/xen/arch/x86/x86_emulate/x86_emulate.c
+@@ -1257,6 +1257,7 @@ static inline int mkec(uint8_t e, int32_t ec, ...)
+ # define invoke_stub(pre, post, constraints...) do {                    \
+     stub_exn.info = (union stub_exception_token) { .raw = ~0 };         \
+     stub_exn.line = __LINE__; /* Utility outweighs livepatching cost */ \
++    block_speculation(); /* SCSB */                                     \
+     asm volatile ( pre "\n\tINDIRECT_CALL %[stub]\n\t" post "\n"        \
+                    ".Lret%=:\n\t"                                       \
+                    ".pushsection .fixup,\"ax\"\n"                       \
diff --git a/srcpkgs/xen/patches/xsa377.patch b/srcpkgs/xen/patches/xsa377.patch
new file mode 100644
index 000000000000..1a1887b60e09
--- /dev/null
+++ b/srcpkgs/xen/patches/xsa377.patch
@@ -0,0 +1,27 @@
+From: Andrew Cooper <andrew.cooper3@citrix.com>
+Subject: x86/spec-ctrl: Mitigate TAA after S3 resume
+
+The user chosen setting for MSR_TSX_CTRL needs restoring after S3.
+
+All APs get the correct setting via start_secondary(), but the BSP was missed
+out.
+
+This is XSA-377 / CVE-2021-28690.
+
+Fixes: 8c4330818f6 ("x86/spec-ctrl: Mitigate the TSX Asynchronous Abort sidechannel")
+Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
+Reviewed-by: Jan Beulich <jbeulich@suse.com>
+
+diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
+index 91a8c4d0bd..31a56f02d0 100644
+--- a/xen/arch/x86/acpi/power.c
++++ b/xen/arch/x86/acpi/power.c
+@@ -288,6 +288,8 @@ static int enter_state(u32 state)
+ 
+     microcode_update_one();
+ 
++    tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
++
+     if ( !recheck_cpu_features(0) )
+         panic("Missing previously available feature(s)\n");
+ 
diff --git a/srcpkgs/xen/template b/srcpkgs/xen/template
index 8a8bd995f532..de128973e067 100644
--- a/srcpkgs/xen/template
+++ b/srcpkgs/xen/template
@@ -1,7 +1,7 @@
 # Template file for 'xen'
 pkgname=xen
-version=4.14.1
-revision=3
+version=4.14.2
+revision=1
 # grep -R IPXE_GIT_TAG src/xen-*/tools/firmware/etherboot
 _git_tag_ipxe=4bd064de239dab2426b31c9789a1f4d78087dc63
 # TODO: arm / aarch64
@@ -10,20 +10,22 @@ build_style=gnu-configure
 configure_args="$(vopt_enable stubdom) --disable-systemd
  --with-system-seabios=/usr/share/seabios/bios.bin
  --with-sysconfig-leaf-dir=conf.d --with-rundir=/run"
-hostmakedepends="acpica-utils automake bison flex fig2dev gettext ghostscript git
- ocaml ocaml-findlib pandoc pkg-config python3-Markdown tar texinfo wget"
+hostmakedepends="acpica-utils automake bison flex fig2dev gettext ghostscript
+ git ncurses-devel ocaml ocaml-findlib pandoc pkg-config python3-Markdown tar
+ texinfo wget"
 makedepends="SDL-devel dev86 dtc-devel e2fsprogs-devel gnutls-devel libaio-devel
  libbluetooth-devel libglib-devel liblzma-devel libnl3-devel openssl-devel
- netpbm pciutils-devel pixman-devel python3-devel seabios yajl-devel"
+ ncurses-devel netpbm pciutils-devel pixman-devel python3-devel seabios
+ yajl-devel"
 depends="bridge-utils perl xen-hypervisor"
 short_desc="Xen hypervisor utilities"
-maintainer="Chris Brannon <chris@the-brannons.com>"
+maintainer="Orphaned <orphan@voidlinux.org>"
 license="GPL-2.0-or-later"
 homepage="https://www.xenproject.org/"
 distfiles="
  https://downloads.xenproject.org/release/xen/${version}/${pkgname}-${version}.tar.gz
  https://github.com/ipxe/ipxe/archive/${_git_tag_ipxe}.tar.gz"
-checksum="cf0d7316ad674491f49b7ef0518cb1d906a2e3bfad639deef0ef2343b119ac0c
+checksum="e35099a963070e3c9f425d1e36cbb1c40b7874ef449bfafd6688343783cb25ad
  4850691d6f196eaf4d6210f2de01383251b3ea1b928141da9ce28c0b06a90938"
 skip_extraction="${_git_tag_ipxe}.tar.gz"
 nopie=yes

Thread overview: 33+ messages
2021-04-30 22:18 [PR PATCH] " CMB
2021-05-02 14:33 ` ericonr
2021-05-02 14:33 ` ericonr
2021-05-03  0:22 ` [PR REVIEW] " heliocat
2021-05-03  0:22 ` heliocat
2021-05-03  0:22 ` heliocat
2021-05-03  0:22 ` heliocat
2021-05-03  0:22 ` heliocat
2021-05-03  0:22 ` heliocat
2021-05-03  0:22 ` heliocat
2021-05-03  0:22 ` heliocat
2021-05-03  0:22 ` heliocat
2021-05-03  0:22 ` heliocat
2021-05-03  0:22 ` heliocat
2021-05-03  1:51 ` ahesford
2021-05-03  1:51 ` ahesford
2021-05-03  1:51 ` ahesford
2021-05-03  1:51 ` ahesford
2021-05-03  1:51 ` ahesford
2021-05-03  2:00 ` ahesford
2021-05-03  2:00 ` ahesford
2021-05-03  2:09 ` heliocat
2021-05-03  2:15 ` heliocat
2021-05-03  2:19 ` heliocat
2021-05-03  2:23 ` [PR REVIEW] " ahesford
2021-05-03  2:32 ` ahesford
2021-05-03  3:16 ` [PR REVIEW] " heliocat
2021-05-03  8:32 ` Piraty
2021-06-27  1:14 ` CMB [this message]
2021-06-27  1:20 ` CMB
2021-06-27  1:22 ` CMB
2022-05-18  2:09 ` github-actions
2022-06-01  2:14 ` [PR PATCH] [Closed]: " github-actions
