Certain page table manipulation operations in Xen 4.1.x, 4.2.x, and earlier are not preemptible, which allows local PV kernels to cause a denial of service via vectors related to "deep page table traversal."
Stack-based buffer overflow in the dirty video RAM tracking functionality in Xen 3.4 through 4.1 allows local HVM guest OS administrators to cause a denial of service (crash) via a large bitmap image.
A domain cleanup issue was discovered in the C xenstore daemon (aka cxenstored) in Xen through 4.9.x. When shutting down a VM with a stubdomain, a race in cxenstored may cause a double-free. The xenstored daemon may crash, resulting in a DoS of any parts of the system relying on it (including domain creation / destruction, ballooning, device changes, etc.).
An issue was discovered in Xen through 4.13.x, allowing x86 HVM guest OS users to cause a hypervisor crash. An inverted conditional in x86 HVM guests' dirty video RAM tracking code allows such guests to make Xen de-reference a pointer guaranteed to point at unmapped space. A malicious or buggy HVM guest may cause the hypervisor to crash, resulting in Denial of Service (DoS) affecting the entire host. Xen versions from 4.8 onwards are affected. Xen versions 4.7 and earlier are not affected. Only x86 systems are affected. Arm systems are not affected. Only x86 HVM guests using shadow paging can leverage the vulnerability. In addition, there needs to be an entity actively monitoring a guest's video frame buffer (typically for display purposes) in order for such a guest to be able to leverage the vulnerability. x86 PV guests, as well as x86 HVM guests using hardware assisted paging (HAP), cannot leverage the vulnerability.
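A minimal C sketch of the inverted-conditional bug class described in this entry; all names here (lookup_dirty_vram, handle_vram_write) are hypothetical stand-ins, not Xen's actual shadow-paging code:

```c
#include <stdio.h>
#include <stdlib.h>

struct dirty_vram { unsigned long begin_pfn; };

/* Returns NULL when no dirty-VRAM range is being tracked. */
static struct dirty_vram *lookup_dirty_vram(int tracked)
{
    return tracked ? calloc(1, sizeof(struct dirty_vram)) : NULL;
}

static void handle_vram_write(int tracked)
{
    struct dirty_vram *dv = lookup_dirty_vram(tracked);

    /* BUG: the sense of the test is inverted, so the "found" branch runs
     * exactly when dv is NULL -- a dereference of a pointer guaranteed
     * to point at unmapped space, i.e. the hypervisor-crash scenario. */
    if (dv == NULL)                    /* should be: if (dv != NULL) */
        printf("begin_pfn=%lu\n", dv->begin_pfn);
    else
        free(dv);
}

int main(void)
{
    handle_vram_write(1);       /* tracking active: takes the safe branch */
    /* handle_vram_write(0); */ /* no tracking: would NULL-dereference    */
    return 0;
}
```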
The Intel VT-d Interrupt Remapping engine in Xen 3.3.x through 4.3.x allows local guests to cause a denial of service (kernel panic) via a malformed Message Signaled Interrupt (MSI) from a bus-mastering-capable PCI device, which triggers a System Error Reporting (SERR) Non-Maskable Interrupt (NMI).
Xen 4.2.x and 4.1.x do not properly restrict access to IRQs, which allows local stub domain clients to gain access to IRQs and cause a denial of service via vectors related to "passed-through IRQs or PCI devices."
Memory leak in Xen 4.2 and unstable allows local HVM guests to cause a denial of service (host memory consumption) by performing nested virtualization in a way that triggers errors that are not properly handled.
The do_tmem_get function in the Transcendent Memory (TMEM) implementation in Xen 4.0, 4.1, and 4.2 allows local guest OS users to cause a denial of service (CPU hang and host crash) via unspecified vectors related to a spinlock being held in the "bad_copy error path." NOTE: this issue was originally published as part of CVE-2012-3497, which was too general; CVE-2012-3497 has been SPLIT into this ID and others.
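The "spinlock held on an error path" pattern is easy to sketch. The following minimal C program uses a pthread mutex as a stand-in for the hypervisor spinlock; every name in it is hypothetical:

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

static int do_tmem_get_sketch(int copy_fails)
{
    pthread_mutex_lock(&pool_lock);

    if (copy_fails)
        return -1;   /* BUG: error path returns with pool_lock still held */

    pthread_mutex_unlock(&pool_lock);
    return 0;
}

int main(void)
{
    do_tmem_get_sketch(1);          /* triggers the buggy error path      */
    puts("lock is now held forever");
    /* do_tmem_get_sketch(0); */    /* any later caller would block here; */
    /* a real spinlock would spin, hanging the CPU as described above     */
    return 0;
}
```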
The get_page_from_gfn hypercall function in Xen 4.2 allows local PV guest OS administrators to cause a denial of service (crash) via a crafted GFN that triggers a buffer over-read.
Xen 4.x, when downgrading the grant table version, does not properly remove the status page from the tracking list when freeing the page, which allows local guest OS administrators to cause a denial of service (hypervisor crash) via unspecified vectors.
The (1) XENMEM_decrease_reservation, (2) XENMEM_populate_physmap, and (3) XENMEM_exchange hypercalls in Xen 4.2 and earlier allow local guest administrators to cause a denial of service (long loop and hang) via a crafted extent_order value.
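A sketch of the unbounded-loop class behind this entry (the function name and structure are hypothetical, not the actual hypercall code): the handler iterates over 2^extent_order pages with neither a sanity bound on the order nor a preemption point inside the loop.

```c
#include <stdio.h>

static void populate_extents(unsigned int extent_order)
{
    unsigned long pages = 1UL << extent_order;

    /* BUG: no upper bound on extent_order and no preemption check in the
     * loop; a crafted order keeps the CPU busy for an unbounded time. */
    for (unsigned long i = 0; i < pages; i++)
        ; /* per-page work (allocation, P2M update, ...) would go here */

    printf("processed %lu pages\n", pages);
}

int main(void)
{
    populate_extents(10);        /* benign: 1024 iterations             */
    /* populate_extents(40); */  /* crafted: ~10^12 iterations, a hang  */
    return 0;
}
```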
The guest_physmap_mark_populate_on_demand function in Xen 4.2 and earlier does not properly unlock the subject GFNs when checking whether they are in use, which allows local HVM guest administrators to cause a denial of service (hang) via unspecified vectors.
XENMEM_populate_physmap in Xen 4.0, 4.1, and 4.2, and Citrix XenServer 6.0.2 and earlier, when translating paging mode is not used, allows local PV guest OS kernels to cause a denial of service (BUG triggered and host crash) via invalid flags such as MEMF_populate_on_demand.
An issue was discovered in Xen through 4.11.x allowing 64-bit PV guest OS users to cause a denial of service (host OS crash) because #GP[0] can occur after a non-canonical address is passed to the TLB flushing code. NOTE: this issue exists because of an incorrect CVE-2017-5754 (aka Meltdown) mitigation.
The p2m_teardown function in arch/arm/p2m.c in Xen 4.4.x through 4.6.x allows local guest OS users with access to the driver domain to cause a denial of service (NULL pointer dereference and host OS crash) by creating concurrent domains and holding references to them, related to VMID exhaustion.
The paging_invlpg function in include/asm-x86/paging.h in Xen 3.3.x through 4.6.x, when shadow mode paging is in use or nested virtualization is enabled, allows local HVM guest users to cause a denial of service (host crash) via a non-canonical guest address in an INVVPID instruction, which triggers a hypervisor bug check.
The memory_exchange function in common/memory.c in Xen 3.2.x through 4.6.x does not properly release locks, which might allow guest OS administrators to cause a denial of service (deadlock or host crash) via unspecified vectors, related to XENMEM_exchange error handling.
The KVM subsystem in the Linux kernel through 4.2.6, and Xen 4.3.x through 4.6.x, allows guest OS users to cause a denial of service (host OS panic or hang) by triggering many #DB (aka Debug) exceptions, related to svm.c.
The memory_exchange function in common/memory.c in Xen 3.2.x through 4.6.x does not properly hand back pages to a domain, which might allow guest OS administrators to cause a denial of service (host crash) via unspecified vectors related to domain teardown.
Xen 4.4.x and earlier, when using a large number of VCPUs, does not properly handle read and write locks, which allows local x86 guest users to cause a denial of service (write denial or NMI watchdog timeout and host crash) via a large number of read requests, a different vulnerability than CVE-2014-9065.
The compatibility mode hypercall argument translation in Xen 3.3.x through 4.4.x, when running on a 64-bit hypervisor, allows local 32-bit HVM guests to cause a denial of service (host crash) via vectors involving altering the high halves of registers while in 64-bit mode.
Certain MMU virtualization operations in Xen 4.2.x through 4.4.x before the xsa97-hap patch, when using Hardware Assisted Paging (HAP), are not preemptible, which allows local HVM guests to cause a denial of service (vcpu consumption) by invoking these operations, which process every page assigned to a guest, a different vulnerability than CVE-2014-5149.
Certain MMU virtualization operations in Xen 4.2.x through 4.4.x, when using shadow pagetables, are not preemptible, which allows local HVM guests to cause a denial of service (vcpu consumption) by invoking these operations, which process every page assigned to a guest, a different vulnerability than CVE-2014-5146.
Multiple HVM control operations in Xen 3.4 through 4.2 allow local HVM guest OS administrators to cause a denial of service (physical CPU consumption) via a large input.
Insufficient cleanup of passed-through device IRQs. The management of IRQs associated with physical devices exposed to x86 HVM guests involves an iterative operation, in particular when cleaning up after the guest's use of the device. In the case where an interrupt is not yet quiescent at the time this cleanup is invoked, the cleanup attempt may be scheduled to be retried. When multiple interrupts are involved, this scheduling of a retry may get erroneously skipped. At the same time, pointers may get cleared (resulting in a dereference of NULL) and freed (resulting in a use-after-free), while other code continues to assume them to be valid.
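The described bug class, with entirely hypothetical names and structure, reduces to a loop that defers teardown for only the first non-quiescent interrupt while tearing state down under the others:

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct irq_state {
    bool quiescent;
    int  *action;   /* stands in for per-IRQ state other code still uses */
};

static void cleanup_device_irqs(struct irq_state *irqs, int n)
{
    bool retry_scheduled = false;

    for (int i = 0; i < n; i++) {
        if (!irqs[i].quiescent && !retry_scheduled) {
            retry_scheduled = true;   /* BUG: only the first busy IRQ is  */
            continue;                 /* deferred; later ones fall through */
        }
        free(irqs[i].action);         /* freed while possibly still in use */
        irqs[i].action = NULL;        /* cleared under code that will
                                       * dereference it: NULL deref / UAF  */
    }
}

int main(void)
{
    struct irq_state irqs[2] = {
        { .quiescent = false, .action = malloc(sizeof(int)) },
        { .quiescent = false, .action = malloc(sizeof(int)) },
    };
    cleanup_device_irqs(irqs, 2);
    printf("irq1 deferred: %d, irq2 torn down early: %d\n",
           irqs[0].action != NULL, irqs[1].action == NULL);
    free(irqs[0].action);   /* release the deferred one in this sketch */
    return 0;
}
```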
The AMD IOMMU support in Xen 4.2.x, 4.1.x, 3.3, and other versions, when using AMD-Vi for PCI passthrough, uses the same interrupt remapping table for the host and all guests, which allows guests to cause a denial of service by injecting an interrupt into other guests.
An issue was discovered in Xen through 4.13.x, allowing guest OS users to cause a host OS crash because of incorrect error handling in event-channel port allocation. The allocation of an event-channel port may fail for multiple reasons: (1) port is already in use, (2) the memory allocation failed, or (3) the port we try to allocate is higher than what is supported by the ABI (e.g., 2L or FIFO) used by the guest or the limit set by an administrator (max_event_channels in xl cfg). Due to the missing error checks, only (1) will be considered an error. All the other cases will provide a valid port and will result in a crash when trying to access the event channel. When the administrator configured a guest to allow more than 1023 event channels, that guest may be able to crash the host. When Xen is out-of-memory, allocation of new event channels will result in crashing the host rather than reporting an error. Xen versions 4.10 and later are affected. All architectures are affected. The default configuration, when guests are created with xl/libxl, is not vulnerable, because of the default event-channel limit.
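A compact C sketch of the missing-error-check pattern this entry describes; all names, constants, and the encodings of the three failure modes are hypothetical:

```c
#include <stdio.h>

enum { PORT_IN_USE = -1, PORT_NO_MEMORY = -2, PORT_ABOVE_LIMIT = -3 };

/* Stand-in allocator: returns a port number >= 0 or one of the codes. */
static int alloc_port(int outcome) { return outcome; }

static int get_free_port(int outcome)
{
    int port = alloc_port(outcome);

    /* BUG: of the three failure cases, only "already in use" is treated
     * as an error; the other two negative codes fall through and are
     * handed out as if they were valid ports.  Using such a "port"
     * later indexes event-channel state out of bounds: a host crash. */
    if (port == PORT_IN_USE)
        return -1;

    return port;
}

int main(void)
{
    printf("ok:    %d\n", get_free_port(5));               /* valid     */
    printf("inuse: %d\n", get_free_port(PORT_IN_USE));     /* rejected  */
    printf("oom:   %d\n", get_free_port(PORT_NO_MEMORY));  /* leaks -2! */
    return 0;
}
```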
An issue was discovered in Xen through 4.9.x allowing PV guest OS users to cause a denial of service (host OS crash) if shadow mode and log-dirty mode are in place, because of an incorrect assertion related to M2P.
Xen 4.0.2 through 4.0.4, 4.1.x, and 4.2.x allows local PV guest users to cause a denial of service (hypervisor crash) via certain bit combinations to the XSETBV instruction.
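The underlying requirement is that XSETBV emulation only accept architecturally valid XCR0 bit combinations. A hedged C sketch of the kind of validation needed (bit positions per the Intel SDM; the helper itself is illustrative, not Xen's code):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define XCR0_X87 (1ULL << 0)   /* must always be set */
#define XCR0_SSE (1ULL << 1)
#define XCR0_AVX (1ULL << 2)   /* architecturally requires SSE */

static bool xcr0_is_valid(uint64_t xcr0)
{
    if (!(xcr0 & XCR0_X87))
        return false;                      /* x87 bit may never be clear */
    if ((xcr0 & XCR0_AVX) && !(xcr0 & XCR0_SSE))
        return false;                      /* AVX without SSE is invalid */
    return true;
}

int main(void)
{
    /* Without such checks, forwarding these values to the real XSETBV
     * instruction raises #GP in the hypervisor -- the crash above. */
    printf("0x7 valid: %d\n", xcr0_is_valid(0x7));  /* x87+SSE+AVX: yes */
    printf("0x6 valid: %d\n", xcr0_is_valid(0x6));  /* no x87: no       */
    printf("0x5 valid: %d\n", xcr0_is_valid(0x5));  /* AVX w/o SSE: no  */
    return 0;
}
```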
The OCaml binding for the xc_vcpu_getaffinity function in Xen 4.2.x and 4.3.x frees certain memory that may still be intended for use, which allows local users to cause a denial of service (heap corruption and crash) and possibly execute arbitrary code via unspecified vectors that trigger a (1) use-after-free or (2) double free.
The OCaml xenstored implementation (oxenstored) in Xen 4.1.x, 4.2.x, and 4.3.x allows local guest domains to cause a denial of service (domain shutdown) via a large message reply.
The XEN_DOMCTL_getmemlist hypercall in Xen 3.4.x through 4.3.x (possibly 4.3.1) does not always obtain the page_alloc_lock and mm_rwlock in the same order, which allows local guest administrators to cause a denial of service (host deadlock).
Buffer overflow in the Python bindings for the xc_vcpu_setaffinity call in Xen 4.0.x, 4.1.x, and 4.2.x allows local administrators with permissions to configure VCPU affinity to cause a denial of service (memory corruption and xend toolstack crash) and possibly gain privileges via a crafted cpumap.
An issue was discovered in Xen through 4.9.x allowing x86 PV guest OS users to execute arbitrary code on the host OS because of a race condition that can cause a stale TLB entry.
The pciback_enable_msi function in the PCI backend driver (drivers/xen/pciback/conf_space_capability_msi.c) in Xen for the Linux kernel 2.6.18 and 3.8 allows guest OS users with PCI device access to cause a denial of service via a large number of kernel log messages. NOTE: some of these details are obtained from third party information.
Arm provides multiple helpers to clean & invalidate the cache for a given region. This is, for instance, used when allocating guest memory to ensure any writes (such as the ones during scrubbing) have reached memory before handing over the page to a guest. Unfortunately, the arithmetic in the helpers can overflow, which would then cause the cache cleaning/invalidation to be skipped, so there is no guarantee of when all the writes will reach memory. This undefined behavior was meant to be addressed by XSA-437, but the approach was not sufficient.
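The overflow is easy to reproduce in miniature. In this hedged sketch (hypothetical helper name, with a counter standing in for the actual cache-maintenance instruction), start + size wraps, end lands below start, and the loop silently does nothing:

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE 64u

static unsigned long lines_cleaned;   /* stands in for DC CVAC et al. */

static void clean_dcache_range(uintptr_t start, size_t size)
{
    /* BUG: start + size can wrap past the top of the address space,
     * leaving end < start (with the pointer arithmetic in the real
     * helpers this is undefined behavior).  The loop below then runs
     * zero times, silently skipping the clean/invalidate. */
    uintptr_t end = start + size;

    for (uintptr_t p = start & ~(uintptr_t)(CACHE_LINE - 1); p < end;
         p += CACHE_LINE)
        lines_cleaned++;
}

int main(void)
{
    clean_dcache_range(0x1000, 0x2000);
    printf("normal range:  %lu lines\n", lines_cleaned);          /* 128 */

    lines_cleaned = 0;
    clean_dcache_range(UINTPTR_MAX - 64, 4096);
    printf("wrapped range: %lu lines (skipped)\n", lines_cleaned); /* 0  */
    return 0;
}
```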
Another race in XENMAPSPACE_grant_table handling. Guests are permitted access to certain Xen-owned pages of memory. The majority of such pages remain allocated / associated with a guest for its entire lifetime. Grant table v2 status pages, however, are de-allocated when a guest switches (back) from v2 to v1. Freeing such pages requires that the hypervisor enforce that no parallel request can result in the addition of a mapping of such a page to a guest. That enforcement was missing, allowing guests to retain access to pages that were freed and perhaps re-used for other purposes. Unfortunately, when XSA-379 was being prepared, this similar issue was not noticed.
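The race reduces to an unsynchronized check-then-free against a concurrent map. This sequential C sketch (hypothetical names, with a comment marking the interleaving) shows the window:

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct status_page {
    bool  mapped_by_guest;
    int  *mem;
};

/* vCPU A: grant-table version downgrade (v2 -> v1) frees status pages. */
static void free_status_page(struct status_page *sp)
{
    if (!sp->mapped_by_guest) {
        /* <-- race window: nothing stops vCPU B's XENMAPSPACE_grant_table
         *     request from mapping the page to the guest right here --> */
        free(sp->mem);
        sp->mem = NULL;   /* guest may now hold a mapping of freed memory */
    }
}

/* vCPU B: physmap request adds a mapping of the same page. */
static void map_status_page(struct status_page *sp)
{
    sp->mapped_by_guest = true;   /* no lock shared with the free path */
}

int main(void)
{
    struct status_page sp = { .mapped_by_guest = false,
                              .mem = malloc(sizeof(int)) };
    /* Benign ordering shown here; the bad interleaving is the one in
     * the comment inside free_status_page(). */
    map_status_page(&sp);
    free_status_page(&sp);   /* correctly refuses: page is mapped */
    printf("page still allocated: %d\n", sp.mem != NULL);
    free(sp.mem);
    return 0;
}
```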
Race condition in the grant table code in Xen 4.6.x through 4.9.x allows local guest OS administrators to cause a denial of service (free list corruption and host crash) or gain privileges on the host via vectors involving maptrack free list handling.
An issue was discovered in Xen through 4.12.x allowing 32-bit Arm guest OS users to cause a denial of service (out-of-bounds access) because certain bit iteration is mishandled. In a number of places bitmaps are being used by the hypervisor to track certain state. Iteration over all bits involves functions which may misbehave in certain corner cases: On 32-bit Arm accesses to bitmaps with bit a count which is a multiple of 32, an out of bounds access may occur. A malicious guest may cause a hypervisor crash or hang, resulting in a Denial of Service (DoS). All versions of Xen are vulnerable. 32-bit Arm systems are vulnerable. 64-bit Arm systems are not vulnerable.
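A hypothetical simplification of a find_next_bit()-style helper carrying this corner-case bug: when the starting index equals a bit count that is an exact multiple of the 32-bit word size, the word index points one word past the end of the bitmap and is read before any bounds check.

```c
#include <stdio.h>

#define BITS_PER_WORD 32u

/* Returns the index of the first set bit at or after 'start', or 'nbits'
 * if there is none. */
static unsigned int find_next_set_bit(const unsigned int *bits,
                                      unsigned int nbits,
                                      unsigned int start)
{
    unsigned int word = start / BITS_PER_WORD;

    /* BUG: when start == nbits and nbits is a multiple of 32, 'word'
     * indexes one word past the end of the bitmap, and this read is
     * out of bounds.  A correct version checks start >= nbits first. */
    unsigned int w = bits[word] >> (start % BITS_PER_WORD);

    while (w == 0) {
        start = ++word * BITS_PER_WORD;
        if (start >= nbits)
            return nbits;
        w = bits[word];
    }
    return start + (unsigned int)__builtin_ctz(w);
}

int main(void)
{
    unsigned int bm[2] = { 0x0, 0x10 };           /* only bit 36 is set  */
    printf("next: %u\n", find_next_set_bit(bm, 64, 0));   /* prints 36   */
    /* find_next_set_bit(bm, 64, 64); */ /* would read bm[2]: OOB access */
    return 0;
}
```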
An issue was discovered in Xen through 4.12.x allowing x86 PV guest OS users to gain host OS privileges by leveraging race conditions in pagetable promotion and demotion operations, because of an incomplete fix for CVE-2019-18421. XSA-299 addressed several critical issues in restartable PV type change operations. Despite extensive testing and auditing, some corner cases were missed. A malicious PV guest administrator may be able to escalate their privilege to that of the host. All security-supported versions of Xen are vulnerable. Only x86 systems are affected. Arm systems are not affected. Only x86 PV guests can leverage the vulnerability. x86 HVM and PVH guests cannot leverage the vulnerability. Note that these attacks require very precise timing, which may be difficult to exploit in practice.
An issue was discovered in Xen through 4.11.x allowing x86 PV guest OS users to cause a denial of service or gain privileges by leveraging a page-writability race condition during addition of a passed-through PCI device.
The grant-table feature in Xen through 4.8.x mishandles a GNTMAP_device_map and GNTMAP_host_map mapping, when followed by only a GNTMAP_host_map unmapping, which allows guest OS users to cause a denial of service (count mismanagement and memory corruption) or obtain privileged host OS access, aka XSA-224 bug 1.
An issue was discovered in Xen through 4.11.x allowing x86 PV guest OS users to cause a denial of service or gain privileges by leveraging a race condition that arose when XENMEM_exchange was introduced.
The shadow-paging feature in Xen through 4.8.x mismanages page references and consequently introduces a race condition, which allows guest OS users to obtain Xen privileges, aka XSA-219.
The grant-table feature in Xen through 4.8.x does not ensure sufficient type counts for a GNTMAP_device_map and GNTMAP_host_map mapping, which allows guest OS users to cause a denial of service (count mismanagement and memory corruption) or obtain privileged host OS access, aka XSA-224 bug 2.
An attacker with local access to a system (either through a disk or external drive) can present a modified XFS partition to grub-legacy in such a way as to exploit a memory corruption in grub’s XFS file system implementation.
Xen 4.7.x and earlier does not properly honor CR0.TS and CR0.EM, which allows local x86 HVM guest OS users to read or modify FPU, MMX, or XMM register state information belonging to arbitrary tasks on the guest by modifying an instruction while the hypervisor is preparing to emulate it.
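A sketch of the double-fetch/TOCTOU class behind this issue: the emulator checks CR0.TS/CR0.EM against one set of instruction bytes, but the guest can swap the bytes in memory before they are read again for emulation. All names and the opcode classifier are hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical classifier: treat opcode 0x0F as "an FPU/SIMD insn". */
static bool needs_fpu(uint8_t opcode) { return opcode == 0x0F; }

static void emulate(volatile uint8_t *guest_insn, bool cr0_ts)
{
    /* First fetch: decide whether CR0.TS should block the instruction. */
    if (cr0_ts && needs_fpu(guest_insn[0])) {
        puts("#NM injected, FPU state protected");
        return;
    }

    /* <-- window: another guest vCPU can rewrite guest_insn[0] here --> */

    /* Second fetch: emulate whatever the bytes say NOW.  If they were
     * swapped to an FPU/SIMD op, it executes without the CR0.TS/CR0.EM
     * guard, touching FPU/MMX/XMM state belonging to another task. */
    if (needs_fpu(guest_insn[0]))
        puts("emulating FPU op without the TS check (the bug)");
    else
        puts("emulating non-FPU op");
}

int main(void)
{
    volatile uint8_t insn = 0x90;  /* harmless NOP at decode time */
    emulate(&insn, true);          /* prints: emulating non-FPU op */
    /* A concurrent vCPU flipping insn to 0x0F inside the marked
     * window would make the same call print the "bug" line. */
    return 0;
}
```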
In Xen 4.10, new infrastructure was introduced as part of an overhaul to how MSR emulation happens for guests. Unfortunately, one tracking structure isn't freed when a vcpu is destroyed. This allows guest OS administrators to cause a denial of service (host OS memory consumption) by rebooting many times.
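The leak pattern in miniature: per-vCPU state allocated on creation but never freed on destruction, so repeated guest reboots accumulate host memory. The structures and function names below are hypothetical, not Xen's:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-vCPU MSR tracking state. */
struct vcpu_msrs { unsigned long long value[16]; };
struct vcpu      { struct vcpu_msrs *msrs; };

static int vcpu_create(struct vcpu *v)
{
    v->msrs = calloc(1, sizeof(*v->msrs));
    return v->msrs ? 0 : -1;
}

static void vcpu_destroy(struct vcpu *v)
{
    /* BUG: v->msrs is never freed here.  Each guest reboot destroys and
     * re-creates its vCPUs, leaking one allocation per vCPU per reboot
     * until the host runs out of memory.  Fix: free(v->msrs). */
    v->msrs = NULL;
}

int main(void)
{
    struct vcpu v;
    for (int reboot = 0; reboot < 3; reboot++) {
        vcpu_create(&v);
        vcpu_destroy(&v);   /* each pass strands one allocation */
    }
    puts("3 vcpu_msrs structs leaked");
    return 0;
}
```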
Buffer overflow in Xen 4.7.x and earlier allows local x86 HVM guest OS administrators on guests running with shadow paging to cause a denial of service via a pagetable update.
Linux PV device frontends vulnerable to attacks by backends. [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] Several Linux PV device frontends use the grant table interfaces for removing access rights of the backends in ways that are subject to race conditions, resulting in potential data leaks, data corruption by malicious backends, and denial of service triggered by malicious backends:

blkfront, netfront, scsifront, and the gntalloc driver test whether a grant reference is still in use. If it is not, they assume that a subsequent removal of the granted access will always succeed, which is not true if the backend has mapped the granted page between those two operations. As a result, the backend can keep access to the guest's memory page no matter how the page is used after the frontend I/O has finished. The xenbus driver has a similar problem, as it does not check the success of removing the granted access of a shared ring buffer. (blkfront: CVE-2022-23036; netfront: CVE-2022-23037; scsifront: CVE-2022-23038; gntalloc: CVE-2022-23039; xenbus: CVE-2022-23040)

blkfront, netfront, scsifront, usbfront, dmabuf, xenbus, 9p, kbdfront, and pvcalls use a facility to delay freeing a grant reference until it is no longer in use, but the freeing of the related data page is not synchronized with dropping the granted access. As a result, the backend can keep access to the memory page even after it has been freed and then re-used for a different purpose. (CVE-2022-23041)

netfront will fail a BUG_ON() assertion if it fails to revoke access in the rx path, resulting in a Denial of Service (DoS) of the guest, which can be triggered by the backend. (CVE-2022-23042)
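The first class of problem (the test-then-revoke race) can be sketched in a few lines of C. The grant-table helpers here are hypothetical stand-ins for the kernel's gnttab_* interfaces, and the flag models the backend acting concurrently:

```c
#include <stdbool.h>
#include <stdio.h>

static bool backend_mapped;   /* models the backend acting concurrently */

/* Hypothetical stand-ins for the kernel's grant-table operations. */
static bool grant_in_use(int ref)  { return backend_mapped; }
static bool revoke_access(int ref) { return !backend_mapped; }

static void frontend_finish_io(int ref)
{
    if (!grant_in_use(ref)) {
        /* <-- race window: the backend can map the grant right here --> */
        backend_mapped = true;           /* ...as modeled by this line   */

        /* BUG class: the revoke's failure is assumed impossible and its
         * result ignored; the page is then freed/reused while the
         * backend still holds a mapping of it. */
        (void)revoke_access(ref);
    }
    puts("page freed and reused; backend may still have it mapped");
}

int main(void)
{
    frontend_finish_io(42);
    return 0;
}
```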