An issue was discovered in Xen through 4.9.x allowing HVM guest OS users to cause a denial of service (infinite loop and host OS hang) by leveraging the mishandling of Populate on Demand (PoD) errors.
An issue was discovered in Xen 4.4.x through 4.9.x allowing ARM guest OS users to cause a denial of service (prevent physical CPU usage) because of lock mishandling upon detection of an add-to-physmap error.
An issue was discovered in Xen 4.5.x through 4.9.x allowing attackers (who control a stub domain kernel or tool stack) to cause a denial of service (host OS crash) because of a missing comparison (of range start to range end) within the DMOP map/unmap implementation.
The pciback_enable_msi function in the PCI backend driver (drivers/xen/pciback/conf_space_capability_msi.c) in Xen for the Linux kernel 2.6.18 and 3.8 allows guest OS users with PCI device access to cause a denial of service via a large number of kernel log messages. NOTE: some of these details are obtained from third party information.
The HVMOP_pagetable_dying hypercall in Xen 4.0, 4.1, and 4.2 does not properly check the pagetable state when running on shadow pagetables, which allows a local HVM guest OS to cause a denial of service (hypervisor crash) via unspecified vectors.
An issue was discovered in Xen 4.5.x through 4.9.x. The function `__gnttab_cache_flush` handles GNTTABOP_cache_flush grant table operations. It checks whether the calling domain is the owner of the page that is to be operated on. If it is not, the owner's grant table is checked to see if a grant mapping to the calling domain exists for the page in question. However, the function does not check whether the owning domain actually has a grant table. Some special domains, such as `DOMID_XEN`, `DOMID_IO`, and `DOMID_COW`, are created without grant tables. Hence, if `__gnttab_cache_flush` operates on a page owned by one of these special domains, it will attempt to dereference a NULL pointer in the domain struct.
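A minimal sketch of the missing check, using hypothetical, heavily simplified types rather than Xen's actual structures:

```c
#include <stdio.h>

/* Hypothetical stand-ins for the real Xen structures. */
struct grant_table { int nr_entries; };
struct domain {
    int domain_id;
    struct grant_table *grant_table;   /* NULL for DOMID_XEN/DOMID_IO/DOMID_COW-style domains */
};

/* Shape of the flawed logic: when the caller does not own the page, the
 * owner's grant table is consulted without first checking that the owner
 * has one at all. */
static int cache_flush_check(struct domain *caller, struct domain *owner)
{
    if (caller == owner)
        return 0;                      /* caller owns the page */

    /* Missing guard (the bug): a fixed version would first do
     *     if (!owner->grant_table) return -1;
     * For grant-table-less special domains this dereferences NULL. */
    return owner->grant_table->nr_entries > 0 ? 0 : -1;
}

int main(void)
{
    struct grant_table gt = { .nr_entries = 16 };
    struct domain caller  = { .domain_id = 1, .grant_table = &gt };
    struct domain special = { .domain_id = 2, .grant_table = NULL };

    printf("owner case: %d\n", cache_flush_check(&caller, &caller));
    /* cache_flush_check(&caller, &special) would dereference the NULL
     * grant_table pointer, mirroring the hypervisor crash described above. */
    (void)special;
    return 0;
}
```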
Xen 4.0 and 4.1 allows local HVM guest OS kernels to cause a denial of service (domain 0 VCPU hang and kernel panic) by modifying the physical address space in a way that triggers excessive shared page search time during the p2m teardown.
Memory leak in Xen 3.3 through 4.8.x allows guest OS users to cause a denial of service (ARM or x86 AMD host OS memory consumption) by continually rebooting, because certain cleanup is skipped if no pass-through device was ever assigned, aka XSA-207.
An issue was discovered in Xen through 4.14.x. Out of bounds event channels are available to 32-bit x86 domains. The so-called 2-level event channel model imposes different limits on the number of usable event channels for 32-bit x86 domains versus 64-bit or Arm (either bitness) ones. 32-bit x86 domains can use only 1023 channels, due to limited space in their shared (between guest and Xen) information structure, whereas all other domains can use up to 4095 in this model. The respective limit, however, is recorded during domain initialization at a time when domains are still deemed to be 64-bit ones, before the respective domain properties are actually honored. At the point domains are recognized as 32-bit ones, the limit is not updated accordingly. Due to this misbehavior in Xen, 32-bit domains (including Domain 0) servicing other domains may observe event channel allocations succeeding when they should really fail. Subsequent use of such event channels may then lead to corruption of other parts of the shared info structure. An unprivileged guest may cause another domain, in particular Domain 0, to misbehave. This may lead to a Denial of Service (DoS) for the entire system. All Xen versions from 4.4 onwards are vulnerable. Xen versions 4.3 and earlier are not vulnerable. Only x86 32-bit domains servicing other domains are vulnerable. Arm systems, as well as x86 64-bit domains, are not vulnerable.
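A minimal sketch of the ordering problem, using hypothetical names and the limits quoted above rather than Xen's actual domain-initialization code:

```c
#include <stdbool.h>
#include <stdio.h>

/* Limits from the description: 1023 usable channels for 32-bit x86 domains,
 * 4095 for all others under the 2-level event channel model. */
#define EVTCHN_MAX_32BIT 1023u
#define EVTCHN_MAX_OTHER 4095u

struct domain {
    bool is_32bit;
    unsigned int max_evtchn;   /* recorded once during init in the flawed flow */
};

static unsigned int evtchn_limit(const struct domain *d)
{
    return d->is_32bit ? EVTCHN_MAX_32BIT : EVTCHN_MAX_OTHER;
}

int main(void)
{
    struct domain d = { .is_32bit = false };

    /* Flawed ordering: the limit is recorded while the domain is still
     * assumed to be 64-bit ... */
    d.max_evtchn = evtchn_limit(&d);

    /* ... and only later is the domain recognized as 32-bit, without the
     * recorded limit being refreshed. */
    d.is_32bit = true;

    /* Channels 1024..4095 now appear allocatable but index past the space
     * available in the 32-bit shared info structure. */
    printf("recorded limit: %u, correct limit: %u\n",
           d.max_evtchn, evtchn_limit(&d));
    return 0;
}
```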
Inadequate grant-v2 status frames array bounds check: The v2 grant table interface separates grant attributes from grant status. That is, when operating in this mode, a guest has two tables. As a result, guests also need to be able to retrieve the addresses through which the new status tracking table can be accessed. For 32-bit guests on x86, requests have to be translated because the interface structure layouts commonly differ between 32- and 64-bit. The translation of the request to obtain the frame numbers of the grant status table involves translating the resulting array of frame numbers. Since the space used to carry out the translation is limited, the translation layer tells the core function the capacity of the array within translation space. Unfortunately, the core function then only enforces array bounds to be below 8 times the specified value, and would write past the available space if enough frame numbers needed storing.
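The bounds-check flaw can be pictured roughly as follows; the function names and sizes are hypothetical and do not reflect the actual grant-table translation code:

```c
#include <stdio.h>

/* The translation layer reports the capacity of the translation space; the
 * core function must not accept more frame numbers than fit in it. */
static int bounds_ok_flawed(unsigned int capacity, unsigned int nr_frames)
{
    return nr_frames <= 8 * capacity;   /* the check described: 8x too lax */
}

static int bounds_ok_fixed(unsigned int capacity, unsigned int nr_frames)
{
    return nr_frames <= capacity;
}

int main(void)
{
    unsigned int capacity = 4, nr_frames = 20;   /* hypothetical sizes */

    /* The flawed check accepts 20 frame numbers for a 4-slot buffer, so the
     * subsequent copy would write 16 slots past the available space. */
    printf("flawed check accepts: %d\n", bounds_ok_flawed(capacity, nr_frames));
    printf("fixed check accepts:  %d\n", bounds_ok_fixed(capacity, nr_frames));
    return 0;
}
```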
Long running loops in grant table handling: In order to properly monitor resource use, Xen maintains information on the grant mappings a domain may create to map grants offered by other domains. In the process of carrying out certain actions, Xen would iterate over all such entries, including ones which are no longer in use and some which may have been created but never used. If the number of entries for a given domain is large enough, iterating over the entire table may tie up a CPU for too long, starving other domains or causing issues in the hypervisor itself. Note that a domain may map its own grants, i.e. there is no need for multiple domains to be involved here. A pair of "cooperating" guests may, however, cause the effects to be more severe.
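The usual mitigation for loops of this kind is a periodic preemption check with a resumable cursor; the sketch below uses hypothetical helpers (not Xen's actual continuation machinery) to illustrate the pattern:

```c
#include <stdbool.h>
#include <stdio.h>

#define NR_MAPTRACK_ENTRIES (1u << 20)   /* hypothetical table size */

struct maptrack_entry { bool in_use; };
static struct maptrack_entry maptrack[NR_MAPTRACK_ENTRIES];

/* Placeholder for "is there pending work on this CPU?"; a real hypervisor
 * would check softirqs or elapsed time here. */
static bool preemption_needed(void)
{
    return false;
}

/* Walk the whole table, but periodically check whether the operation should
 * be paused and resumed later instead of tying up the CPU for the full
 * (possibly very large) table. */
static int teardown_mappings(unsigned int *progress)
{
    for (unsigned int i = *progress; i < NR_MAPTRACK_ENTRIES; i++) {
        if (maptrack[i].in_use)
            maptrack[i].in_use = false;   /* release the mapping */

        if ((i & 0xFFF) == 0 && preemption_needed()) {
            *progress = i;                /* remember where to resume */
            return -1;                    /* caller reschedules the operation */
        }
    }
    return 0;
}

int main(void)
{
    unsigned int progress = 0;
    while (teardown_mappings(&progress) != 0)
        ;                                 /* re-invoke until the table is done */
    puts("done");
    return 0;
}
```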
The compat_iret function in Xen 3.1 through 4.5 iterates the wrong way through a loop, which allows local 32-bit PV guest administrators to cause a denial of service (large loop and system hang) via a hypercall_iret call with EFLAGS.VM set.
An issue was discovered in Xen through 4.12.x allowing Arm domU attackers to cause a denial of service (infinite loop) involving a compare-and-exchange operation.
An issue was discovered in drivers/xen/balloon.c in the Linux kernel before 5.2.3, as used in Xen through 4.12.x, allowing guest OS users to cause a denial of service because of unrestricted resource consumption during the mapping of guest memory, aka CID-6ef36ab967c7.
An issue was discovered in Xen through 4.11.x allowing x86 PV guest OS users to cause a denial of service by leveraging a long-running operation that exists to support restartability of PTE updates.
An issue was discovered in Xen through 4.12.x allowing Arm domU attackers to cause a denial of service (infinite loop) involving a LoadExcl or StoreExcl operation.
An issue was discovered in Xen 4.8.x through 4.11.x allowing x86 PV guest OS users to cause a denial of service because mishandling of failed IOMMU operations causes a bug check during the cleanup of a crashed guest.
An issue was discovered in Xen through 4.11.x allowing x86 PV guest OS users to cause a denial of service because of an incompatibility between Process Context Identifiers (PCID) and shadow-pagetable switching.
The evtchn_fifo_set_pending function in Xen 4.4.x allows local guest users to cause a denial of service (host crash) via vectors involving an uninitialized FIFO-based event channel control block when (1) binding or (2) moving an event to a different VCPU.
The x86 segment base write emulation functionality in Xen 4.4.x through 4.7.x allows local x86 PV guest OS administrators to cause a denial of service (host crash) by leveraging lack of canonical address checks.
Xen through 4.7.x allows local ARM guest OS users to cause a denial of service (host crash) via vectors involving a (1) data or (2) prefetch abort with the ESR_EL2.EA bit set.
Xen through 4.7.x allows local ARM guest OS users to cause a denial of service (host crash) via vectors involving an asynchronous abort while at EL2.
An issue was discovered in Xen 4.8.x through 4.10.x allowing x86 PVH guest OS users to cause a denial of service (NULL pointer dereference and hypervisor crash) by leveraging the mishandling of configurations that lack a Local APIC.
An issue was discovered in Xen through 4.10.x allowing x86 PV guest OS users to cause a denial of service (host OS CPU hang) via non-preemptable L3/L4 pagetable freeing.
In Xen 4.10, new infrastructure was introduced as part of an overhaul to how MSR emulation happens for guests. Unfortunately, one tracking structure isn't freed when a vcpu is destroyed. This allows guest OS administrators to cause a denial of service (host OS memory consumption) by rebooting many times.
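The leak described, a per-vCPU tracking structure allocated at initialization but never released at destruction, can be sketched with hypothetical names as follows:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-vCPU MSR tracking state. */
struct vcpu_msrs { unsigned long long values[8]; };

struct vcpu { struct vcpu_msrs *msrs; };

static int vcpu_init(struct vcpu *v)
{
    v->msrs = calloc(1, sizeof(*v->msrs));
    return v->msrs ? 0 : -1;
}

static void vcpu_destroy(struct vcpu *v)
{
    /* The flaw amounts to this free being missing: every guest reboot then
     * leaks one allocation per vCPU until the host runs out of memory. */
    free(v->msrs);
    v->msrs = NULL;
}

int main(void)
{
    struct vcpu v;

    for (int reboot = 0; reboot < 3; reboot++) {
        if (vcpu_init(&v) != 0)
            return 1;
        vcpu_destroy(&v);   /* pairing destroy with init avoids the leak */
    }
    puts("no leak with paired init/destroy");
    return 0;
}
```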
An issue was discovered in Xen through 4.11.x on Intel x86 platforms allowing guest OS users to cause a denial of service (host OS hang) because Xen does not work around Intel's mishandling of certain HLE transactions associated with the XACQUIRE instruction prefix.
An issue was discovered in Xen 4.14.x. When moving IRQs between CPUs to distribute the load of IRQ handling, IRQ vectors are dynamically allocated and de-allocated on the relevant CPUs. De-allocation has to happen when certain constraints are met. If these conditions are not met when first checked, the checking CPU may send an interrupt to itself, in the expectation that this IRQ will be delivered only after the condition preventing the cleanup has cleared. For two specific IRQ vectors, this expectation was violated, resulting in a continuous stream of self-interrupts, which renders the CPU effectively unusable. A domain with a passed through PCI device can cause lockup of a physical CPU, resulting in a Denial of Service (DoS) to the entire host. Only x86 systems are vulnerable. Arm systems are not vulnerable. Only guests with physical PCI devices passed through to them can exploit the vulnerability.
An issue was discovered in Xen 4.6 through 4.14.x. When acting upon a guest XS_RESET_WATCHES request, not all tracking information is freed. A guest can cause unbounded memory usage in oxenstored. This can lead to a system-wide DoS. Only systems using the Ocaml Xenstored implementation are vulnerable. Systems using the C Xenstored implementation are not vulnerable.
An issue was discovered in Xen through 4.14.x. When a Xenstore watch fires, the xenstore client that registered the watch will receive a Xenstore message containing the path of the modified Xenstore entry that triggered the watch, and the tag that was specified when registering the watch. Any communication with xenstored is done via Xenstore messages, consisting of a message header and the payload. The payload length is limited to 4096 bytes. Any request to xenstored resulting in a response with a payload longer than 4096 bytes will result in an error. When registering a watch, the payload length limit applies to the combined length of the watched path and the specified tag. Because watches for a specific path are also triggered for all nodes below that path, the payload of a watch event message can be longer than the payload needed to register the watch. A malicious guest that registers a watch using a very large tag (i.e., with a registration operation payload length close to the 4096 byte limit) can cause the generation of watch events with a payload length larger than 4096 bytes, by writing to Xenstore entries below the watched path. This will result in an error condition in xenstored. This error can result in a NULL pointer dereference, leading to a crash of xenstored. A malicious guest administrator can cause xenstored to crash, leading to a denial of service. Following a xenstored crash, domains may continue to run, but management operations will be impossible. Only C xenstored is affected, oxenstored is not affected.
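A rough illustration of the length arithmetic, using hypothetical code rather than xenstored's implementation: the 4096-byte limit is checked against the registration payload (watched path plus tag), but a fired event carries the longer triggering path together with the same tag:

```c
#include <stdio.h>
#include <string.h>

#define XENSTORE_PAYLOAD_MAX 4096   /* payload limit from the description */

/* Both the registration payload and a watch event payload consist of a path
 * and the tag, NUL-separated; only the path differs between the two. */
static size_t payload_len(const char *path, const char *tag)
{
    return strlen(path) + 1 + strlen(tag) + 1;
}

int main(void)
{
    char tag[4071];                       /* tag chosen close to the limit */
    memset(tag, 'T', sizeof(tag) - 1);
    tag[sizeof(tag) - 1] = '\0';

    const char *watched = "/local/domain/7/data";   /* hypothetical paths */
    const char *fired   = "/local/domain/7/data/a/very/long/child/node/name";

    size_t reg = payload_len(watched, tag);
    size_t evt = payload_len(fired, tag);

    printf("registration payload: %zu bytes (limit %d)\n", reg, XENSTORE_PAYLOAD_MAX);
    printf("event payload:        %zu bytes (limit %d)\n", evt, XENSTORE_PAYLOAD_MAX);
    /* The registration fits (4092 bytes) while the event exceeds the limit
     * (4120 bytes); the resulting error path is where C xenstored ends up
     * dereferencing a NULL pointer. */
    return 0;
}
```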
An issue was discovered in Xen through 4.11.x. ARM never properly implemented grant table v2, either in the hypervisor or in Linux. Unfortunately, an ARM guest can still request v2 grant tables; they will simply not be properly set up, resulting in subsequent grant-related hypercalls hitting BUG() checks. An unprivileged guest can cause a BUG() check in the hypervisor, resulting in a denial-of-service (crash).
Xen 4.5.x through 4.7.x do not implement Supervisor Mode Access Prevention (SMAP) whitelisting in 32-bit exception and event delivery, which allows local 32-bit PV guest OS kernels to cause a denial of service (hypervisor and VM crash) by triggering a safety check.
An issue was discovered in Xen through 4.14.x. A guest may access xenstore paths via absolute paths containing a full pathname, or via a relative path, which implicitly includes /local/domain/$DOMID for their own domain id. Management tools must access paths in guests' namespaces, necessarily using absolute paths. oxenstored imposes a pathname limit that is applied solely to the relative or absolute path specified by the client. Therefore, a guest can create paths in its own namespace which are too long for management tools to access. Depending on the toolstack in use, a malicious guest administrator might cause some management tools and debugging operations to fail. For example, a guest administrator can cause "xenstore-ls -r" to fail. However, a guest administrator cannot prevent the host administrator from tearing down the domain. All systems using oxenstored are vulnerable. Building and using oxenstored is the default in the upstream Xen distribution, if the Ocaml compiler is available. Systems using C xenstored are not vulnerable.
An issue was discovered in Xen through 4.14.x. The recording of the per-vCPU control block mapping maintained by Xen and the recording of pointers into that control block are done in the wrong order. A consumer that sees the former initialized assumes that the latter are also ready for use. Malicious or buggy guest kernels can mount a Denial of Service (DoS) attack affecting the entire system.
An issue was discovered in Xen through 4.10.x. Certain PV MMU operations may take a long time to process. For that reason Xen explicitly checks for the need to preempt the current vCPU at certain points. A few rarely taken code paths did bypass such checks. By suitably enforcing the conditions through its own page table contents, a malicious guest may cause such bypasses to be used for an unbounded number of iterations. A malicious or buggy PV guest may cause a Denial of Service (DoS) affecting the entire host. Specifically, it may prevent use of a physical CPU for an indeterminate period of time. All Xen versions from 3.4 onwards are vulnerable. Xen versions 3.3 and earlier are vulnerable to an even wider class of attacks, due to them lacking preemption checks altogether in the affected code paths. Only x86 systems are affected. ARM systems are not affected. Only multi-vCPU x86 PV guests can leverage the vulnerability. x86 HVM or PVH guests as well as x86 single-vCPU PV ones cannot leverage the vulnerability.
An issue was discovered in Xen through 4.10.x allowing x86 PV guest OS users to cause a denial of service (out-of-bounds zero write and hypervisor crash) via unexpected INT 80 processing, because of an incorrect fix for CVE-2017-5754.
An issue was discovered in the Linux kernel through 5.9.1, as used with Xen through 4.14.x. Guest OS users can cause a denial of service (host OS hang) via a high rate of events to dom0, aka CID-e99502f76271.
Xen through 4.8.x allows local x86 PV guest OS kernel administrators to cause a denial of service (host hang or crash) by modifying the instruction stream asynchronously while performing certain kernel operations.
The p2m_pod_emergency_sweep function in arch/x86/mm/p2m-pod.c in Xen 3.4.x, 3.5.x, and 3.6.x is not preemptible, which allows local x86 HVM guest administrators to cause a denial of service (CPU consumption and possibly reboot) via crafted memory contents that trigger a "time-consuming linear scan," related to Populate-on-Demand.
The KVM subsystem in the Linux kernel through 4.2.6, and Xen 4.3.x through 4.6.x, allows guest OS users to cause a denial of service (host OS panic or hang) by triggering many #AC (aka Alignment Check) exceptions, related to svm.c and vmx.c.
GNTTABOP_swap_grant_ref in Xen 4.2 through 4.5 does not check the grant table operation version, which allows local guest domains to cause a denial of service (NULL pointer dereference) via a hypercall without a GNTTABOP_setup_table or GNTTABOP_set_version.
Xen 3.3.x through 4.5.x enables logging for PCI MSI-X pass-through error messages, which allows local x86 HVM guests to cause a denial of service (host disk consumption) via certain invalid operations.
Xen 3.3.x through 4.5.x does not properly restrict write access to the host MSI message data field, which allows local x86 HVM guest administrators to cause a denial of service (host interrupt handling confusion) via vectors related to qemu and accesses spanning multiple fields.
The vgic_v2_to_sgi function in arch/arm/vgic-v2.c in Xen 4.5.x, when running on ARM hardware with general interrupt controller (GIC) version 2, allows local guest users to cause a denial of service (host crash) by writing an invalid value to the GICD.SGIR register.
The acceleration support for the "REP MOVS" instruction in Xen 4.4.x, 3.2.x, and earlier lacks proper bounds checking for memory mapped I/O (MMIO) emulated in the hypervisor, which allows local HVM guests to cause a denial of service (host crash) via unspecified vectors.
The HVMOP_set_mem_access HVM control operations in Xen 4.1.x for 32-bit and 4.1.x through 4.4.x for 64-bit allow local guest administrators to cause a denial of service (CPU consumption) by leveraging access to certain service domains for HVM guests and a large input.
An issue was discovered in Xen 4.11.x allowing x86 guest OS users to cause a denial of service (host OS hang) because the p2m lock remains unavailable indefinitely in certain error conditions.
An issue was discovered in Xen through 4.11.x. The DEBUGCTL MSR contains several debugging features, some of which virtualise cleanly, but some do not. In particular, Branch Trace Store is not virtualised by the processor, and software has to be careful to configure it suitably not to lock up the core. As a result, it must only be available to fully trusted guests. Unfortunately, in the case that vPMU is disabled, all value checking was skipped, allowing the guest to choose any MSR_DEBUGCTL setting it likes. A malicious or buggy guest administrator (on Intel x86 HVM or PVH) can lock up the entire host, causing a Denial of Service.
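A schematic of the control-flow flaw, with hypothetical simplified names rather than Xen's actual MSR-handling code: the value check sat inside the vPMU-enabled branch, so disabling vPMU skipped it entirely:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the Branch Trace Store enable bit in a
 * DEBUGCTL-style value check. */
#define DEBUGCTL_BTS (1u << 7)

static bool vpmu_enabled = false;   /* vPMU disabled on this host */

/* Flawed shape: validation happens only on the vPMU path, so with vPMU
 * disabled any guest-supplied value is accepted verbatim. */
static int wrmsr_debugctl_flawed(unsigned int val, unsigned int *hw)
{
    if (vpmu_enabled) {
        if (val & DEBUGCTL_BTS)
            return -1;              /* reject features that don't virtualise */
    }
    *hw = val;                      /* reached unchecked when vPMU is off */
    return 0;
}

/* Fixed shape: validate regardless of whether vPMU is in use. */
static int wrmsr_debugctl_fixed(unsigned int val, unsigned int *hw)
{
    if (val & DEBUGCTL_BTS)
        return -1;
    *hw = val;
    return 0;
}

int main(void)
{
    unsigned int hw = 0;
    printf("flawed: %d\n", wrmsr_debugctl_flawed(DEBUGCTL_BTS, &hw)); /* 0: accepted */
    printf("fixed:  %d\n", wrmsr_debugctl_fixed(DEBUGCTL_BTS, &hw));  /* -1: rejected */
    return 0;
}
```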
HVM soft-reset crashes toolstack: libxl requires all data structures passed across its public interface to be initialized before use and disposed of afterwards by calling a specific set of functions. Many internal data structures also require this initialize / dispose discipline, but not all of them. When the "soft reset" feature was implemented, the libxl__domain_suspend_state structure didn't require any initialization or disposal. At some point later, an initialization function was introduced for the structure, but the "soft reset" path wasn't refactored to call it. When a guest now initiates a "soft reboot", the uninitialized data structure leads to an assert() when later code finds the structure in an unexpected state. The effect of this is to crash the process monitoring the guest. How this affects the system depends on the structure of the toolstack. For xl, this will have no security-relevant effect: every VM has its own independent monitoring process, which contains no state. The domain in question will hang in a crashed state, but can be destroyed by `xl destroy` just like any other non-cooperating domain. For daemon-based toolstacks linked against libxl, such as libvirt, this will crash the toolstack, losing the state of any in-progress operations (localized DoS) and preventing further administrator operations unless the daemon is configured to restart automatically (system-wide DoS). If crashes "leak" resources, then repeated crashes could use up resources, also causing a system-wide DoS.
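The initialize/dispose discipline referred to above can be sketched generically; the names below are hypothetical and are not libxl's API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for a state structure that gained an init function
 * after some call sites had already been written. */
struct suspend_state {
    bool initialized;
    int  phase;
};

static void suspend_state_init(struct suspend_state *s)
{
    s->initialized = true;
    s->phase = 0;
}

static void suspend_state_dispose(struct suspend_state *s)
{
    s->initialized = false;
}

static void do_suspend_work(struct suspend_state *s)
{
    /* Later code asserts on the expected state; an uninitialized structure
     * (as on the un-refactored "soft reset" path) trips this assert and
     * takes down the monitoring process. */
    assert(s->initialized);
    s->phase++;
}

int main(void)
{
    struct suspend_state s = { 0 };

    /* Correct discipline: initialize on every entry path, dispose afterwards. */
    suspend_state_init(&s);
    do_suspend_work(&s);
    suspend_state_dispose(&s);

    puts("ok");
    return 0;
}
```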
An issue was discovered in Xen through 4.11.x. The logic in oxenstored for handling writes depended on the order of evaluation of expressions making up a tuple. As indicated in section 7.7.3 "Operations on data structures" of the OCaml manual, the order of evaluation of subexpressions is not specified. In practice, different implementations behave differently. Thus, oxenstored may not enforce the configured quota-maxentity. This allows a malicious or buggy guest to write as many xenstore entries as it wishes, causing unbounded memory usage in oxenstored. This can lead to a system-wide DoS.
An issue was discovered in Xen through 4.14.x. There is a lack of preemption in evtchn_reset() / evtchn_destroy(). In particular, the FIFO event channel model allows guests to have a large number of event channels active at a time. Closing all of these (when resetting all event channels or when cleaning up after the guest) may take extended periods of time. So far, there was no arrangement for preemption at suitable intervals, allowing a CPU to spend an almost unbounded amount of time in the processing of these operations. Malicious or buggy guest kernels can mount a Denial of Service (DoS) attack affecting the entire system. All Xen versions are vulnerable in principle. Whether versions 4.3 and older are vulnerable depends on underlying hardware characteristics.