A stack buffer overflow vulnerability in the device control daemon (DCD) on Juniper Networks Junos OS allows a low privilege local user to create a Denial of Service (DoS) against the daemon or execute arbitrary code in the system with root privilege. This issue affects Juniper Networks Junos OS: 17.3 versions prior to 17.3R3-S9; 17.4 versions prior to 17.4R2-S12, 17.4R3-S3; 18.1 versions prior to 18.1R3-S11; 18.2 versions prior to 18.2R3-S6; 18.2X75 versions prior to 18.2X75-D53, 18.2X75-D65; 18.3 versions prior to 18.3R2-S4, 18.3R3-S4; 18.4 versions prior to 18.4R2-S5, 18.4R3-S5; 19.1 versions prior to 19.1R3-S3; 19.2 versions prior to 19.2R1-S5, 19.2R3; 19.3 versions prior to 19.3R2-S4, 19.3R3; 19.4 versions prior to 19.4R1-S3, 19.4R2-S2, 19.4R3; 20.1 versions prior to 20.1R1-S4, 20.1R2; 20.2 versions prior to 20.2R1-S1, 20.2R2. Versions of Junos OS prior to 17.3 are unaffected by this vulnerability.
A device using Juniper Networks' Dynamic Host Configuration Protocol Daemon (JDHCPD) process on Junos OS or Junos OS Evolved, when configured in relay mode, is vulnerable to an attacker sending crafted IPv4 packets, who may then arbitrarily execute commands as root on the target device. This issue affects IPv4 JDHCPD services. This issue affects: Juniper Networks Junos OS: 15.1 versions prior to 15.1R7-S6; 15.1X49 versions prior to 15.1X49-D200; 15.1X53 versions prior to 15.1X53-D592; 16.1 versions prior to 16.1R7-S6; 16.2 versions prior to 16.2R2-S11; 17.1 versions prior to 17.1R2-S11, 17.1R3-S1; 17.2 versions prior to 17.2R2-S8, 17.2R3-S3; 17.3 versions prior to 17.3R3-S6; 17.4 versions prior to 17.4R2-S7, 17.4R3; 18.1 versions prior to 18.1R3-S8; 18.2 versions prior to 18.2R3-S2; 18.2X75 versions prior to 18.2X75-D60; 18.3 versions prior to 18.3R1-S6, 18.3R2-S2, 18.3R3; 18.4 versions prior to 18.4R1-S5, 18.4R2-S3, 18.4R3; 19.1 versions prior to 19.1R1-S3, 19.1R2; 19.2 versions prior to 19.2R1-S3, 19.2R2*; and all versions prior to 19.3R1 on Junos OS Evolved. This issue does not affect versions of Junos OS prior to 15.1, or JDHCPD operating as a local server in non-relay mode.
A device using Juniper Networks' Dynamic Host Configuration Protocol Daemon (JDHCPD) process on Junos OS or Junos OS Evolved, when configured in relay mode, is vulnerable to an attacker sending crafted IPv6 packets, who may then arbitrarily execute commands as root on the target device. This issue affects IPv6 JDHCPD services. This issue affects: Juniper Networks Junos OS: 15.1 versions prior to 15.1R7-S6; 15.1X49 versions prior to 15.1X49-D200; 15.1X53 versions prior to 15.1X53-D592; 16.1 versions prior to 16.1R7-S6; 16.2 versions prior to 16.2R2-S11; 17.1 versions prior to 17.1R2-S11, 17.1R3-S1; 17.2 versions prior to 17.2R2-S8, 17.2R3-S3; 17.3 versions prior to 17.3R3-S6; 17.4 versions prior to 17.4R2-S7, 17.4R3; 18.1 versions prior to 18.1R3-S8; 18.2 versions prior to 18.2R3-S2; 18.2X75 versions prior to 18.2X75-D60; 18.3 versions prior to 18.3R1-S6, 18.3R2-S2, 18.3R3; 18.4 versions prior to 18.4R1-S5, 18.4R2-S3, 18.4R3; 19.1 versions prior to 19.1R1-S3, 19.1R2; 19.2 versions prior to 19.2R1-S3, 19.2R2*; and all versions prior to 19.3R1 on Junos OS Evolved. This issue does not affect versions of Junos OS prior to 15.1, or JDHCPD operating as a local server in non-relay mode.
An Out-of-Bounds Write vulnerability in the Routing Protocol Daemon (rpd) of Juniper Networks Junos OS and Junos OS Evolved allows an unauthenticated, network-based attacker to cause a Denial of Service (DoS). On all Junos OS and Junos OS Evolved devices an rpd crash and restart can occur while processing BGP route updates received over an established BGP session. This specific issue is observed for BGP routes learned via a peer which is configured with a BGP import policy that has hundreds of terms matching IPv4 and/or IPv6 prefixes. This issue affects Juniper Networks Junos OS: * All versions prior to 20.4R3-S8; * 21.1 version 21.1R1 and later versions; * 21.2 versions prior to 21.2R3-S2; * 21.3 versions prior to 21.3R3-S5; * 21.4 versions prior to 21.4R2-S1, 21.4R3-S5. This issue affects Juniper Networks Junos OS Evolved: * All versions prior to 20.4R3-S8-EVO; * 21.1-EVO version 21.1R1-EVO and later versions; * 21.2-EVO versions prior to 21.2R3-S2-EVO; * 21.3-EVO version 21.3R1-EVO and later versions; * 21.4-EVO versions prior to 21.4R2-S1-EVO, 21.4R3-S5-EVO.
Heap-based buffer overflow in the PCNET controller in QEMU allows remote attackers to execute arbitrary code by sending a packet with TXSTATUS_STARTPACKET set and then a crafted packet with TXSTATUS_DEVICEOWNS set.
An issue was discovered in libslax through v0.22.1. slaxLexer() in slaxlexer.c has a stack-based buffer overflow.
A buffer size validation vulnerability in the overlayd service of Juniper Networks Junos OS may allow an unauthenticated remote attacker to send specially crafted packets to the device, triggering a partial Denial of Service (DoS) condition, or leading to remote code execution (RCE). Continued receipt and processing of these packets will sustain the partial DoS. The overlayd daemon handles Overlay OAM packets, such as ping and traceroute, sent to the overlay. The service runs as root by default and listens for UDP connections on port 4789. This issue results from improper buffer size validation, which can lead to a buffer overflow. Unauthenticated attackers can send specially crafted packets to trigger this vulnerability, resulting in possible remote code execution. overlayd runs by default in MX Series, ACX Series, and QFX Series platforms. The SRX Series does not support VXLAN and is therefore not vulnerable to this issue. Other platforms are also vulnerable if a Virtual Extensible LAN (VXLAN) overlay network is configured. This issue affects Juniper Networks Junos OS: 15.1 versions prior to 15.1R7-S9; 17.3 versions prior to 17.3R3-S11; 17.4 versions prior to 17.4R2-S13, 17.4R3-S4; 18.1 versions prior to 18.1R3-S12; 18.2 versions prior to 18.2R2-S8, 18.2R3-S7; 18.3 versions prior to 18.3R3-S4; 18.4 versions prior to 18.4R1-S8, 18.4R2-S7, 18.4R3-S7; 19.1 versions prior to 19.1R2-S2, 19.1R3-S4; 19.2 versions prior to 19.2R1-S6, 19.2R3-S2; 19.3 versions prior to 19.3R3-S1; 19.4 versions prior to 19.4R2-S4, 19.4R3-S1; 20.1 versions prior to 20.1R2-S1, 20.1R3; 20.2 versions prior to 20.2R2, 20.2R2-S1, 20.2R3; 20.3 versions prior to 20.3R1-S1.
In the Linux kernel, the following vulnerability has been resolved: vmxnet3: Fix packet corruption in vmxnet3_xdp_xmit_frame Andrew and Nikolay reported connectivity issues with Cilium's service load-balancing in case of vmxnet3. If a BPF program for native XDP adds an encapsulation header such as IPIP and transmits the packet out the same interface, then in case of vmxnet3 a corrupted packet is being sent and subsequently dropped on the path. vmxnet3_xdp_xmit_frame() which is called e.g. via vmxnet3_run_xdp() through vmxnet3_xdp_xmit_back() calculates an incorrect DMA address: page = virt_to_page(xdpf->data); tbi->dma_addr = page_pool_get_dma_addr(page) + VMXNET3_XDP_HEADROOM; dma_sync_single_for_device(&adapter->pdev->dev, tbi->dma_addr, buf_size, DMA_TO_DEVICE); The above assumes a fixed offset (VMXNET3_XDP_HEADROOM), but the XDP BPF program could have moved xdp->data. While the passed buf_size is correct (xdpf->len), the dma_addr needs to have a dynamic offset which can be calculated as xdpf->data - (void *)xdpf, that is, xdp->data - xdp->data_hard_start.
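The fix described in the entry above replaces the fixed VMXNET3_XDP_HEADROOM offset with the actual offset of xdp->data inside the frame. A minimal sketch of that corrected calculation, following the snippet quoted in the entry (not the verbatim upstream patch):

```c
/* Sketch: use the dynamic offset of xdpf->data within the frame instead of
 * assuming a fixed headroom, since the XDP program may have moved xdp->data. */
page = virt_to_page(xdpf->data);
tbi->dma_addr = page_pool_get_dma_addr(page) +
		(xdpf->data - (void *)xdpf);
dma_sync_single_for_device(&adapter->pdev->dev, tbi->dma_addr,
			   buf_size, DMA_TO_DEVICE);
```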
An exploitable stack buffer overflow vulnerability exists in the iocheckd service ‘I/O-Check’ functionality of WAGO PFC 200 Firmware version 03.02.02(14). An attacker can send a specially crafted packet to trigger the parsing of a cache file. The destination buffer sp+0x40 is overflowed with the call to sprintf() for any gateway values that are greater than 512-len(‘/etc/config-tools/config_default_gateway number=0 state=enabled value=’) in length. A gateway value of length 0x7e2 will cause the service to crash.
libjxl b02d6b9, as used in libvips 8.11 through 8.11.2 and other products, has an out-of-bounds write in jxl::ModularFrameDecoder::DecodeGroup (called from jxl::FrameDecoder::ProcessACGroup and jxl::ThreadPool::RunCallState<jxl::FrameDecoder::ProcessSections).
GPAC 2.3-DEV-rev605-gfc9e29089-master contains a SEGV in gpac/MP4Box in gf_media_change_pl /afltest/gpac/src/media_tools/isom_tools.c:3293:42.
Stack-based buffer overflow in the vrend_decode_set_framebuffer_state function in vrend_decode.c in virglrenderer before 926b9b3460a48f6454d8bbe9e44313d86a65447f, as used in Quick Emulator (QEMU), allows local guest users to cause a denial of service (application crash) via the "nr_cbufs" argument.
An exploitable stack buffer overflow vulnerability exists in the iocheckd service ‘I/O-Check’ functionality of WAGO PFC 200 Firmware version 03.02.02(14). The destination buffer sp+0x440 is overflowed with the call to sprintf() for any domainname values that are greater than 1024-len(‘/etc/config-tools/edit_dns_server domain-name=’) in length. A domainname value of length 0x3fa will cause the service to crash.
GPAC 2.3-DEV-rev605-gfc9e29089-master contains a SEGV in gpac/MP4Box in gf_isom_find_od_id_for_track /afltest/gpac/src/isomedia/media_odf.c:522:14.
A vulnerability was found in Performance Co-Pilot (PCP). This flaw allows an attacker to send specially crafted data to the system, which could cause the program to misbehave or crash.
In the Linux kernel, the following vulnerability has been resolved: soc: qcom: cmd-db: Map shared memory as WC, not WB Linux does not write into the cmd-db region. This region of memory is write-protected by the XPU. The XPU may sometimes falsely detect a clean cache eviction as a "write" into the write-protected region, leading to a secure interrupt which causes an endless loop somewhere in the Trust Zone. The only reason it is working right now is because the Qualcomm Hypervisor maps the same region as Non-Cacheable memory in Stage 2 translation tables. The issue manifests if we want to use another hypervisor (like Xen or KVM), which does not know anything about those specific mappings. Changing the mapping of cmd-db memory from MEMREMAP_WB to MEMREMAP_WT/WC removes the dependency on correct mappings in Stage 2 tables. This patch fixes the issue by updating the mapping to MEMREMAP_WC. I tested this on SA8155P with Xen.
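As a rough illustration of the change described above, the shared cmd-db region would be remapped write-combined rather than write-back; the variable names are illustrative, not the exact upstream code:

```c
/* Illustrative sketch: map the cmd-db reserved-memory region as
 * write-combined so clean cache evictions cannot be misdetected as
 * writes by the XPU (previously mapped with MEMREMAP_WB). */
cmd_db_base = memremap(rmem->base, rmem->size, MEMREMAP_WC);
if (!cmd_db_base)
	return -ENOMEM;
```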
Insufficient input validation during parsing of the System Management Mode (SMM) binary may allow a maliciously crafted SMM executable binary to corrupt Dynamic Root of Trust for Measurement (DRTM) user application memory that may result in a potential denial of service.
In the Linux kernel, the following vulnerability has been resolved: erofs: fix out-of-bound access when z_erofs_gbuf_growsize() partially fails If z_erofs_gbuf_growsize() partially fails on a global buffer due to memory allocation failure or fault injection (as reported by syzbot [1]), new pages need to be freed by comparing to the existing pages to avoid memory leaks. However, the old gbuf->pages[] array may not be large enough, which can lead to null-ptr-deref or out-of-bound access. Fix this by checking against gbuf->nrpages in advance. [1] https://lore.kernel.org/r/000000000000f7b96e062018c6e3@google.com
In the Linux kernel, the following vulnerability has been resolved: bpf: Fix a kernel verifier crash in stacksafe() Daniel Hodges reported a kernel verifier crash when playing with sched-ext. Further investigation shows that the crash is due to invalid memory access in stacksafe(). More specifically, it is the following code: if (exact != NOT_EXACT && old->stack[spi].slot_type[i % BPF_REG_SIZE] != cur->stack[spi].slot_type[i % BPF_REG_SIZE]) return false; The 'i' iterates old->allocated_stack. If cur->allocated_stack < old->allocated_stack the out-of-bound access will happen. To fix the issue add 'i >= cur->allocated_stack' check such that if the condition is true, stacksafe() should fail. Otherwise, cur->stack[spi].slot_type[i % BPF_REG_SIZE] memory access is legal.
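A sketch of the hardened comparison described above, based on the snippet quoted in the entry (illustrative, not necessarily the exact upstream diff):

```c
/* Fail stacksafe() when old's stack slot has no counterpart in cur's
 * (smaller) allocated stack, instead of reading past the allocation. */
if (exact != NOT_EXACT &&
    (i >= cur->allocated_stack ||
     old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
     cur->stack[spi].slot_type[i % BPF_REG_SIZE]))
	return false;
```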
In the Linux kernel, the following vulnerability has been resolved: mptcp: fix data stream corruption Maxim reported several issues when forcing a TCP transparent proxy to use the MPTCP protocol for the inbound connections. He also provided a clean reproducer. The problem boils down to 'mptcp_frag_can_collapse_to()' assuming that only MPTCP will use the given page_frag. If others - e.g. the plain TCP protocol - allocate page fragments, we can end-up re-using already allocated memory for mptcp_data_frag. Fix the issue ensuring that the to-be-expanded data fragment is located at the current page frag end. v1 -> v2: - added missing fixes tag (Mat)
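A sketch of the additional condition implied by the description above: an existing mptcp_data_frag may only be extended if it ends exactly at the page_frag's current write offset. The helper name is illustrative; the field names follow the kernel structures named in the entry.

```c
/* Illustrative check: only collapse new data into 'df' when the fragment
 * ends exactly where the shared page_frag currently writes, i.e. nothing
 * else (e.g. plain TCP) allocated from the page_frag in between. */
static bool frag_can_collapse(const struct page_frag *pfrag,
			      const struct mptcp_data_frag *df)
{
	return df && pfrag->page == df->page &&
	       pfrag->offset == df->offset + df->data_len;
}
```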
In the Linux kernel, the following vulnerability has been resolved: fix bitmap corruption on close_range() with CLOSE_RANGE_UNSHARE copy_fd_bitmaps(new, old, count) is expected to copy the first count/BITS_PER_LONG bits from old->full_fds_bits[] and fill the rest with zeroes. What it actually does is copy enough words (BITS_TO_LONGS(count/BITS_PER_LONG)), then memset the rest. That works fine, *if* all bits past the cutoff point are clear. Otherwise we are risking garbage from the last word we'd copied. For most of the callers that is true - expand_fdtable() has count equal to old->max_fds, so there are no open descriptors past count, let alone fully occupied words in ->open_fds[], which is what bits in ->full_fds_bits[] correspond to. The other caller (dup_fd()) passes sane_fdtable_size(old_fdt, max_fds), which is the smallest multiple of BITS_PER_LONG that covers all opened descriptors below max_fds. In the common case (copying on fork()) max_fds is ~0U, so all opened descriptors will be below it and we are fine, for the same reasons that the call in expand_fdtable() is safe. Unfortunately, there is a case where max_fds is less than that and where we might, indeed, end up with junk in ->full_fds_bits[] - close_range(from, to, CLOSE_RANGE_UNSHARE) with * descriptor table being currently shared * 'to' being above the current capacity of descriptor table * 'from' being just under some chunk of opened descriptors. In that case we end up with observably wrong behaviour - e.g. spawn a child with CLONE_FILES, get all descriptors in range 0..127 open, then close_range(64, ~0U, CLOSE_RANGE_UNSHARE) and watch dup(0) ending up with descriptor #128, despite #64 being observably not open. The minimally invasive fix would be to deal with that in dup_fd(). If this proves to add measurable overhead, we can go that way, but let's try to fix copy_fd_bitmaps() first. * new helper: bitmap_copy_and_expand(to, from, bits_to_copy, size). * make copy_fd_bitmaps() take the bitmap size in words, rather than bits; its 'count' argument is always a multiple of BITS_PER_LONG, so we are not losing any information, and that way we can use the same helper for all three bitmaps - the compiler will see that count is a multiple of BITS_PER_LONG for the large ones, so it'll generate plain memcpy()+memset(). Reproducer added to tools/testing/selftests/core/close_range_test.c
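A minimal sketch of the semantics of the helper named above, expressed with the generic bitmap API; the name follows the entry, the body is illustrative rather than the upstream implementation:

```c
/* Copy the first bits_to_copy bits of 'from' into 'to' and clear the rest
 * up to 'size' bits, so stale bits in the source's last word cannot leak. */
static inline void bitmap_copy_and_expand(unsigned long *to,
					  const unsigned long *from,
					  unsigned int bits_to_copy,
					  unsigned int size)
{
	bitmap_copy(to, from, bits_to_copy);
	if (size > bits_to_copy)
		bitmap_clear(to, bits_to_copy, size - bits_to_copy);
}
```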
An exploitable stack buffer overflow vulnerability exists in the iocheckd service ‘I/O-Check’ functionality of WAGO PFC 200 Firmware version 03.02.02(14). An attacker can send a specially crafted packet to trigger the parsing of a cache file. The destination buffer sp+0x440 is overflowed with the call to sprintf() for any type values that are greater than 1024-len(‘/etc/config-tools/config_interfaces interface=X1 state=enabled config-type=’) in length. A type value of length 0x3d9 will cause the service to crash.
Eximious Logo Designer 3.82 has a User Mode Write AV starting at ExiVectorRender!StrokeText_Blend+0x00000000000003a7.
In the Linux kernel, the following vulnerability has been resolved: igb: cope with large MAX_SKB_FRAGS Sabrina reports that the igb driver does not cope well with large MAX_SKB_FRAG values: setting MAX_SKB_FRAG to 45 causes payload corruption on TX. An easy reproducer is to run ssh to connect to the machine. With MAX_SKB_FRAGS=17 it works, with MAX_SKB_FRAGS=45 it fails. This has been reported originally in https://bugzilla.redhat.com/show_bug.cgi?id=2265320 The root cause of the issue is that the driver does not take into account properly the (possibly large) shared info size when selecting the ring layout, and will try to fit two packets inside the same 4K page even when the 1st fraglist will trump over the 2nd head. Address the issue by checking if 2K buffers are insufficient.
In the Linux kernel, the following vulnerability has been resolved: mm/vmalloc: fix page mapping if vm_area_alloc_pages() with high order fallback to order 0 The __vmap_pages_range_noflush() assumes its argument pages** contains pages with the same page shift. However, since commit e9c3cda4d86e ("mm, vmalloc: fix high order __GFP_NOFAIL allocations"), if gfp_flags includes __GFP_NOFAIL with high order in vm_area_alloc_pages() and page allocation failed for high order, the pages** may contain two different page shifts (high order and order-0). This could lead __vmap_pages_range_noflush() to perform incorrect mappings, potentially resulting in memory corruption. Users might encounter this as follows (vmap_allow_huge = true, 2M is for PMD_SIZE): kvmalloc(2M, __GFP_NOFAIL|GFP_X) __vmalloc_node_range_noprof(vm_flags=VM_ALLOW_HUGE_VMAP) vm_area_alloc_pages(order=9) ---> order-9 allocation failed and fallback to order-0 vmap_pages_range() vmap_pages_range_noflush() __vmap_pages_range_noflush(page_shift = 21) ----> wrong mapping happens We can remove the fallback code because if a high-order allocation fails, __vmalloc_node_range_noprof() will retry with order-0. Therefore it is unnecessary to fall back to order-0 here, so fix this by removing the fallback code.
GPAC 2.3-DEV-rev605-gfc9e29089-master contains a heap-buffer-overflow in ffdmx_parse_side_data /afltest/gpac/src/filters/ff_dmx.c:202:14 in gpac/MP4Box.
In /SM8250_Q_Master/android/vendor/oppo_charger/oppo/charger_ic/oppo_da9313.c, failure to check the parameter buf in the function proc_work_mode_write causes a vulnerability.
In the Linux kernel, the following vulnerability has been resolved: ext4: fix slab-out-of-bounds in ext4_mb_find_good_group_avg_frag_lists() We can trigger a slab-out-of-bounds with the following commands: mkfs.ext4 -F /dev/$disk 10G mount /dev/$disk /tmp/test echo 2147483647 > /sys/fs/ext4/$disk/mb_group_prealloc echo test > /tmp/test/file && sync ================================================================== BUG: KASAN: slab-out-of-bounds in ext4_mb_find_good_group_avg_frag_lists+0x8a/0x200 [ext4] Read of size 8 at addr ffff888121b9d0f0 by task kworker/u2:0/11 CPU: 0 PID: 11 Comm: kworker/u2:0 Tainted: GL 6.7.0-next-20240118 #521 Call Trace: dump_stack_lvl+0x2c/0x50 kasan_report+0xb6/0xf0 ext4_mb_find_good_group_avg_frag_lists+0x8a/0x200 [ext4] ext4_mb_regular_allocator+0x19e9/0x2370 [ext4] ext4_mb_new_blocks+0x88a/0x1370 [ext4] ext4_ext_map_blocks+0x14f7/0x2390 [ext4] ext4_map_blocks+0x569/0xea0 [ext4] ext4_do_writepages+0x10f6/0x1bc0 [ext4] [...] ================================================================== The flow of issue triggering is as follows: // Set s_mb_group_prealloc to 2147483647 via sysfs ext4_mb_new_blocks ext4_mb_normalize_request ext4_mb_normalize_group_request ac->ac_g_ex.fe_len = EXT4_SB(sb)->s_mb_group_prealloc ext4_mb_regular_allocator ext4_mb_choose_next_group ext4_mb_choose_next_group_best_avail mb_avg_fragment_size_order order = fls(len) - 2 = 29 ext4_mb_find_good_group_avg_frag_lists frag_list = &sbi->s_mb_avg_fragment_size[order] if (list_empty(frag_list)) // Trigger SOOB! At 4k block size, the length of the s_mb_avg_fragment_size list is 14, but an oversized s_mb_group_prealloc is set, causing slab-out-of-bounds to be triggered by an attempt to access an element at index 29. Add a new attr_id attr_clusters_in_group with values in the range [0, sbi->s_clusters_per_group] and declare mb_group_prealloc as that type to fix the issue. In addition avoid returning an order from mb_avg_fragment_size_order() greater than MB_NUM_ORDERS(sb) and reduce some useless loops.
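A sketch of the second part of the fix described above, clamping the computed order; the first part, constraining mb_group_prealloc via the new attr_clusters_in_group attribute type, is not shown. Illustrative only, not the upstream diff:

```c
/* mb_avg_fragment_size_order(): never return an index outside the
 * s_mb_avg_fragment_size[] list (illustrative clamp). */
order = fls(len) - 2;
if (order < 0)
	order = 0;
if (order >= MB_NUM_ORDERS(sb))
	order = MB_NUM_ORDERS(sb) - 1;
```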
An out-of-bounds write issue was addressed with improved input validation. This issue is fixed in macOS Sonoma 14.6. An app may be able to cause a coprocessor crash.
Buffer Overflow vulnerability in Cesanta mJS 1.26 allows remote attackers to cause a denial of service via a crafted .js file passed to mjs_set_errorf.
GPAC 2.3-DEV-rev605-gfc9e29089-master contains a heap-buffer-overflow in gf_isom_use_compact_size gpac/src/isomedia/isom_write.c:3403:3 in gpac/MP4Box.
Out-of-bounds array write in Xpdf 4.05 and earlier, triggered by long Unicode sequence in ActualText.
An issue was discovered in Xen through 4.14.x. Out-of-bounds event channels are available to 32-bit x86 domains. The so-called 2-level event channel model imposes different limits on the number of usable event channels for 32-bit x86 domains vs 64-bit or Arm (either bitness) ones. 32-bit x86 domains can use only 1023 channels, due to limited space in their shared (between guest and Xen) information structure, whereas all other domains can use up to 4095 in this model. The recording of the respective limit during domain initialization, however, occurs at a time when domains are still deemed to be 64-bit ones, prior to actually honoring the respective domain properties. At the point where domains are recognized as 32-bit ones, the limit is not updated accordingly. Due to this misbehavior in Xen, 32-bit domains (including Domain 0) servicing other domains may observe event channel allocations succeed when they should really fail. Subsequent use of such event channels would then possibly lead to corruption of other parts of the shared info structure. An unprivileged guest may cause another domain, in particular Domain 0, to misbehave. This may lead to a Denial of Service (DoS) for the entire system. All Xen versions from 4.4 onwards are vulnerable. Xen versions 4.3 and earlier are not vulnerable. Only x86 32-bit domains servicing other domains are vulnerable. Arm systems, as well as x86 64-bit domains, are not vulnerable.
TensorFlow is an end-to-end open source platform for machine learning. An attacker can cause a heap buffer overflow by passing crafted inputs to `tf.raw_ops.StringNGrams`. This is because the implementation (https://github.com/tensorflow/tensorflow/blob/1cdd4da14282210cc759e468d9781741ac7d01bf/tensorflow/core/kernels/string_ngrams_op.cc#L171-L185) fails to consider corner cases where input would be split in such a way that the generated tokens should only contain padding elements. If input is such that `num_tokens` is 0, then, for `data_start_index=0` (when left padding is present), the marked line would result in reading `data[-1]`. The fix will be included in TensorFlow 2.5.0. We will also cherry-pick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still within the supported range.
In Arm Trusted Firmware M through 1.2, the NS world may trigger a system halt, an overwrite of secure data, or the printing out of secure data when calling secure functions under the NSPE handler mode.
Out-of-bounds array write in Xpdf 4.05 and earlier, triggered by negative object number in indirect reference in the input PDF file.
AMD System Management Unit (SMU) may experience a heap-based overflow which may result in a loss of resources.
In the Linux kernel, the following vulnerability has been resolved: xhci: handle isoc Babble and Buffer Overrun events properly xHCI 4.9 explicitly forbids assuming that the xHC has released its ownership of a multi-TRB TD when it reports an error on one of the early TRBs. Yet the driver makes such an assumption and releases the TD, allowing the remaining TRBs to be freed or overwritten by new TDs. The xHC should also report completion of the final TRB due to its IOC flag being set by us, regardless of prior errors. This event cannot be recognized if the TD has already been freed earlier, resulting in a "Transfer event TRB DMA ptr not part of current TD" error message. Fix this by reusing the logic for processing isoc Transaction Errors. This also handles hosts which fail to report the final completion. Also fix transfer length reporting on Babble errors: they may be caused by device malfunction, so there is no guarantee that the buffer has been filled.
In the Linux kernel, the following vulnerability has been resolved: smb: Fix regression in writes when non-standard maximum write size negotiated The conversion to netfs in the 6.3 kernel caused a regression when maximum write size is set by the server to an unexpected value which is not a multiple of 4096 (similarly if the user overrides the maximum write size by setting mount parm "wsize", but sets it to a value that is not a multiple of 4096). When negotiated write size is not a multiple of 4096 the netfs code can skip the end of the final page when doing large sequential writes, causing data corruption. This section of code is being rewritten/removed due to a large netfs change, but until that point (ie for the 6.3 kernel until now) we can not support non-standard maximum write sizes. Add a warning if a user specifies a wsize on mount that is not a multiple of 4096 (and round down), also add a change where we round down the maximum write size if the server negotiates a value that is not a multiple of 4096 (we also have to check to make sure that we do not round it down to zero).
In the Linux kernel, the following vulnerability has been resolved: libbpf: Use OPTS_SET() macro in bpf_xdp_query() When the feature_flags and xdp_zc_max_segs fields were added to the libbpf bpf_xdp_query_opts, the code writing them did not use the OPTS_SET() macro. This causes libbpf to write to those fields unconditionally, which means that programs compiled against an older version of libbpf (with a smaller size of the bpf_xdp_query_opts struct) will have their stack corrupted by libbpf writing out of bounds. The patch adding the feature_flags field has an early bail out if the feature_flags field is not part of the opts struct (via the OPTS_HAS macro), but the patch adding xdp_zc_max_segs does not. For consistency, this fix just changes the assignments to both fields to use the OPTS_SET() macro.
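A sketch of the guarded assignment pattern described above: OPTS_SET() only stores into a field when the caller's declared opts size actually covers it, so a program built against an older struct layout is left untouched. The right-hand values are placeholders for the queried results.

```c
/* Write opts fields only if the caller's bpf_xdp_query_opts is large
 * enough to contain them (right-hand values are placeholders). */
OPTS_SET(opts, feature_flags, queried_feature_flags);
OPTS_SET(opts, xdp_zc_max_segs, queried_zc_max_segs);
```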
In the Linux kernel, the following vulnerability has been resolved: Both cadence-quadspi ->runtime_suspend() and ->runtime_resume() implementations start with: struct cqspi_st *cqspi = dev_get_drvdata(dev); struct spi_controller *host = dev_get_drvdata(dev); This obviously cannot be correct, unless "struct cqspi_st" is the first member of "struct spi_controller", or the other way around, but that is not the case. "struct spi_controller" is allocated by devm_spi_alloc_host(), which allocates an extra amount of memory for private data, used to store "struct cqspi_st". The ->probe() function of the cadence-quadspi driver then sets the device drvdata to store the address of the "struct cqspi_st" structure. Therefore: struct cqspi_st *cqspi = dev_get_drvdata(dev); is correct, but: struct spi_controller *host = dev_get_drvdata(dev); is not, as it makes "host" point not to a "struct spi_controller" but to the same "struct cqspi_st" structure as above. This obviously leads to bad things (memory corruption, kernel crashes) directly during ->probe(), as ->probe() enables the device using PM runtime, leading to the ->runtime_resume() hook being called, which in turn calls spi_controller_resume() with the wrong pointer. This has at least been reported [0] to cause a kernel crash, but the exact behavior will depend on the memory contents. [0] https://lore.kernel.org/all/20240226121803.5a7r5wkpbbowcxgx@dhruva/ This issue potentially affects all platforms that are currently using the cadence-quadspi driver.
In the Linux kernel, the following vulnerability has been resolved: dm-crypt, dm-verity: disable tasklets Tasklets have an inherent problem with memory corruption. The function tasklet_action_common calls tasklet_trylock, then it calls the tasklet callback and then it calls tasklet_unlock. If the tasklet callback frees the structure that contains the tasklet or if it calls some code that may free it, tasklet_unlock will write into free memory. The commits 8e14f610159d and d9a02e016aaf try to fix it for dm-crypt, but it is not a sufficient fix and the data corruption can still happen [1]. There is no fix for dm-verity and dm-verity will write into free memory with every tasklet-processed bio. Atomic workqueues will be implemented in kernel 6.9 [2]. They will have a better interface and will not suffer from the memory corruption problem. But we need something that stops the memory corruption now and that can be backported to the stable kernels. So, I'm proposing this commit that disables tasklets in both dm-crypt and dm-verity. This commit doesn't remove the tasklet support, because the tasklet code will be reused when atomic workqueues are implemented. [1] https://lore.kernel.org/all/d390d7ee-f142-44d3-822a-87949e14608b@suse.de/T/ [2] https://lore.kernel.org/lkml/20240130091300.2968534-1-tj@kernel.org/
In the Linux kernel, the following vulnerability has been resolved: arm64: entry: fix ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD Currently the ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD workaround isn't quite right, as it is supposed to be applied after the last explicit memory access, but is immediately followed by an LDR. The ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD workaround is used to handle Cortex-A520 erratum 2966298 and Cortex-A510 erratum 3117295, which are described in: * https://developer.arm.com/documentation/SDEN2444153/0600/?lang=en * https://developer.arm.com/documentation/SDEN1873361/1600/?lang=en In both cases the workaround is described as: | If pagetable isolation is disabled, the context switch logic in the | kernel can be updated to execute the following sequence on affected | cores before exiting to EL0, and after all explicit memory accesses: | | 1. A non-shareable TLBI to any context and/or address, including | unused contexts or addresses, such as a `TLBI VALE1 Xzr`. | | 2. A DSB NSH to guarantee completion of the TLBI. The important part being that the TLBI+DSB must be placed "after all explicit memory accesses". Unfortunately, as-implemented, the TLBI+DSB is immediately followed by an LDR, as we have: | alternative_if ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD | tlbi vale1, xzr | dsb nsh | alternative_else_nop_endif | alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0 | ldr lr, [sp, #S_LR] | add sp, sp, #PT_REGS_SIZE // restore sp | eret | alternative_else_nop_endif | | [ ... KPTI exception return path ... ] This patch fixes this by reworking the logic to place the TLBI+DSB immediately before the ERET, after all explicit memory accesses. The ERET is currently in a separate alternative block, and alternatives cannot be nested. To account for this, the alternative block for ARM64_UNMAP_KERNEL_AT_EL0 is replaced with a single alternative branch to skip the KPTI logic, with the new shape of the logic being: | alternative_insn "b .L_skip_tramp_exit_\@", nop, ARM64_UNMAP_KERNEL_AT_EL0 | [ ... KPTI exception return path ... ] | .L_skip_tramp_exit_\@: | | ldr lr, [sp, #S_LR] | add sp, sp, #PT_REGS_SIZE // restore sp | | alternative_if ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD | tlbi vale1, xzr | dsb nsh | alternative_else_nop_endif | eret The new structure means that the workaround is only applied when KPTI is not in use; this is fine as noted in the documented implications of the erratum: | Pagetable isolation between EL0 and higher level ELs prevents the | issue from occurring. ... and as per the workaround description quoted above, the workaround is only necessary "If pagetable isolation is disabled".
In the Linux kernel, the following vulnerability has been resolved: netfilter: nf_conntrack_h323: Add protection for bmp length out of range UBSAN load reports an exception of BRK#5515 SHIFT_ISSUE:Bitwise shifts that are out of bounds for their data type. vmlinux get_bitmap(b=75) + 712 <net/netfilter/nf_conntrack_h323_asn1.c:0> vmlinux decode_seq(bs=0xFFFFFFD008037000, f=0xFFFFFFD008037018, level=134443100) + 1956 <net/netfilter/nf_conntrack_h323_asn1.c:592> vmlinux decode_choice(base=0xFFFFFFD0080370F0, level=23843636) + 1216 <net/netfilter/nf_conntrack_h323_asn1.c:814> vmlinux decode_seq(f=0xFFFFFFD0080371A8, level=134443500) + 812 <net/netfilter/nf_conntrack_h323_asn1.c:576> vmlinux decode_choice(base=0xFFFFFFD008037280, level=0) + 1216 <net/netfilter/nf_conntrack_h323_asn1.c:814> vmlinux DecodeRasMessage() + 304 <net/netfilter/nf_conntrack_h323_asn1.c:833> vmlinux ras_help() + 684 <net/netfilter/nf_conntrack_h323_main.c:1728> vmlinux nf_confirm() + 188 <net/netfilter/nf_conntrack_proto.c:137> Due to abnormal data in skb->data, the extension bitmap length exceeds 32 when decoding a RAS message, and that length is then used in a shift operation. The value becomes negative after several loops. UBSAN detects the negative shift as undefined behaviour and reports an exception. So we add protection to prevent the length from exceeding 32; otherwise an out-of-range error is returned and decoding stops.
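A sketch of the bound described above: reject extension bitmap lengths greater than 32 in get_bitmap() before they are used as a shift count. The error constant is an assumption; the entry only states that an out-of-range error is returned.

```c
/* Illustrative guard in get_bitmap(): a length above 32 would shift a
 * 32-bit value out of range (undefined behaviour), so bail out instead. */
if (b > 32)
	return H323_ERROR_RANGE;	/* assumed error code */
```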
In the Linux kernel, the following vulnerability has been resolved: igc: avoid returning frame twice in XDP_REDIRECT When a frame can not be transmitted in XDP_REDIRECT (e.g. due to a full queue), it is necessary to free it by calling xdp_return_frame_rx_napi. However, this is the responsibility of the caller of the ndo_xdp_xmit (see for example bq_xmit_all in kernel/bpf/devmap.c) and thus calling it inside igc_xdp_xmit (which is the ndo_xdp_xmit of the igc driver) as well will lead to memory corruption. In fact, bq_xmit_all expects that it can return all frames after the last successfully transmitted one. Therefore, break for the first not transmitted frame, but do not call xdp_return_frame_rx_napi in igc_xdp_xmit. This is equally implemented in other Intel drivers such as the igb. There are two alternatives to this that were rejected: 1. Return num_frames as all the frames would have been transmitted and release them inside igc_xdp_xmit. While it might work technically, it is not what the return value is meant to represent (i.e. the number of SUCCESSFULLY transmitted packets). 2. Rework kernel/bpf/devmap.c and all drivers to support non-consecutively dropped packets. Besides being complex, it likely has a negative performance impact without a significant gain since it is anyway unlikely that the next frame can be transmitted if the previous one was dropped. The memory corruption can be reproduced with the following script which leads to a kernel panic after a few seconds. It basically generates more traffic than a i225 NIC can transmit and pushes it via XDP_REDIRECT from a virtual interface to the physical interface where frames get dropped. #!/bin/bash INTERFACE=enp4s0 INTERFACE_IDX=`cat /sys/class/net/$INTERFACE/ifindex` sudo ip link add dev veth1 type veth peer name veth2 sudo ip link set up $INTERFACE sudo ip link set up veth1 sudo ip link set up veth2 cat << EOF > redirect.bpf.c SEC("prog") int redirect(struct xdp_md *ctx) { return bpf_redirect($INTERFACE_IDX, 0); } char _license[] SEC("license") = "GPL"; EOF clang -O2 -g -Wall -target bpf -c redirect.bpf.c -o redirect.bpf.o sudo ip link set veth2 xdp obj redirect.bpf.o cat << EOF > pass.bpf.c SEC("prog") int pass(struct xdp_md *ctx) { return XDP_PASS; } char _license[] SEC("license") = "GPL"; EOF clang -O2 -g -Wall -target bpf -c pass.bpf.c -o pass.bpf.o sudo ip link set $INTERFACE xdp obj pass.bpf.o cat << EOF > trafgen.cfg { /* Ethernet Header */ 0xe8, 0x6a, 0x64, 0x41, 0xbf, 0x46, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, const16(ETH_P_IP), /* IPv4 Header */ 0b01000101, 0, # IPv4 version, IHL, TOS const16(1028), # IPv4 total length (UDP length + 20 bytes (IP header)) const16(2), # IPv4 ident 0b01000000, 0, # IPv4 flags, fragmentation off 64, # IPv4 TTL 17, # Protocol UDP csumip(14, 33), # IPv4 checksum /* UDP Header */ 10, 0, 1, 1, # IP Src - adapt as needed 10, 0, 1, 2, # IP Dest - adapt as needed const16(6666), # UDP Src Port const16(6666), # UDP Dest Port const16(1008), # UDP length (UDP header 8 bytes + payload length) csumudp(14, 34), # UDP checksum /* Payload */ fill('W', 1000), } EOF sudo trafgen -i trafgen.cfg -b3000MB -o veth1 --cpp
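A sketch of the transmit-loop shape described above (the helper name is illustrative): queue frames until the first failure, then stop and report how many were accepted, leaving the remaining frames for the caller (e.g. bq_xmit_all) to free.

```c
/* ndo_xdp_xmit-style loop: do not free rejected frames here; from the
 * first failure onwards they remain owned by the caller. */
int i, nxmit = 0;

for (i = 0; i < num_frames; i++) {
	int err = igc_xdp_init_tx_descriptor(ring, frames[i]); /* illustrative helper */

	if (err)
		break;
	nxmit++;
}

/* update the ring tail as usual, then report what was actually queued */
return nxmit;
```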
In the Linux kernel, the following vulnerability has been resolved: ksmbd: validate payload size in ipc response If malicious ksmbd-tools are installed, ksmbd.mountd can return an invalid IPC response to the ksmbd kernel server. ksmbd should validate the payload size of IPC responses from ksmbd.mountd to avoid a memory overrun or slab-out-of-bounds. This patch validates the 3 IPC responses that carry a payload.
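A sketch of the kind of check implied by the entry above; all names are illustrative rather than actual ksmbd structures: the kernel side should refuse an IPC response whose declared payload size exceeds what was actually received from ksmbd.mountd.

```c
/* Illustrative validation: a payload size claiming more data than the
 * received message contains must be rejected before parsing. */
if (resp_payload_sz > received_sz - resp_header_sz)
	return -EINVAL;
```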
Out-of-Bounds Write vulnerability in Jungo WinDriver before 12.5.1 allows local attackers to cause a Windows blue screen error and Denial of Service (DoS).
For certain valid JPEG XL images with a size slightly larger than an integer number of groups (256x256 pixels), when processing the groups out of order, the decoder can perform an out-of-bounds copy of image pixels from one image buffer in the heap to another. This copy can occur when processing the right or bottom edges of the image, but only when groups are processed in a certain order. Groups can be processed out of order in multi-threaded decoding environments under heavy thread load, but also with images that contain the groups in an arbitrary order in the file. It is recommended to upgrade past 0.6.0 or patch with https://github.com/libjxl/libjxl/pull/775
A vulnerability was reported in the Open vSwitch sub-component in the Linux Kernel. The flaw occurs when a recursive push operation calls back into the same code block; the OVS module does not validate the recursion depth, so too many stack frames are pushed, causing a stack overflow. As a result, this can lead to a crash or other related issues.
An issue was discovered in the Linux kernel 5.4 and 5.5 through 5.5.6 on the AArch64 architecture. It ignores the top byte in the address passed to the brk system call, potentially moving the memory break downwards when the application expects it to move upwards, aka CID-dcde237319e6. This has been observed to cause heap corruption with the GNU C Library malloc implementation.