ThinkTrap: Denial-of-Service Attacks against Black-box LLM Services via Infinite Thinking
- URL: http://arxiv.org/abs/2512.07086v1
- Date: Mon, 08 Dec 2025 01:41:57 GMT
- Title: ThinkTrap: Denial-of-Service Attacks against Black-box LLM Services via Infinite Thinking
- Authors: Yunzhe Li, Jianan Wang, Hongzi Zhu, James Lin, Shan Chang, Minyi Guo,
- Abstract summary: Large Language Models (LLMs) have become foundational components in a wide range of applications.<n>A new class of threat: denial-of-service (DoS) attacks via unbounded reasoning.<n>We propose ThinkTrap, a novel input-space optimization framework for DoS attacks against LLM services.
- Score: 39.76888137704647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) have become foundational components in a wide range of applications, including natural language understanding and generation, embodied intelligence, and scientific discovery. As their computational requirements continue to grow, these models are increasingly deployed as cloud-based services, allowing users to access powerful LLMs via the Internet. However, this deployment model introduces a new class of threat: denial-of-service (DoS) attacks via unbounded reasoning, where adversaries craft specially designed inputs that cause the model to enter excessively long or infinite generation loops. These attacks can exhaust backend compute resources, degrading or denying service to legitimate users. To mitigate such risks, many LLM providers adopt a closed-source, black-box setting to obscure model internals. In this paper, we propose ThinkTrap, a novel input-space optimization framework for DoS attacks against LLM services even in black-box environments. The core idea of ThinkTrap is to first map discrete tokens into a continuous embedding space, then undertake efficient black-box optimization in a low-dimensional subspace exploiting input sparsity. The goal of this optimization is to identify adversarial prompts that induce extended or non-terminating generation across several state-of-the-art LLMs, achieving DoS with minimal token overhead. We evaluate the proposed attack across multiple commercial, closed-source LLM services. Our results demonstrate that, even far under the restrictive request frequency limits commonly enforced by these platforms, typically capped at ten requests per minute (10 RPM), the attack can degrade service throughput to as low as 1% of its original capacity, and in some cases, induce complete service failure.
Related papers
- PSM: Prompt Sensitivity Minimization via LLM-Guided Black-Box Optimization [0.0]
This paper introduces a novel framework for hardening system prompts through shield appending.<n>We leverage an LLM-as-optimizer to search the space of possible SHIELDs, seeking to minimize a leakage metric derived from a suite of adversarial attacks.<n>We demonstrate empirically that our optimized SHIELDs significantly reduce prompt leakage against a comprehensive set of extraction attacks.
arXiv Detail & Related papers (2025-11-20T10:25:45Z) - Black-box Optimization of LLM Outputs by Asking for Directions [34.0051902705951]
We present a novel approach for attacking black-box large language models (LLMs) by exploiting their ability to express confidence in natural language.<n>We apply our general method to three attack scenarios: adversarial examples for vision-LLMs, jailbreaks and prompt injections.
arXiv Detail & Related papers (2025-10-19T11:13:45Z) - Better Privilege Separation for Agents by Restricting Data Types [6.028799607869068]
We propose type-directed privilege separation for large language models (LLMs)<n>We restrict the ability of an LLM to interact with third-party data by converting untrusted content to a curated set of data types.<n>Unlike raw strings, each data type is limited in scope and content, eliminating the possibility for prompt injections.
arXiv Detail & Related papers (2025-09-30T08:20:50Z) - Crabs: Consuming Resource via Auto-generation for LLM-DoS Attack under Black-box Settings [18.589945121820243]
We introduce Auto-Generation for LLM-DoS (AutoDoS) attack, an automated algorithm designed for black-box LLMs.<n>By transferability-driven iterative optimization, AutoDoS could work across different models in one prompt.<n> Experimental results show that AutoDoS significantly amplifies service response latency by over 250$timesuparrow$, leading to severe resource consumption.
arXiv Detail & Related papers (2024-12-18T14:19:23Z) - Denial-of-Service Poisoning Attacks against Large Language Models [64.77355353440691]
LLMs are vulnerable to denial-of-service (DoS) attacks, where spelling errors or non-semantic prompts trigger endless outputs without generating an [EOS] token.
We propose poisoning-based DoS attacks for LLMs, demonstrating that injecting a single poisoned sample designed for DoS purposes can break the output length limit.
arXiv Detail & Related papers (2024-10-14T17:39:31Z) - ShadowCode: Towards (Automatic) External Prompt Injection Attack against Code LLMs [56.46702494338318]
This paper introduces a new attack paradigm: (automatic) external prompt injection against code-oriented large language models.<n>We propose ShadowCode, a simple yet effective method that automatically generates induced perturbations based on code simulation.<n>We evaluate our method across 13 distinct malicious objectives, generating 31 threat cases spanning three popular programming languages.
arXiv Detail & Related papers (2024-07-12T10:59:32Z) - Chain-of-Scrutiny: Detecting Backdoor Attacks for Large Language Models [20.722572221155946]
Large Language Models (LLMs) generate malicious outputs when inputs contain specific "triggers" set by attackers.<n>Traditional defense strategies are impractical for API-accessible LLMs due to limited model access, high computational costs, and data requirements.<n>We propose Chain-of-Scrutiny (CoS) which leverages LLMs' unique reasoning abilities to mitigate backdoor attacks.
arXiv Detail & Related papers (2024-06-10T00:53:25Z) - RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content [62.685566387625975]
Current mitigation strategies, while effective, are not resilient under adversarial attacks.
This paper introduces Resilient Guardrails for Large Language Models (RigorLLM), a novel framework designed to efficiently moderate harmful and unsafe inputs.
arXiv Detail & Related papers (2024-03-19T07:25:02Z) - Jailbreaking Black Box Large Language Models in Twenty Queries [97.29563503097995]
Large language models (LLMs) are vulnerable to adversarial jailbreaks.
We propose an algorithm that generates semantic jailbreaks with only black-box access to an LLM.
arXiv Detail & Related papers (2023-10-12T15:38:28Z) - Universal and Transferable Adversarial Attacks on Aligned Language
Models [118.41733208825278]
We propose a simple and effective attack method that causes aligned language models to generate objectionable behaviors.
Surprisingly, we find that the adversarial prompts generated by our approach are quite transferable.
arXiv Detail & Related papers (2023-07-27T17:49:12Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated
Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.