When Harmless Words Harm: A New Threat to LLM Safety via Conceptual Triggers
- URL: http://arxiv.org/abs/2511.21718v1
- Date: Wed, 19 Nov 2025 14:34:57 GMT
- Title: When Harmless Words Harm: A New Threat to LLM Safety via Conceptual Triggers
- Authors: Zhaoxin Zhang, Borui Chen, Yiming Hu, Youyang Qu, Tianqing Zhu, Longxiang Gao
- Abstract summary: We introduce MICM, a model-agnostic jailbreak method that targets the aggregate value structure reflected in model responses. We evaluate MICM across five advanced large language models (LLMs), including GPT-4o, Deepseek-R1, and Qwen3-8B.
- Score: 24.094815313911297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research on large language model (LLM) jailbreaks has primarily focused on techniques that bypass safety mechanisms to elicit overtly harmful outputs. However, such efforts often overlook attacks that exploit the model's capacity for abstract generalization, creating a critical blind spot in current alignment strategies. This gap enables adversaries to induce objectionable content by subtly manipulating the implicit social values embedded in model outputs. In this paper, we introduce MICM, a novel, model-agnostic jailbreak method that targets the aggregate value structure reflected in LLM responses. Drawing on conceptual morphology theory, MICM encodes specific configurations of nuanced concepts into a fixed prompt template through a predefined set of phrases. These phrases act as conceptual triggers, steering model outputs toward a specific value stance without triggering conventional safety filters. We evaluate MICM across five advanced LLMs, including GPT-4o, Deepseek-R1, and Qwen3-8B. Experimental results show that MICM consistently outperforms state-of-the-art jailbreak techniques, achieving high success rates with minimal rejection. Our findings reveal a critical vulnerability in commercial LLMs: their safety mechanisms remain susceptible to covert manipulation of underlying value alignment.
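The abstract does not include an implementation, but the mechanism it describes, encoding a configuration of predefined phrases into a fixed prompt template and then measuring success and rejection rates, can be sketched at an abstract level. The sketch below is a minimal illustration under assumptions: the template text, slot names, phrase placeholders, and refusal markers are all hypothetical stand-ins, not the authors' actual artifacts, and the paper's real trigger phrases (derived from conceptual morphology theory) are deliberately not reproduced.

```python
# Minimal sketch of fixed-template prompt assembly as the abstract describes it.
# All template text, slot names, and phrases here are hypothetical placeholders;
# the paper's actual template and phrase set are not public in this listing.

# Hypothetical fixed template with named slots for conceptual triggers.
TEMPLATE = (
    "Consider a scenario framed by the following ideas: {concept_a}, "
    "{concept_b}. Within that framing, respond to: {user_query}"
)

# Hypothetical predefined phrase set; one "configuration" in the paper's terms.
PHRASE_SET = {
    "concept_a": "<predefined phrase 1>",
    "concept_b": "<predefined phrase 2>",
}

def build_prompt(user_query: str) -> str:
    """Encode one configuration of phrases into the fixed template."""
    return TEMPLATE.format(user_query=user_query, **PHRASE_SET)

def rejection_rate(responses: list[str]) -> float:
    """Toy proxy for the paper's rejection metric: the fraction of responses
    containing a surface refusal marker (real evaluations use stronger judges)."""
    refusal_markers = ("i can't", "i cannot", "i'm sorry")
    refused = sum(
        any(m in r.lower() for m in refusal_markers) for r in responses
    )
    return refused / len(responses) if responses else 0.0

if __name__ == "__main__":
    print(build_prompt("a benign test question"))
```

The fixed template with swappable phrase slots mirrors the abstract's description of encoding "specific configurations of nuanced concepts" into a single prompt template, and the rejection-rate helper gestures at the "minimal rejection" measurement reported across the five evaluated LLMs; neither should be read as the authors' actual method.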
Related papers
- A Fragile Guardrail: Diffusion LLM's Safety Blessing and Its Failure Mode [51.43498132808724]
We show that Diffusion large language models (D-LLMs) have intrinsic robustness against jailbreak attacks. We identify a simple yet effective failure mode, termed context nesting, where harmful requests are embedded within structured benign contexts. We show that this simple strategy is sufficient to bypass D-LLMs' safety blessing, achieving state-of-the-art attack success rates.
arXiv Detail & Related papers (2026-01-30T23:08:14Z)
- Odysseus: Jailbreaking Commercial Multimodal LLM-integrated Systems via Dual Steganography [77.44136793431893]
We propose a novel jailbreak paradigm that introduces dual steganography to covertly embed malicious queries into benign-looking images. Our Odysseus successfully jailbreaks several pioneering and realistic MLLM-integrated systems, achieving up to 99% attack success rate.
arXiv Detail & Related papers (2025-12-23T08:53:36Z)
- Harmful Prompt Laundering: Jailbreaking LLMs with Abductive Styles and Symbolic Encoding [19.92751862281067]
Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse tasks, but their potential misuse for harmful purposes remains a significant concern. We propose Harmful Prompt Laundering (HaPLa), a novel and broadly applicable jailbreaking technique that requires only black-box access to target models.
arXiv Detail & Related papers (2025-09-13T18:07:56Z)
- PRISM: Programmatic Reasoning with Image Sequence Manipulation for LVLM Jailbreaking [3.718606661938873]
We propose a novel and effective jailbreak framework inspired by Return-Oriented Programming (ROP) techniques from software security. Our approach decomposes a harmful instruction into a sequence of individually benign visual gadgets. Our findings reveal a critical and underexplored vulnerability that exploits the compositional reasoning abilities of LVLMs.
arXiv Detail & Related papers (2025-07-29T07:13:56Z)
- Cannot See the Forest for the Trees: Invoking Heuristics and Biases to Elicit Irrational Choices of LLMs [83.11815479874447]
We propose a novel jailbreak attack framework, inspired by cognitive decomposition and biases in human cognition. We employ cognitive decomposition to reduce the complexity of malicious prompts and relevance bias to reorganize prompts. We also introduce a ranking-based harmfulness evaluation metric that surpasses the traditional binary success-or-failure paradigm.
arXiv Detail & Related papers (2025-05-03T05:28:11Z)
- GraphAttack: Exploiting Representational Blindspots in LLM Safety Mechanisms [1.48325651280105]
This paper introduces a novel graph-based approach to generate jailbreak prompts. We represent malicious prompts as nodes in a graph structure with edges denoting different transformations. We demonstrate a particularly effective exploitation vector by instructing LLMs to generate code that realizes the intent.
arXiv Detail & Related papers (2025-04-17T16:09:12Z)
- Sugar-Coated Poison: Benign Generation Unlocks LLM Jailbreaking [14.541887120849687]
Jailbreak attacks based on prompt engineering have become a major safety threat. This study introduces the concept of Defense Threshold Decay (DTD), revealing the potential safety impact caused by LLMs' benign generation. We propose the Sugar-Coated Poison attack paradigm, which uses a "semantic reversal" strategy to craft benign inputs that are opposite in meaning to malicious intent.
arXiv Detail & Related papers (2025-04-08T03:57:09Z)
- MIRAGE: Multimodal Immersive Reasoning and Guided Exploration for Red-Team Jailbreak Attacks [85.3303135160762]
MIRAGE is a novel framework that exploits narrative-driven context and role immersion to circumvent safety mechanisms in Multimodal Large Language Models. It achieves state-of-the-art performance, improving attack success rates by up to 17.5% over the best baselines. We demonstrate that role immersion and structured semantic reconstruction can activate inherent model biases, facilitating the model's spontaneous violation of ethical safeguards.
arXiv Detail & Related papers (2025-03-24T20:38:42Z)
- Jailbreaking as a Reward Misspecification Problem [80.52431374743998]
We propose a novel perspective that attributes this vulnerability to reward misspecification during the alignment process. We introduce a metric ReGap to quantify the extent of reward misspecification and demonstrate its effectiveness. We present ReMiss, a system for automated red teaming that generates adversarial prompts in a reward-misspecified space.
arXiv Detail & Related papers (2024-06-20T15:12:27Z)
- AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting [54.931241667414184]
We propose Adaptive Shield Prompting, which prepends inputs with defense prompts to defend MLLMs against structure-based jailbreak attacks. Our methods can consistently improve MLLMs' robustness against structure-based jailbreak attacks.
arXiv Detail & Related papers (2024-03-14T15:57:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.