Do Prompts Guarantee Safety? Mitigating Toxicity from LLM Generations through Subspace Intervention
- URL: http://arxiv.org/abs/2602.06623v1
- Date: Fri, 06 Feb 2026 11:33:17 GMT
- Title: Do Prompts Guarantee Safety? Mitigating Toxicity from LLM Generations through Subspace Intervention
- Authors: Himanshu Singh, Ziwei Xu, A. V. Subramanyam, Mohan Kankanhalli
- Abstract summary: Large Language Models (LLMs) are powerful text generators. LLMs can produce toxic or harmful content even when given seemingly harmless prompts. This presents a serious safety challenge and can cause real-world harm.
- Score: 6.808534332444413
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Large Language Models (LLMs) are powerful text generators, yet they can produce toxic or harmful content even when given seemingly harmless prompts. This presents a serious safety challenge and can cause real-world harm. Toxicity is often subtle and context-dependent, making it difficult to detect at the token level or through coarse sentence-level signals. Moreover, efforts to mitigate toxicity often face a trade-off between safety and the coherence and fluency of the generated text. In this work, we present a targeted subspace intervention strategy for identifying and suppressing hidden toxic patterns in the underlying model representations, while preserving the model's overall ability to generate safe, fluent content. On the RealToxicityPrompts benchmark, our method achieves strong mitigation performance compared to existing baselines, with minimal impact on inference complexity. Across multiple LLMs, our approach reduces the toxicity of state-of-the-art detoxification systems by 8-20%, while maintaining comparable fluency. Through extensive quantitative and qualitative analyses, we show that our approach achieves effective toxicity reduction without impairing generative performance, consistently outperforming existing baselines.
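The abstract does not spell out the intervention mechanics, but the described approach (suppressing toxic patterns in hidden representations while preserving fluency) belongs to the family of activation-space subspace projections. Below is a minimal, illustrative sketch of that general idea, assuming a toxic subspace estimated from contrastive activations; the function names, shapes, and estimation procedure are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def toxic_subspace(toxic_acts: np.ndarray, benign_acts: np.ndarray, rank: int = 1) -> np.ndarray:
    """Estimate an orthonormal basis (hidden_dim, rank) of a 'toxic' subspace.

    toxic_acts / benign_acts: (num_examples, hidden_dim) activations collected
    from toxic and benign continuations. The estimation here (SVD of toxic
    activations centered on the benign mean) is illustrative, not the paper's.
    """
    centered = toxic_acts - benign_acts.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:rank].T                                # orthonormal columns

def project_out(hidden: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Remove the toxic component from hidden states: h <- h - U (U^T h)."""
    return hidden - (hidden @ basis) @ basis.T

# Toy usage with random activations (hidden_dim = 16).
rng = np.random.default_rng(0)
shift = rng.normal(size=(1, 16))
toxic = rng.normal(size=(32, 16)) + 2.0 * shift       # toxic cluster shifted along one direction
benign = rng.normal(size=(32, 16))
U = toxic_subspace(toxic, benign, rank=2)
h = rng.normal(size=(4, 16))                          # hidden states during generation
h_clean = project_out(h, U)
print(np.abs(h_clean @ U).max())                      # ~0: no residual component in the toxic subspace
```

In this kind of scheme, the projection is applied to intermediate activations at inference time, which is consistent with the abstract's claim of minimal impact on inference complexity.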
Related papers
- Unveiling Covert Toxicity in Multimodal Data via Toxicity Association Graphs: A Graph-Based Metric and Interpretable Detection Framework [58.01529356381494]
We propose a novel detection framework based on Toxicity Association Graphs (TAGs). We introduce the first quantifiable metric for hidden toxicity, the Multimodal Toxicity Covertness (MTC). Our approach enables precise identification of covert toxicity while preserving full interpretability of the decision-making process.
arXiv Detail & Related papers (2026-02-03T08:54:25Z) - Cleansing the Artificial Mind: A Self-Reflective Detoxification Framework for Large Language Models [14.566005698357747]
Large Language Models (LLMs) have shown remarkable generative capabilities and emerging self-regulatory mechanisms. We introduce a fully self-reflective detoxification framework that harnesses the inherent capacities of LLMs to detect and correct toxic content. Our findings underscore the potential for truly self-regulated language models, paving the way for more responsible and ethically guided text generation systems.
arXiv Detail & Related papers (2026-01-16T21:01:26Z) - Projecting Out the Malice: A Global Subspace Approach to LLM Detoxification [73.77171973106567]
Large language models (LLMs) exhibit exceptional performance but pose inherent risks of generating toxic content. Traditional methods fail to eliminate underlying toxic regions in parameters, leaving models vulnerable to adversarial attacks. We propose GLOSS, a lightweight method that mitigates toxicity by identifying and eliminating a global toxic subspace from the FFN parameters; a minimal weight-space projection of this kind is sketched after this list.
arXiv Detail & Related papers (2026-01-09T09:34:53Z) - Rethinking Toxicity Evaluation in Large Language Models: A Multi-Label Perspective [104.09817371557476]
Large language models (LLMs) have achieved impressive results across a range of natural language processing tasks. Their potential to generate harmful content has raised serious safety concerns. We introduce three novel multi-label benchmarks for toxicity detection.
arXiv Detail & Related papers (2025-10-16T06:50:33Z) - Detoxifying Large Language Models via Autoregressive Reward Guided Representation Editing [77.75609817898035]
Large Language Models (LLMs) have demonstrated impressive performance across various tasks, yet they remain vulnerable to generating toxic content. We propose Autoregressive Reward Guided Representation Editing (ARGRE). ARGRE explicitly models toxicity transitions within the latent representation space, enabling stable and precise reward-guided editing.
arXiv Detail & Related papers (2025-09-24T03:40:32Z) - GloSS over Toxicity: Understanding and Mitigating Toxicity in LLMs via Global Toxic Subspace [62.68664365246247]
This paper investigates the underlying mechanisms of toxicity generation in Large Language Models (LLMs). We propose GloSS (Global Toxic Subspace Suppression), a lightweight, four-stage method that mitigates toxicity by identifying and removing the global toxic subspace from the FFN parameters.
arXiv Detail & Related papers (2025-05-20T08:29:11Z) - Contrastive Perplexity for Controlled Generation: An Application in Detoxifying Large Language Models [21.341749351654453]
The generation of toxic content by large language models (LLMs) remains a critical challenge for the safe deployment of language technology. We propose a novel framework for implicit knowledge editing and controlled text generation by fine-tuning LLMs with a prototype-based contrastive perplexity objective.
arXiv Detail & Related papers (2024-01-16T16:49:39Z) - Unveiling the Implicit Toxicity in Large Language Models [77.90933074675543]
The open-endedness of large language models (LLMs), combined with their impressive capabilities, may lead to new safety issues when they are exploited for malicious use.
We show that LLMs can generate diverse implicit toxic outputs that are exceptionally difficult to detect via simple zero-shot prompting.
We propose a reinforcement learning (RL) based attacking method to further induce the implicit toxicity in LLMs.
arXiv Detail & Related papers (2023-11-29T06:42:36Z) - Challenges in Detoxifying Language Models [44.48396735574315]
Large language models (LMs) generate remarkably fluent text and can be efficiently adapted across NLP tasks.
Measuring and guaranteeing the quality of generated text in terms of safety is imperative for deploying LMs in the real world.
We evaluate several toxicity mitigation strategies with respect to both automatic and human evaluation.
arXiv Detail & Related papers (2021-09-15T17:27:06Z)
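For the parameter-space variants above (GLOSS and GloSS), which remove a global toxic subspace from FFN weights rather than from activations, the following is an equally minimal sketch under the same caveats: the subspace basis is assumed to be precomputed, and the weight orientation, names, and shapes are illustrative assumptions rather than details taken from those papers.

```python
import numpy as np

def remove_subspace_from_ffn(w_out: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Project FFN down-projection weights away from a toxic subspace.

    w_out: (hidden_dim, ffn_dim) weights whose columns write into the residual
    stream (this orientation is an assumption for the sketch).
    basis: (hidden_dim, k) orthonormal basis of the estimated toxic subspace.
    Returns edited weights whose outputs have no component inside span(basis).
    """
    projector = np.eye(w_out.shape[0]) - basis @ basis.T   # I - U U^T
    return projector @ w_out

# Toy usage: a random "FFN" matrix and a rank-1 toxic direction.
rng = np.random.default_rng(1)
hidden_dim, ffn_dim = 16, 64
w = rng.normal(size=(hidden_dim, ffn_dim))
u = rng.normal(size=(hidden_dim, 1))
u /= np.linalg.norm(u)
w_edited = remove_subspace_from_ffn(w, u)
print(np.abs(u.T @ w_edited).max())   # ~0: the edited FFN can no longer write along u
```

Editing the weights once, offline, is what makes such methods lightweight at inference time, in contrast to activation-level interventions that must run on every generated token.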