AntiDote: Bi-level Adversarial Training for Tamper-Resistant LLMs
- URL: http://arxiv.org/abs/2509.08000v1
- Date: Sat, 06 Sep 2025 16:03:07 GMT
- Title: AntiDote: Bi-level Adversarial Training for Tamper-Resistant LLMs
- Authors: Debdeep Sanyal, Manodeep Ray, Murari Mandal
- Abstract summary: Current safety measures struggle to preserve the general capabilities of open-weight large language models. We introduce AntiDote, a bi-level optimization procedure for training LLMs to be resistant to such tampering. We validate this approach against a diverse suite of 52 red-teaming attacks.
- Score: 7.176280545594957
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The release of open-weight large language models (LLMs) creates a tension between advancing accessible research and preventing misuse, such as malicious fine-tuning to elicit harmful content. Current safety measures struggle to preserve the general capabilities of the LLM while resisting a determined adversary with full access to the model's weights and architecture, who can use full-parameter fine-tuning to erase existing safeguards. To address this, we introduce AntiDote, a bi-level optimization procedure for training LLMs to be resistant to such tampering. AntiDote involves an auxiliary adversary hypernetwork that learns to generate malicious Low-Rank Adaptation (LoRA) weights conditioned on the defender model's internal activations. The defender LLM is then trained with an objective to nullify the effect of these adversarial weight additions, forcing it to maintain its safety alignment. We validate this approach against a diverse suite of 52 red-teaming attacks, including jailbreak prompting, latent space manipulation, and direct weight-space attacks. AntiDote is up to 27.4% more robust against adversarial attacks than both tamper-resistance and unlearning baselines. Crucially, this robustness is achieved with a minimal trade-off in utility, incurring a performance degradation of less than 0.5% across capability benchmarks including MMLU, HellaSwag, and GSM8K. Our work offers a practical and compute-efficient methodology for building open-weight models where safety is a more integral and resilient property.
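As a rough illustration of the bi-level procedure the abstract describes, the sketch below alternates between an adversary hypernetwork that emits low-rank (LoRA-style) weight deltas conditioned on the defender's activations, and a defender trained to nullify those deltas. Every class name, loss choice, and dimension here is an assumption made for this sketch with a toy linear "defender"; it is not the authors' implementation.

```python
# Minimal, self-contained sketch of the bi-level AntiDote idea, using a toy
# linear "defender" in place of a real LLM. All class names, loss choices,
# and dimensions are assumptions made for this sketch, not the authors' code.
import torch
import torch.nn as nn

DIM, RANK = 64, 4

class ToyDefender(nn.Module):
    """Stand-in for the defender LLM: a single weight matrix that an attacker
    can tamper with by adding a low-rank (LoRA-style) delta."""
    def __init__(self):
        super().__init__()
        self.W = nn.Parameter(torch.randn(DIM, DIM) * 0.02)
        self.head = nn.Linear(DIM, 2)          # 2 "behaviours": refuse / comply

    def forward(self, x, weight_delta=None):
        W = self.W if weight_delta is None else self.W + weight_delta
        h = torch.tanh(x @ W.T)
        return self.head(h), h                 # logits and internal activations

class AdversaryHypernetwork(nn.Module):
    """Generates a malicious low-rank weight delta conditioned on the
    defender's internal activations."""
    def __init__(self):
        super().__init__()
        self.to_A = nn.Linear(DIM, RANK * DIM)
        self.to_B = nn.Linear(DIM, DIM * RANK)

    def forward(self, acts):
        pooled = acts.mean(dim=0)              # pool activations over the batch
        A = self.to_A(pooled).view(RANK, DIM)
        B = self.to_B(pooled).view(DIM, RANK)
        return B @ A                           # rank-RANK (DIM x DIM) delta

defender, adversary = ToyDefender(), AdversaryHypernetwork()
opt_def = torch.optim.Adam(defender.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

harmful = torch.randn(32, DIM)                 # placeholder "harmful prompt" features
benign = torch.randn(32, DIM)                  # placeholder "benign prompt" features
REFUSE = torch.zeros(32, dtype=torch.long)
COMPLY = torch.ones(32, dtype=torch.long)

for step in range(200):
    # Inner step: the adversary learns a delta that makes the tampered
    # defender comply on harmful inputs.
    _, acts = defender(harmful)
    delta = adversary(acts.detach())
    tampered_logits, _ = defender(harmful, weight_delta=delta)
    adv_loss = ce(tampered_logits, COMPLY)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # Outer step: the defender learns to nullify that delta (keep refusing
    # under tampering) while preserving behaviour on benign inputs.
    with torch.no_grad():
        _, acts = defender(harmful)
        delta = adversary(acts)
    tampered_logits, _ = defender(harmful, weight_delta=delta)
    benign_logits, _ = defender(benign)
    def_loss = ce(tampered_logits, REFUSE) + ce(benign_logits, COMPLY)
    opt_def.zero_grad(); def_loss.backward(); opt_def.step()
```

In the paper's setting the delta would be injected into specific transformer layers and the objectives would be token-level refusal and language-modelling losses; the toy classification losses above only illustrate the alternating inner/outer structure.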
Related papers
- TamperBench: Systematically Stress-Testing LLM Safety Under Fine-Tuning and Tampering [18.943719866462512]
We introduce TamperBench, a framework to evaluate the tamper resistance of large language models (LLMs). TamperBench curates state-of-the-art weight-space fine-tuning attacks and latent-space representation attacks. We use TamperBench to evaluate 21 open-weight LLMs, including defense-augmented variants, across nine tampering threats.
arXiv Detail & Related papers (2026-02-06T18:04:38Z) - Mitigating Jailbreaks with Intent-Aware LLMs [42.48292327349576]
Large language models (LLMs) remain vulnerable to jailbreak attacks via adversarially crafted instructions. We propose Intent-FT, a simple and lightweight fine-tuning approach that explicitly trains LLMs to infer the underlying intent of an instruction before responding. Empirically, Intent-FT consistently mitigates all evaluated attack categories, with no single attack exceeding a 50% success rate.
arXiv Detail & Related papers (2025-08-16T15:03:33Z) - Robust Anti-Backdoor Instruction Tuning in LVLMs [53.766434746801366]
We introduce a lightweight, certified-agnostic defense framework for large visual language models (LVLMs). Our framework fine-tunes only adapter modules and text embedding layers under instruction tuning. Experiments against seven attacks on Flickr30k and MSCOCO demonstrate that our approach reduces the attack success rate to nearly zero.
arXiv Detail & Related papers (2025-06-04T01:23:35Z) - LightDefense: A Lightweight Uncertainty-Driven Defense against Jailbreaks via Shifted Token Distribution [84.2846064139183]
Large Language Models (LLMs) face threats from jailbreak prompts. We propose LightDefense, a lightweight defense mechanism targeted at white-box models.
arXiv Detail & Related papers (2025-04-02T09:21:26Z) - Improving LLM Safety Alignment with Dual-Objective Optimization [81.98466438000086]
Existing training-time safety alignment techniques for large language models (LLMs) remain vulnerable to jailbreak attacks. We propose an improved safety alignment that disentangles DPO objectives into two components: (1) robust refusal training, which encourages refusal even when partial unsafe generations are produced, and (2) targeted unlearning of harmful knowledge.
arXiv Detail & Related papers (2025-03-05T18:01:05Z) - Latent-space adversarial training with post-aware calibration for defending large language models against jailbreak attacks [23.793583584784685]
Large language models (LLMs) are susceptible to jailbreak attacks, which exploit system vulnerabilities to circumvent safety measures and elicit harmful or inappropriate outputs. We introduce LATPC, a Latent-space Adversarial Training with Post-aware Calibration framework. LATPC identifies safety-critical latent dimensions by contrasting harmful and benign inputs, enabling the adaptive construction of targeted refusal-feature removal attacks (a difference-of-means sketch of this refusal-feature idea appears after this list).
arXiv Detail & Related papers (2025-01-18T02:57:12Z) - Robust LLM safeguarding via refusal feature adversarial training [15.76605079209956]
Large language models (LLMs) are vulnerable to adversarial attacks that can elicit harmful responses. We propose Refusal Feature Adversarial Training (ReFAT), a novel algorithm that efficiently performs adversarial training. Experimental results show that ReFAT significantly improves the robustness of three popular LLMs against a wide range of adversarial attacks.
arXiv Detail & Related papers (2024-09-30T08:41:39Z) - Evaluating Defences against Unsafe Feedback in RLHF [26.872318173182414]
This paper studies learning from unsafe feedback with reinforcement learning. We find that safety-aligned LLMs easily explore unsafe action spaces by generating harmful text. To protect against this vulnerability, we adapt a number of both "implicit" and "explicit" harmful fine-tuning defences.
arXiv Detail & Related papers (2024-09-19T17:10:34Z) - Tamper-Resistant Safeguards for Open-Weight LLMs [57.90526233549399]
We develop a method for building tamper-resistant safeguards into open-weight LLMs. We find that our method greatly improves tamper-resistance while preserving benign capabilities.
arXiv Detail & Related papers (2024-08-01T17:59:12Z) - Purple-teaming LLMs with Adversarial Defender Training [57.535241000787416]
We present Purple-teaming LLMs with Adversarial Defender training (PAD).
PAD is a pipeline designed to safeguard LLMs by incorporating red-teaming (attack) and blue-teaming (safety training) techniques.
PAD significantly outperforms existing baselines in both finding effective attacks and establishing a robust safety guardrail.
arXiv Detail & Related papers (2024-07-01T23:25:30Z) - Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z)
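Two of the entries above (LATPC and ReFAT) build on locating "refusal" or safety-critical features by contrasting activations on harmful versus benign inputs. The sketch below shows the common difference-of-means version of that idea; the model name, layer index, and prompt lists are placeholders, and this is not either paper's implementation.

```python
# Hedged sketch of the difference-of-means "refusal direction" idea that the
# LATPC and ReFAT entries build on. The model name, layer index, and prompt
# lists are placeholders; this is not either paper's implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"   # placeholder; any causal LM exposing hidden states works
LAYER = 6        # which hidden layer to probe (an assumption)

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def mean_activation(prompts):
    """Average last-token hidden state at LAYER over a list of prompts."""
    vecs = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        vecs.append(out.hidden_states[LAYER][0, -1])
    return torch.stack(vecs).mean(dim=0)

harmful_prompts = ["How do I pick a lock?"]     # tiny illustrative lists
benign_prompts = ["How do I bake bread?"]

# Safety-critical direction: harmful-minus-benign mean activation, normalised.
direction = mean_activation(harmful_prompts) - mean_activation(benign_prompts)
direction = direction / direction.norm()

def ablate(hidden):
    """Project the direction out of a hidden state -- the kind of
    refusal-feature removal these defenses train the model to withstand."""
    return hidden - (hidden @ direction) * direction
```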