Towards Understanding and Improving Refusal in Compressed Models via Mechanistic Interpretability
- URL: http://arxiv.org/abs/2504.04215v1
- Date: Sat, 05 Apr 2025 16:00:44 GMT
- Title: Towards Understanding and Improving Refusal in Compressed Models via Mechanistic Interpretability
- Authors: Vishnu Kabir Chhabra, Mohammad Mahdi Khalili
- Abstract summary: We investigate the safety of compressed models by examining the mechanisms of refusal. We propose a lightweight, computationally efficient method to enhance the safety of compressed models without compromising their performance or utility.
- Score: 7.73472615056109
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid growth of large language models has spurred significant interest in model compression as a means to enhance their accessibility and practicality. While extensive research has explored model compression through the lens of safety, findings suggest that safety-aligned models often lose elements of trustworthiness post-compression. Simultaneously, the field of mechanistic interpretability has gained traction, with notable discoveries, such as the identification of a single direction in the residual stream mediating refusal behaviors across diverse model architectures. In this work, we investigate the safety of compressed models by examining the mechanisms of refusal, adopting a novel interpretability-driven perspective to evaluate model safety. Furthermore, leveraging insights from our interpretability analysis, we propose a lightweight, computationally efficient method to enhance the safety of compressed models without compromising their performance or utility.
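The abstract builds on the finding that refusal behavior is mediated by a single direction in the residual stream. As a rough, hedged illustration of that concept only (not the authors' own method), the sketch below shows the common difference-of-means construction of such a direction and its projection-based ablation; `get_residual_activations`, the prompt sets, and the layer choice are assumptions introduced for illustration.

```python
# Minimal sketch (assumptions, not the paper's implementation) of the
# "single refusal direction" idea referenced in the abstract: estimate a
# direction in the residual stream from contrasting prompt sets, then
# ablate it from activations at inference time.
import numpy as np

def estimate_refusal_direction(harmful_prompts, harmless_prompts,
                               get_residual_activations, layer):
    """Difference-of-means direction between harmful and harmless activations.

    `get_residual_activations(prompt, layer)` is a hypothetical helper assumed
    to return the residual-stream activation vector (d_model,) for a prompt.
    """
    harmful = np.stack([get_residual_activations(p, layer) for p in harmful_prompts])
    harmless = np.stack([get_residual_activations(p, layer) for p in harmless_prompts])
    direction = harmful.mean(axis=0) - harmless.mean(axis=0)
    return direction / np.linalg.norm(direction)

def ablate_direction(activation, direction):
    """Remove the component of an activation along the (unit-norm) refusal direction."""
    return activation - np.dot(activation, direction) * direction
```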
Related papers
- Activation Space Interventions Can Be Transferred Between Large Language Models [0.0]
We show that safety interventions can be transferred between models through learned mappings of their shared activation spaces. We demonstrate this approach on two well-established AI safety tasks: backdoor removal and refusal of harmful prompts. We also propose a new task, "corrupted capabilities", where models are fine-tuned to embed knowledge tied to a backdoor.
arXiv Detail & Related papers (2025-03-06T13:38:44Z) - From Counterfactuals to Trees: Competitive Analysis of Model Extraction Attacks [4.293083690039339]
We formalize and characterize the risks and inherent complexity of model reconstruction.
We present the first formal analysis of model extraction attacks through the lens of competitive analysis.
We introduce novel reconstruction algorithms that achieve provably perfect fidelity while demonstrating strong anytime performance.
arXiv Detail & Related papers (2025-02-07T20:51:06Z) - Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models [65.30406788716104]
This work investigates the vulnerabilities of security-enhancing diffusion models.
We demonstrate that these models are highly susceptible to DIFF2, a simple yet effective backdoor attack.
Case studies show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models.
arXiv Detail & Related papers (2024-06-14T02:39:43Z) - The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning risks skewing fine-tuning features and compromising the model's robustness.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z) - Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models [19.132597762214722]
Red-teaming or Jailbreaking large language models (LLMs) has emerged as a crucial area of study.
This paper investigates the intricate consequences of such modifications through model editing.
Our findings show that model editing serves as a cost-effective tool for topical red-teaming.
arXiv Detail & Related papers (2024-01-19T11:48:09Z) - JAB: Joint Adversarial Prompting and Belief Augmentation [81.39548637776365]
We introduce a joint framework in which we probe and improve the robustness of a black-box target model via adversarial prompting and belief augmentation.
This framework utilizes an automated red teaming approach to probe the target model, along with a belief augmenter to generate instructions for the target model to improve its robustness to those adversarial probes.
arXiv Detail & Related papers (2023-11-16T00:35:54Z) - On the Embedding Collapse when Scaling up Recommendation Models [53.66285358088788]
We identify the embedding collapse phenomenon as the inhibition of scalability, wherein the embedding matrix tends to occupy a low-dimensional subspace.
We propose a simple yet effective multi-embedding design incorporating embedding-set-specific interaction modules to learn embedding sets with large diversity.
arXiv Detail & Related papers (2023-10-06T17:50:38Z) - Understanding Data Augmentation from a Robustness Perspective [10.063624819905508]
Data augmentation stands out as a pivotal technique to amplify model robustness.
This manuscript takes both a theoretical and empirical approach to understanding the phenomenon.
Our empirical evaluations dissect the intricate mechanisms of emblematic data augmentation strategies.
These insights provide a novel lens through which we can re-evaluate model safety and robustness in visual recognition tasks.
arXiv Detail & Related papers (2023-09-07T10:54:56Z) - Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases the model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z) - What do Compressed Large Language Models Forget? Robustness Challenges in Model Compression [68.82486784654817]
We study two popular model compression techniques: knowledge distillation and pruning.
We show that compressed models are significantly less robust than their PLM counterparts on adversarial test sets.
We develop a regularization strategy for model compression based on sample uncertainty.
arXiv Detail & Related papers (2021-10-16T00:20:04Z) - Enhancing Model Robustness and Fairness with Causality: A Regularization Approach [15.981724441808147]
Recent work has raised concerns on the risk of spurious correlations and unintended biases in machine learning models.
We propose a simple and intuitive regularization approach to integrate causal knowledge during model training.
We build a predictive model that relies more on causal features and less on non-causal features.
arXiv Detail & Related papers (2021-10-03T02:49:33Z) - On the model-based stochastic value gradient for continuous reinforcement learning [50.085645237597056]
We show that simple model-based agents can outperform state-of-the-art model-free agents in terms of both sample-efficiency and final reward.
Our findings suggest that model-based policy evaluation deserves closer attention.
arXiv Detail & Related papers (2020-08-28T17:58:29Z)