Finding Safety Neurons in Large Language Models
- URL: http://arxiv.org/abs/2406.14144v1
- Date: Thu, 20 Jun 2024 09:35:22 GMT
- Title: Finding Safety Neurons in Large Language Models
- Authors: Jianhui Chen, Xiaozhi Wang, Zijun Yao, Yushi Bai, Lei Hou, Juanzi Li
- Abstract summary: Large language models (LLMs) excel in various capabilities but also pose safety risks such as generating harmful content and misinformation.
In this paper, we explore the inner mechanisms of safety alignment from the perspective of mechanistic interpretability.
We propose generation-time activation contrasting to locate these neurons and dynamic activation patching to evaluate their causal effects.
- Score: 44.873565067389016
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) excel in various capabilities but also pose safety risks such as generating harmful content and misinformation, even after safety alignment. In this paper, we explore the inner mechanisms of safety alignment from the perspective of mechanistic interpretability, focusing on identifying and analyzing safety neurons within LLMs that are responsible for safety behaviors. We propose generation-time activation contrasting to locate these neurons and dynamic activation patching to evaluate their causal effects. Experiments on multiple recent LLMs show that: (1) Safety neurons are sparse and effective. We can restore $90$% safety performance with intervention only on about $5$% of all the neurons. (2) Safety neurons encode transferrable mechanisms. They exhibit consistent effectiveness on different red-teaming datasets. The finding of safety neurons also interprets "alignment tax". We observe that the identified key neurons for safety and helpfulness significantly overlap, but they require different activation patterns of the shared neurons. Furthermore, we demonstrate an application of safety neurons in detecting unsafe outputs before generation. Our findings may promote further research on understanding LLM alignment. The source codes will be publicly released to facilitate future research.
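The two techniques named in the abstract can be sketched in miniature: generation-time activation contrasting ranks neurons by how differently they activate, on average, in an aligned versus an unaligned model generating from the same prompts; dynamic activation patching then overwrites only the candidate neurons' activations with the aligned model's values to test their causal effect. The function names, data layout, and scoring rule below are illustrative stand-ins, not the paper's exact procedure:

```python
def contrast_scores(acts_aligned, acts_unaligned):
    """Rank neurons by the gap between their mean activations recorded
    during generation in an aligned vs. an unaligned model.

    acts_*: list of per-sample activation vectors (one float per neuron).
    Returns neuron indices sorted by descending |mean difference|.
    """
    n = len(acts_aligned[0])
    mean_a = [sum(s[i] for s in acts_aligned) / len(acts_aligned) for i in range(n)]
    mean_u = [sum(s[i] for s in acts_unaligned) / len(acts_unaligned) for i in range(n)]
    diffs = [abs(mean_a[i] - mean_u[i]) for i in range(n)]
    return sorted(range(n), key=lambda i: diffs[i], reverse=True)

def patch_activations(acts, safety_neurons, donor_acts):
    """Dynamic activation patching (sketch): overwrite only the candidate
    safety neurons' activations with values from the aligned model,
    leaving all other neurons untouched."""
    patched = list(acts)
    for i in safety_neurons:
        patched[i] = donor_acts[i]
    return patched

# Toy example: neuron 1 differs most between the two models.
ranked = contrast_scores([[0.1, 0.9, 0.2]], [[0.1, 0.1, 0.2]])
top = ranked[:1]  # keep only the top-scoring fraction (here: 1 of 3 neurons)
print(top)                                                # -> [1]
print(patch_activations([0.1, 0.1, 0.2], top, [0.1, 0.9, 0.2]))  # -> [0.1, 0.9, 0.2]
```

In practice the activations would be captured with forward hooks on the MLP layers of both models; the "restore 90% safety with ~5% of neurons" result corresponds to patching only the top-ranked slice, as the `ranked[:1]` line mimics.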
Related papers
- Language Models Are Capable of Metacognitive Monitoring and Control of Their Internal Activations [1.0485739694839669]
Large language models (LLMs) can sometimes report the strategies they actually use to solve tasks, but they can also fail to do so.
This suggests some degree of metacognition -- the capacity to monitor one's own cognitive processes for subsequent reporting and self-control.
We introduce a neuroscience-inspired neurofeedback paradigm designed to quantify the ability of LLMs to explicitly report and control their activation patterns.
arXiv Detail & Related papers (2025-05-19T22:32:25Z) - NeuRel-Attack: Neuron Relearning for Safety Disalignment in Large Language Models [14.630626774362606]
Safety alignment in large language models (LLMs) is achieved through fine-tuning mechanisms that regulate neuron activations to suppress harmful content.
We propose a novel approach to induce disalignment by identifying and modifying the neurons responsible for safety constraints.
arXiv Detail & Related papers (2025-04-29T05:49:35Z) - Deciphering Functions of Neurons in Vision-Language Models [37.29432842212334]
This study aims to delve into the internals of vision-language models (VLMs) to interpret the functions of individual neurons.
We observe the activations of neurons with respect to the input visual tokens and text tokens, and reveal several interesting findings.
We build a framework that automates the explanation of neurons with the assistance of GPT-4o.
For visual neurons, we propose an activation simulator to assess the reliability of their explanations.
arXiv Detail & Related papers (2025-02-10T10:00:06Z) - Internal Activation as the Polar Star for Steering Unsafe LLM Behavior [50.463399903987245]
We introduce SafeSwitch, a framework that dynamically regulates unsafe outputs by monitoring and utilizing the model's internal states.
Our empirical results show that SafeSwitch reduces harmful outputs by over 80% on safety benchmarks while maintaining strong utility.
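The monitoring idea behind SafeSwitch-style approaches can be sketched as a lightweight probe over the model's internal state: score one hidden state with a pre-trained linear probe and route the request before any text is generated. The probe weights, bias, and threshold below are made-up placeholders, not the paper's trained parameters:

```python
import math

def unsafe_probability(hidden_state, probe_weights, probe_bias):
    """Apply a (hypothetical, pre-trained) linear probe to one hidden
    state and squash the score into a probability with the sigmoid."""
    score = sum(h * w for h, w in zip(hidden_state, probe_weights)) + probe_bias
    return 1.0 / (1.0 + math.exp(-score))

def safe_switch(hidden_state, probe_weights, probe_bias, threshold=0.5):
    """Routing decision: refuse before generating if the probe flags
    the internal state as unsafe, otherwise generate normally."""
    p = unsafe_probability(hidden_state, probe_weights, probe_bias)
    return "refuse" if p >= threshold else "generate"

# Toy example with made-up probe weights.
w, b = [2.0, -1.0], 0.0
print(safe_switch([3.0, 0.5], w, b))   # strongly positive score -> "refuse"
print(safe_switch([-1.0, 2.0], w, b))  # negative score -> "generate"
```

Because the probe reads internal states rather than generated text, the check adds only a dot product per request, which is how such methods keep utility largely intact.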
arXiv Detail & Related papers (2025-02-03T04:23:33Z) - Neuron Empirical Gradient: Discovering and Quantifying Neurons Global Linear Controllability [14.693407823048478]
We show that the neuron empirical gradient (NEG) captures how changes in activations affect predictions.
We also show that NEG effectively captures language skills across diverse prompts through skill neuron probing.
Further analysis highlights the key properties of NEG-based skill representation: efficiency, robustness, flexibility, and interdependency.
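The quantity NEG measures -- how a prediction responds to a change in one neuron's activation -- can be approximated empirically with a finite difference, treating the model as a black box. This simplified stand-in (the paper's actual prompt-based estimation procedure is not reproduced here) illustrates the idea:

```python
def neuron_empirical_gradient(predict, activations, neuron, eps=1e-4):
    """Estimate d(prediction)/d(activation) for one neuron via a
    central finite difference, without access to model internals.

    predict: callable mapping an activation vector to a scalar prediction.
    """
    up = list(activations); up[neuron] += eps
    down = list(activations); down[neuron] -= eps
    return (predict(up) - predict(down)) / (2 * eps)

# Toy model: prediction = 3*a0 + 0.5*a1, so neuron 0's gradient is ~3.
toy = lambda a: 3.0 * a[0] + 0.5 * a[1]
print(round(neuron_empirical_gradient(toy, [1.0, 1.0], 0), 4))  # -> 3.0
```

A large, stable estimate across diverse prompts is what marks a neuron as linearly controlling the prediction.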
arXiv Detail & Related papers (2024-12-24T00:01:24Z) - Interpreting the Second-Order Effects of Neurons in CLIP [73.54377859089801]
We interpret the function of individual neurons in CLIP by automatically describing them using text.
We present the "second-order lens", analyzing the effect flowing from a neuron through the later attention heads, directly to the output.
Our results indicate that a scalable understanding of neurons can be used for model deception and for introducing new model capabilities.
arXiv Detail & Related papers (2024-06-06T17:59:52Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of
Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting.
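The orthogonal-projection idea can be shown in miniature: project each new update off the principal subspace of past neural activities, so learning a new task cannot overwrite directions that earlier tasks rely on. This is a generic gradient-projection sketch under that assumption, not the paper's Hebbian circuit:

```python
def project_out(gradient, basis):
    """Remove from `gradient` its components along the (orthonormal)
    basis vectors spanning the protected subspace of past activities."""
    g = list(gradient)
    for u in basis:
        coef = sum(gi * ui for gi, ui in zip(g, u))
        g = [gi - coef * ui for gi, ui in zip(g, u)]
    return g

# Protect the x-axis: the update along it is removed, the y-component survives.
print(project_out([3.0, 4.0], [[1.0, 0.0]]))  # -> [0.0, 4.0]
```

The paper's contribution is extracting that protected subspace online with Hebbian and anti-Hebbian learning on lateral connections, rather than by explicit matrix decomposition.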
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Neuron-Level Knowledge Attribution in Large Language Models [19.472889262384818]
We propose a static method for pinpointing significant neurons.
Compared to seven other methods, our approach demonstrates superior performance across three metrics.
We also apply our methods to analyze six types of knowledge across both attention and feed-forward network layers.
arXiv Detail & Related papers (2023-12-19T13:23:18Z) - Causality Analysis for Evaluating the Security of Large Language Models [9.102606258312246]
Large Language Models (LLMs) are increasingly adopted in many safety-critical applications.
Recent studies have shown that LLMs are still subject to attacks such as adversarial perturbation and Trojan attacks.
We propose a framework for conducting light-weight causality-analysis of LLMs at the token, layer, and neuron level.
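Neuron-level causality analysis of this kind typically rests on interventions: ablate one unit and measure how much a scalar output shifts. The sketch below uses a simple zero-ablation as an illustrative stand-in for the framework's actual analysis:

```python
def causal_effect(forward, activations, neuron):
    """Estimate a neuron's causal effect as the change in a scalar
    output when that neuron's activation is set to zero (ablation)."""
    baseline = forward(activations)
    ablated = list(activations)
    ablated[neuron] = 0.0
    return abs(forward(ablated) - baseline)

# Toy forward pass: the output depends heavily on neuron 0, weakly on neuron 1.
forward = lambda a: 5.0 * a[0] + 0.1 * a[1]
effects = [round(causal_effect(forward, [1.0, 1.0], i), 2) for i in range(2)]
print(effects)  # -> [5.0, 0.1]
```

Running the same intervention at the token and layer granularity gives the multi-level view the framework describes, and units whose ablation barely moves safety-relevant outputs are candidates for being uninvolved in the attack surface.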
arXiv Detail & Related papers (2023-12-13T03:35:43Z) - Visual Analytics of Neuron Vulnerability to Adversarial Attacks on
Convolutional Neural Networks [28.081328051535618]
Adversarial attacks on a convolutional neural network (CNN) could fool a high-performance CNN into making incorrect predictions.
Our work introduces a visual analytics approach to understanding adversarial attacks.
A visual analytics system is designed to incorporate visual reasoning for interpreting adversarial attacks.
arXiv Detail & Related papers (2023-03-06T01:01:56Z) - Adversarial Defense via Neural Oscillation inspired Gradient Masking [0.0]
Spiking neural networks (SNNs) attract great attention due to their low power consumption, low latency, and biological plausibility.
We propose a novel neural model that incorporates the bio-inspired oscillation mechanism to enhance the security of SNNs.
arXiv Detail & Related papers (2022-11-04T02:13:19Z) - Defense against Backdoor Attacks via Identifying and Purifying Bad
Neurons [36.57541102989073]
We propose a novel backdoor defense method to mark and purify infected neurons in neural networks.
A new metric, called benign salience, can identify infected neurons with higher accuracy than the metrics commonly used in backdoor defense.
A new Adaptive Regularization (AR) mechanism is proposed to assist in purifying the identified infected neurons.
arXiv Detail & Related papers (2022-08-13T01:10:20Z) - And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z) - Artificial Neural Variability for Deep Learning: On Overfitting, Noise
Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" (biological) neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
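The claim that a single neuron with a suitable activation can learn XOR is easy to verify in miniature. The paper's exact ADA function is not reproduced here; this sketch instead uses a generic non-monotonic activation (a Gaussian bump) as a stand-in, which already makes XOR solvable by one unit:

```python
import math

def xor_neuron(x1, x2, w1=1.0, w2=1.0, b=-1.0):
    """One unit with a non-monotonic activation f(z) = exp(-z^2).
    With w1 = w2 = 1 and b = -1, the pre-activation z is -1 for (0,0),
    +1 for (1,1), and 0 for (0,1)/(1,0); thresholding f(z) at 0.5 then
    reproduces XOR, which no monotonic activation can do alone."""
    z = w1 * x1 + w2 * x2 + b
    return 1 if math.exp(-z * z) > 0.5 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor_neuron(x1, x2))
```

The point generalizes: non-monotonicity lets a single unit carve out a bounded region of input space instead of a half-space, which is the geometric trick behind apical-dendrite-style activations.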
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.