Safety Alignment as Continual Learning: Mitigating the Alignment Tax via Orthogonal Gradient Projection
- URL: http://arxiv.org/abs/2602.07892v1
- Date: Sun, 08 Feb 2026 09:53:46 GMT
- Title: Safety Alignment as Continual Learning: Mitigating the Alignment Tax via Orthogonal Gradient Projection
- Authors: Guanglong Sun, Siyuan Zhang, Liyuan Wang, Jun Zhu, Hang Su, Yi Zhong
- Abstract summary: Large Language Models (LLMs) often incur an alignment tax: safety post-training can reduce general utility. We argue that this tax primarily arises from continual-learning-style forgetting in sequential alignment. We propose Orthogonal Gradient Projection for Safety Alignment (OGPSA) to balance plasticity and stability.
- Score: 52.551864761088574
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) often incur an alignment tax: safety post-training can reduce general utility (e.g., reasoning and coding). We argue that this tax primarily arises from continual-learning-style forgetting in sequential alignment, where distribution shift and conflicting objectives cause safety updates to overwrite pre-trained competencies. Accordingly, we cast safety alignment as a continual learning (CL) problem that must balance plasticity (acquiring safety constraints) and stability (preserving general abilities). We propose Orthogonal Gradient Projection for Safety Alignment (OGPSA), a lightweight method that mitigates interference by constraining each safety update to be orthogonal (in a first-order sense) to a learned subspace capturing general capabilities. Specifically, OGPSA estimates a low-rank capability subspace from gradients on a small reference set and projects the safety gradient onto its orthogonal complement before updating. This produces safety-directed updates that minimally perturb prior knowledge while retaining capacity for alignment. OGPSA is plug-and-play and integrates into standard post-training pipelines without large-scale replay, auxiliary objectives, or retraining. Across Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and sequential SFT$\rightarrow$DPO settings, OGPSA consistently improves the safety--utility Pareto frontier over standard baselines. For instance, on Qwen2.5-7B-Instruct under SFT$\rightarrow$DPO, OGPSA preserves strong safety while recovering general capability, improving SimpleQA from 0.53\% to 3.03\% and IFEval from 51.94\% to 63.96\%. Our source code is available at \href{https://github.com/SunGL001/OGPSA}{OGPSA}
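The core mechanism in the abstract, estimating a low-rank capability subspace from reference-set gradients and projecting the safety gradient onto its orthogonal complement, can be sketched as below. This is a minimal NumPy illustration of the projection idea, not the authors' implementation; the dimensions, rank `k`, and function names are illustrative assumptions.

```python
import numpy as np

def capability_subspace(ref_grads, k):
    """Estimate a rank-k capability subspace from reference-set gradients.

    ref_grads: (n_ref, d) array, one flattened gradient per reference example.
    Returns U: (d, k) orthonormal basis spanning the dominant gradient directions.
    """
    # Right singular vectors give the dominant directions in parameter space.
    _, _, vt = np.linalg.svd(ref_grads, full_matrices=False)
    return vt[:k].T  # (d, k)

def project_safety_gradient(g, U):
    """Remove the component of the safety gradient g that lies in span(U)."""
    return g - U @ (U.T @ g)

# Toy example: d = 6 parameters, 4 reference gradients, rank-2 subspace.
rng = np.random.default_rng(0)
ref_grads = rng.normal(size=(4, 6))
U = capability_subspace(ref_grads, k=2)

g = rng.normal(size=6)          # raw safety gradient
g_perp = project_safety_gradient(g, U)

# The projected update is (numerically) orthogonal to the capability basis,
# so a first-order step along g_perp leaves those directions untouched.
print(np.allclose(U.T @ g_perp, 0.0))  # → True
```

In a real pipeline the same projection would be applied per update step (or per parameter block) before the optimizer consumes the gradient, which is what makes the approach plug-and-play for SFT and DPO.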
Related papers
- Understanding and Preserving Safety in Fine-Tuned LLMs [20.821783178639063]
Fine-tuning can substantially degrade safety alignment, even when the fine-tuning data is harmless. We propose safety-preserving fine-tuning (SPF), a lightweight approach that explicitly removes gradient components conflicting with the low-rank safety subspace. SPF consistently maintains downstream task performance and recovers nearly all pre-trained safety alignment, even under adversarial fine-tuning scenarios.
arXiv Detail & Related papers (2026-01-15T07:33:13Z) - Q-realign: Piggybacking Realignment on Quantization for Safe and Efficient LLM Deployment [55.14890249389052]
Existing defenses either embed safety recovery into fine-tuning or rely on fine-tuning-derived priors for post-hoc correction. We propose Q-realign, a post-hoc defense method based on post-training quantization. Our work provides a practical, turnkey solution for safety-aware deployment.
arXiv Detail & Related papers (2026-01-13T00:07:24Z) - Mitigating the Safety Alignment Tax with Null-Space Constrained Policy Optimization [15.729169158082598]
Safety alignment under Reinforcement Learning (RL) often suffers from forgetting learned general abilities. We introduce Null-Space constrained Policy Optimization (NSPO), a novel RL framework for LLM safety alignment. NSPO preserves the model's original core capabilities, while still guaranteeing a descent direction for effective safety alignment.
arXiv Detail & Related papers (2025-12-12T09:01:52Z) - Geometric-Disentanglement Unlearning [106.99160454669902]
Gradient ascent on forget samples often harms retained knowledge. We propose Geometric-Disentanglement Unlearning (GU), which decomposes any candidate forget-gradient update into components tangential and normal to the retain space and executes only the normal component. Our method is plug-and-play and can be attached to existing gradient-based unlearning procedures to mitigate side effects.
arXiv Detail & Related papers (2025-11-21T09:58:25Z) - A Guardrail for Safety Preservation: When Safety-Sensitive Subspace Meets Harmful-Resistant Null-Space [91.99501941169831]
GuardSpace is a guardrail framework for preserving safety alignment throughout fine-tuning. For Llama-2-7B-Chat fine-tuned on GSM8K, GuardSpace outperforms the state-of-the-art method AsFT.
arXiv Detail & Related papers (2025-10-16T04:57:53Z) - UpSafe$^\circ$C: Upcycling for Controllable Safety in Large Language Models [67.91151588917396]
Large Language Models (LLMs) have achieved remarkable progress across a wide range of tasks, but remain vulnerable to safety risks such as harmful content generation and jailbreak attacks. We propose UpSafe$^\circ$C, a unified framework for enhancing LLM safety through safety-aware upcycling. Our results highlight a new direction for LLM safety: moving from static alignment toward dynamic, modular, and inference-aware control.
arXiv Detail & Related papers (2025-10-02T16:43:33Z) - AlignGuard-LoRA: Alignment-Preserving Fine-Tuning via Fisher-Guided Decomposition and Riemannian-Geodesic Collision Regularization [6.5225344327304535]
Low-rank adaptation (LoRA) has become a standard tool for efficiently fine-tuning large language models. However, LoRA updates can induce alignment drift, weakening safety and behavioral constraints. We propose AlignGuard-LoRA, a principled framework for preserving alignment during fine-tuning.
arXiv Detail & Related papers (2025-08-04T05:45:24Z) - AsFT: Anchoring Safety During LLM Fine-Tuning Within Narrow Safety Basin [38.577959886489076]
Large language models (LLMs) are vulnerable to safety risks during fine-tuning. We propose a methodology for safety fine-tuning called AsFT (Anchoring Safety in Fine-Tuning).
arXiv Detail & Related papers (2025-06-10T05:59:48Z) - Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! [88.90694413503614]
We find that the safety alignment of LLMs can be compromised by fine-tuning.
We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 such examples.
We advocate for further research efforts toward reinforcing safety protocols for the custom fine-tuning of aligned LLMs.
arXiv Detail & Related papers (2023-10-05T17:12:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.