EvoEdit: Evolving Null-space Alignment for Robust and Efficient Knowledge Editing
- URL: http://arxiv.org/abs/2510.13851v1
- Date: Sat, 11 Oct 2025 21:36:14 GMT
- Title: EvoEdit: Evolving Null-space Alignment for Robust and Efficient Knowledge Editing
- Authors: Sicheng Lyu, Yu Gu, Xinyu Wang, Jerry Huang, Sitao Luan, Yufei Cui, Xiao-Wen Chang, Peng Lu,
- Abstract summary: Large language models (LLMs) require continual updates to rectify outdated or erroneous knowledge. Existing approaches are mainly based on a locate-then-edit framework. We introduce EvoEdit, a novel editing strategy that mitigates catastrophic interference through sequential null-space alignment.
- Score: 19.834477925624658
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) require continual updates to rectify outdated or erroneous knowledge. Model editing has emerged as a compelling paradigm for introducing targeted modifications without the computational burden of full retraining. Existing approaches are mainly based on a locate-then-edit framework. However, in sequential editing contexts, where multiple updates are applied over time, they exhibit significant limitations and suffer from catastrophic interference, i.e., new edits compromise previously integrated updates and degrade preserved knowledge. To address these challenges, we introduce EvoEdit, a novel editing strategy that mitigates catastrophic interference through sequential null-space alignment, enabling stable and efficient model editing. By performing sequential null-space alignment for each incoming edit, EvoEdit preserves both original and previously modified knowledge representations and maintains output invariance on preserved knowledge even across long edit sequences, effectively mitigating interference. Evaluations on real-world sequential knowledge-editing benchmarks show that EvoEdit achieves performance better than or comparable to prior state-of-the-art locate-then-edit techniques, with up to a 3.53× speedup. Overall, these results underscore the necessity of developing more principled approaches for designing LLMs in dynamically evolving information settings, while providing a simple yet effective solution with strong theoretical guarantees.
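The abstract does not spell out EvoEdit's exact update rule, but null-space alignment in the locate-then-edit family generally amounts to projecting each raw weight update into the null space of the key representations whose outputs must stay fixed. The following is a minimal NumPy sketch of that generic projection step, not the paper's algorithm; the names `W`, `delta_W`, and `K_preserved` are illustrative assumptions.

```python
import numpy as np

def null_space_projector(K, tol=1e-10):
    """Projector onto the null space of the preserved-knowledge keys.

    K: (d_in, n) matrix whose columns are key vectors whose outputs should
       stay unchanged (original or previously edited knowledge).
    Returns P of shape (d_in, d_in) with P @ k ~ 0 for every column k of K.
    """
    U, s, _ = np.linalg.svd(K, full_matrices=False)
    rank = int((s > tol * s.max()).sum()) if s.size else 0
    U_r = U[:, :rank]                          # orthonormal basis of span(K)
    return np.eye(K.shape[0]) - U_r @ U_r.T

def apply_edit(W, delta_W, K_preserved):
    """Add a raw update only after projecting it into the null space,
    so the layer's outputs on preserved keys are left (numerically) intact."""
    P = null_space_projector(K_preserved)
    return W + delta_W @ P

# Toy usage: a 64x128 linear layer, 20 preserved keys, one raw edit.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))
K_preserved = rng.normal(size=(128, 20))
delta_W = rng.normal(size=(64, 128))

W_new = apply_edit(W, delta_W, K_preserved)
# Close to zero: outputs on the preserved keys are unchanged by the edit.
print(np.abs((W_new - W) @ K_preserved).max())
```

Because the projected update annihilates every preserved key, the edited layer's behaviour on that knowledge is unchanged up to floating-point error, which is the output-invariance property the abstract claims; repeating the projection with an updated key set for each incoming edit is what "sequential" alignment refers to here.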
Related papers
- EvoEdit: Lifelong Free-Text Knowledge Editing through Latent Perturbation Augmentation and Knowledge-driven Parameter Fusion [31.09201415423854]
We propose Lifelong Free-text Knowledge Editing (LF-Edit). It enables models to incorporate updates expressed in natural language and supports continual editing over time. Despite its promise, LF-Edit faces the dual challenge of integrating new knowledge while mitigating the forgetting of prior information.
arXiv Detail & Related papers (2025-12-04T07:55:36Z) - Energy-Regularized Sequential Model Editing on Hyperspheres [59.47007547581175]
Large language models (LLMs) require constant updates to remain aligned with evolving real-world knowledge. Sequential editing, however, often destabilizes representations and induces catastrophic forgetting. We propose SPHERE (Sparse Projection for Hyperspherical Energy-Regularized Editing), a hyperspherical energy (HE)-driven regularization strategy that stabilizes neuron weight distributions.
arXiv Detail & Related papers (2025-10-01T17:55:43Z) - Aligning Language Models with Real-time Knowledge Editing [11.503574001763246]
We introduce CRAFT, an ever-evolving real-world benchmark for knowledge editing. It features well-designed paired edits for composite reasoning, and evaluates models on alias portability and temporal and common-sense locality. Towards flexible real-time editing, we propose KEDAS, a novel paradigm of knowledge editing alignment featuring diverse edit augmentation and self-adaptive post-alignment inference.
arXiv Detail & Related papers (2025-08-02T10:25:36Z) - MEMOIR: Lifelong Model Editing with Minimal Overwrite and Informed Retention for LLMs [76.28901550926021]
Existing methods for lifelong model editing compromise generalization, interfere with past edits, or fail to scale to long editing sequences. We propose MEMOIR, a novel scalable framework that injects knowledge through a residual memory while preserving the core capabilities of the pre-trained model. MEMOIR achieves state-of-the-art performance across reliability, generalization, and locality metrics, scaling to thousands of sequential edits with minimal forgetting.
arXiv Detail & Related papers (2025-06-09T16:16:42Z) - LyapLock: Bounded Knowledge Preservation in Sequential Large Language Model Editing [27.918524905286475]
Current locate-then-edit approaches exhibit a progressive performance decline during sequential editing. LyapLock is proposed to decompose the long-term constrained programming into tractable stepwise subproblems for efficient solving. Experimental results show that our framework scales sequential editing capacity to over 10,000 edits while stabilizing general capabilities and boosting average editing efficacy by 11.89% over SOTA baselines.
arXiv Detail & Related papers (2025-05-21T16:16:33Z) - DeltaEdit: Enhancing Sequential Editing in Large Language Models by Controlling Superimposed Noise [1.2697731449512988]
Sequential knowledge editing techniques aim to continuously update the knowledge in large language models at a low cost. Existing sequential editing methods suffer from a significant decline in editing success rates after long-term editing. We propose DeltaEdit, a novel method that reduces interference between edits to mitigate deviation. Experimental results demonstrate that DeltaEdit significantly outperforms existing methods in edit success rates and the retention of generalization capabilities.
arXiv Detail & Related papers (2025-05-12T07:11:26Z) - AnyEdit: Edit Any Knowledge Encoded in Language Models [76.28789588247659]
We propose AnyEdit, a new autoregressive editing paradigm for large language models (LLMs). It decomposes long-form knowledge into sequential chunks and iteratively edits the key token in each chunk, ensuring consistent and accurate outputs. It outperforms strong baselines by 21.5% on benchmarks including UnKEBench, AKEW, and our new EditEverything dataset for long-form diverse-formatted knowledge.
arXiv Detail & Related papers (2025-02-08T16:18:37Z) - O-Edit: Orthogonal Subspace Editing for Language Model Sequential Editing [0.0]
Large language models (LLMs) acquire knowledge during pre-training, but over time, this knowledge may become incorrect or outdated, necessitating updates after training.
We propose Orthogonal Subspace Editing, O-Edit. This algorithm orthogonalizes the direction of each knowledge update, minimizing interference between successive updates and reducing the impact of new updates on unrelated knowledge (a rough sketch of this orthogonalization idea appears after the list below).
It can perform thousands of edits on mainstream LLMs, achieving an average performance improvement that is 4.2 times better than existing methods while effectively preserving the model's performance on downstream tasks, all with minimal additional parameter overhead.
arXiv Detail & Related papers (2024-10-15T10:16:45Z) - ELDER: Enhancing Lifelong Model Editing with Mixture-of-LoRA [55.697627106315004]
Large language models (LLMs) require model editing to efficiently update specific knowledge within them and avoid factual errors. Previous approaches manage sequential edits by freezing original parameters and discretely allocating new parameters for each knowledge update. We propose ELDER, a novel approach to create a continuous association between data and adapters.
arXiv Detail & Related papers (2024-08-19T02:27:00Z) - EVEDIT: Event-based Knowledge Editing with Deductive Editing Boundaries [69.72012539060731]
We introduce a theoretical framework for efficient knowledge editing (KE) in large language models (LLMs).
We propose a novel task of event-based knowledge editing that pairs facts with event descriptions.
We empirically demonstrate the superiority of event-based editing over the existing setting on resolving uncertainty in edited models.
arXiv Detail & Related papers (2024-02-17T16:34:50Z) - Memory-Based Model Editing at Scale [102.28475739907498]
Existing model editors struggle to accurately model an edit's intended scope.
We propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC)
SERAC stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed.
arXiv Detail & Related papers (2022-06-13T23:40:34Z)
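Several of the entries above attack the same sequential-interference problem geometrically; O-Edit, for instance, keeps successive updates out of each other's way by orthogonalizing their directions. As a hedged illustration only, and not the full O-Edit procedure, a classical Gram-Schmidt sweep over flattened update vectors captures the core step (the function and variable names here are hypothetical):

```python
import numpy as np

def orthogonalize_update(delta, previous_dirs, tol=1e-10):
    """Project a new (flattened) weight update away from the subspace
    spanned by earlier edit directions, reducing interference."""
    v = delta.astype(float)
    for u in previous_dirs:              # classical Gram-Schmidt sweep
        v -= (u @ v) * u
    norm = np.linalg.norm(v)
    if norm < tol:                       # update already lies in the old subspace
        return np.zeros_like(v), None
    return v, v / norm

# Toy usage: three sequential edits on a 1,000-dimensional parameter block.
rng = np.random.default_rng(1)
dirs = []
for step in range(3):
    raw_update = rng.normal(size=1000)
    update, unit = orthogonalize_update(raw_update, dirs)
    # overlap with every earlier direction is ~0 after projection
    print(step, [float(abs(update @ u)) for u in dirs])
    if unit is not None:
        dirs.append(unit)
    # ... here one would add `update` to the edited layer's weights ...
```

Real methods in this family operate per layer and fold the projection into the usual locate-then-edit least-squares solve, but the invariant is the same: each new edit direction keeps near-zero overlap with the directions already consumed by earlier edits.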
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.