Precise Localization of Memories: A Fine-grained Neuron-level Knowledge Editing Technique for LLMs
- URL: http://arxiv.org/abs/2503.01090v2
- Date: Mon, 17 Mar 2025 07:34:41 GMT
- Title: Precise Localization of Memories: A Fine-grained Neuron-level Knowledge Editing Technique for LLMs
- Authors: Haowen Pan, Xiaozhi Wang, Yixin Cao, Zenglin Shi, Xun Yang, Juanzi Li, Meng Wang
- Abstract summary: We propose a Fine-grained Neuron-level Knowledge Editing (FiNE) method that enhances editing locality without affecting success rates. By precisely identifying and modifying specific neurons within feed-forward networks, FiNE significantly improves knowledge localization and editing.
- Score: 47.06544781855325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge editing aims to update outdated information in Large Language Models (LLMs). A representative line of study is locate-then-edit methods, which typically employ causal tracing to identify the modules responsible for recalling factual knowledge about entities. However, we find these methods are often sensitive only to changes in the subject entity, leaving them less effective at adapting to changes in relations. This limitation results in poor editing locality, which can lead to the persistence of irrelevant or inaccurate facts, ultimately compromising the reliability of LLMs. We believe this issue arises from the insufficient precision of knowledge localization. To address this, we propose a Fine-grained Neuron-level Knowledge Editing (FiNE) method that enhances editing locality without affecting overall success rates. By precisely identifying and modifying specific neurons within feed-forward networks, FiNE significantly improves knowledge localization and editing. Quantitative experiments demonstrate that FiNE efficiently achieves better overall performance compared to existing techniques, providing new insights into the localization and modification of knowledge within LLMs.
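The abstract describes locating specific feed-forward-network neurons that recall a fact before modifying them. A minimal toy sketch of that localization idea is shown below; it is not the authors' implementation, and all names, shapes, and the scoring rule (scoring each neuron by how strongly its activation, projected through its output weights, pushes toward the target token's unembedding direction) are illustrative assumptions.

```python
# Illustrative sketch only (hypothetical names and shapes): score each
# FFN neuron by the magnitude of its contribution to a target token's
# logit, then keep the top-k neurons as editing candidates.

def neuron_scores(activations, w_out, target_dir):
    """Score neuron i as |a_i * (w_out[i] . target_dir)|: how much
    neuron i's output moves the residual stream toward the target
    token's unembedding direction."""
    scores = []
    for a_i, w_i in zip(activations, w_out):
        proj = sum(w * t for w, t in zip(w_i, target_dir))
        scores.append(abs(a_i * proj))
    return scores

def top_k_neurons(scores, k):
    """Indices of the k highest-scoring neurons."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# Toy example: 4 FFN neurons, hidden size 3.
acts = [0.9, 0.0, 0.4, 1.2]       # post-nonlinearity activations
w_out = [[0.2, 0.1, 0.0],         # each neuron's output-projection row
         [1.0, 1.0, 1.0],
         [0.0, 0.5, 0.5],
         [0.3, 0.0, 0.2]]
target = [1.0, 0.0, 0.0]          # unembedding direction of the target token

scores = neuron_scores(acts, w_out, target)
print(top_k_neurons(scores, 2))   # -> [3, 0]
```

In a real model the selected neurons' output weights would then be adjusted to encode the new fact while leaving all other neurons untouched, which is the locality property the paper emphasizes.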
Related papers
- CaKE: Circuit-aware Editing Enables Generalizable Knowledge Learners [88.35958039968081]
CaKE (Circuit-aware Knowledge Editing) is a novel method that enables more effective integration of updated knowledge in large language models.
Results show that CaKE enables more accurate and consistent use of updated knowledge across related reasoning tasks.
arXiv Detail & Related papers (2025-03-20T17:14:34Z)
- Knowledge Editing for Large Language Model with Knowledge Neuronal Ensemble [13.608354678065222]
We propose a novel knowledge editing method called Knowledge Neuronal Ensemble (KNE).
A knowledge neuronal ensemble represents a group of neurons encoding specific knowledge, thus mitigating the issue of frequent parameter modification.
Experimental results on three widely used knowledge editing datasets show that the KNE method significantly improves the accuracy of knowledge editing.
arXiv Detail & Related papers (2024-12-30T00:58:00Z)
- Editing the Mind of Giants: An In-Depth Exploration of Pitfalls of Knowledge Editing in Large Language Models [26.516571783335824]
Recent studies have identified side effects, such as knowledge distortion and the deterioration of general abilities, that emerge after editing.
This survey presents a comprehensive study of these side effects, providing a unified perspective on the challenges of knowledge editing in large language models.
arXiv Detail & Related papers (2024-06-03T15:28:21Z)
- Editing Factual Knowledge and Explanatory Ability of Medical Large Language Models [89.13883089162951]
Model editing aims to precisely alter the behaviors of large language models (LLMs) in relation to specific knowledge.
This approach has proven effective in addressing issues of hallucination and outdated information in LLMs.
However, the potential of using model editing to modify knowledge in the medical field remains largely unexplored.
arXiv Detail & Related papers (2024-02-28T06:40:57Z)
- Learning to Edit: Aligning LLMs with Knowledge Editing [101.96620267293731]
We propose a Learning to Edit (LTE) framework, focusing on teaching large language models to apply updated knowledge to input questions.
LTE features a two-phase process, beginning with an Alignment Phase that fine-tunes LLMs on a meticulously curated parallel dataset to make reliable, in-scope edits.
We demonstrate LTE's superiority in knowledge editing performance, robustness in both batch and sequential editing, minimal interference on general tasks, and rapid editing speeds.
arXiv Detail & Related papers (2024-02-19T07:45:17Z)
- A Comprehensive Study of Knowledge Editing for Large Language Models [82.65729336401027]
Large Language Models (LLMs) have shown extraordinary capabilities in understanding and generating text that closely mirrors human communication.
This paper defines the knowledge editing problem and provides a comprehensive review of cutting-edge approaches.
We introduce a new benchmark, KnowEdit, for a comprehensive empirical evaluation of representative knowledge editing approaches.
arXiv Detail & Related papers (2024-01-02T16:54:58Z)
- Evaluating Dependencies in Fact Editing for Language Models: Specificity and Implication Awareness [26.589633375359647]
We aim to ensure that the editing of learned facts respects internal logical constraints, known as the dependency of knowledge.
Existing work on editing LLMs has only partially addressed this dependency, namely that an edit to a fact should apply to its lexical variations without disrupting irrelevant ones.
We propose an evaluation protocol with an accompanying question-answering dataset, DepEdit, that provides a comprehensive assessment of the editing process.
arXiv Detail & Related papers (2023-12-04T12:45:30Z)
- Unveiling the Pitfalls of Knowledge Editing for Large Language Models [41.83423510576848]
It remains unclear whether knowledge editing might introduce side effects that pose potential risks.
This paper pioneers the investigation into the potential pitfalls associated with knowledge editing for Large Language Models.
Experimental results clearly demonstrate that knowledge editing can inadvertently introduce unintended consequences.
arXiv Detail & Related papers (2023-10-03T15:10:46Z)
- Eva-KELLM: A New Benchmark for Evaluating Knowledge Editing of LLMs [54.22416829200613]
Eva-KELLM is a new benchmark for evaluating knowledge editing of large language models.
Experimental results indicate that current methods for knowledge editing using raw documents do not yield satisfactory results.
arXiv Detail & Related papers (2023-08-19T09:17:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.