Knowledge Editing for Large Language Model with Knowledge Neuronal Ensemble
- URL: http://arxiv.org/abs/2412.20637v1
- Date: Mon, 30 Dec 2024 00:58:00 GMT
- Title: Knowledge Editing for Large Language Model with Knowledge Neuronal Ensemble
- Authors: Yongchang Li, Yujin Zhu, Tao Yan, Shijian Fan, Gang Wu, Liang Xu
- Abstract summary: We propose a novel knowledge editing method called Knowledge Neuronal Ensemble (KNE). A knowledge neuronal ensemble represents a group of neurons encoding specific knowledge, thus mitigating the issue of frequent parameter modification. Experimental results on three widely used knowledge editing datasets show that the KNE method significantly improves the accuracy of knowledge editing.
- Score: 13.608354678065222
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As real-world knowledge is constantly evolving, ensuring the timeliness and accuracy of a model's knowledge is crucial. This has made knowledge editing in large language models increasingly important. However, existing knowledge editing methods face several challenges, including parameter localization coupling, imprecise localization, and a lack of dynamic interaction across layers. In this paper, we propose a novel knowledge editing method called Knowledge Neuronal Ensemble (KNE). A knowledge neuronal ensemble represents a group of neurons encoding specific knowledge, thus mitigating the issue of frequent parameter modification caused by coupling in parameter localization. The KNE method enhances the precision and accuracy of parameter localization by computing gradient attribution scores for each parameter at each layer. During the editing process, only the gradients and losses associated with the knowledge neuronal ensemble are computed, with error backpropagation performed accordingly, ensuring dynamic interaction and collaborative updates among parameters. Experimental results on three widely used knowledge editing datasets show that the KNE method significantly improves the accuracy of knowledge editing and achieves, or even exceeds, the performance of the best baseline methods in portability and locality metrics.
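The editing procedure described in the abstract (gradient-attribution localization, then updates restricted to the selected ensemble) can be sketched on a toy linear layer. This is a minimal illustration under stated assumptions, not the paper's implementation: the score form |gradient × weight| and the top-k selection are assumptions, and the real method operates per layer over a transformer's parameters.

```python
import numpy as np

def attribution_scores(W, x, y):
    """Gradient-attribution score for every parameter of a toy linear
    layer with loss 0.5 * ||W @ x - y||^2. The score form
    |gradient * weight| is an assumed convention for illustration."""
    err = W @ x - y
    grad = np.outer(err, x)          # dLoss/dW
    return np.abs(grad * W)

def edit_with_ensemble(W, x, y, top_frac=0.25, lr=0.1, steps=200):
    """Select the top-scoring parameters as the 'knowledge neuronal
    ensemble', then run gradient descent in which only those parameters
    receive updates; all others are frozen by the mask."""
    scores = attribution_scores(W, x, y)
    k = max(1, int(top_frac * W.size))
    threshold = np.sort(scores.ravel())[-k]
    mask = scores >= threshold       # ensemble membership mask
    W = W.copy()
    for _ in range(steps):
        err = W @ x - y
        grad = np.outer(err, x)
        W -= lr * grad * mask        # backprop restricted to the ensemble
    return W, mask

rng = np.random.default_rng(0)
W0 = rng.normal(size=(4, 3))
x = rng.normal(size=3)
y = rng.normal(size=4)
W1, mask = edit_with_ensemble(W0, x, y)
print(np.linalg.norm(W0 @ x - y), np.linalg.norm(W1 @ x - y))
```

Because masked parameters receive exactly zero gradient, they are bit-identical before and after editing; this is the mechanism by which restricting updates to the ensemble preserves locality.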
Related papers
- CaKE: Circuit-aware Editing Enables Generalizable Knowledge Learners [88.35958039968081]
CaKE (Circuit-aware Knowledge Editing) is a novel method that enables more effective integration of updated knowledge in large language models.
Results show that CaKE enables more accurate and consistent use of updated knowledge across related reasoning tasks.
arXiv Detail & Related papers (2025-03-20T17:14:34Z)
- Precise Localization of Memories: A Fine-grained Neuron-level Knowledge Editing Technique for LLMs [47.06544781855325]
We propose a Fine-grained Neuron-level Knowledge Editing (FiNE) method that enhances editing locality without affecting success rates.
By precisely identifying and modifying specific neurons within feed-forward networks, FiNE significantly improves knowledge localization and editing.
arXiv Detail & Related papers (2025-03-03T01:30:28Z)
- Capability Localization: Capabilities Can be Localized rather than Individual Knowledge [22.63726568778859]
Large scale language models have achieved superior performance in tasks related to natural language processing.
Previous studies assumed that individual knowledge is stored in local parameters, taking the form of dispersed parameters, parameter layers, or parameter chains.
This paper proposes a Commonality Neuron localization (CNL) method, which successfully locates commonality neurons and achieves a neuron overlap rate of 96.42% on the GSM8K dataset.
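As a small illustration of the metric quoted above, a neuron overlap rate between two localized neuron sets can be computed as follows. The exact definition used by CNL is not given here, so intersection over the smaller set is an assumption.

```python
def neuron_overlap_rate(neurons_a, neurons_b):
    """Overlap between two sets of localized neuron indices.
    Intersection over the smaller set is an assumed convention;
    the CNL paper's exact definition may differ."""
    if not neurons_a or not neurons_b:
        return 0.0
    shared = len(set(neurons_a) & set(neurons_b))
    return shared / min(len(neurons_a), len(neurons_b))

print(neuron_overlap_rate({1, 2, 3, 4}, {2, 3, 4, 5}))  # 0.75
```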
arXiv Detail & Related papers (2025-02-28T12:22:13Z)
- GeoEdit: Geometric Knowledge Editing for Large Language Models [52.37408324849593]
Regular updates are essential for maintaining up-to-date knowledge in large language models (LLMs).
We propose a novel framework called Geometric Knowledge Editing (GeoEdit)
GeoEdit distinguishes between neurons associated with new knowledge updates and those related to general knowledge perturbations.
For the remaining neurons, we integrate both old and new knowledge for aligned directions and apply a "forget-then-learn" editing strategy for opposite directions.
arXiv Detail & Related papers (2025-02-27T10:27:48Z)
- Multilingual Knowledge Editing with Language-Agnostic Factual Neurons [98.73585104789217]
The same factual knowledge in different languages generally activates a shared set of neurons, which we call language-agnostic factual neurons (LAFNs).
These neurons represent the same factual knowledge shared across languages and imply the semantic connections among multilingual knowledge.
We propose a new MKE method by Locating and Updating Language-Agnostic Factual Neurons (LU-LAFNs) to edit multilingual knowledge simultaneously.
arXiv Detail & Related papers (2024-06-24T08:06:56Z)
- Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models [65.10456412127405]
A significant portion of real-world knowledge is stored in an unstructured format.
Techniques like local layer key-value storage and term-driven optimization are not effective for handling unstructured knowledge.
We propose a novel Unstructured Knowledge Editing method, namely UnKE, which extends previous assumptions in the layer dimension and token dimension.
arXiv Detail & Related papers (2024-05-24T08:42:40Z)
- Stable Knowledge Editing in Large Language Models [68.98582618305679]
We introduce StableKE, a knowledge editing method based on knowledge augmentation rather than knowledge localization.
To overcome the expense of human labeling, StableKE integrates two automated knowledge augmentation strategies.
StableKE surpasses other knowledge editing methods, demonstrating the stability of both edited knowledge and multi-hop knowledge.
arXiv Detail & Related papers (2024-02-20T14:36:23Z)
- A Comprehensive Study of Knowledge Editing for Large Language Models [82.65729336401027]
Large Language Models (LLMs) have shown extraordinary capabilities in understanding and generating text that closely mirrors human communication.
This paper defines the knowledge editing problem and provides a comprehensive review of cutting-edge approaches.
We introduce a new benchmark, KnowEdit, for a comprehensive empirical evaluation of representative knowledge editing approaches.
arXiv Detail & Related papers (2024-01-02T16:54:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.