EVEDIT: Event-based Knowledge Editing with Deductive Editing Boundaries
- URL: http://arxiv.org/abs/2402.11324v1
- Date: Sat, 17 Feb 2024 16:34:50 GMT
- Title: EVEDIT: Event-based Knowledge Editing with Deductive Editing Boundaries
- Authors: Jiateng Liu, Pengfei Yu, Yuji Zhang, Sha Li, Zixuan Zhang, Heng Ji
- Abstract summary: We introduce a theoretical framework for efficient knowledge editing (KE) in large language models (LLMs).
We propose a novel task of event-based knowledge editing that pairs facts with event descriptions.
We empirically demonstrate the superiority of event-based editing over the existing setting on resolving uncertainty in edited models.
- Score: 69.72012539060731
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The dynamic nature of real-world information necessitates efficient knowledge editing (KE) in large language models (LLMs) for knowledge updating. However, current KE approaches, which typically operate on (subject, relation, object) triples, ignore contextual information and the relations among different pieces of knowledge. Such editing methods can therefore encounter an uncertain editing boundary, leaving much relevant knowledge ambiguous: queries that could be answered pre-edit can no longer be reliably answered afterward. In this work, we analyze this issue by introducing a theoretical framework for KE that highlights an overlooked set of knowledge that remains unchanged and aids knowledge deduction during editing, which we name the deduction anchor. We further address this issue by proposing a novel task of event-based knowledge editing that pairs facts with event descriptions. This task not only simulates real-world editing scenarios more closely but also provides a more logically sound setting, implicitly defining the deduction anchor and thereby addressing the issue of indeterminate editing boundaries. We empirically demonstrate the superiority of event-based editing over the existing setting in resolving uncertainty in edited models, and curate a new benchmark dataset, EvEdit, derived from the CounterFact dataset. Moreover, while we observe that the event-based setting poses a significant challenge to existing approaches, we propose a novel approach, Self-Edit, which shows stronger performance, achieving a 55.6% consistency improvement while maintaining the naturalness of generation.
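To make the contrast between the two task formats concrete, here is a minimal sketch of a triple-based edit request versus an event-based one. The class and field names, and the example strings, are illustrative assumptions for exposition only, not the actual EvEdit schema.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TripleEdit:
    """Conventional KE request: a single (subject, relation, object) update.
    The editing boundary is left implicit, so related facts may become uncertain."""
    subject: str
    relation: str
    new_object: str


@dataclass
class EventEdit:
    """Event-based KE request (hypothetical layout): a natural-language event
    description paired with the factual updates it entails. The event text
    implicitly fixes the deduction anchor, i.e. the unchanged knowledge needed
    to deduce the consequences of the edit."""
    event: str
    updated_facts: List[TripleEdit] = field(default_factory=list)


# Illustrative example with made-up entities (not drawn from the EvEdit benchmark)
edit = EventEdit(
    event="Alice Smith was appointed CEO of Acme Corp.",
    updated_facts=[TripleEdit("Acme Corp", "CEO", "Alice Smith")],
)
```

Under this reading, a triple edit states only the changed fact, whereas an event edit supplies the surrounding context from which related, unedited facts can still be deduced.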
Related papers
- K-Edit: Language Model Editing with Contextual Knowledge Awareness [71.73747181407323]
Knowledge-based model editing enables precise modifications to the weights of large language models.
We present K-Edit, an effective approach to generating contextually consistent knowledge edits.
arXiv Detail & Related papers (2025-02-15T01:35:13Z) - AnyEdit: Edit Any Knowledge Encoded in Language Models [69.30638272162267]
We propose AnyEdit, a new autoregressive editing paradigm for large language models (LLMs)
It decomposes long-form knowledge into sequential chunks and iteratively edits the key token in each chunk, ensuring consistent and accurate outputs.
It outperforms strong baselines by 21.5% on benchmarks including UnKEBench, AKEW, and our new EditEverything dataset for long-form diverse-formatted knowledge.
arXiv Detail & Related papers (2025-02-08T16:18:37Z) - Related Knowledge Perturbation Matters: Rethinking Multiple Pieces of Knowledge Editing in Same-Subject [49.559994791305535]
Current state-of-the-art editing methods struggle when tasked with editing multiple related knowledge pieces for the same subject.
We introduce the $\text{S}^2\text{RKE}$ (Same-Subject Related Knowledge Editing) benchmark.
Our experiments reveal that only mainstream locate-then-edit methods, such as ROME and MEMIT, exhibit "related knowledge perturbation."
arXiv Detail & Related papers (2025-02-08T04:47:17Z) - Uncovering Overfitting in Large Language Model Editing [35.55260822503773]
We identify and investigate the phenomenon of Editing Overfit, where edited models assign disproportionately high probabilities to the edit target.
We propose a new plug-and-play strategy called Learn to Inference (LTI), which introduces a Multi-stage Inference Constraint module to guide the edited models in recalling new knowledge.
arXiv Detail & Related papers (2024-10-10T11:09:00Z) - Relation Also Knows: Rethinking the Recall and Editing of Factual Associations in Auto-Regressive Transformer Language Models [15.698183471185066]
The storage and recall of factual associations in auto-regressive transformer language models (LMs) have drawn a great deal of attention.
Most editing works achieve knowledge editing under the guidance of existing interpretations of knowledge recall that mainly focus on subject knowledge.
In this work, we discover a novel relation-focused perspective for interpreting the knowledge recall of transformer LMs during inference and apply it to single knowledge editing to avoid over-generalization.
arXiv Detail & Related papers (2024-08-27T14:22:02Z) - Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models [65.10456412127405]
A significant portion of real-world knowledge is stored in an unstructured format.
Techniques like local layer key-value storage and term-driven optimization are not effective for handling unstructured knowledge.
We propose a novel Unstructured Knowledge Editing method, namely UnKE, which extends previous assumptions in the layer dimension and token dimension.
arXiv Detail & Related papers (2024-05-24T08:42:40Z) - Propagation and Pitfalls: Reasoning-based Assessment of Knowledge
Editing through Counterfactual Tasks [36.292901021210575]
We introduce a novel reasoning-based benchmark, ReCoE (Reasoning-based Counterfactual Editing dataset).
We conduct a thorough analysis of existing knowledge editing techniques, including input augmentation, finetuning, and locate-and-edit.
All model editing methods show notably low performance on this dataset, especially in certain reasoning schemes.
arXiv Detail & Related papers (2024-01-31T04:12:59Z) - A Comprehensive Study of Knowledge Editing for Large Language Models [82.65729336401027]
Large Language Models (LLMs) have shown extraordinary capabilities in understanding and generating text that closely mirrors human communication.
This paper defines the knowledge editing problem and provides a comprehensive review of cutting-edge approaches.
We introduce a new benchmark, KnowEdit, for a comprehensive empirical evaluation of representative knowledge editing approaches.
arXiv Detail & Related papers (2024-01-02T16:54:58Z)