EVEDIT: Event-based Knowledge Editing with Deductive Editing Boundaries
- URL: http://arxiv.org/abs/2402.11324v1
- Date: Sat, 17 Feb 2024 16:34:50 GMT
- Title: EVEDIT: Event-based Knowledge Editing with Deductive Editing Boundaries
- Authors: Jiateng Liu, Pengfei Yu, Yuji Zhang, Sha Li, Zixuan Zhang, Heng Ji
- Abstract summary: We introduce a theoretical framework for efficient knowledge editing (KE) in large language models (LLMs).
We propose a novel task of event-based knowledge editing that pairs facts with event descriptions.
We empirically demonstrate the superiority of event-based editing over the existing setting in resolving uncertainty in edited models.
- Score: 69.72012539060731
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The dynamic nature of real-world information necessitates efficient knowledge editing (KE) in large language models (LLMs) for knowledge updating. However, current KE approaches, which typically operate on (subject, relation, object) triples, ignore contextual information and the relations among different pieces of knowledge. Such editing methods can therefore encounter an uncertain editing boundary, leaving much relevant knowledge in ambiguity: queries that could be answered pre-edit cannot be reliably answered afterward. In this work, we analyze this issue by introducing a theoretical framework for KE that highlights an overlooked set of knowledge that remains unchanged and aids in knowledge deduction during editing, which we name the deduction anchor. We further address this issue by proposing a novel task of event-based knowledge editing that pairs facts with event descriptions. This task not only more closely simulates real-world editing scenarios but also provides a more logically sound setting, implicitly defining the deduction anchor and thereby addressing the issue of indeterminate editing boundaries. We empirically demonstrate the superiority of event-based editing over the existing setting in resolving uncertainty in edited models, and we curate a new benchmark dataset, EvEdit, derived from the CounterFact dataset. Moreover, while we observe that the event-based setting is significantly challenging for existing approaches, we propose a novel approach, Self-Edit, that achieves stronger performance, with a 55.6% consistency improvement while maintaining the naturalness of generation.
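To make the contrast concrete, below is a minimal Python sketch of the two edit representations the abstract contrasts: a bare (subject, relation, object) update versus a fact paired with an event description and an explicit deduction anchor. The class names, fields, and the example facts are illustrative assumptions, not the actual schema of the EvEdit benchmark or the Self-Edit method.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TripleEdit:
    """Conventional KE: a bare (subject, relation, object) update."""
    subject: str
    relation: str
    new_object: str


@dataclass
class EventEdit:
    """Event-based KE: facts paired with an event description that implicitly
    fixes the deduction anchor (knowledge assumed to remain unchanged)."""
    event_description: str                 # natural-language account of what happened
    updated_facts: List[TripleEdit]        # facts entailed by the event
    deduction_anchor: List[str] = field(default_factory=list)  # facts that stay valid


# A bare triple edit leaves the editing boundary ambiguous: it says nothing
# about which related facts (league, teammates, salary) should also change.
triple_edit = TripleEdit("Lionel Messi", "plays for", "Inter Miami")

# The event-based counterpart states *why* the fact changed, so downstream
# deductions can be resolved consistently against the anchor knowledge.
event_edit = EventEdit(
    event_description="Lionel Messi signed with Inter Miami in July 2023.",
    updated_facts=[
        triple_edit,
        TripleEdit("Lionel Messi", "plays in league", "MLS"),
    ],
    deduction_anchor=[
        "Lionel Messi is a professional footballer.",
        "Inter Miami plays in Major League Soccer (MLS).",
    ],
)

if __name__ == "__main__":
    print(event_edit.event_description)
    for fact in event_edit.updated_facts:
        print(f"  ({fact.subject}, {fact.relation}, {fact.new_object})")
    print("Anchor:", "; ".join(event_edit.deduction_anchor))
```

Under the triple-only view, a query such as "Which league does Messi play in now?" falls inside the uncertain editing boundary the abstract describes; with the event description and anchor facts, the answer can be deduced consistently from the edit itself.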
Related papers
- Uncovering Overfitting in Large Language Model Editing [35.55260822503773]
We identify and investigate the phenomenon of Editing Overfit, where edited models assign disproportionately high probabilities to the edit target.
We propose a new plug-and-play strategy called Learn to Inference (LTI), which introduces a Multi-stage Inference Constraint module to guide edited models in recalling new knowledge.
arXiv Detail & Related papers (2024-10-10T11:09:00Z) - Relation Also Knows: Rethinking the Recall and Editing of Factual Associations in Auto-Regressive Transformer Language Models [15.698183471185066]
The storage and recall of factual associations in auto-regressive transformer language models (LMs) have drawn a great deal of attention.
Most editing works perform knowledge editing under the guidance of existing interpretations of knowledge recall, which focus mainly on subject knowledge.
In this work, we present a novel relation-focused perspective for interpreting the knowledge recall of transformer LMs during inference and apply it to knowledge editing to avoid over-generalization.
arXiv Detail & Related papers (2024-08-27T14:22:02Z) - How Well Can Knowledge Edit Methods Edit Perplexing Knowledge? [18.022428746019582]
This study investigates the capability of knowledge editing methods to incorporate new knowledge with varying degrees of "perplexingness".
We find significant negative correlations between the "perplexingness" of the new knowledge and the edit efficacy across all 12 scenarios.
Further exploration into the influence of knowledge hierarchy on editing outcomes indicates that knowledge positioned at higher hierarchical levels is more challenging to modify in some scenarios.
arXiv Detail & Related papers (2024-06-25T03:41:02Z) - Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models [65.10456412127405]
A significant portion of real-world knowledge is stored in an unstructured format.
Techniques like local layer key-value storage and term-driven optimization are not effective for handling unstructured knowledge.
We propose a novel Unstructured Knowledge Editing method, namely UnKE, which extends previous assumptions along both the layer and token dimensions.
arXiv Detail & Related papers (2024-05-24T08:42:40Z) - Event-level Knowledge Editing [53.767465515537545]
Existing work edits large language models (LLMs) at the level of factual knowledge triplets.
We propose a new task setting: event-level knowledge editing, which directly edits new events into LLMs.
We construct a high-quality event-level editing benchmark ELKEN, consisting of 1,515 event edits, 6,449 questions about factual knowledge, and 10,150 questions about future tendencies.
arXiv Detail & Related papers (2024-02-20T15:36:41Z) - The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse [58.0132400208411]
Even a single edit can trigger model collapse, manifesting as significant performance degradation in various benchmark tasks.
Benchmarking large language models after each edit, however, is impractically time-consuming and resource-intensive.
We have utilized GPT-3.5 to develop a new dataset, HardEdit, based on hard cases.
arXiv Detail & Related papers (2024-02-15T01:50:38Z) - On the Robustness of Editing Large Language Models [57.477943944826904]
Large language models (LLMs) have played a pivotal role in building communicative AI, yet they encounter the challenge of efficient updates.
This work seeks to understand the strengths and limitations of editing methods, facilitating practical applications of communicative AI.
arXiv Detail & Related papers (2024-02-08T17:06:45Z) - Propagation and Pitfalls: Reasoning-based Assessment of Knowledge Editing through Counterfactual Tasks [36.292901021210575]
We introduce a novel reasoning-based benchmark, ReCoE (Reasoning-based Counterfactual Editing dataset).
We conduct a thorough analysis of existing knowledge editing techniques, including input augmentation, finetuning, and locate-and-edit.
All model editing methods show notably low performance on this dataset, especially in certain reasoning schemes.
arXiv Detail & Related papers (2024-01-31T04:12:59Z) - A Comprehensive Study of Knowledge Editing for Large Language Models [82.65729336401027]
Large Language Models (LLMs) have shown extraordinary capabilities in understanding and generating text that closely mirrors human communication.
This paper defines the knowledge editing problem and provides a comprehensive review of cutting-edge approaches.
We introduce a new benchmark, KnowEdit, for a comprehensive empirical evaluation of representative knowledge editing approaches.
arXiv Detail & Related papers (2024-01-02T16:54:58Z) - Assessing Knowledge Editing in Language Models via Relation Perspective [21.64869056276927]
This paper constructs a new benchmark named RaKE, which focuses on relation-based knowledge editing.
We establish a suite of innovative metrics for evaluation and conduct comprehensive experiments involving various knowledge editing baselines.
Our research results confirm that relation-related knowledge is stored not only in the FFN network but also in the attention layers.
arXiv Detail & Related papers (2023-11-15T15:44:42Z)