EVEDIT: Event-based Knowledge Editing with Deductive Editing Boundaries
- URL: http://arxiv.org/abs/2402.11324v1
- Date: Sat, 17 Feb 2024 16:34:50 GMT
- Title: EVEDIT: Event-based Knowledge Editing with Deductive Editing Boundaries
- Authors: Jiateng Liu, Pengfei Yu, Yuji Zhang, Sha Li, Zixuan Zhang, Heng Ji
- Abstract summary: We introduce a theoretical framework for efficient knowledge editing (KE) in large language models (LLMs)
We propose a novel task of event-based knowledge editing that pairs facts with event descriptions.
We empirically demonstrate the superiority of event-based editing over the existing setting in resolving uncertainty in edited models.
- Score: 69.72012539060731
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The dynamic nature of real-world information necessitates efficient knowledge
editing (KE) in large language models (LLMs) for knowledge updating. However,
current KE approaches, which typically operate on (subject, relation, object)
triples, ignore the contextual information and the relation among different
knowledge. Such editing methods can therefore encounter an uncertain editing
boundary, leaving much relevant knowledge ambiguous: queries that could
be answered pre-edit cannot be reliably answered afterward. In this work, we
analyze this issue by introducing a theoretical framework for KE that
highlights an overlooked set of knowledge that remains unchanged and aids in
knowledge deduction during editing, which we name the deduction anchor. We
further address this issue by proposing a novel task of event-based knowledge
editing that pairs facts with event descriptions. This task manifests not only
a closer simulation of real-world editing scenarios but also a more logically
sound setting, implicitly defining the deduction anchor to address the issue of
indeterminate editing boundaries. We empirically demonstrate the superiority of
event-based editing over the existing setting in resolving uncertainty in
edited models, and curate a new benchmark dataset EvEdit derived from the
CounterFact dataset. Moreover, while we observe that the event-based setting is
significantly challenging for existing approaches, we propose a novel approach
Self-Edit that showcases stronger performance, achieving 55.6% consistency
improvement while maintaining the naturalness of generation.
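To make the contrast between the two settings concrete, the following sketch represents a conventional (subject, relation, object) edit and an event-based edit as simple data structures. The class names (`FactTriple`, `EventEdit`) and the example fact are illustrative assumptions, not definitions from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FactTriple:
    """Conventional KE input: a bare (subject, relation, object) triple."""
    subject: str
    relation: str
    obj: str

@dataclass(frozen=True)
class EventEdit:
    """Event-based KE input: the same fact paired with a natural-language
    event description. The description implicitly fixes the deduction
    anchor, i.e. the surrounding knowledge that must stay unchanged."""
    event: str
    fact: FactTriple

# A counterfactual edit in both settings (example fact is hypothetical).
triple = FactTriple("Eiffel Tower", "located_in", "Rome")
edit = EventEdit(
    event="The Eiffel Tower was dismantled and rebuilt in Rome.",
    fact=triple,
)

print(edit.fact.obj)  # the edited object of the fact
```

The point of the pairing is that the event text carries context a bare triple lacks, which is what lets the deduction anchor be defined implicitly.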
Related papers
- How Well Can Knowledge Edit Methods Edit Perplexing Knowledge? [18.022428746019582]
This study investigates the capability of knowledge editing methods to incorporate new knowledge with varying degrees of "perplexingness".
We find significant negative correlations between the "perplexingness" of the new knowledge and the edit efficacy across all 12 scenarios.
Further exploration into the influence of knowledge hierarchy on editing outcomes indicates that knowledge positioned at higher hierarchical levels is more challenging to modify in some scenarios.
arXiv Detail & Related papers (2024-06-25T03:41:02Z)
- UnKE: Unstructured Knowledge Editing in Large Language Models [65.10456412127405]
We propose a novel unstructured knowledge editing method, namely UnKE.
By utilizing key-value pairs at the layer level, UnKE effectively represents and edits complex and comprehensive unstructured knowledge.
Results on a newly proposed unstructured knowledge editing dataset (UnKE) and traditional structured datasets demonstrate that UnKE achieves remarkable performance.
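"Key-value pairs at the layer level" echoes the common view of a transformer FFN down-projection as an associative memory. The sketch below shows a generic rank-one update that inserts one new key-value association into such a weight matrix; it is a minimal illustration of that general locate-and-edit idea, not UnKE's actual algorithm, and all names and dimensions are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff = 8, 32
W_down = rng.normal(size=(d_model, d_ff))  # FFN down-projection: keys -> values

key = rng.normal(size=d_ff)           # activation pattern selecting the fact
new_value = rng.normal(size=d_model)  # hidden state encoding the new fact

# Rank-one update so that W_down_edited @ key == new_value exactly,
# while directions orthogonal to `key` are left untouched.
delta = np.outer(new_value - W_down @ key, key) / (key @ key)
W_down_edited = W_down + delta

print(np.allclose(W_down_edited @ key, new_value))  # True
```

Real editing methods constrain such updates further (e.g. to preserve other stored associations), but the rank-one form is the core mechanism.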
arXiv Detail & Related papers (2024-05-24T08:42:40Z)
- Detecting Edited Knowledge in Language Models [5.260519479124422]
Knowledge editing methods (KEs) can update language models' obsolete or inaccurate knowledge learned from pre-training.
Knowing whether a generated output is based on edited knowledge or first-hand knowledge from pre-training can increase users' trust in generative models.
We propose a novel task: detecting edited knowledge in language models.
arXiv Detail & Related papers (2024-05-04T22:02:24Z)
- Updating Language Models with Unstructured Facts: Towards Practical Knowledge Editing [87.35944788684958]
We propose a new benchmark, Unstructured Knowledge Editing (UKE).
UKE evaluates editing performance directly using unstructured texts as knowledge updates, termed unstructured facts.
We conduct extensive experiments on newly built datasets and demonstrate that UKE poses a significant challenge to state-of-the-art knowledge editing methods.
arXiv Detail & Related papers (2024-02-29T07:08:34Z)
- Event-level Knowledge Editing [53.767465515537545]
Existing work edits large language models (LLMs) at the level of factual knowledge triplets.
We propose a new task setting: event-level knowledge editing, which directly edits new events into LLMs.
We construct a high-quality event-level editing benchmark ELKEN, consisting of 1,515 event edits, 6,449 questions about factual knowledge, and 10,150 questions about future tendencies.
arXiv Detail & Related papers (2024-02-20T15:36:41Z)
- The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse [58.0132400208411]
Even a single edit can trigger model collapse, manifesting as significant performance degradation in various benchmark tasks.
Benchmarking Large Language Models after each edit, however, is impractically time-consuming and resource-intensive.
We have utilized GPT-3.5 to develop a new dataset, HardEdit, based on hard cases.
arXiv Detail & Related papers (2024-02-15T01:50:38Z)
- Propagation and Pitfalls: Reasoning-based Assessment of Knowledge Editing through Counterfactual Tasks [36.292901021210575]
We introduce a novel reasoning-based benchmark -- ReCoE (Reasoning-based Counterfactual Editing dataset)
We conduct a thorough analysis of existing knowledge editing techniques, including input augmentation, finetuning, and locate-and-edit.
All model editing methods show notably low performance on this dataset, especially in certain reasoning schemes.
arXiv Detail & Related papers (2024-01-31T04:12:59Z)
- A Comprehensive Study of Knowledge Editing for Large Language Models [82.65729336401027]
Large Language Models (LLMs) have shown extraordinary capabilities in understanding and generating text that closely mirrors human communication.
This paper defines the knowledge editing problem and provides a comprehensive review of cutting-edge approaches.
We introduce a new benchmark, KnowEdit, for a comprehensive empirical evaluation of representative knowledge editing approaches.
arXiv Detail & Related papers (2024-01-02T16:54:58Z)
- Assessing Knowledge Editing in Language Models via Relation Perspective [21.64869056276927]
This paper constructs a new benchmark named RaKE, which focuses on relation-based knowledge editing.
We establish a suite of innovative metrics for evaluation and conduct comprehensive experiments involving various knowledge editing baselines.
Our research results confirm that knowledge related to relations is stored not only in the FFN layers but also in the attention layers.
arXiv Detail & Related papers (2023-11-15T15:44:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.