SetKE: Knowledge Editing for Knowledge Elements Overlap
- URL: http://arxiv.org/abs/2504.20972v1
- Date: Tue, 29 Apr 2025 17:40:29 GMT
- Title: SetKE: Knowledge Editing for Knowledge Elements Overlap
- Authors: Yifan Wei, Xiaoyan Yu, Ran Song, Hao Peng, Angsheng Li
- Abstract summary: Large Language Models (LLMs) excel in tasks such as retrieval and question answering but require updates to incorporate new knowledge and reduce inaccuracies and hallucinations. Knowledge Editing (KE) provides a promising alternative but often overlooks the Knowledge Element Overlap (KEO) phenomenon, where multiple triplets share common elements, leading to editing conflicts. We propose a new formulation, Knowledge Set Editing (KSE), and introduce SetKE, a method that edits sets of triplets simultaneously.
- Score: 25.72267270228574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) excel in tasks such as retrieval and question answering but require updates to incorporate new knowledge and reduce inaccuracies and hallucinations. Traditional updating methods, like fine-tuning and incremental learning, face challenges such as overfitting and high computational costs. Knowledge Editing (KE) provides a promising alternative but often overlooks the Knowledge Element Overlap (KEO) phenomenon, where multiple triplets share common elements, leading to editing conflicts. We identify the prevalence of KEO in existing KE datasets and show its significant impact on current KE methods, causing performance degradation in handling such triplets. To address this, we propose a new formulation, Knowledge Set Editing (KSE), and introduce SetKE, a method that edits sets of triplets simultaneously. Experimental results demonstrate that SetKE outperforms existing methods in KEO scenarios on mainstream LLMs. Additionally, we introduce EditSet, a dataset containing KEO triplets, providing a comprehensive benchmark.
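To make the KEO phenomenon concrete, below is a minimal, illustrative Python sketch (not the authors' code) that clusters (subject, relation, object) triplets into sets sharing at least one subject or object. Such a set is the granularity at which a Knowledge Set Editing-style formulation would edit jointly rather than triplet-by-triplet. The function names, the union-find grouping, and the choice to ignore shared relations are assumptions made for illustration, not the paper's definition.

```python
from collections import defaultdict

Triplet = tuple[str, str, str]  # (subject, relation, object)

def group_by_overlap(triplets: list[Triplet]) -> list[set[Triplet]]:
    """Cluster triplets into sets whose members share a subject or object."""
    # Union-find over triplets, linked through shared elements.
    parent = {t: t for t in triplets}

    def find(t: Triplet) -> Triplet:
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t

    def union(a: Triplet, b: Triplet) -> None:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Remember the first triplet seen for each element; later triplets with
    # the same element are merged into its group. Relations are ignored here,
    # an illustrative simplification rather than the paper's definition.
    seen: dict[str, Triplet] = {}
    for t in triplets:
        subject, _, obj = t
        for element in (subject, obj):
            if element in seen:
                union(t, seen[element])
            else:
                seen[element] = t

    groups: defaultdict[Triplet, set[Triplet]] = defaultdict(set)
    for t in triplets:
        groups[find(t)].add(t)
    return list(groups.values())

if __name__ == "__main__":
    facts = [
        ("Paris", "capital_of", "France"),
        ("France", "continent", "Europe"),  # overlaps with the fact above
        ("Tokyo", "capital_of", "Japan"),   # shares no element
    ]
    for group in group_by_overlap(facts):
        print(group)
```

Running the sketch prints two clusters: the first two facts fall into one set because they share the element "France" (editing either in isolation risks conflicting with the other), while the Tokyo fact forms its own singleton set.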
Related papers
- CaKE: Circuit-aware Editing Enables Generalizable Knowledge Learners [88.35958039968081]
CaKE (Circuit-aware Knowledge Editing) is a novel method that enables more effective integration of updated knowledge in large language models. Results show that CaKE enables more accurate and consistent use of updated knowledge across related reasoning tasks.
arXiv Detail & Related papers (2025-03-20T17:14:34Z)
- Related Knowledge Perturbation Matters: Rethinking Multiple Pieces of Knowledge Editing in Same-Subject [49.559994791305535]
Current state-of-the-art editing methods struggle when tasked with editing multiple related knowledge pieces for the same subject.
We introduce the $\text{S}^2\text{RKE}$ (Same-Subject Related Knowledge Editing) benchmark.
Our experiments reveal that only mainstream locate-then-edit methods, such as ROME and MEMIT, exhibit "related knowledge perturbation".
arXiv Detail & Related papers (2025-02-08T04:47:17Z)
- Knowledge Editing through Chain-of-Thought [12.270274049887298]
Large Language Models (LLMs) have demonstrated exceptional capabilities across a wide range of natural language processing (NLP) tasks.
Keeping these models up-to-date with evolving world knowledge remains a significant challenge due to the high costs of frequent retraining.
We propose EditCoT, a novel knowledge editing framework that flexibly and efficiently updates LLMs across various tasks without retraining.
arXiv Detail & Related papers (2024-12-23T17:17:50Z)
- CollabEdit: Towards Non-destructive Collaborative Knowledge Editing [23.013415033531974]
This manuscript dives into the first investigation of collaborative Knowledge Editing. We identify three unique challenges therein: knowledge overlap, knowledge conflict, and knowledge forgetting. We propose a non-destructive collaborative KE framework, COLLABEDIT, which employs a novel model merging mechanism to mimic the global KE behavior.
arXiv Detail & Related papers (2024-10-12T12:10:14Z)
- Cross-Lingual Multi-Hop Knowledge Editing [53.028586843468915]
We propose the Cross-Lingual Multi-Hop Knowledge Editing paradigm for measuring and analyzing the performance of various SoTA knowledge editing techniques in a cross-lingual setup.
Specifically, we create a parallel cross-lingual benchmark, CROLIN-MQUAKE, for measuring knowledge editing capabilities.
Following this, we propose a significantly improved system for cross-lingual multi-hop knowledge editing, CLEVER-CKE.
arXiv Detail & Related papers (2024-07-14T17:18:16Z)
- Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models [65.10456412127405]
We propose a novel Unstructured Knowledge Editing method, namely UnKE. In the layer dimension, we propose non-local block key-value storage to replace local layer key-value storage. In the token dimension, we replace "term-driven optimization" with "cause-driven optimization", which edits the last token directly while preserving context.
arXiv Detail & Related papers (2024-05-24T08:42:40Z)
- Event-level Knowledge Editing [53.767465515537545]
Existing work edits large language models (LLMs) at the level of factual knowledge triplets.
We propose a new task setting: event-level knowledge editing, which directly edits new events into LLMs.
We construct a high-quality event-level editing benchmark ELKEN, consisting of 1,515 event edits, 6,449 questions about factual knowledge, and 10,150 questions about future tendencies.
arXiv Detail & Related papers (2024-02-20T15:36:41Z)
- EVEDIT: Event-based Knowledge Editing with Deductive Editing Boundaries [69.72012539060731]
We introduce a theoretical framework for efficient knowledge editing (KE) in large language models (LLMs).
We propose a novel task of event-based knowledge editing that pairs facts with event descriptions.
We empirically demonstrate the superiority of event-based editing over the existing setting in resolving uncertainty in edited models.
arXiv Detail & Related papers (2024-02-17T16:34:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.