OneEdit: A Neural-Symbolic Collaboratively Knowledge Editing System
- URL: http://arxiv.org/abs/2409.07497v1
- Date: Mon, 9 Sep 2024 16:46:47 GMT
- Title: OneEdit: A Neural-Symbolic Collaboratively Knowledge Editing System
- Authors: Ningyu Zhang, Zekun Xi, Yujie Luo, Peng Wang, Bozhong Tian, Yunzhi Yao, Jintian Zhang, Shumin Deng, Mengshu Sun, Lei Liang, Zhiqiang Zhang, Xiaowei Zhu, Jun Zhou, Huajun Chen
- Abstract summary: Symbolic Knowledge Graphs (KGs) and neural Large Language Models (LLMs) can both represent knowledge.
OneEdit is a neural-symbolic prototype system for collaborative knowledge editing using natural language.
- Score: 48.261069440429125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge representation has been a central aim of AI since its inception. Symbolic Knowledge Graphs (KGs) and neural Large Language Models (LLMs) can both represent knowledge. KGs provide highly accurate and explicit knowledge representation but face scalability issues, while LLMs offer expansive knowledge coverage but incur significant training costs and struggle with precise and reliable knowledge manipulation. To this end, we introduce OneEdit, a neural-symbolic prototype system for collaborative knowledge editing using natural language, which facilitates easy-to-use knowledge management with a KG and an LLM. OneEdit consists of three modules: 1) the Interpreter handles user interaction in natural language; 2) the Controller manages editing requests from multiple users, leveraging the KG with rollbacks to resolve knowledge conflicts and prevent toxic knowledge attacks; 3) the Editor uses the knowledge from the Controller to edit both the KG and the LLM. Experiments on two new KG-paired datasets demonstrate that OneEdit achieves superior performance.
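To make the division of labor concrete, below is a minimal, hypothetical sketch of the three-module pipeline the abstract describes. All class and method names here (Interpreter.parse, Controller.resolve, Controller.rollback, Editor.apply, the Triple record) are invented for illustration and do not come from the paper; it only mirrors the stated responsibilities of each module.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    """A (subject, relation, object) fact: the unit of editing."""
    subject: str
    relation: str
    obj: str


class Interpreter:
    """Turns a natural-language request into a candidate edit."""

    def parse(self, utterance: str) -> Triple:
        # A real system would use an LLM to extract the triple; a toy
        # delimiter-based split keeps this sketch self-contained.
        subject, relation, obj = (p.strip() for p in utterance.split(";"))
        return Triple(subject, relation, obj)


class Controller:
    """Validates edits against the KG and keeps a rollback log for conflicts."""

    def __init__(self) -> None:
        self.kg: dict[tuple[str, str], str] = {}  # (subject, relation) -> object
        self.log: list[tuple[tuple[str, str], str | None]] = []

    def resolve(self, edit: Triple) -> Triple | None:
        key = (edit.subject, edit.relation)
        if self.kg.get(key) == edit.obj:
            return None  # duplicate edit: nothing to apply
        self.log.append((key, self.kg.get(key)))  # remember the prior value
        self.kg[key] = edit.obj  # a conflicting fact is overwritten
        return edit

    def rollback(self) -> None:
        """Undo the most recent edit, e.g. after a toxic edit is detected."""
        key, previous = self.log.pop()
        if previous is None:
            del self.kg[key]
        else:
            self.kg[key] = previous


class Editor:
    """Pushes an approved edit into the LLM (the Controller updated the KG)."""

    def apply(self, edit: Triple) -> None:
        # Placeholder: a real Editor would invoke a model-editing method
        # such as ROME or MEMIT against the target LLM.
        print(f"editing LLM: {edit.subject} / {edit.relation} -> {edit.obj}")


interpreter, controller, editor = Interpreter(), Controller(), Editor()
candidate = interpreter.parse("Lionel Messi; plays for; Inter Miami")
approved = controller.resolve(candidate)
if approved is not None:
    editor.apply(approved)
```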
Related papers
- Commonsense Knowledge Editing Based on Free-Text in LLMs [23.18079655111236]
We propose a Dynamics-aware Module to locate the parameter positions corresponding to commonsense knowledge, and a Knowledge Editing Module to update that knowledge.
The experimental results indicate that the DEM can achieve excellent editing performance.
arXiv Detail & Related papers (2024-10-31T11:50:24Z)
- Cross-Lingual Multi-Hop Knowledge Editing -- Benchmarks, Analysis and a Simple Contrastive Learning based Approach [53.028586843468915]
We propose the Cross-Lingual Multi-Hop Knowledge Editing paradigm for measuring and analyzing the performance of various SoTA knowledge editing techniques in a cross-lingual setup.
Specifically, we create a parallel cross-lingual benchmark, CROLIN-MQUAKE, for measuring knowledge editing capabilities.
Following this, we propose a significantly improved system for cross-lingual multi-hop knowledge editing, CLEVER-CKE.
arXiv Detail & Related papers (2024-07-14T17:18:16Z)
- Knowledge Graph-Enhanced Large Language Models via Path Selection [58.228392005755026]
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications.
However, LLMs are known to generate factually inaccurate outputs, a.k.a. the hallucination problem.
We propose KELP, a principled framework with three stages, to handle the above problems.
arXiv Detail & Related papers (2024-06-19T21:45:20Z)
- Knowledge Graph Enhanced Large Language Model Editing [37.6721061644483]
Large language models (LLMs) are pivotal in advancing natural language processing (NLP) tasks.
Existing editing methods struggle to track and incorporate changes in knowledge associated with edits.
We propose GLAME, a novel model editing method that leverages knowledge graphs to enhance LLM editing.
arXiv Detail & Related papers (2024-02-21T07:52:26Z)
- Stable Knowledge Editing in Large Language Models [68.98582618305679]
We introduce StableKE, a knowledge editing method based on knowledge augmentation rather than knowledge localization.
To overcome the expense of human labeling, StableKE integrates two automated knowledge augmentation strategies.
StableKE surpasses other knowledge editing methods, demonstrating the stability of both edited knowledge and multi-hop knowledge.
arXiv Detail & Related papers (2024-02-20T14:36:23Z)
- On the Robustness of Editing Large Language Models [57.477943944826904]
Large language models (LLMs) have played a pivotal role in building communicative AI, yet they encounter the challenge of efficient updates.
This work seeks to understand the strengths and limitations of editing methods, facilitating practical applications of communicative AI.
arXiv Detail & Related papers (2024-02-08T17:06:45Z)
- EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models [45.70959260613425]
We propose EasyEdit, an easy-to-use knowledge editing framework for Large Language Models.
It supports various cutting-edge knowledge editing approaches and can be readily applied to many well-known LLMs.
We report knowledge editing results on LLaMA-2 with EasyEdit, demonstrating that knowledge editing surpasses traditional fine-tuning (see the usage sketch after this list).
arXiv Detail & Related papers (2023-08-14T16:52:42Z)
- JAKET: Joint Pre-training of Knowledge Graph and Language Understanding [73.43768772121985]
We propose a novel joint pre-training framework, JAKET, to model both the knowledge graph and language.
The knowledge module and language module provide essential information to mutually assist each other.
Our design enables the pre-trained model to easily adapt to unseen knowledge graphs in new domains.
arXiv Detail & Related papers (2020-10-02T05:53:36Z)
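As referenced in the EasyEdit entry above, here is a hedged usage sketch of a single fact edit, assuming the interface shown in the EasyEdit README (https://github.com/zjunlp/EasyEdit). The class names follow that documentation, but the hyperparameter file path, the chosen model, and the edited fact are illustrative, and argument names may differ across EasyEdit versions.

```python
# Sketch of editing one fact with EasyEdit's ROME backend, following the
# pattern in the project's README; the hparams path below is illustrative.
from easyeditor import BaseEditor, ROMEHyperParams

# Load ROME hyperparameters for a specific base model (path is an assumption).
hparams = ROMEHyperParams.from_hparams('./hparams/ROME/llama-7b')
editor = BaseEditor.from_hparams(hparams)

# Each list position describes one edit request; the new target is an
# intentionally counterfactual value, as in typical editing benchmarks.
metrics, edited_model, _ = editor.edit(
    prompts=['Who designed the Eiffel Tower?'],
    ground_truth=['Gustave Eiffel'],
    target_new=['Stephen Sauvestre'],
    subject=['Eiffel Tower'],
)
print(metrics)  # per-edit reliability, generalization, and locality scores
```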