Retrieval-augmented Multilingual Knowledge Editing
- URL: http://arxiv.org/abs/2312.13040v1
- Date: Wed, 20 Dec 2023 14:08:58 GMT
- Title: Retrieval-augmented Multilingual Knowledge Editing
- Authors: Weixuan Wang, Barry Haddow, Alexandra Birch
- Abstract summary: Knowledge represented in Large Language Models (LLMs) is often incorrect and can become obsolete over time.
Knowledge editing (KE) has emerged as an effective and economical alternative for injecting new knowledge.
We propose the Retrieval-augmented Multilingual Knowledge Editor (ReMaKE) to update knowledge in LLMs.
- Score: 81.6690436581947
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge represented in Large Language Models (LLMs) is often
incorrect and can become obsolete over time. Updating knowledge via
fine-tuning is computationally expensive and unreliable, so knowledge
editing (KE) has emerged as an effective and economical alternative for
injecting new knowledge or fixing factual errors in LLMs. Although there has
been considerable interest in this area, current KE research exclusively
focuses on the monolingual setting, typically in English. However, what happens
if the new knowledge is supplied in one language, but we would like to query
the LLM in a different language? To address the problem of multilingual
knowledge editing, we propose Retrieval-augmented Multilingual Knowledge Editor
(ReMaKE) to update knowledge in LLMs. ReMaKE performs model-agnostic
knowledge editing in multilingual settings by concatenating new knowledge
retrieved from a multilingual knowledge base with the query prompt. Our
experimental results show that ReMaKE outperforms baseline knowledge editing
methods by a significant margin and is the first KE method to work in a
multilingual setting. We provide our multilingual knowledge editing dataset
(MzsRE) in 12 languages, which, along with code and additional project
information, is available at https://github.com/Vicky-Wil/ReMaKE.
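To make the mechanism concrete, here is a minimal sketch of the retrieve-then-prompt idea behind ReMaKE, not the authors' implementation: edited facts live in a multilingual knowledge base, and the most similar fact is retrieved and prepended to the query so the LLM answers from the new knowledge in-context. The `embed` function is a toy stand-in for a real multilingual sentence encoder.

```python
from dataclasses import dataclass
import math

@dataclass
class Fact:
    text: str      # the edited fact, in any language
    vector: list   # embedding of the fact

def embed(text: str) -> list:
    # Toy character-frequency embedding; a real system would use a
    # multilingual sentence encoder here.
    v = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            v[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def retrieve(kb: list, query: str) -> Fact:
    # Return the stored fact whose embedding is most similar to the query.
    q = embed(query)
    return max(kb, key=lambda f: sum(a * b for a, b in zip(f.vector, q)))

def build_prompt(kb: list, query: str) -> str:
    # Concatenate the retrieved fact with the query prompt, as the
    # abstract describes.
    fact = retrieve(kb, query)
    return f"New fact: {fact.text}\nQuestion: {query}\nAnswer:"

kb = [Fact(t, embed(t)) for t in [
    "The president of Country X is Alice.",      # edit supplied in English
    "Die Hauptstadt von Land Y ist Neustadt.",   # edit supplied in German
]]
print(build_prompt(kb, "Who is the president of Country X?"))
```

Because retrieval and querying are decoupled, the fact and the question can be in different languages; in the real system the multilingual encoder is what bridges them.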
Related papers
- Cross-Lingual Multi-Hop Knowledge Editing -- Benchmarks, Analysis and a Simple Contrastive Learning based Approach [53.028586843468915]
We propose the Cross-Lingual Multi-Hop Knowledge Editing paradigm for measuring and analyzing the performance of various SoTA knowledge editing techniques in a cross-lingual setup.
Specifically, we create a parallel cross-lingual benchmark, CROLIN-MQUAKE, for measuring knowledge editing capabilities.
Following this, we propose a significantly improved system for cross-lingual multi-hop knowledge editing, CLEVER-CKE.
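As an illustration of the contrastive ingredient, the sketch below shows a standard InfoNCE-style loss that pulls together embeddings of the same fact in two languages and pushes apart the other facts in the batch; CLEVER-CKE's actual objective may differ in detail, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(src: torch.Tensor, tgt: torch.Tensor,
                     temperature: float = 0.05) -> torch.Tensor:
    """src, tgt: (batch, dim) embeddings of parallel facts in two languages."""
    src = F.normalize(src, dim=-1)
    tgt = F.normalize(tgt, dim=-1)
    logits = src @ tgt.T / temperature   # pairwise cross-lingual similarities
    labels = torch.arange(src.size(0))   # the i-th src pairs with the i-th tgt
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings.
print(float(contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))))
```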
arXiv Detail & Related papers (2024-07-14T17:18:16Z)
- BMIKE-53: Investigating Cross-Lingual Knowledge Editing with In-Context Learning [43.059873703788206]
Knowledge editing (KE) has emerged as a viable solution for updating the knowledge of large language models.
We introduce the BMIKE-53 benchmark for evaluating cross-lingual KE on 53 diverse languages across three KE task types.
Our evaluation focuses on cross-lingual knowledge transfer in terms of reliability, generality, locality, and portability.
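The four dimensions are typically scored along these lines (a hedged sketch following common KE definitions; `model` and the field names are hypothetical, not the BMIKE-53 schema):

```python
def exact_match(pred: str, gold: str) -> float:
    return float(pred.strip().lower() == gold.strip().lower())

def evaluate_edit(model, case: dict) -> dict:
    """model: any callable mapping a prompt string to an answer string."""
    return {
        # Reliability: the edited fact itself is answered correctly.
        "reliability": exact_match(model(case["edit_prompt"]), case["target"]),
        # Generality: paraphrases (or other languages) of the edit still work.
        "generality": sum(exact_match(model(p), case["target"])
                          for p in case["paraphrases"]) / len(case["paraphrases"]),
        # Locality: unrelated facts are left unchanged by the edit.
        "locality": exact_match(model(case["unrelated_prompt"]),
                                case["unrelated_answer"]),
        # Portability: reasoning that depends on the edited fact succeeds.
        "portability": exact_match(model(case["hop_prompt"]), case["hop_answer"]),
    }
```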
arXiv Detail & Related papers (2024-06-25T17:48:56Z)
- Multilingual Knowledge Editing with Language-Agnostic Factual Neurons [98.73585104789217]
We investigate how large language models (LLMs) represent multilingual factual knowledge.
We find that the same factual knowledge in different languages generally activates a shared set of neurons, which we call language-agnostic factual neurons.
Inspired by this finding, we propose a new MKE method by locating and modifying Language-Agnostic Factual Neurons (LAFN) to simultaneously edit multilingual knowledge.
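A toy version of the neuron-location step might look like the following: score neurons per language by their activation on prompts expressing the same fact, then intersect the top-k sets across languages. The paper's actual attribution method is more involved; this only conveys the shared-neurons idea.

```python
import torch

def top_neurons(acts: torch.Tensor, k: int) -> set:
    """acts: (num_prompts, num_neurons) activations for one language."""
    scores = acts.abs().mean(dim=0)
    return set(scores.topk(k).indices.tolist())

def language_agnostic_neurons(per_lang_acts, k: int = 50) -> set:
    # Keep only neurons that rank in the top-k for every language.
    sets = [top_neurons(a, k) for a in per_lang_acts]
    return set.intersection(*sets)

# Toy usage: 3 languages, 10 prompts each, 1024 neurons.
acts = [torch.randn(10, 1024) for _ in range(3)]
print(len(language_agnostic_neurons(acts)))
```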
arXiv Detail & Related papers (2024-06-24T08:06:56Z)
- MEMLA: Enhancing Multilingual Knowledge Editing with Neuron-Masked Low-Rank Adaptation [18.087144677674786]
We focus on multilingual knowledge editing (MKE), which requires propagating updates across multiple languages.
We introduce the Multilingual Knowledge Editing Benchmark (MKEB), a novel dataset comprising 12 languages.
We also propose MEMLA, a method that enhances knowledge editing with neuron-masked low-rank adaptation.
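A minimal sketch of a neuron-masked low-rank update, assuming a standard LoRA parameterization gated by a binary neuron mask (simplified relative to the paper's method):

```python
import torch
import torch.nn as nn

class MaskedLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int, neuron_mask: torch.Tensor):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)           # pretrained weight stays frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.register_buffer("mask", neuron_mask.float())  # (out_features,)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = (x @ self.A.T) @ self.B.T        # low-rank LoRA update
        return self.base(x) + delta * self.mask  # edit only the masked neurons

# Toy usage: edit 2 of 8 output neurons.
mask = torch.zeros(8)
mask[[1, 5]] = 1.0
layer = MaskedLoRALinear(nn.Linear(16, 8), rank=4, neuron_mask=mask)
print(layer(torch.randn(3, 16)).shape)
```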
arXiv Detail & Related papers (2024-06-17T14:03:50Z)
- MLaKE: Multilingual Knowledge Editing Benchmark for Large Language Models [65.10456412127405]
MLaKE is a benchmark for evaluating the adaptability of knowledge editing methods across five languages.
MLaKE aggregates fact chains from Wikipedia across languages and generates questions in both free-form and multiple-choice formats.
We evaluate the multilingual knowledge editing generalization capabilities of existing methods on MLaKE.
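For illustration, an MLaKE-style item might pair a fact chain with both question formats; the field names below are hypothetical, not the benchmark's actual schema.

```python
example = {
    "fact_chain": [                      # multi-hop chain from Wikipedia
        "The author of Book Z is Alice.",
        "Alice was born in City W.",
    ],
    "free_form": {                       # free-form questions per language
        "en": "Where was the author of Book Z born?",
        "de": "Wo wurde die Autorin von Buch Z geboren?",
        "answer": "City W",
    },
    "multiple_choice": {
        "question": "Where was the author of Book Z born?",
        "options": ["City W", "City V", "City U", "City T"],
        "answer_index": 0,
    },
}
```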
arXiv Detail & Related papers (2024-04-07T15:23:28Z)
- Cross-Lingual Knowledge Editing in Large Language Models [73.12622532088564]
Knowledge editing has been shown to adapt large language models to new knowledge without retraining from scratch.
The effect of editing in one source language on a different target language is still unknown.
We first collect a large-scale cross-lingual synthetic dataset by translating ZsRE from English to Chinese.
arXiv Detail & Related papers (2023-09-16T11:07:52Z)
- Eva-KELLM: A New Benchmark for Evaluating Knowledge Editing of LLMs [54.22416829200613]
Eva-KELLM is a new benchmark for evaluating knowledge editing of large language models.
Experimental results indicate that current methods for knowledge editing from raw documents do not yield satisfactory results.
arXiv Detail & Related papers (2023-08-19T09:17:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.