WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models
- URL: http://arxiv.org/abs/2405.14768v1
- Date: Thu, 23 May 2024 16:35:52 GMT
- Title: WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models
- Authors: Peng Wang, Zexi Li, Ningyu Zhang, Ziwen Xu, Yunzhi Yao, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen
- Abstract summary: Large language models (LLMs) need knowledge updates to keep pace with ever-growing world facts and to correct hallucinated responses.
Where updated knowledge should reside in memory is a fundamental question for model editing.
We propose WISE to bridge the gap between long-term and working memory.
- Score: 78.22291694903659
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) need knowledge updates to keep pace with ever-growing world facts and to correct hallucinated responses, motivating methods for lifelong model editing. Where the updated knowledge resides in memory is a fundamental question for model editing. In this paper, we find that editing either long-term memory (direct model parameters) or working memory (non-parametric knowledge held in neural network activations/representations via retrieval) results in an impossible triangle: reliability, generalization, and locality cannot be realized together in the lifelong editing setting. For long-term memory, directly editing the parameters causes conflicts with irrelevant pretrained knowledge or previous edits (poor reliability and locality). For working memory, retrieval-based activations can hardly make the model understand the edits and generalize (poor generalization). Therefore, we propose WISE to bridge the gap between memories. In WISE, we design a dual parametric memory scheme, which consists of a main memory for the pretrained knowledge and a side memory for the edited knowledge. We only edit the knowledge in the side memory and train a router to decide which memory to go through when given a query. For continual editing, we devise a knowledge-sharding mechanism in which different sets of edits reside in distinct subspaces of parameters and are subsequently merged into a shared memory without conflicts. Extensive experiments show that WISE outperforms previous model editing methods and overcomes the impossible triangle under lifelong model editing in question answering, hallucination, and out-of-distribution settings across trending LLM architectures, e.g., GPT, LLaMA, and Mistral. Code will be released at https://github.com/zjunlp/EasyEdit.
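As a concrete illustration of the dual-memory scheme described in the abstract, the sketch below wraps a single feed-forward block with a frozen main memory, an editable side copy, and a learned router that decides which memory serves each query. The class name `DualMemoryFFN`, the mean-pooled routing score, and the fixed threshold are illustrative assumptions, not the paper's actual design (which also includes knowledge sharding and merging, omitted here); the official implementation is released through EasyEdit at the link above.

```python
# Minimal sketch of a dual parametric memory with a router, loosely inspired by
# the WISE abstract. All names, shapes, and the routing rule are assumptions
# made for illustration; see https://github.com/zjunlp/EasyEdit for the real code.
import copy

import torch
import torch.nn as nn


class DualMemoryFFN(nn.Module):
    """One pretrained feed-forward block, an editable side copy, and a router."""

    def __init__(self, ffn: nn.Module, d_model: int, threshold: float = 0.5):
        super().__init__()
        self.main_memory = ffn                  # pretrained knowledge, kept frozen
        self.side_memory = copy.deepcopy(ffn)   # copy that receives all knowledge edits
        self.router = nn.Linear(d_model, 1)     # scores whether a query hits an edited fact
        self.threshold = threshold
        for p in self.main_memory.parameters():
            p.requires_grad_(False)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model); route per example on the pooled activation
        score = torch.sigmoid(self.router(hidden.mean(dim=1)))        # (batch, 1)
        use_side = (score > self.threshold).float().view(-1, 1, 1)    # hard routing decision
        return use_side * self.side_memory(hidden) + (1.0 - use_side) * self.main_memory(hidden)


# Usage sketch: wrap one transformer FFN; only the side memory and router would be
# trained on edit examples, so the pretrained weights are never touched.
ffn = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))
layer = DualMemoryFFN(ffn, d_model=768)
out = layer(torch.randn(2, 16, 768))            # shape: (2, 16, 768)
```

In this sketch only the side memory and the router receive gradients, so edits never overwrite pretrained parameters, which is the property the abstract credits for preserving locality while still allowing the edited knowledge to generalize.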
Related papers
- Lifelong Knowledge Editing for LLMs with Retrieval-Augmented Continuous Prompt Learning [30.554641380670315]
We introduce RECIPE, a ContInuous Prompt lEarning method to boost editing efficacy and inference efficiency in lifelong learning.
RECIPE first converts knowledge statements into short and informative continuous prompts, prefixed to the LLM's input query embedding.
It further integrates the Knowledge Sentinel (KS) that acts as an intermediary to calculate a dynamic threshold.
Our retriever and prompt encoder are jointly trained to achieve the editing properties, i.e., reliability, generality, and locality.
arXiv Detail & Related papers (2024-05-06T08:52:11Z)
- MemLLM: Finetuning LLMs to Use An Explicit Read-Write Memory [49.96019697955383]
We introduce MemLLM, a novel method of enhancing knowledge capabilities by integrating a structured and explicit read-and-write memory module.
Our experiments indicate that MemLLM enhances performance and interpretability, in language modeling in general and knowledge-intensive tasks in particular.
We see MemLLM as an important step towards making LLMs more grounded and factual through memory augmentation.
arXiv Detail & Related papers (2024-04-17T18:13:16Z)
- Larimar: Large Language Models with Episodic Memory Control [62.70727449128647]
Larimar is a brain-inspired architecture for enhancing Large Language Models with a distributed episodic memory.
Experimental results on multiple fact editing benchmarks demonstrate that Larimar attains accuracy comparable to most competitive baselines.
We provide mechanisms for selective fact forgetting, information leakage prevention, and input context length generalization with Larimar.
arXiv Detail & Related papers (2024-03-18T16:01:42Z)
- Is it Possible to Edit Large Language Models Robustly? [60.36021686516329]
Large language models (LLMs) have played a pivotal role in building communicative AI to imitate human behaviors.
Recent studies have delved into the realm of model editing, which manipulates specific memories of language models and changes the related language generation.
This work seeks to understand the strengths and limitations of editing methods, thus facilitating robust, realistic applications of communicative AI.
arXiv Detail & Related papers (2024-02-08T17:06:45Z)
- History Matters: Temporal Knowledge Editing in Large Language Model [42.74144542674756]
We introduce the task of Temporal Knowledge Editing (TKE) and establish a benchmark AToKe to evaluate current model editing methods.
We find that while existing model editing methods are effective at making models remember new knowledge, the edited model catastrophically forgets historical knowledge.
To address this gap, we propose a simple and general framework termed Multi-Editing with Time Objective (METO) for enhancing existing editing models.
arXiv Detail & Related papers (2023-12-09T07:51:56Z)
- Massive Editing for Large Language Models via Meta Learning [27.972194696587813]
Large language models (LLMs) learn knowledge from their pre-training corpora, but the acquired knowledge may be fundamentally incorrect or become outdated over time.
We propose the MAssive Language Model Editing Network (MALMEN), which formulates the parameter shift aggregation as a least-squares problem.
Our method is evaluated by editing up to thousands of facts on LMs with different architectures, i.e., BERT-base, GPT-2, T5-XL (2.8B), and GPT-J (6B).
arXiv Detail & Related papers (2023-11-08T13:03:06Z)
- EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models [45.70959260613425]
We propose EasyEdit, an easy-to-use knowledge editing framework for Large Language Models.
It supports various cutting-edge knowledge editing approaches and can be readily applied to many well-known LLMs.
We report the knowledge editing results on LLaMA-2 with EasyEdit, demonstrating that knowledge editing surpasses traditional fine-tuning.
arXiv Detail & Related papers (2023-08-14T16:52:42Z)
- MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions [80.69639629733484]
We present a benchmark, MQuAKE, comprising multi-hop questions that assess whether edited models correctly answer questions whose answers should change as a consequence of the edited facts.
We propose a memory-based approach, MeLLo, which stores all edited facts externally while prompting the language model iteratively to generate answers consistent with the edited facts.
arXiv Detail & Related papers (2023-05-24T06:48:41Z)
- Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models [68.03946716358335]
We find that we can change how a fact is stored in a model by editing weights that are in a different location than where existing methods suggest that the fact is stored.
This is surprising because we would expect that localizing facts to specific model parameters would tell us where to manipulate knowledge in models.
Our results suggest, counterintuitively, that better mechanistic understanding of how pretrained language models work may not always translate to insights about how to best change their behavior.
arXiv Detail & Related papers (2023-01-10T21:26:08Z)