Training Plug-n-Play Knowledge Modules with Deep Context Distillation
- URL: http://arxiv.org/abs/2503.08727v1
- Date: Tue, 11 Mar 2025 01:07:57 GMT
- Title: Training Plug-n-Play Knowledge Modules with Deep Context Distillation
- Authors: Lucas Caccia, Alan Ansell, Edoardo Ponti, Ivan Vulić, Alessandro Sordoni,
- Abstract summary: In this paper, we propose a way of modularizing knowledge by training document-level Knowledge Modules (KMs). KMs are lightweight components implemented as parameter-efficient LoRA modules. Our method outperforms standard next-token prediction and pre-instruction training techniques across two datasets.
- Score: 52.94830874557649
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamically integrating new or rapidly evolving information after (Large) Language Model pre-training remains challenging, particularly in low-data scenarios or when dealing with private and specialized documents. In-context learning and retrieval-augmented generation (RAG) face limitations, including their high inference costs and their inability to capture global document information. In this paper, we propose a way of modularizing knowledge by training document-level Knowledge Modules (KMs). KMs are lightweight components implemented as parameter-efficient LoRA modules, which are trained to store information about new documents and can be easily plugged into models on demand. We show that next-token prediction performs poorly as the training objective for KMs. We instead propose Deep Context Distillation: we learn KMs parameters such as to simulate hidden states and logits of a teacher that takes the document in context. Our method outperforms standard next-token prediction and pre-instruction training techniques, across two datasets. Finally, we highlight synergies between KMs and retrieval-augmented generation.
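The abstract describes the core objective: the Knowledge Module (a LoRA adapter) is trained so that its hidden states and logits match those of a teacher that sees the document in context. Below is a minimal sketch of such a deep context distillation loss, assuming Hugging Face-style causal LMs; variable names and the loss weighting are illustrative, not the authors' implementation.

```python
# Minimal sketch of a Deep Context Distillation (DCD) loss for training a
# Knowledge Module (KM). Assumes Hugging Face-style causal LMs; names and the
# weighting term `alpha` are illustrative, not the authors' code.
import torch
import torch.nn.functional as F

def dcd_loss(teacher, student_with_km, doc_ids, query_ids, alpha=1.0):
    # Teacher: sees the document followed by the query; kept frozen.
    with torch.no_grad():
        t_out = teacher(
            input_ids=torch.cat([doc_ids, query_ids], dim=1),
            output_hidden_states=True,
        )
    # Student: sees only the query; the LoRA-based KM must supply the
    # document knowledge that the teacher reads from its context.
    s_out = student_with_km(input_ids=query_ids, output_hidden_states=True)

    q_len = query_ids.size(1)
    # Logit matching on the query positions (KL to the teacher distribution).
    kl = F.kl_div(
        F.log_softmax(s_out.logits, dim=-1),
        F.log_softmax(t_out.logits[:, -q_len:, :], dim=-1),
        log_target=True,
        reduction="batchmean",
    )
    # Hidden-state matching on the query positions, layer by layer.
    hidden = sum(
        F.mse_loss(s_h, t_h[:, -q_len:, :])
        for s_h, t_h in zip(s_out.hidden_states, t_out.hidden_states)
    )
    return kl + alpha * hidden
```

Only the LoRA parameters of the Knowledge Module would receive gradients here; the student shares its base weights with the teacher, which stay frozen.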
Related papers
- Memorization vs. Reasoning: Updating LLMs with New Knowledge [12.214561228023511]
We introduce Knowledge Update Playground (KUP), an automatic pipeline for simulating realistic knowledge updates.
We present a lightweight method called memory conditioned training (MCT), which conditions tokens in the update corpus on self-generated "memory" tokens during training.
Our results show that (1) the KUP benchmark is highly challenging, with the best continued pre-training (CPT) models achieving only 2% in the indirect probing (reasoning) setting, and (2) MCT training significantly outperforms prior CPT baselines.
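A heavily hedged sketch of what memory conditioned training could look like, based only on the one-line description above (conditioning update-corpus tokens on self-generated "memory" tokens); the prompting and masking details are assumptions.

```python
# Hedged sketch of memory conditioned training (MCT): each update document is
# conditioned on "memory" tokens that the model generates itself, and the LM
# loss is applied only to the document tokens. Prompt wording, truncation and
# memory length are assumptions, not the paper's recipe.
import torch

def mct_step(model, tokenizer, document, optimizer, max_memory_tokens=64):
    # 1) Self-generate memory tokens related to the document (no gradients).
    prompt = tokenizer("Recall what you know about: " + document[:200],
                       return_tensors="pt").input_ids
    with torch.no_grad():
        memory = model.generate(prompt, max_new_tokens=max_memory_tokens)
    memory = memory[:, prompt.size(1):]          # keep only the generated part

    # 2) Condition the update-corpus tokens on the memory tokens.
    doc_ids = tokenizer(document, return_tensors="pt").input_ids
    input_ids = torch.cat([memory, doc_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : memory.size(1)] = -100           # no loss on the memory prefix

    loss = model(input_ids=input_ids, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```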
arXiv Detail & Related papers (2025-04-16T23:03:40Z) - Training Dynamics of a 1.7B LLaMa Model: A Data-Efficient Approach [10.39475177812483]
We share insights gained from training DMaS-LLaMa-Lite on approximately 20 billion tokens of data.
We chronicle the full training trajectory, documenting how evolving validation loss levels and downstream benchmarks reflect transitions from incoherent text to fluent, contextually grounded output.
By detailing these experiences and offering training logs, checkpoints, and sample outputs, we aim to guide future researchers and practitioners in refining their pretraining strategies.
arXiv Detail & Related papers (2024-12-17T21:15:52Z) - MOS: Model Surgery for Pre-Trained Model-Based Class-Incremental Learning [62.78292142632335]
Class-Incremental Learning (CIL) requires models to continually acquire knowledge of new classes without forgetting old ones. Existing work seeks to utilize lightweight components to adjust the model. We propose MOdel Surgery (MOS) to rescue the model from forgetting previous knowledge.
arXiv Detail & Related papers (2024-12-12T16:57:20Z) - Beyond Content Relevance: Evaluating Instruction Following in Retrieval Models [25.301280441283147]
This study evaluates the instruction-following capabilities of various retrieval models beyond content relevance. We develop a novel retrieval evaluation benchmark spanning six document-level attributes. Our findings indicate that although fine-tuning models on instruction-aware retrieval datasets enhances performance, most models still fall short of instruction compliance.
arXiv Detail & Related papers (2024-10-31T11:47:21Z) - Soft Prompting for Unlearning in Large Language Models [11.504012974208466]
This work focuses on investigating machine unlearning for Large Language Models motivated by data protection regulations.
We propose a framework, Soft Prompting for Unlearning (SPUL).
We conduct a rigorous evaluation of the proposed method and our results indicate that SPUL can significantly improve the trade-off between utility and forgetting.
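The summary above describes learning soft prompts that trade off utility against forgetting. A minimal sketch of that idea follows, assuming a frozen Hugging Face-style causal LM; the exact unlearning objective and its weighting are assumptions, not necessarily SPUL's published formulation.

```python
# Illustrative sketch of soft prompting for unlearning: a small set of learnable
# prompt embeddings is prepended to a frozen LLM and trained to raise the loss
# on a "forget" set while keeping it low on a "retain" set. The loss combination
# is an assumption, not necessarily SPUL's exact objective.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, model, n_tokens=20):
        super().__init__()
        dim = model.get_input_embeddings().embedding_dim
        self.prompt = nn.Parameter(torch.randn(1, n_tokens, dim) * 0.02)

    def lm_loss(self, model, input_ids, labels):
        token_emb = model.get_input_embeddings()(input_ids)
        prompt = self.prompt.expand(token_emb.size(0), -1, -1)
        inputs_embeds = torch.cat([prompt, token_emb], dim=1)
        # Ignore the prompt positions in the loss.
        ignore = torch.full((labels.size(0), prompt.size(1)), -100,
                            dtype=labels.dtype, device=labels.device)
        return model(inputs_embeds=inputs_embeds,
                     labels=torch.cat([ignore, labels], dim=1)).loss

def spul_objective(soft_prompt, model, forget_batch, retain_batch, beta=1.0):
    # Push the loss up on the forget set, keep it down on the retain set;
    # only the soft prompt parameters are updated.
    loss_forget = soft_prompt.lm_loss(model, *forget_batch)
    loss_retain = soft_prompt.lm_loss(model, *retain_batch)
    return -loss_forget + beta * loss_retain
```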
arXiv Detail & Related papers (2024-06-17T19:11:40Z) - TRELM: Towards Robust and Efficient Pre-training for Knowledge-Enhanced Language Models [31.209774088374374]
This paper introduces TRELM, a Robust and Efficient Pre-training framework for Knowledge-Enhanced Language Models.
We employ a robust approach to inject knowledge triples and employ a knowledge-augmented memory bank to capture valuable information.
We show that TRELM reduces pre-training time by at least 50% and outperforms other KEPLMs in knowledge probing tasks and multiple knowledge-aware language understanding tasks.
arXiv Detail & Related papers (2024-03-17T13:04:35Z) - Instruction-tuned Language Models are Better Knowledge Learners [106.38526595116961]
We propose pre-instruction-tuning (PIT), a method that instruction-tunes on questions prior to training on documents.
Extensive experiments and ablation studies demonstrate that pre-instruction-tuning significantly enhances the ability of LLMs to absorb knowledge from new documents.
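For context, a minimal sketch of the training ordering that pre-instruction-tuning changes, with `train` standing in for any standard fine-tuning loop (an assumption, not the paper's code):

```python
# Pre-instruction-tuning (PIT) vs. the standard recipe, as summarized above:
# PIT instruction-tunes on QA pairs *before* continued training on the new
# documents, so the model first learns how knowledge will later be queried.
def standard_recipe(model, docs, qa_pairs, train):
    model = train(model, docs)        # absorb documents first
    return train(model, qa_pairs)     # then instruction-tune

def pre_instruction_tuning(model, docs, qa_pairs, train):
    model = train(model, qa_pairs)    # instruction-tune on questions first
    return train(model, docs)         # then train on the new documents
```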
arXiv Detail & Related papers (2024-02-20T09:20:32Z) - Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-Intensive Tasks [90.11273439036455]
Large Language Models (LLMs) have shown promising performance in knowledge-intensive reasoning tasks.
We propose Knowledge-Augmented Reasoning Distillation (KARD), a novel method that fine-tunes small LMs to generate rationales from LLMs with augmented knowledge retrieved from an external knowledge base.
We empirically show that KARD significantly improves the performance of small T5 and GPT models on the challenging knowledge-intensive reasoning datasets.
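A sketch of the KARD data-construction step as summarized above: an LLM teacher writes rationales grounded in retrieved passages, and the small LM is fine-tuned to generate them. The `retriever.search` and `teacher_llm.generate` interfaces are hypothetical placeholders.

```python
# Hedged sketch of Knowledge-Augmented Reasoning Distillation (KARD): build
# (input, rationale) pairs from an LLM teacher plus retrieved knowledge, then
# fine-tune a small LM on them. All interfaces here are placeholders.
def build_kard_example(question, retriever, teacher_llm, k=3):
    passages = retriever.search(question, k=k)      # external knowledge base
    context = "\n".join(passages)
    rationale = teacher_llm.generate(
        f"Context:\n{context}\n\nQuestion: {question}\nExplain step by step:"
    )
    # The small LM learns to map (retrieved passages + question) -> rationale.
    return {"input": f"{context}\n\n{question}", "target": rationale}

def kard_finetune(small_lm, questions, retriever, teacher_llm, train):
    data = [build_kard_example(q, retriever, teacher_llm) for q in questions]
    return train(small_lm, data)      # standard supervised fine-tuning
```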
arXiv Detail & Related papers (2023-05-28T13:00:00Z) - Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models [46.079902719883414]
We propose Knowledge Card, a modular framework to plug in new factual and relevant knowledge into general-purpose language models.
We first introduce knowledge cards -- specialized language models trained on corpora from specific domains and sources.
We then propose three content selectors to dynamically select and retain information in documents generated by knowledge cards.
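A rough sketch of the plug-in pipeline described above; the selector interfaces and the way surviving documents are passed to the base model are assumptions, not the paper's exact components.

```python
# Illustrative Knowledge Card pipeline: domain-specific "card" LMs generate
# candidate knowledge documents for a query, content selectors prune them, and
# the survivors are prepended as context for the general-purpose LLM.
def knowledge_card_answer(query, knowledge_cards, selectors, base_llm):
    # 1) Each specialized card generates a candidate knowledge document.
    candidates = [card.generate(query) for card in knowledge_cards]
    # 2) Content selectors (e.g. relevance or factuality filters) prune them.
    for select in selectors:
        candidates = select(query, candidates)
    # 3) The retained documents become context for the base model.
    context = "\n\n".join(candidates)
    return base_llm.generate(f"{context}\n\nQuestion: {query}\nAnswer:")
```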
arXiv Detail & Related papers (2023-05-17T05:25:27Z) - Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in its context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z) - Improving Meta-learning for Low-resource Text Classification and Generation via Memory Imitation [87.98063273826702]
We propose a memory imitation meta-learning (MemIML) method that enhances the model's reliance on support sets for task adaptation.
A theoretical analysis is provided to prove the effectiveness of our method.
arXiv Detail & Related papers (2022-03-22T12:41:55Z) - REALM: Retrieval-Augmented Language Model Pre-Training [37.3178586179607]
We augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia.
For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner.
We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA).
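REALM's training signal can be written as a marginal likelihood over retrieved documents, p(y|x) = Σ_z p(y|x, z) p(z|x), which is what lets the retriever be trained end-to-end from the pre-training objective. A small sketch, with shapes and variable names chosen for illustration:

```python
# Marginal log-likelihood used to train a latent retriever end-to-end:
# log p(y|x) = logsumexp_z [ log p(y|x, z) + log p(z|x) ],
# where p(z|x) comes from inner-product retrieval scores.
import torch

def realm_marginal_log_likelihood(query_emb, doc_embs, answer_logprobs):
    """
    query_emb:       (d,)   dense query embedding
    doc_embs:        (k, d) embeddings of the top-k retrieved documents
    answer_logprobs: (k,)   log p(y | x, z) for each retrieved document z
    """
    retrieval_scores = doc_embs @ query_emb               # relevance scores
    log_p_z = torch.log_softmax(retrieval_scores, dim=0)  # log p(z | x)
    return torch.logsumexp(answer_logprobs + log_p_z, dim=0)
```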
arXiv Detail & Related papers (2020-02-10T18:40:59Z)