Towards Continual Entity Learning in Language Models for Conversational Agents
- URL: http://arxiv.org/abs/2108.00082v1
- Date: Fri, 30 Jul 2021 21:10:09 GMT
- Title: Towards Continual Entity Learning in Language Models for Conversational Agents
- Authors: Ravi Teja Gadde, Ivan Bulyko
- Abstract summary: We introduce entity-aware language models (EALM), where we integrate entity models trained on catalogues of entities into pre-trained LMs.
Our combined language model adaptively adds information from the entity models into the pre-trained LM depending on the sentence context.
We show significant perplexity improvements on task-oriented dialogue datasets, especially on long-tailed utterances.
- Score: 0.5330240017302621
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural language models (LM) trained on diverse corpora are known to work well
on previously seen entities; however, updating these models with dynamically
changing entities such as place names, song titles, and shopping items requires
re-training from scratch and collecting full sentences containing these
entities. We aim to address this issue by introducing entity-aware language
models (EALM), where we integrate entity models trained on catalogues of
entities into the pre-trained LMs. Our combined language model adaptively adds
information from the entity models into the pre-trained LM depending on the
sentence context. Our entity models can be updated independently of the
pre-trained LM, enabling us to influence the distribution of entities output by
the final LM, without any further training of the pre-trained LM. We show
significant perplexity improvements on task-oriented dialogue datasets,
especially on long-tailed utterances, with an ability to continually adapt to
new entities (to an extent).
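
The abstract describes an adaptive, context-dependent combination of swappable entity models with a pre-trained LM. The minimal sketch below illustrates one way such a combination could look, as a learned gate that interpolates the two next-token distributions; the class name, the sigmoid gate, the linear interpolation, and the HuggingFace-style model interface are all illustrative assumptions rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

class EntityAwareLM(nn.Module):
    """Illustrative sketch only: interpolate a pre-trained LM's next-token
    distribution with an entity model's distribution, gated by context.
    The real EALM architecture may differ."""

    def __init__(self, pretrained_lm, entity_model, hidden_size):
        super().__init__()
        self.pretrained_lm = pretrained_lm    # general-domain LM, left untouched
        self.entity_model = entity_model      # trained on an entity catalogue; swappable
        self.gate = nn.Linear(hidden_size, 1) # context-dependent mixing weight

    def forward(self, input_ids):
        # Assumes HuggingFace-style causal LMs that expose .logits / .hidden_states.
        lm_out = self.pretrained_lm(input_ids, output_hidden_states=True)
        lm_probs = torch.softmax(lm_out.logits, dim=-1)
        entity_probs = torch.softmax(self.entity_model(input_ids).logits, dim=-1)

        # How much entity information to inject depends on the sentence context.
        context = lm_out.hidden_states[-1]        # (batch, seq, hidden)
        lam = torch.sigmoid(self.gate(context))   # (batch, seq, 1)

        # Linear interpolation of the two distributions per position.
        return (1.0 - lam) * lm_probs + lam * entity_probs
```

Because the entity model is a separate module in this sketch, adapting to a new entity catalogue only requires retraining that module, which mirrors the continual-adaptation claim in the abstract.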
Related papers
- Unlocking the Potential of Model Merging for Low-Resource Languages [66.7716891808697]
Adapting large language models to new languages typically involves continual pre-training (CT) followed by supervised fine-tuning (SFT).
We propose model merging as an alternative for low-resource languages, combining models with distinct capabilities into a single model without additional training (a generic weight-averaging sketch of such merging follows this list).
Experiments based on Llama-2-7B demonstrate that model merging effectively endows LLMs for low-resource languages with task-solving abilities, outperforming CT-then-SFT in scenarios with extremely scarce data.
arXiv Detail & Related papers (2024-07-04T15:14:17Z)
- Tracking the perspectives of interacting language models [11.601000749578647]
Large language models (LLMs) are capable of producing high quality information at unprecedented rates.
As these models continue to entrench themselves in society, the content they produce will become increasingly pervasive in databases.
arXiv Detail & Related papers (2024-06-17T17:20:16Z)
- LLM Augmented LLMs: Expanding Capabilities through Composition [56.40953749310957]
CALM -- Composition to Augment Language Models -- introduces cross-attention between models to compose their representations and enable new capabilities.
We illustrate that augmenting PaLM2-S with a smaller model trained on low-resource languages results in an absolute improvement of up to 13% on tasks like translation into English.
When PaLM2-S is augmented with a code-specific model, we see a relative improvement of 40% over the base model for code generation and explanation tasks.
arXiv Detail & Related papers (2024-01-04T18:53:01Z)
- Concept-aware Training Improves In-context Learning Ability of Language Models [0.0]
Many recent language models (LMs) of the Transformer family exhibit so-called in-context learning (ICL) ability.
We propose a method to create LMs able to better utilize the in-context information.
We find that the data sampling of Concept-aware Training consistently improves models' reasoning ability.
arXiv Detail & Related papers (2023-05-23T07:44:52Z)
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- LMPriors: Pre-Trained Language Models as Task-Specific Priors [78.97143833642971]
We develop principled techniques for augmenting our models with suitable priors, encouraging them to learn in ways that are compatible with our understanding of the world.
We draw inspiration from the recent successes of large-scale language models (LMs) to construct task-specific priors distilled from the rich knowledge of LMs.
arXiv Detail & Related papers (2022-10-22T19:09:18Z)
- Efficient and Interpretable Neural Models for Entity Tracking [3.1985066117432934]
This thesis focuses on two key problems related to facilitating the use of entity tracking models.
We argue that computationally efficient entity tracking models can be developed by representing entities with rich, fixed-dimensional vector representations.
We also argue for the integration of entity tracking into language models, as it will allow for (i) wider application, given the current ubiquitous use of pretrained language models in NLP applications.
arXiv Detail & Related papers (2022-08-30T13:25:27Z)
- Entity Cloze By Date: What LMs Know About Unseen Entities [79.34707800653597]
Language models (LMs) are typically trained once on a large-scale corpus and used for years without being updated.
We propose a framework to analyze what LMs can infer about new entities that did not exist when the LMs were pretrained.
We derive a dataset of entities indexed by their origination date and paired with their English Wikipedia articles, from which we can find sentences about each entity.
arXiv Detail & Related papers (2022-05-05T17:59:31Z)
- mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models [15.873069955407406]
We train a multilingual language model on 24 languages with entity representations.
We show the model consistently outperforms word-based pretrained models in various cross-lingual transfer tasks.
We also evaluate the model with a multilingual cloze prompt task with the mLAMA dataset.
arXiv Detail & Related papers (2021-10-15T15:28:38Z)
- MergeDistill: Merging Pre-trained Language Models using Distillation [5.396915402673246]
We propose MergeDistill, a framework to merge pre-trained LMs in a way that can best leverage their assets with minimal dependencies.
We demonstrate the applicability of our framework in a practical setting by leveraging pre-existing teacher LMs and training student LMs that perform competitively with or even outperform teacher LMs trained on several orders of magnitude more data and with a fixed model capacity.
arXiv Detail & Related papers (2021-06-05T08:22:05Z)
- Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707]
We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking.
We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair and the ParaNMT datasets.
arXiv Detail & Related papers (2020-10-24T11:55:28Z)
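
The model-merging entry above claims that models with distinct capabilities can be combined without additional training. As a point of reference, the sketch below shows the simplest flavour of weight-space merging: uniform parameter averaging of two same-architecture checkpoints. The function name, checkpoint paths, and the uniform-averaging rule are illustrative assumptions; the merging method in the cited paper may be more sophisticated.

```python
import torch

def merge_state_dicts(state_dict_a, state_dict_b, alpha=0.5):
    """Weight-space merge of two same-architecture checkpoints.

    Illustrative only: real merging methods (e.g. for low-resource
    language adaptation) may weight or select parameters differently.
    """
    merged = {}
    for name, param_a in state_dict_a.items():
        param_b = state_dict_b[name]
        # Interpolate each tensor; alpha=0.5 is a plain average.
        merged[name] = alpha * param_a + (1.0 - alpha) * param_b
    return merged

# Hypothetical usage: blend a general checkpoint with a language-adapted one,
# then load the result into a model with the same architecture.
# base = torch.load("base_lm.pt")        # assumed checkpoint paths
# adapted = torch.load("adapted_lm.pt")
# model.load_state_dict(merge_state_dicts(base, adapted, alpha=0.5))
```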
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.