Fusing Sentence Embeddings Into LSTM-based Autoregressive Language
Models
- URL: http://arxiv.org/abs/2208.02402v2
- Date: Fri, 5 Aug 2022 05:26:13 GMT
- Title: Fusing Sentence Embeddings Into LSTM-based Autoregressive Language
Models
- Authors: Vilém Zouhar, Marius Mosbach, Dietrich Klakow
- Abstract summary: We present an LSTM-based autoregressive language model which uses prefix embeddings (from a pretrained masked language model) via fusion.
We find that fusion reliably lowers perplexity (16.74 $\rightarrow$ 15.80), an improvement that is preserved even after transfer to a dataset from a different domain.
We also evaluate the best-performing fusion model by correlating its next word surprisal estimates with human reading times.
- Score: 20.24851041248274
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although masked language models are highly performant and widely adopted by
NLP practitioners, they cannot easily be used for autoregressive language
modelling (next word prediction and sequence probability estimation). We
present an LSTM-based autoregressive language model which uses prefix
embeddings (from a pretrained masked language model) via fusion (e.g.
concatenation) to obtain a richer context representation for language
modelling. We find that fusion reliably lowers perplexity (16.74
$\rightarrow$ 15.80), an improvement that is preserved even after transfer to
a dataset from a domain different from the training data. We also evaluate the
best-performing fusion model by correlating its next word surprisal estimates
with human reading times. Contrary to our expectation, and despite the overall
improvement in perplexity, the correlation remains the same as for the
baseline model. Lastly, while we focus on language models pre-trained on text
as the sources for the fusion, our approach could be extended to fuse any
information represented as a fixed-size vector into an autoregressive language
model, e.g. sentence-external information retrieved from a knowledge base or
representations from multi-modal encoders.
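The fusion described above can be pictured with a short sketch: concatenate a fixed-size prefix embedding (e.g. a sentence embedding of the prefix produced by a pretrained masked language model) with the LSTM hidden state before the output projection. This is a minimal sketch under assumed dimensions and module names (PrefixFusionLM and the 768-dimensional prefix vector are illustrative, not the authors' implementation), and for simplicity a single prefix vector is reused across all time steps.

```python
import torch
import torch.nn as nn

class PrefixFusionLM(nn.Module):
    """Minimal sketch: an LSTM language model that fuses a fixed-size prefix
    embedding (e.g. from a pretrained masked language model) with its hidden
    state via concatenation before predicting the next word."""

    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512, prefix_dim=768):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # The output layer sees [LSTM hidden state ; prefix embedding].
        self.out = nn.Linear(hidden_dim + prefix_dim, vocab_size)

    def forward(self, token_ids, prefix_emb):
        # token_ids: (batch, seq_len); prefix_emb: (batch, prefix_dim)
        h, _ = self.lstm(self.embed(token_ids))            # (batch, seq_len, hidden_dim)
        prefix = prefix_emb.unsqueeze(1).expand(-1, h.size(1), -1)
        fused = torch.cat([h, prefix], dim=-1)             # concatenation fusion
        return self.out(fused)                             # next-word logits

# Usage sketch: prefix_emb would come from a frozen pretrained MLM encoder.
model = PrefixFusionLM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 12)), torch.randn(2, 768))
print(logits.shape)  # torch.Size([2, 12, 10000])
```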
Related papers
- Knowledge Fusion By Evolving Weights of Language Models [5.354527640064584]
This paper examines the approach of integrating multiple models into a unified model.
We propose a knowledge fusion method named Evolver, inspired by evolutionary algorithms.
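The summary above does not spell out the algorithm; as a rough illustration of evolving the weights that merge several models, the sketch below evolves mixing coefficients with a placeholder fitness function. All names (merge, evolve_merge_coeffs, fitness) are hypothetical and this is not Evolver's actual procedure.

```python
import random

def normalize(coeffs):
    s = sum(coeffs) or 1.0
    return [c / s for c in coeffs]

def merge(models, coeffs):
    """Weighted average of several models' parameter dicts (same keys and shapes)."""
    return {k: sum(c * m[k] for c, m in zip(coeffs, models)) for k in models[0]}

def evolve_merge_coeffs(models, fitness, pop_size=20, generations=30, sigma=0.1):
    """Evolve the coefficients that merge several models into a unified model.
    `fitness` scores a merged parameter dict, e.g. negative dev-set loss."""
    n = len(models)
    population = [[random.random() for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=lambda c: fitness(merge(models, normalize(c))), reverse=True)
        parents = scored[: pop_size // 2]                    # selection
        children = [
            [max(0.0, (a + b) / 2 + random.gauss(0, sigma))  # crossover + mutation
             for a, b in zip(random.choice(parents), random.choice(parents))]
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return normalize(max(population, key=lambda c: fitness(merge(models, normalize(c)))))

# Toy usage: "models" are dicts of scalars; the fitness prefers a merged value near 1.0.
models = [{"w": 0.2}, {"w": 0.9}, {"w": 1.6}]
coeffs = evolve_merge_coeffs(models, fitness=lambda m: -abs(m["w"] - 1.0))
print(coeffs, merge(models, coeffs))
```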
arXiv Detail & Related papers (2024-06-18T02:12:34Z) - FiLM: Fill-in Language Models for Any-Order Generation [71.42044325886194]
Fill-in Language Model (FiLM) is a new language modeling approach that allows for flexible generation at any position without adhering to a specific generation order.
During inference, FiLM can seamlessly insert missing phrases, sentences, or paragraphs.
FiLM outperforms existing infilling methods that rely on left-to-right language models trained on rearranged text segments.
arXiv Detail & Related papers (2023-10-15T19:37:39Z) - Modeling Sequential Sentence Relation to Improve Cross-lingual Dense
Retrieval [87.11836738011007]
We propose a multilingual language model called the masked sentence model (MSM).
MSM consists of a sentence encoder to generate the sentence representations, and a document encoder applied to a sequence of sentence vectors from a document.
To train the model, we propose a masked sentence prediction task, which masks and predicts the sentence vector via a hierarchical contrastive loss with sampled negatives.
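As a loose illustration of a masked sentence prediction objective with sampled negatives, the sketch below masks one sentence vector, predicts it from the remaining ones, and scores the prediction against the true vector and negatives with an InfoNCE-style loss. It is only a sketch under assumptions: the mean-of-sentences stand-in for the document encoder and the flat (non-hierarchical) contrastive loss are simplifications, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def masked_sentence_contrastive_loss(doc_sent_vecs, mask_idx, negatives, temperature=0.05):
    """Sketch of masked sentence prediction: mask one sentence vector in a
    document, predict it from the unmasked ones, and contrast the prediction
    against the true vector (positive) and sentence vectors sampled from
    other documents (negatives)."""
    # doc_sent_vecs: (num_sents, dim) from a sentence encoder
    # negatives:     (num_neg, dim) sampled from other documents
    target = doc_sent_vecs[mask_idx]
    context = doc_sent_vecs.clone()
    context[mask_idx] = 0.0                                  # mask the target sentence vector
    # Stand-in for the document encoder: mean of the unmasked sentence vectors.
    prediction = context.sum(dim=0) / (len(context) - 1)
    candidates = torch.cat([target.unsqueeze(0), negatives], dim=0)  # positive is index 0
    logits = F.cosine_similarity(prediction.unsqueeze(0), candidates) / temperature
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([0]))

# Usage with random vectors: a 6-sentence document, 128-dim vectors, 16 negatives.
loss = masked_sentence_contrastive_loss(torch.randn(6, 128), mask_idx=2, negatives=torch.randn(16, 128))
print(loss.item())
```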
arXiv Detail & Related papers (2023-02-03T09:54:27Z) - mFACE: Multilingual Summarization with Factual Consistency Evaluation [79.60172087719356]
Abstractive summarization has enjoyed renewed interest in recent years, thanks to pre-trained language models and the availability of large-scale datasets.
Despite promising results, current models still suffer from generating factually inconsistent summaries.
We leverage factual consistency evaluation models to improve multilingual summarization.
arXiv Detail & Related papers (2022-12-20T19:52:41Z) - Efficient Training of Language Models to Fill in the Middle [17.118891860985123]
We show that autoregressive language models can learn to infill text after we apply a straightforward transformation to the dataset.
We use these ablations to prescribe strong default settings and best practices to train FIM models.
We have released our best infilling model trained with best practices in our API, and release our infilling benchmarks to aid future research.
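The "straightforward transformation" is, in essence, a rearrangement of each training document into prefix/suffix/middle order marked by sentinel tokens, so that a left-to-right model learns to generate the middle after seeing the surrounding context. The sketch below illustrates this kind of transformation; the sentinel strings and the uniform split are assumptions for illustration rather than the paper's exact recipe.

```python
import random

def fim_transform(tokens, pre="<PRE>", suf="<SUF>", mid="<MID>"):
    """Rearrange a document so an autoregressive LM learns to fill in the middle:
    pick two split points, then emit prefix, suffix, and finally the middle,
    each introduced by a sentinel token."""
    i, j = sorted(random.sample(range(len(tokens) + 1), 2))
    prefix, middle, suffix = tokens[:i], tokens[i:j], tokens[j:]
    return [pre, *prefix, suf, *suffix, mid, *middle]

# Usage: the model is then trained left-to-right on the rearranged sequence.
random.seed(0)
doc = "the quick brown fox jumps over the lazy dog".split()
print(" ".join(fim_transform(doc)))
```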
arXiv Detail & Related papers (2022-07-28T17:40:47Z) - Better Language Model with Hypernym Class Prediction [101.8517004687825]
Class-based language models (LMs) have long been used to address context sparsity in $n$-gram LMs.
In this study, we revisit this approach in the context of neural LMs.
arXiv Detail & Related papers (2022-03-21T01:16:44Z) - Mixed Attention Transformer for Leveraging Word-Level Knowledge to Neural
Cross-Lingual Information Retrieval [15.902630454568811]
We propose a novel Mixed Attention Transformer (MAT) that incorporates external word-level knowledge, such as a dictionary or translation table.
By encoding the translation knowledge into an attention matrix, the model with MAT is able to focus on the mutually translated words in the input sequence.
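One simple way to picture "encoding the translation knowledge into an attention matrix" is to blend the model's learned attention weights with an external 0/1 matrix that marks mutually translated words, as in the sketch below. The mixing scheme and all names here are assumptions for illustration, not the exact MAT formulation.

```python
import torch
import torch.nn.functional as F

def mixed_attention(q, k, v, translation_matrix, mix=0.5):
    """Sketch of mixed attention: blend learned attention weights with an
    external word-translation matrix so that mutually translated words in the
    two languages attend to each other.
    q: (seq_q, d), k/v: (seq_k, d), translation_matrix: (seq_q, seq_k) in {0, 1}."""
    learned = F.softmax(q @ k.T / q.size(-1) ** 0.5, dim=-1)
    # Row-normalise the external matrix; rows with no known translation stay zero.
    external = translation_matrix / translation_matrix.sum(dim=-1, keepdim=True).clamp(min=1.0)
    weights = (1 - mix) * learned + mix * external
    return weights @ v

# Usage: query words 0 and 2 have dictionary translations among the 5 key words.
q, k, v = torch.randn(3, 64), torch.randn(5, 64), torch.randn(5, 64)
trans = torch.zeros(3, 5)
trans[0, 1] = trans[2, 4] = 1.0
print(mixed_attention(q, k, v, trans).shape)  # torch.Size([3, 64])
```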
arXiv Detail & Related papers (2021-09-07T00:33:14Z) - Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707]
We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking.
We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair and the ParaNMT datasets.
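As a rough, simplified illustration of a decoding constraint in the spirit of Dynamic Blocking, the greedy sketch below blocks the source token that would continue an exact copy of the source, nudging the model toward different surface forms. The real algorithm (probabilistic blocking, multiple candidates, etc.) is not reproduced here, and the helper names are hypothetical.

```python
import torch

def decode_with_blocking(step_logits_fn, source_ids, max_len=20, bos=1, eos=2):
    """Greedy decoding sketch with a dynamic-blocking-style constraint:
    whenever the last generated token matches a source token, the token that
    immediately follows it in the source is blocked at the next step,
    discouraging verbatim copies of the source sentence."""
    generated = [bos]
    for _ in range(max_len):
        logits = step_logits_fn(generated).clone()           # (vocab,) next-token scores
        last = generated[-1]
        for pos, tok in enumerate(source_ids[:-1]):
            if tok == last:                                   # continuing an exact copy?
                logits[source_ids[pos + 1]] = float("-inf")   # block the copied continuation
        nxt = int(torch.argmax(logits))
        generated.append(nxt)
        if nxt == eos:
            break
    return generated

# Usage with a dummy "model" that always prefers to copy the source verbatim.
source = [5, 6, 7, 8, 2]
dummy = lambda prefix: torch.nn.functional.one_hot(
    torch.tensor(source[min(len(prefix) - 1, len(source) - 1)]), num_classes=50
).float()
print(decode_with_blocking(dummy, source))
```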
arXiv Detail & Related papers (2020-10-24T11:55:28Z) - Pre-training Multilingual Neural Machine Translation by Leveraging
Alignment Information [72.2412707779571]
mRASP is an approach to pre-train a universal multilingual neural machine translation model.
We carry out experiments on 42 translation directions across a diverse setting, including low-, medium-, and rich-resource languages, as well as transferring to exotic language pairs.
arXiv Detail & Related papers (2020-10-07T03:57:54Z)