Predictive Representation Learning for Language Modeling
- URL: http://arxiv.org/abs/2105.14214v1
- Date: Sat, 29 May 2021 05:03:47 GMT
- Title: Predictive Representation Learning for Language Modeling
- Authors: Qingfeng Lan, Luke Kumar, Martha White, Alona Fyshe
- Abstract summary: Correlates of secondary information appear in LSTM representations even though they are not part of an explicitly supervised prediction task.
We propose Predictive Representation Learning (PRL), which explicitly constrains LSTMs to encode specific predictions.
- Score: 33.08232449211759
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To effectively perform the task of next-word prediction, long short-term
memory networks (LSTMs) must keep track of many types of information. Some
information is directly related to the next word's identity, but some is more
secondary (e.g. discourse-level features or features of downstream words).
Correlates of secondary information appear in LSTM representations even though
they are not part of an \emph{explicitly} supervised prediction task. In
contrast, in reinforcement learning (RL), techniques that explicitly supervise
representations to predict secondary information have been shown to be
beneficial. Inspired by that success, we propose Predictive Representation
Learning (PRL), which explicitly constrains LSTMs to encode specific
predictions, like those that might need to be learned implicitly. We show that
PRL 1) significantly improves two strong language modeling methods, 2)
converges more quickly, and 3) performs better when data is limited. Our work
shows that explicitly encoding a simple predictive task facilitates the search
for a more effective language model.
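To make the idea concrete, here is a minimal sketch of an LSTM language model with an explicitly supervised auxiliary prediction head, in the spirit of PRL. The choice of secondary target (a POS-like label here), the head names, and the `aux_weight` coefficient are illustrative assumptions, not the paper's exact architecture or training recipe.

```python
import torch
import torch.nn as nn

class AuxiliaryLSTMLM(nn.Module):
    """LSTM language model with an extra head that is explicitly supervised
    to predict a secondary signal alongside next-word prediction."""

    def __init__(self, vocab_size, aux_vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.word_head = nn.Linear(hid_dim, vocab_size)     # next-word logits
        self.aux_head = nn.Linear(hid_dim, aux_vocab_size)  # secondary prediction

    def forward(self, tokens):
        hidden, _ = self.lstm(self.embed(tokens))
        return self.word_head(hidden), self.aux_head(hidden)

def training_step(model, tokens, next_words, aux_targets, aux_weight=0.5):
    """Joint loss: next-word cross-entropy plus a weighted auxiliary term that
    constrains the LSTM state to encode the secondary prediction."""
    word_logits, aux_logits = model(tokens)
    ce = nn.functional.cross_entropy
    lm_loss = ce(word_logits.flatten(0, 1), next_words.flatten())
    aux_loss = ce(aux_logits.flatten(0, 1), aux_targets.flatten())
    return lm_loss + aux_weight * aux_loss

# Toy usage with random data (vocabulary of 1000 words, 20 auxiliary labels).
model = AuxiliaryLSTMLM(vocab_size=1000, aux_vocab_size=20)
tokens = torch.randint(0, 1000, (8, 35))
loss = training_step(model, tokens,
                     torch.randint(0, 1000, (8, 35)),
                     torch.randint(0, 20, (8, 35)))
```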
Related papers
- Gloss Attention for Gloss-free Sign Language Translation [60.633146518820325]
We show how gloss annotations make sign language translation easier.
We then propose gloss attention, which enables the model to keep its attention within video segments that have the same semantics locally.
Experimental results on multiple large-scale sign language datasets show that our proposed GASLT model significantly outperforms existing methods.
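The summary above only hints at the mechanism, so here is a rough sketch of one way attention can be restricted to a local window of neighbouring video frames. The window size, the boolean-mask convention, and the use of PyTorch's MultiheadAttention are assumptions for illustration, not the GASLT implementation.

```python
import torch

def local_attention_mask(seq_len, window=4):
    """Boolean mask allowing each position to attend only to neighbours
    within +/- `window` steps (True = masked out), a simplified stand-in
    for keeping attention inside nearby, semantically similar segments."""
    idx = torch.arange(seq_len)
    dist = (idx[:, None] - idx[None, :]).abs()
    return dist > window

# Usage with PyTorch's built-in multi-head attention over video features.
attn = torch.nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
frames = torch.randn(2, 10, 64)                 # (batch, frames, features)
mask = local_attention_mask(10, window=2)       # (frames, frames)
out, _ = attn(frames, frames, frames, attn_mask=mask)
```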
arXiv Detail & Related papers (2023-07-14T14:07:55Z) - Bidirectional Representations for Low Resource Spoken Language Understanding [39.208462511430554]
We propose a representation model to encode speech in bidirectional rich encodings.
The approach uses a masked language modelling objective to learn the representations.
We show that the performance of the resulting encodings is better than comparable models on multiple datasets.
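As an illustration of a masked-prediction objective over speech, the sketch below masks a random subset of acoustic frames and trains a bidirectional Transformer encoder to reconstruct them. The encoder size, masking rate, and reconstruction loss are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class MaskedSpeechEncoder(nn.Module):
    """Non-causal encoder trained to reconstruct masked speech frames,
    in the spirit of a masked language modelling objective."""

    def __init__(self, feat_dim=80, model_dim=256):
        super().__init__()
        self.proj_in = nn.Linear(feat_dim, model_dim)
        layer = nn.TransformerEncoderLayer(model_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.proj_out = nn.Linear(model_dim, feat_dim)

    def forward(self, frames, mask_prob=0.15):
        # Zero out a random subset of frames, then reconstruct them.
        mask = torch.rand(frames.shape[:2], device=frames.device) < mask_prob
        corrupted = frames.masked_fill(mask.unsqueeze(-1), 0.0)
        recon = self.proj_out(self.encoder(self.proj_in(corrupted)))
        # Reconstruction loss only on the masked positions.
        return ((recon - frames) ** 2)[mask].mean()

model = MaskedSpeechEncoder()
loss = model(torch.randn(2, 100, 80))   # (batch, frames, mel features)
```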
arXiv Detail & Related papers (2022-11-24T17:05:16Z) - Characterizing Verbatim Short-Term Memory in Neural Language Models [19.308884420859027]
We tested whether language models could retrieve the exact words that occurred previously in a text.
We found that the transformers retrieved both the identity and ordering of nouns from the first list.
Their ability to index prior tokens was dependent on learned attention patterns.
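A simple way to probe this kind of verbatim memory is to measure per-token surprisal on a list that is repeated word for word; if the model can index the earlier occurrence, surprisal on the second copy should drop sharply. The sketch below assumes the Hugging Face transformers package and pretrained GPT-2 weights as a stand-in model; it is only an illustrative probe, not the paper's experimental paradigm.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# A list of nouns shown once and then repeated verbatim; if the model can
# index prior tokens, surprisal on the second occurrence should drop.
text = ("The list was: apple, table, river, candle. "
        "She repeated the list: apple, table, river, candle.")
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits

# Surprisal (negative log-probability) of each token given its prefix.
logp = torch.log_softmax(logits[:, :-1], dim=-1)
surprisal = -logp.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
for token, s in zip(tok.convert_ids_to_tokens(ids[0, 1:].tolist()), surprisal[0]):
    print(f"{token:>10s}  {s.item():6.2f}")
```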
arXiv Detail & Related papers (2022-10-24T19:47:56Z) - Language Model Pre-Training with Sparse Latent Typing [66.75786739499604]
We propose a new pre-training objective, Sparse Latent Typing, which enables the model to sparsely extract sentence-level keywords with diverse latent types.
Experimental results show that our model is able to learn interpretable latent type categories in a self-supervised manner without using any external knowledge.
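The sketch below conveys the flavour of such an objective: each token is assigned either one of a small set of latent types or a "null" type, with a penalty that keeps non-null assignments sparse. The Gumbel-softmax gating, the null type, and the penalty weight are assumptions, not the paper's actual Sparse Latent Typing objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLatentTyper(nn.Module):
    """Assigns each token either one of `num_types` latent types or a 'null'
    type; a sparsity penalty pushes most tokens toward null so that only a
    few keyword-like tokens receive a type."""

    def __init__(self, hidden_dim=256, num_types=8):
        super().__init__()
        self.type_logits = nn.Linear(hidden_dim, num_types + 1)  # last slot = null

    def forward(self, token_states, sparsity_weight=0.1):
        logits = self.type_logits(token_states)               # (batch, tokens, types+1)
        types = F.gumbel_softmax(logits, tau=1.0, hard=True)  # discrete-ish assignment
        selected = 1.0 - types[..., -1]                       # 1 if a non-null type was chosen
        sparsity_penalty = sparsity_weight * selected.mean()
        return types, sparsity_penalty

typer = SparseLatentTyper()
states = torch.randn(2, 16, 256)            # encoder outputs for 16 tokens
types, penalty = typer(states)              # add `penalty` to the pre-training loss
```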
arXiv Detail & Related papers (2022-10-23T00:37:08Z) - Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little [74.49773960145681]
A possible explanation for the impressive performance of masked language model (MLM) pre-training is that such models have learned to represent the syntactic structures prevalent in NLP pipelines.
In this paper, we propose a different explanation: pre-trained MLMs succeed on downstream tasks almost entirely due to their ability to model higher-order word co-occurrence statistics.
Our results show that purely distributional information largely explains the success of pre-training, and underscore the importance of curating challenging evaluation datasets that require deeper linguistic knowledge.
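This distributional-versus-syntax argument is typically tested by pre-training on corpora whose word order has been destroyed while the bag of words is preserved. Below is a minimal sketch of how such a control sentence might be built; it shows unigram-level shuffling only, and the paper's exact perturbations may differ.

```python
import random

def shuffle_word_order(sentence, seed=None):
    """Destroy word order while keeping the bag of words, so a model
    pre-trained on the result can only exploit co-occurrence statistics,
    not syntax."""
    rng = random.Random(seed)
    words = sentence.split()
    rng.shuffle(words)
    return " ".join(words)

print(shuffle_word_order("the cat sat on the mat", seed=0))
# prints some permutation, e.g. "mat the sat on cat the"
```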
arXiv Detail & Related papers (2021-04-14T06:30:36Z) - InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training [135.12061144759517]
We present an information-theoretic framework that formulates cross-lingual language model pre-training.
We propose a new pre-training task based on contrastive learning.
By leveraging both monolingual and parallel corpora, we jointly train the pretext tasks to improve the cross-lingual transferability of pre-trained models.
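One standard way to realise a contrastive pre-training task over parallel corpora is an InfoNCE-style loss computed over in-batch translation pairs. The sketch below assumes sentence embeddings have already been produced by some encoder; the temperature value is illustrative and not taken from InfoXLM.

```python
import torch
import torch.nn.functional as F

def cross_lingual_contrastive_loss(src_emb, tgt_emb, temperature=0.05):
    """InfoNCE-style loss over a batch of parallel sentence pairs: each source
    sentence should be most similar to its own translation and dissimilar to
    the other targets in the batch."""
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.t() / temperature              # (batch, batch) similarities
    labels = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random "sentence embeddings" for 8 parallel pairs.
loss = cross_lingual_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```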
arXiv Detail & Related papers (2020-07-15T16:58:01Z) - Analysis of Predictive Coding Models for Phonemic Representation Learning in Small Datasets [0.0]
The present study investigates the behaviour of two predictive coding models, Autoregressive Predictive Coding and Contrastive Predictive Coding, in a phoneme discrimination task.
Our experiments show a strong correlation between the autoregressive loss and the phoneme discrimination scores on the two datasets.
The CPC model shows rapid convergence already after one pass over the training data, and, on average, its representations outperform those of APC on both languages.
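For concreteness, here is a minimal APC-style objective: a unidirectional recurrent encoder predicts the acoustic features a few frames ahead of the current frame with an L1 loss. The GRU encoder, feature dimensionality, and `shift` value are assumptions; the models and hyperparameters in the paper differ.

```python
import torch
import torch.nn as nn

class AutoregressivePredictiveCoding(nn.Module):
    """Unidirectional RNN trained to predict the acoustic features `shift`
    frames ahead of the current frame (an APC-style objective)."""

    def __init__(self, feat_dim=80, hidden_dim=512):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, feat_dim)

    def forward(self, frames, shift=3):
        hidden, _ = self.rnn(frames)
        pred = self.out(hidden)
        # L1 loss between the prediction at frame t and the true frame t+shift.
        return (pred[:, :-shift] - frames[:, shift:]).abs().mean()

apc = AutoregressivePredictiveCoding()
loss = apc(torch.randn(4, 200, 80))   # (batch, frames, mel features)
```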
arXiv Detail & Related papers (2020-07-08T15:46:13Z) - Pre-training Text Representations as Meta Learning [113.3361289756749]
We introduce a learning algorithm which directly optimizes the model's ability to learn text representations for effective learning of downstream tasks.
We show that there is an intrinsic connection between multi-task pre-training and model-agnostic meta-learning with a sequence of meta-train steps.
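To make the connection concrete, below is a rough first-order meta-learning step in the MAML family: adapt a copy of the model on each task's support batch, compute the loss of the adapted copy on a query batch, and apply the accumulated query gradients to the shared model. The task format, optimizers, and learning rates are assumptions and not the algorithm proposed in the paper.

```python
import copy
import torch

def first_order_meta_step(model, tasks, inner_lr=1e-2, meta_lr=1e-3):
    """One first-order meta step: adapt a copy of `model` on each task's
    support batch, evaluate the adapted copy on the query batch, and apply
    the accumulated query gradients to the shared model."""
    meta_opt = torch.optim.SGD(model.parameters(), lr=meta_lr)
    meta_opt.zero_grad()
    for support_batch, query_batch, loss_fn in tasks:
        fast = copy.deepcopy(model)                       # task-specific copy
        inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        loss_fn(fast, support_batch).backward()           # inner-loop adaptation
        inner_opt.step()
        inner_opt.zero_grad()
        loss_fn(fast, query_batch).backward()             # meta-loss on held-out batch
        # First-order approximation: copy the query gradients to the shared model.
        for p, fp in zip(model.parameters(), fast.parameters()):
            if fp.grad is None:
                continue
            p.grad = fp.grad.clone() if p.grad is None else p.grad + fp.grad
    meta_opt.step()

# Toy usage: a few synthetic regression "tasks" with (inputs, targets) batches.
model = torch.nn.Linear(4, 1)
def mse(m, batch):
    x, y = batch
    return torch.nn.functional.mse_loss(m(x), y)
tasks = [((torch.randn(8, 4), torch.randn(8, 1)),
          (torch.randn(8, 4), torch.randn(8, 1)), mse) for _ in range(3)]
first_order_meta_step(model, tasks)
```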
arXiv Detail & Related papers (2020-04-12T09:05:47Z) - Depth-Adaptive Graph Recurrent Network for Text Classification [71.20237659479703]
Sentence-State LSTM (S-LSTM) is a powerful and highly efficient graph recurrent network.
We propose a depth-adaptive mechanism for the S-LSTM, which allows the model to learn how many computational steps to conduct for different words as required.
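The summary names the mechanism without details, so the sketch below shows one generic way to implement depth adaptivity: an ACT-style halting scheme that decides, per word, how much each successive recurrent update contributes to the final state. The GRU cell, maximum step count, and halting parameterisation are assumptions, not the paper's S-LSTM formulation.

```python
import torch
import torch.nn as nn

class DepthAdaptiveRecurrence(nn.Module):
    """Runs up to `max_steps` recurrent updates per word and uses a learned
    halting probability (ACT-style) to decide, per word, how much each
    successive update contributes to its final state."""

    def __init__(self, hidden_dim=256, max_steps=4):
        super().__init__()
        self.update = nn.GRUCell(hidden_dim, hidden_dim)
        self.halt = nn.Linear(hidden_dim, 1)
        self.max_steps = max_steps

    def forward(self, word_states):                        # (num_words, hidden)
        state = word_states
        remaining = torch.ones(state.size(0), 1, device=state.device)
        output = torch.zeros_like(state)
        for step in range(self.max_steps):
            state = self.update(word_states, state)
            p_halt = torch.sigmoid(self.halt(state))
            if step == self.max_steps - 1:
                p_halt = torch.ones_like(p_halt)           # force halting at the last step
            weight = remaining * p_halt
            output = output + weight * state               # halting-weighted mixture of steps
            remaining = remaining * (1.0 - p_halt)
        return output

layer = DepthAdaptiveRecurrence()
deep_states = layer(torch.randn(32, 256))   # 32 words, 256-dim states
```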
arXiv Detail & Related papers (2020-02-29T03:09:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.