CUE Vectors: Modular Training of Language Models Conditioned on Diverse
Contextual Signals
- URL: http://arxiv.org/abs/2203.08774v1
- Date: Wed, 16 Mar 2022 17:37:28 GMT
- Title: CUE Vectors: Modular Training of Language Models Conditioned on Diverse
Contextual Signals
- Authors: Scott Novotney, Sreeparna Mukherjee, Zeeshan Ahmed and Andreas Stolcke
- Abstract summary: We propose a framework to modularize the training of neural language models that use diverse forms of sentence-external context (including metadata).
Our approach, contextual universal embeddings (CUE), trains LMs on one set of context, such as date and author, and adapts to novel metadata types, such as article title or previous sentence.
We validate the CUE framework on a NYTimes text corpus with multiple metadata types, for which the LM perplexity can be lowered from 36.6 to 27.4 by conditioning on context.
- Score: 11.310756148007753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a framework to modularize the training of neural language models
that use diverse forms of sentence-external context (including metadata) by
eliminating the need to jointly train sentence-external and within-sentence
encoders. Our approach, contextual universal embeddings (CUE), trains LMs on
one set of context, such as date and author, and adapts to novel metadata
types, such as article title, or previous sentence. The model consists of a
pretrained neural sentence LM, a BERT-based context encoder, and a masked
transformer decoder that estimates LM probabilities using sentence-internal and
sentence-external information. When context or metadata are unavailable, our
model learns to combine contextual and sentence-internal information using
noisy oracle unigram embeddings as a proxy. Real contextual information can be
introduced later and used to adapt a small number of parameters that map
contextual data into the decoder's embedding space. We validate the CUE
framework on a NYTimes text corpus with multiple metadata types, for which the
LM perplexity can be lowered from 36.6 to 27.4 by conditioning on context.
Bootstrapping a contextual LM with only a subset of the context/metadata during
training retains 85% of the achievable gain. Training the model initially with
proxy context retains 67% of the perplexity gain after adapting to real
context. Furthermore, we can swap one type of pretrained sentence LM for
another without retraining the context encoders, by only adapting the decoder
model. Overall, we obtain a modular framework that allows incremental, scalable
training of context-enhanced LMs.
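For concreteness, the combination described in the abstract (a frozen pretrained sentence LM, a frozen BERT-style context encoder, a small mapping into the decoder's embedding space, and a masked transformer decoder over both signals) could be sketched roughly as follows. This is a minimal sketch under assumed standard PyTorch components; all module names, dimensions, and the two-layer decoder are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a CUE-style model; names, sizes, and layer counts are
# illustrative assumptions, not the paper's released code.
import torch
import torch.nn as nn

class ContextAdapter(nn.Module):
    """Small set of parameters mapping context embeddings into the decoder's
    embedding space; adapting to a new metadata type would update only this."""
    def __init__(self, ctx_dim: int, dec_dim: int):
        super().__init__()
        self.proj = nn.Linear(ctx_dim, dec_dim)

    def forward(self, ctx_emb: torch.Tensor) -> torch.Tensor:
        return self.proj(ctx_emb)

class CUEStyleLM(nn.Module):
    def __init__(self, sentence_lm: nn.Module, context_encoder: nn.Module,
                 ctx_dim: int, dec_dim: int, vocab_size: int):
        super().__init__()
        # Pretrained within-sentence LM and BERT-style context encoder, frozen.
        self.sentence_lm = sentence_lm
        self.context_encoder = context_encoder
        for module in (self.sentence_lm, self.context_encoder):
            for p in module.parameters():
                p.requires_grad = False
        self.adapter = ContextAdapter(ctx_dim, dec_dim)
        layer = nn.TransformerDecoderLayer(d_model=dec_dim, nhead=8,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(dec_dim, vocab_size)

    def forward(self, input_ids, metadata_inputs=None, proxy_unigram_emb=None):
        # Sentence-internal states; the sentence LM is assumed to return
        # per-token hidden states of size dec_dim.
        sent_states = self.sentence_lm(input_ids)            # (B, T, dec_dim)
        # Sentence-external signal: real metadata when available, otherwise a
        # noisy oracle unigram embedding used as a proxy during bootstrapping.
        if metadata_inputs is not None:
            ctx_emb = self.context_encoder(metadata_inputs)  # (B, ctx_dim)
        else:
            ctx_emb = proxy_unigram_emb                      # (B, ctx_dim)
        memory = self.adapter(ctx_emb).unsqueeze(1)          # (B, 1, dec_dim)
        T = sent_states.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf"),
                                       device=sent_states.device), diagonal=1)
        h = self.decoder(tgt=sent_states, memory=memory, tgt_mask=causal)
        return self.out(h)                                   # next-token logits
```

Under this reading, adapting to real context later (or swapping in a different pretrained sentence LM) would train only the small mapping, e.g. `torch.optim.Adam(model.adapter.parameters(), lr=1e-4)`, leaving the frozen encoders untouched.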
Related papers
- Towards Zero-Shot Multimodal Machine Translation [64.9141931372384]
We propose a method to bypass the need for fully supervised data to train multimodal machine translation systems.
Our method, called ZeroMMT, consists of adapting a strong text-only machine translation (MT) model by training it on a mixture of two objectives.
To prove that our method generalizes to languages with no fully supervised training data available, we extend the CoMMuTE evaluation dataset to three new languages: Arabic, Russian and Chinese.
arXiv Detail & Related papers (2024-07-18T15:20:31Z) - A Case Study on Context-Aware Neural Machine Translation with Multi-Task Learning [49.62044186504516]
In document-level neural machine translation (DocNMT), multi-encoder approaches are common in encoding context and source sentences.
Recent studies have shown that the context encoder mainly generates noise, making the model robust to (i.e., insensitive to) the choice of context.
This paper further investigates this observation by explicitly modelling context encoding through multi-task learning (MTL) to make the model sensitive to the choice of context.
arXiv Detail & Related papers (2024-07-03T12:50:49Z) - Generative Context-aware Fine-tuning of Self-supervised Speech Models [54.389711404209415]
We study the use of context information generated by generative large language models (LLMs).
We propose an approach to distill the generated information during fine-tuning of self-supervised speech models.
We evaluate the proposed approach using the SLUE and Libri-light benchmarks for several downstream tasks: automatic speech recognition, named entity recognition, and sentiment analysis.
arXiv Detail & Related papers (2023-12-15T15:46:02Z) - BERT4CTR: An Efficient Framework to Combine Pre-trained Language Model
with Non-textual Features for CTR Prediction [12.850529317775198]
We propose a novel framework, BERT4CTR, with a Uni-Attention mechanism that can benefit from the interactions between non-textual and textual features.
BERT4CTR significantly outperforms state-of-the-art frameworks for handling multi-modal inputs and is applicable to Click-Through-Rate (CTR) prediction.
arXiv Detail & Related papers (2023-08-17T08:25:54Z) - Exploring Unsupervised Pretraining Objectives for Machine Translation [99.5441395624651]
Unsupervised cross-lingual pretraining has achieved strong results in neural machine translation (NMT).
Most approaches adapt masked-language modeling (MLM) to sequence-to-sequence architectures, by masking parts of the input and reconstructing them in the decoder.
We compare masking with alternative objectives that produce inputs resembling real (full) sentences, by reordering and replacing words based on their context (a generic toy sketch of such noising appears after this list).
arXiv Detail & Related papers (2021-06-10T10:18:23Z) - Divide and Rule: Training Context-Aware Multi-Encoder Translation Models
with Little Resources [20.057692375546356]
Multi-encoder models aim to improve translation quality by encoding document-level contextual information alongside the current sentence.
We show that training these parameters takes a large amount of data, since the contextual training signal is sparse.
We propose an efficient alternative, based on splitting sentence pairs, that enriches the training signal of a set of parallel sentences.
arXiv Detail & Related papers (2021-03-31T15:15:32Z) - Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707]
We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking.
We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair and the ParaNMT datasets.
arXiv Detail & Related papers (2020-10-24T11:55:28Z) - Learning Contextualized Sentence Representations for Document-Level
Neural Machine Translation [59.191079800436114]
Document-level machine translation incorporates inter-sentential dependencies into the translation of a source sentence.
We propose a new framework to model cross-sentence dependencies by training neural machine translation (NMT) to predict both the target translation and surrounding sentences of a source sentence.
arXiv Detail & Related papers (2020-03-30T03:38:01Z)
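As referenced above for "Exploring Unsupervised Pretraining Objectives for Machine Translation", the following is a generic toy illustration of input noising that reorders and replaces words so the corrupted input still resembles a full sentence. It is not that paper's exact objective: the swap/replace probabilities, uniform replacement sampling, and the function name are assumptions for illustration only (the paper replaces words based on their context, which would require a model rather than uniform sampling).

```python
# Toy noising function for denoising-style pretraining; all parameters and
# the replacement strategy are illustrative assumptions.
import random

def noise_sentence(tokens, swap_prob=0.1, replace_prob=0.1, vocab=None):
    """Return a corrupted copy of `tokens` that still looks like a full sentence."""
    tokens = list(tokens)
    # Local reordering: swap adjacent tokens with some probability.
    for i in range(len(tokens) - 1):
        if random.random() < swap_prob:
            tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
    # Replacement: substitute tokens with random vocabulary items (a crude
    # stand-in for context-based replacement).
    if vocab:
        for i in range(len(tokens)):
            if random.random() < replace_prob:
                tokens[i] = random.choice(vocab)
    return tokens

# Example: a seq2seq model would be trained to reconstruct the original sentence.
src = "the cat sat on the mat".split()
print(noise_sentence(src, vocab=["dog", "rug", "slept"]))
```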