Few-shot learning through contextual data augmentation
- URL: http://arxiv.org/abs/2103.16911v1
- Date: Wed, 31 Mar 2021 09:05:43 GMT
- Title: Few-shot learning through contextual data augmentation
- Authors: Farid Arthaud, Rachel Bawden and Alexandra Birch
- Abstract summary: Machine translation models need to adapt to new data to maintain their performance over time.
We show that adaptation on the scale of one to five examples is possible.
Our model reports better accuracy scores than a reference system trained with, on average, 313 parallel examples.
- Score: 74.20290390065475
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Machine translation (MT) models used in industries with constantly changing
topics, such as translation or news agencies, need to adapt to new data to
maintain their performance over time. Our aim is to teach a pre-trained MT
model to translate previously unseen words accurately, based on very few
examples. We propose (i) an experimental setup allowing us to simulate novel
vocabulary appearing in human-submitted translations, and (ii) corresponding
evaluation metrics to compare our approaches. We extend a data augmentation
approach using a pre-trained language model to create training examples with
similar contexts for novel words. We compare different fine-tuning and data
augmentation approaches and show that adaptation on the scale of one to five
examples is possible. Combining data augmentation with randomly selected
training sentences leads to the highest BLEU score and accuracy improvements.
Impressively, with only 1 to 5 examples, our model reports better accuracy
scores than a reference system trained with, on average, 313 parallel examples.
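The augmentation idea described above can be pictured with a short sketch: given one of the few sentences containing a novel word, a pre-trained masked language model proposes replacements for the surrounding context words, yielding extra training sentences that keep the novel word in varied contexts. This is a minimal illustration assuming a HuggingFace fill-mask pipeline; the model choice, function name, and the pairing with target-side sentences are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): create extra training sentences for a
# novel word by masking one context word at a time in the few available examples
# and letting a pre-trained masked LM propose replacements.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")  # model choice is illustrative

def augment_contexts(sentence: str, novel_word: str, per_position: int = 3) -> list[str]:
    """Return synthetic sentences that keep `novel_word` but vary its context."""
    tokens = sentence.split()
    augmented = []
    for i, tok in enumerate(tokens):
        if tok == novel_word:
            continue  # never mask the novel word itself
        masked = tokens.copy()
        masked[i] = fill_mask.tokenizer.mask_token
        for pred in fill_mask(" ".join(masked), top_k=per_position):
            candidate = tokens.copy()
            candidate[i] = pred["token_str"].strip()
            if candidate != tokens:
                augmented.append(" ".join(candidate))
    return augmented

# The synthetic source sentences would then be paired with correspondingly edited
# target sentences, mixed with randomly selected training data, and used to
# fine-tune the MT model alongside the handful of real examples.
```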
Related papers
- Ensembling Finetuned Language Models for Text Classification [55.15643209328513]
Finetuning is a common practice across different communities to adapt pretrained models to particular tasks.
Ensembles of neural networks are typically used to boost performance and provide reliable uncertainty estimates.
We present a metadataset with predictions from five large finetuned models on six datasets and report results of different ensembling strategies (a minimal probability-averaging sketch follows this list).
arXiv Detail & Related papers (2024-10-25T09:15:54Z) - Segment-Based Interactive Machine Translation for Pre-trained Models [2.0871483263418806]
We explore the use of pre-trained large language models (LLM) in interactive machine translation environments.
The system generates perfect translations interactively using the feedback provided by the user at each iteration.
We compare the performance of mBART, mT5 and a state-of-the-art (SoTA) machine translation model on a benchmark dataset regarding user effort.
arXiv Detail & Related papers (2024-07-09T16:04:21Z) - Enhancing Translation Accuracy of Large Language Models through Continual Pre-Training on Parallel Data [13.587157318352869]
We propose a two-phase training approach where pre-trained large language models are continually pre-trained on parallel data.
We evaluate these methods on thirteen test sets for Japanese-to-English and English-to-Japanese translation.
arXiv Detail & Related papers (2024-07-03T14:23:36Z) - Investigating Pre-trained Language Models on Cross-Domain Datasets, a Step Closer to General AI [0.8889304968879164]
We investigate the ability of pre-trained language models to generalize to different non-language tasks.
The four pre-trained models that we used (T5, BART, BERT, and GPT-2) achieve outstanding results.
arXiv Detail & Related papers (2023-06-21T11:55:17Z) - Unified Model Learning for Various Neural Machine Translation [63.320005222549646]
Existing neural machine translation (NMT) studies mainly focus on developing dataset-specific models.
We propose a versatile model, i.e., Unified Model Learning for NMT (UMLNMT), that works with data from different tasks.
UMLNMT results in substantial improvements over dataset-specific models with significantly reduced model deployment costs.
arXiv Detail & Related papers (2023-05-04T12:21:52Z) - Improving Few-Shot Performance of Language Models via Nearest Neighbor Calibration [12.334422701057674]
We propose a novel nearest-neighbor calibration framework for in-context learning.
It is inspired by the observation that the in-context learning paradigm produces incorrect labels when run on its own training instances.
Experiments on various few-shot text classification tasks demonstrate that our method significantly improves in-context learning.
arXiv Detail & Related papers (2022-12-05T12:49:41Z) - Learning to Generalize to More: Continuous Semantic Augmentation for Neural Machine Translation [50.54059385277964]
We present a novel data augmentation paradigm termed Continuous Semantic Augmentation (CsaNMT).
CsaNMT augments each training instance with an adjacency region that could cover adequate variants of literal expression under the same meaning.
arXiv Detail & Related papers (2022-04-14T08:16:28Z) - Improving Neural Machine Translation by Bidirectional Training [85.64797317290349]
We present a simple and effective pretraining strategy -- bidirectional training (BiT) for neural machine translation.
Specifically, we bidirectionally update the model parameters at the early stage and then tune the model normally.
Experimental results show that BiT pushes the SOTA neural machine translation performance across 15 translation tasks on 8 language pairs significantly higher.
arXiv Detail & Related papers (2021-09-16T07:58:33Z) - Efficient Nearest Neighbor Language Models [114.40866461741795]
Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore.
We show how to achieve up to a 6x speed-up in inference while retaining comparable performance (a toy sketch of the underlying kNN interpolation follows this list).
arXiv Detail & Related papers (2021-09-09T12:32:28Z) - UmBERTo-MTSA @ AcCompl-It: Improving Complexity and Acceptability Prediction with Multi-task Learning on Self-Supervised Annotations [0.0]
This work describes a self-supervised data augmentation approach used to improve learning models' performances when only a moderate amount of labeled data is available.
Neural language models are fine-tuned using this procedure in the context of the AcCompl-it shared task at EVALITA 2020.
arXiv Detail & Related papers (2020-11-10T15:50:37Z)
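For the ensembling entry above, a minimal sketch of the general idea, assuming nothing beyond averaging per-model softmax outputs; the function name and the entropy-based uncertainty signal are illustrative, not the paper's released code.

```python
# Minimal sketch (illustrative, not the paper's code): ensemble several finetuned
# classifiers by averaging their softmax outputs; the entropy of the averaged
# distribution gives a rough uncertainty signal.
import numpy as np

def ensemble_predict(logits_per_model: list[np.ndarray]) -> tuple[np.ndarray, np.ndarray]:
    """logits_per_model: one (n_examples, n_classes) logits array per finetuned model."""
    probs = [np.exp(l - l.max(axis=-1, keepdims=True)) for l in logits_per_model]
    probs = [p / p.sum(axis=-1, keepdims=True) for p in probs]   # per-model softmax
    mean_probs = np.mean(probs, axis=0)                          # average over ensemble members
    predictions = mean_probs.argmax(axis=-1)
    uncertainty = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)  # predictive entropy
    return predictions, uncertainty
```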
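For the nearest-neighbor language model entry above, a toy sketch of the standard kNN-LM interpolation: retrieve the k closest stored contexts, turn their next tokens into a distribution, and mix it with the parametric model's distribution. The brute-force search, names, and constants are assumptions for illustration; the paper itself is about making this retrieval step efficient.

```python
# Toy sketch of kNN-LM interpolation with a brute-force nearest-neighbor search.
import numpy as np

def knn_lm_probs(query: np.ndarray, keys: np.ndarray, values: np.ndarray,
                 lm_probs: np.ndarray, k: int = 8, lam: float = 0.25,
                 temperature: float = 1.0) -> np.ndarray:
    """keys: stored context vectors, values: their next-token ids, lm_probs: (vocab,)."""
    dists = np.linalg.norm(keys - query, axis=1)        # distance from query to every datastore key
    nearest = np.argsort(dists)[:k]                     # indices of the k closest contexts
    weights = np.exp(-dists[nearest] / temperature)
    weights /= weights.sum()                            # normalized retrieval weights
    knn_probs = np.zeros_like(lm_probs)
    np.add.at(knn_probs, values[nearest], weights)      # scatter weights onto next-token ids
    return lam * knn_probs + (1.0 - lam) * lm_probs     # interpolate with the parametric LM
```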