Finding the Right Recipe for Low Resource Domain Adaptation in Neural
Machine Translation
- URL: http://arxiv.org/abs/2206.01137v1
- Date: Thu, 2 Jun 2022 16:38:33 GMT
- Title: Finding the Right Recipe for Low Resource Domain Adaptation in Neural
Machine Translation
- Authors: Virginia Adams, Sandeep Subramanian, Mike Chrzanowski, Oleksii
Hrinchuk, and Oleksii Kuchaiev
- Abstract summary: General translation models often struggle to generate accurate translations in specialized domains.
We conduct an in-depth empirical exploration of monolingual and parallel data approaches to domain adaptation.
- Our work includes three domains: consumer electronics, clinical, and biomedical.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: General translation models often still struggle to generate accurate
translations in specialized domains. To guide machine translation practitioners
and characterize the effectiveness of domain adaptation methods under different
data availability scenarios, we conduct an in-depth empirical exploration of
monolingual and parallel data approaches to domain adaptation of pre-trained,
third-party NMT models in settings where architecture change is impractical.
We compare data-centric adaptation methods in isolation and in combination. We
study method effectiveness in very low resource (8k parallel examples) and
moderately low resource (46k parallel examples) conditions and propose an
ensemble approach to alleviate reductions in original-domain translation
quality. Our work covers three domains (consumer electronics, clinical, and
biomedical) and spans four language pairs: Zh-En, Ja-En, Es-En, and Ru-En. We
also make concrete recommendations for achieving high in-domain performance,
release our consumer electronics and medical domain datasets for all languages,
and make our code publicly available.
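The ensemble idea from the abstract can be illustrated with a short sketch. The abstract does not spell out the combination rule, so the probability-space averaging below, and the model interface it assumes, are illustrative assumptions rather than the paper's exact method:
```python
import torch

def ensemble_next_token_log_probs(models, src, tgt_prefix, weights=None):
    """Average next-token distributions from several NMT models.

    Assumed (hypothetical) interface: each model is a callable mapping
    (src, tgt_prefix) to a logits tensor of shape [vocab_size].
    Averaging the fine-tuned in-domain model with the original general
    model is one common way to limit original-domain degradation.
    """
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    probs = sum(w * torch.softmax(m(src, tgt_prefix), dim=-1)
                for m, w in zip(models, weights))
    return torch.log(probs)  # use these scores during beam search
```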
Related papers
- Domain-Specific Text Generation for Machine Translation [7.803471587734353]
We propose a novel approach to domain adaptation leveraging state-of-the-art pretrained language models (LMs) for domain-specific data augmentation.
We employ mixed fine-tuning to train models that significantly improve translation of in-domain texts.
arXiv Detail & Related papers (2022-08-11T16:22:16Z)
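Mixed fine-tuning, as named above, is usually implemented by fine-tuning on oversampled in-domain data concatenated with general-domain data, each marked with a domain tag. A minimal sketch; the tag strings and oversampling ratio are illustrative assumptions, not the paper's settings:
```python
import random

def build_mixed_corpus(in_domain, general, oversample=4,
                       tag_in="<IN_DOMAIN>", tag_gen="<GENERAL>"):
    """Build a mixed fine-tuning corpus: oversampled, tagged in-domain
    pairs concatenated with tagged general-domain pairs. Each pair is
    a (source, target) tuple of strings."""
    mixed = [(f"{tag_in} {src}", tgt) for src, tgt in in_domain] * oversample
    mixed += [(f"{tag_gen} {src}", tgt) for src, tgt in general]
    random.shuffle(mixed)
    return mixed
```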
- Non-Parametric Domain Adaptation for End-to-End Speech Translation [72.37869362559212]
End-to-End Speech Translation (E2E-ST) has received increasing attention due to its potential for less error propagation, lower latency, and fewer parameters.
We propose a novel non-parametric method that leverages domain-specific text translation corpus to achieve domain adaptation for the E2E-ST system.
arXiv Detail & Related papers (2022-05-23T11:41:02Z)
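Non-parametric adaptation of this kind is typically realized in the style of kNN-MT: cache decoder states and target tokens from an in-domain corpus, then interpolate retrieved neighbors with the base model's distribution at decoding time. A sketch under that assumption; the paper's exact retrieval and interpolation scheme may differ:
```python
import numpy as np

def knn_interpolated_probs(model_probs, query_state, keys, values,
                           k=8, lam=0.5, temperature=10.0):
    """Interpolate a base model's next-token distribution with a
    distribution built from the k nearest (decoder state -> token)
    entries of an in-domain datastore. `keys` is [N, d] float,
    `values` is [N] int token ids, `model_probs` is [vocab]."""
    dists = np.linalg.norm(keys - query_state, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = np.exp(-dists[nearest] / temperature)
    weights /= weights.sum()
    knn_probs = np.zeros_like(model_probs)
    for w, tok in zip(weights, values[nearest]):
        knn_probs[tok] += w
    return lam * knn_probs + (1.0 - lam) * model_probs
```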
- DaLC: Domain Adaptation Learning Curve Prediction for Neural Machine Translation [10.03007605098947]
Domain Adaptation (DA) of a Neural Machine Translation (NMT) model often relies on a pre-trained general NMT model that is adapted to the new domain on a sample of in-domain parallel data.
We propose a Domain Learning Curve prediction (DaLC) model that predicts prospective DA performance based on in-domain monolingual samples in the source language.
arXiv Detail & Related papers (2022-04-20T06:57:48Z)
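For intuition about what a learning-curve predictor estimates: a classic baseline is to fit a power-law curve to a few observed (data size, BLEU) points and extrapolate. Note that DaLC itself learns to predict from source-side monolingual samples; the functional form and initial guesses below are an illustrative baseline, not DaLC's method:
```python
import numpy as np
from scipy.optimize import curve_fit

def fit_power_law_curve(sizes, bleu):
    """Fit BLEU(n) ~= a - b * n**(-c) to observed fine-tuning runs and
    return a callable that extrapolates to unseen data sizes."""
    def curve(n, a, b, c):
        return a - b * np.power(n, -c)
    (a, b, c), _ = curve_fit(curve, np.asarray(sizes, float),
                             np.asarray(bleu, float),
                             p0=(40.0, 100.0, 0.3), maxfev=10000)
    return lambda n: a - b * np.power(np.asarray(n, float), -c)
```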
- Uncertainty-Aware Balancing for Multilingual and Multi-Domain Neural Machine Translation Training [58.72619374790418]
MultiUAT dynamically adjusts training data usage based on the model's uncertainty.
We analyze cross-domain transfer and show the deficiencies of static and similarity-based methods.
arXiv Detail & Related papers (2021-09-06T08:30:33Z)
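The core mechanism above, sampling corpora in proportion to how uncertain the model is about them, can be sketched as a softmax reweighting. MultiUAT's actual uncertainty estimates and update schedule are richer than this; the function below is a minimal illustration:
```python
import numpy as np

def corpus_sampling_weights(uncertainties, temperature=1.0):
    """Turn per-corpus uncertainty estimates (e.g., held-out entropy
    or negative log-likelihood) into sampling probabilities: the more
    uncertain the model is about a corpus, the more it is sampled."""
    u = np.asarray(uncertainties, dtype=float)
    w = np.exp(u / temperature)
    return w / w.sum()
```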
- Domain Adaptation and Multi-Domain Adaptation for Neural Machine Translation: A Survey [9.645196221785694]
We focus on robust approaches to domain adaptation for Neural Machine Translation (NMT) models.
In particular, we look at the case where a system may need to translate sentences from multiple domains.
We highlight the benefits of domain adaptation and multi-domain adaptation techniques to other lines of NMT research.
arXiv Detail & Related papers (2021-04-14T16:21:37Z)
- FDMT: A Benchmark Dataset for Fine-grained Domain Adaptation in Machine Translation [53.87731008029645]
We present a real-world fine-grained domain adaptation task in machine translation (FDMT).
The FDMT dataset consists of four sub-domains of information technology: autonomous vehicles, AI education, real-time networks, and smartphones.
We conduct quantitative experiments and in-depth analyses in this new setting, which benchmarks the fine-grained domain adaptation task.
arXiv Detail & Related papers (2020-12-31T17:15:09Z)
- Iterative Domain-Repaired Back-Translation [50.32925322697343]
In this paper, we focus on domain-specific translation in low-resource settings, where in-domain parallel corpora are scarce or nonexistent.
We propose a novel iterative domain-repaired back-translation framework, which introduces the Domain-Repair model to refine translations in synthetic bilingual data.
Experiments on adapting NMT models between specific domains and from the general domain to specific domains demonstrate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2020-10-06T04:38:09Z)
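The iterative loop above can be sketched as follows. The `translate`, `repair`, and `train` callables are hypothetical stand-ins for full decoding and training pipelines, not the paper's interfaces:
```python
def iterative_domain_repaired_bt(fwd_model, rev_model, repair_model,
                                 tgt_monolingual, rounds=3):
    """Each round: back-translate in-domain target monolingual text,
    let the Domain-Repair model refine the synthetic source side, then
    retrain both directions on the repaired synthetic parallel data."""
    for _ in range(rounds):
        synthetic_src = [rev_model.translate(t) for t in tgt_monolingual]
        repaired_src = [repair_model.repair(s) for s in synthetic_src]
        pairs = list(zip(repaired_src, tgt_monolingual))
        fwd_model.train(pairs)                       # src -> tgt
        rev_model.train([(t, s) for s, t in pairs])  # tgt -> src
    return fwd_model
```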
- Dynamic Data Selection and Weighting for Iterative Back-Translation [116.14378571769045]
We propose a curriculum learning strategy for iterative back-translation models.
We evaluate our models on domain adaptation, low-resource, and high-resource MT settings.
Experimental results demonstrate that our methods achieve improvements of up to 1.8 BLEU points over competitive baselines.
arXiv Detail & Related papers (2020-04-07T19:49:58Z)
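One simple reading of curriculum-style selection for iterative back-translation: score each synthetic pair (e.g., by model likelihood), train on the most reliable pairs first, and widen the pool each round. The scoring input and schedule below are illustrative assumptions:
```python
def curriculum_select(scored_pairs, round_idx, total_rounds):
    """`scored_pairs` is a list of (quality_score, (src, tgt)) tuples.
    Keep a growing top-scoring fraction as training rounds progress."""
    ranked = sorted(scored_pairs, key=lambda x: x[0], reverse=True)
    keep = max(1, int(len(ranked) * (round_idx + 1) / total_rounds))
    return [pair for _, pair in ranked[:keep]]
```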
- A Simple Baseline to Semi-Supervised Domain Adaptation for Machine Translation [73.3550140511458]
State-of-the-art neural machine translation (NMT) systems are data-hungry and perform poorly on new domains with no supervised data.
We propose a simple but effective approach to the semi-supervised domain adaptation scenario of NMT.
This approach iteratively trains a Transformer-based NMT model via three training objectives: language modeling, back-translation, and supervised translation.
arXiv Detail & Related papers (2020-01-22T16:42:06Z)
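The three-objective recipe above can be sketched as one weighted loss per optimization step. The `*_loss` methods, batch keys, and weights are hypothetical stand-ins, not the paper's exact setup:
```python
def semi_supervised_step(model, batch, lm_weight=0.3, bt_weight=0.5,
                         mt_weight=1.0):
    """One step combining the three objectives: language modeling on
    in-domain monolingual text, translation on back-translated
    synthetic pairs, and supervised translation on parallel data."""
    loss = (lm_weight * model.lm_loss(batch["monolingual"])
            + bt_weight * model.translation_loss(batch["back_translated"])
            + mt_weight * model.translation_loss(batch["parallel"]))
    loss.backward()  # assumes PyTorch-style autograd tensors
    return loss.item()
```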