A Hybrid Approach for Improved Low Resource Neural Machine Translation
using Monolingual Data
- URL: http://arxiv.org/abs/2011.07403v3
- Date: Mon, 22 Nov 2021 13:53:24 GMT
- Title: A Hybrid Approach for Improved Low Resource Neural Machine Translation
using Monolingual Data
- Authors: Idris Abdulmumin, Bashir Shehu Galadanci, Abubakar Isa, Habeebah Adamu
Kakudi, Ismaila Idris Sinan
- Abstract summary: Many language pairs are low resource, meaning the amount and/or quality of available parallel data is not sufficient to train a neural machine translation (NMT) model.
This work proposes a novel approach that enables both the backward and forward models to benefit from the monolingual target data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many language pairs are low resource, meaning the amount and/or quality of available parallel data is not sufficient to train a neural machine translation (NMT) model that can reach an acceptable standard of accuracy. Many works have explored using the readily available monolingual data in either or both of the languages to improve translation models in low-, and even high-, resource settings. One of the most successful such approaches is back-translation, which uses translations of the target-language monolingual data to increase the amount of training data. The quality of the backward model, which is trained on the available parallel data, has been shown to determine the performance of the back-translation approach. Despite this, in standard back-translation only the forward model is improved on the monolingual target data. A previous study proposed an iterative back-translation approach that improves both models over several iterations, but unlike traditional back-translation it relies on both target and source monolingual data. This work therefore proposes a novel approach that enables both the backward and forward models to benefit from the monolingual target data through a hybrid of self-learning and back-translation, respectively. Experimental results show the superiority of the proposed approach over traditional back-translation on English-German low resource neural machine translation. We also propose an iterative self-learning approach that outperforms iterative back-translation while relying only on the monolingual target data and requiring the training of fewer models.
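The data flow described in the abstract can be summarised as a short sketch. The snippet below is only an illustration of that flow, not the authors' implementation: train_nmt and translate are hypothetical toy stand-ins for a real NMT toolkit, and the tiny English-German lists are made-up placeholders.

```python
# Hypothetical sketch of the hybrid self-learning + back-translation pipeline.
# train_nmt and translate are toy placeholders for a real NMT toolkit; they only
# exist so the data flow can be executed end to end.

from typing import List, Tuple

Pair = Tuple[str, str]  # (input_sentence, output_sentence)


def train_nmt(pairs: List[Pair]) -> dict:
    """Toy stand-in for training an NMT model on (input, output) pairs."""
    return {src: tgt for src, tgt in pairs}  # a real system would learn parameters


def translate(model: dict, sentences: List[str]) -> List[str]:
    """Toy stand-in for beam-search decoding; unknown inputs are copied through."""
    return [model.get(s, s) for s in sentences]


# Authentic parallel data (source -> target) and target-side monolingual data.
parallel: List[Pair] = [("a house", "ein Haus"), ("a cat", "eine Katze")]
mono_target: List[str] = ["ein Hund", "eine Katze"]

# 1) Train the backward model (target -> source) on the parallel data.
backward_pairs = [(tgt, src) for src, tgt in parallel]
backward_model = train_nmt(backward_pairs)

# 2) Back-translate the monolingual target data into synthetic source sentences.
synthetic_source = translate(backward_model, mono_target)

# 3) Self-learning: the backward model is retrained on its own synthetic output
#    (monolingual target -> synthetic source) mixed with the authentic pairs,
#    so the backward model also benefits from the monolingual target data.
self_learning_pairs = backward_pairs + list(zip(mono_target, synthetic_source))
improved_backward_model = train_nmt(self_learning_pairs)

# 4) Back-translation: synthetic source regenerated by the improved backward
#    model, paired with the monolingual target data, augments the authentic
#    parallel data used to train the forward model (source -> target).
regenerated_source = translate(improved_backward_model, mono_target)
forward_pairs = parallel + list(zip(regenerated_source, mono_target))
forward_model = train_nmt(forward_pairs)

print(translate(forward_model, ["a cat"]))
```

Presumably, repeating steps 2-4 over several rounds would give the iterative self-learning variant mentioned in the abstract, still using only the monolingual target data.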
Related papers
- Optimal Transport Posterior Alignment for Cross-lingual Semantic Parsing [68.47787275021567]
Cross-lingual semantic parsing transfers parsing capability from a high-resource language (e.g., English) to low-resource languages with scarce training data.
We propose a new approach to cross-lingual semantic parsing by explicitly minimizing cross-lingual divergence between latent variables using Optimal Transport.
arXiv Detail & Related papers (2023-07-09T04:52:31Z)
- Improving Cross-lingual Information Retrieval on Low-Resource Languages via Optimal Transport Distillation [21.057178077747754]
In this work, we propose OPTICAL: Optimal Transport distillation for low-resource Cross-lingual information retrieval.
By separating the cross-lingual knowledge from knowledge of query document matching, OPTICAL only needs bitext data for distillation training.
Experimental results show that, with minimal training data, OPTICAL significantly outperforms strong baselines on low-resource languages.
arXiv Detail & Related papers (2023-01-29T22:30:36Z) - Summarize and Generate to Back-translate: Unsupervised Translation of
Programming Languages [86.08359401867577]
Back-translation is widely known for its effectiveness for neural machine translation when little to no parallel data is available.
We propose performing back-translation via code summarization and generation.
We show that our proposed approach performs competitively with state-of-the-art methods.
arXiv Detail & Related papers (2022-05-23T08:20:41Z)
- Improving Multilingual Translation by Representation and Gradient Regularization [82.42760103045083]
We propose a joint approach to regularize NMT models at both representation-level and gradient-level.
Our results demonstrate that our approach is highly effective in both reducing off-target translation occurrences and improving zero-shot translation performance.
arXiv Detail & Related papers (2021-09-10T10:52:21Z)
- Distributionally Robust Multilingual Machine Translation [94.51866646879337]
We propose a new learning objective for multilingual neural machine translation (MNMT) based on distributionally robust optimization.
We show how to practically optimize this objective for large translation corpora using an iterated best response scheme.
Our method consistently outperforms strong baseline methods in terms of average and per-language performance under both many-to-one and one-to-many translation settings.
arXiv Detail & Related papers (2021-09-09T03:48:35Z)
- Multilingual Neural Semantic Parsing for Low-Resourced Languages [1.6244541005112747]
We introduce a new multilingual semantic parsing dataset in English, Italian and Japanese.
We show that joint multilingual training with pretrained encoders substantially outperforms our baselines on the TOP dataset.
We find that a semantic parser trained only on English data achieves a zero-shot performance of 44.9% exact-match accuracy on Italian sentences.
arXiv Detail & Related papers (2021-06-07T09:53:02Z)
- Exploring Monolingual Data for Neural Machine Translation with Knowledge Distillation [10.745228927771915]
We explore two types of monolingual data that can be included in knowledge distillation training for neural machine translation (NMT).
We find that source-side monolingual data improves model performance when evaluated on a test set originating from the source side.
We also show that it is not required to train the student model with the same data used by the teacher, as long as the domains are the same.
arXiv Detail & Related papers (2020-12-31T05:28:42Z)
- Enhanced back-translation for low resource neural machine translation using self-training [0.0]
This work proposes a self-training strategy where the output of the backward model is used to improve the model itself through the forward translation technique.
The technique was shown to improve baseline low resource IWSLT'14 English-German and IWSLT'15 English-Vietnamese backward translation models by 11.06 and 1.5 BLEU, respectively.
The synthetic data generated by the improved English-German backward model was used to train a forward model which outperformed another forward model trained using standard back-translation by 2.7 BLEU.
arXiv Detail & Related papers (2020-06-04T14:19:52Z)
- Leveraging Monolingual Data with Self-Supervision for Multilingual Neural Machine Translation [54.52971020087777]
Using monolingual data significantly boosts the translation quality of low-resource languages in multilingual models.
Self-supervision improves zero-shot translation quality in multilingual models.
We get up to 33 BLEU on ro-en translation without any parallel data or back-translation.
arXiv Detail & Related papers (2020-05-11T00:20:33Z)
- Dynamic Data Selection and Weighting for Iterative Back-Translation [116.14378571769045]
We propose a curriculum learning strategy for iterative back-translation models.
We evaluate our models on domain adaptation, low-resource, and high-resource MT settings.
Experimental results demonstrate that our methods achieve improvements of up to 1.8 BLEU points over competitive baselines.
arXiv Detail & Related papers (2020-04-07T19:49:58Z)