On the Pareto Front of Multilingual Neural Machine Translation
- URL: http://arxiv.org/abs/2304.03216v3
- Date: Tue, 31 Oct 2023 15:58:53 GMT
- Title: On the Pareto Front of Multilingual Neural Machine Translation
- Authors: Liang Chen and Shuming Ma and Dongdong Zhang and Furu Wei and Baobao Chang
- Abstract summary: We study how the performance of a given direction changes with its sampling ratio in Multilingual Neural Machine Translation (MNMT).
We propose the Double Power Law to predict the unique performance trade-off front in MNMT.
In our experiments, it achieves better performance than temperature searching and gradient manipulation methods with only 1/5 to 1/2 of the total training budget.
- Score: 123.94355117635293
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we study how the performance of a given direction changes with
its sampling ratio in Multilingual Neural Machine Translation (MNMT). By
training over 200 multilingual models with various model sizes, data sizes, and
language directions, we find, interestingly, that the performance of a given
translation direction does not always improve as its weight in the multi-task
optimization objective increases. Accordingly, the scalarization method leads
to a multitask trade-off front that deviates from the traditional Pareto front
when there is data imbalance in the training corpus, which poses a great
challenge to improving the overall performance of all directions. Based on our
observations, we propose the Double Power Law to predict the unique performance
trade-off front in MNMT, which is robust across various languages, levels of
data adequacy, and numbers of tasks. Finally, we formulate the sampling ratio
selection problem in MNMT as an optimization problem based on the Double Power
Law. In our experiments, it achieves better performance than temperature
searching and gradient manipulation methods with only 1/5 to 1/2 of the total
training budget. We release the code at
https://github.com/pkunlp-icler/ParetoMNMT for reproduction.
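The "temperature searching" baseline mentioned in the abstract refers to the standard temperature-based sampling heuristic for MNMT, which sets each direction's sampling ratio proportional to its corpus size raised to the power 1/T. A minimal sketch (the corpus sizes below are hypothetical, for illustration only):

```python
def temperature_sampling_ratios(data_sizes, temperature=5.0):
    """Temperature-based sampling: ratio_i is proportional to size_i ** (1/T).

    T = 1 recovers proportional sampling; larger T flattens the
    distribution toward uniform, up-weighting low-resource directions.
    """
    weights = [size ** (1.0 / temperature) for size in data_sizes]
    total = sum(weights)
    return [w / total for w in weights]


# Hypothetical corpus sizes for three English-centric directions,
# spanning high-, mid-, and low-resource regimes.
sizes = [40_000_000, 200_000, 10_000]
ratios = temperature_sampling_ratios(sizes, temperature=5.0)
```

The paper's contribution is to replace this hand-tuned heuristic with sampling ratios obtained by optimizing over the fitted Double Power Law front; the exact functional form of that law is not reproduced in this excerpt.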
Related papers
- Improving Neural Machine Translation by Bidirectional Training [85.64797317290349]
We present a simple and effective pretraining strategy -- bidirectional training (BiT) for neural machine translation.
Specifically, we bidirectionally update the model parameters at the early stage and then tune the model normally.
Experimental results show that BiT pushes the SOTA neural machine translation performance across 15 translation tasks on 8 language pairs significantly higher.
arXiv Detail & Related papers (2021-09-16T07:58:33Z)
- Improving Multilingual Translation by Representation and Gradient Regularization [82.42760103045083]
We propose a joint approach to regularize NMT models at both representation-level and gradient-level.
Our results demonstrate that our approach is highly effective in both reducing off-target translation occurrences and improving zero-shot translation performance.
arXiv Detail & Related papers (2021-09-10T10:52:21Z)
- Distributionally Robust Multilingual Machine Translation [94.51866646879337]
We propose a new learning objective for multilingual neural machine translation (MNMT) based on distributionally robust optimization.
We show how to practically optimize this objective for large translation corpora using an iterated best response scheme.
Our method consistently outperforms strong baseline methods in terms of average and per-language performance under both many-to-one and one-to-many translation settings.
arXiv Detail & Related papers (2021-09-09T03:48:35Z)
- On the Language Coverage Bias for Neural Machine Translation [81.81456880770762]
Language coverage bias is important for neural machine translation (NMT) because the target-original training data is not well exploited in current practice.
By carefully designing experiments, we provide comprehensive analyses of the language coverage bias in the training data.
We propose two simple and effective approaches to alleviate the language coverage bias problem.
arXiv Detail & Related papers (2021-06-07T01:55:34Z)
- Multi-task Learning for Multilingual Neural Machine Translation [32.81785430242313]
We propose a multi-task learning framework that jointly trains the model with the translation task on bitext data and two denoising tasks on the monolingual data.
We show that the proposed approach can effectively improve the translation quality for both high-resource and low-resource languages.
arXiv Detail & Related papers (2020-10-06T06:54:12Z)
- Balancing Training for Multilingual Neural Machine Translation [130.54253367251738]
Multilingual machine translation (MT) models can translate to/from multiple languages.
Standard practice is to up-sample less resourced languages to increase representation.
We propose a method that instead automatically learns how to weight training data through a data scorer.
arXiv Detail & Related papers (2020-04-14T18:23:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.