Improving Neural Machine Translation by Denoising Training
- URL: http://arxiv.org/abs/2201.07365v2
- Date: Thu, 20 Jan 2022 03:55:52 GMT
- Title: Improving Neural Machine Translation by Denoising Training
- Authors: Liang Ding, Keqin Peng and Dacheng Tao
- Abstract summary: We present a simple and effective pretraining strategy, Denoising Training (DoT), for neural machine translation.
We update the model parameters with source- and target-side denoising tasks at the early stage and then tune the model normally.
Experiments show that DoT consistently improves neural machine translation performance across 12 bilingual and 16 multilingual directions.
- Score: 95.96569884410137
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We present a simple and effective pretraining strategy, Denoising
Training (DoT), for neural machine translation. Specifically, we update the
model parameters with source- and target-side denoising tasks at the early
stage and then tune the model normally. Notably, our approach adds no
parameters or training steps and requires only the parallel data.
Experiments show that DoT consistently improves neural machine translation
performance across 12 bilingual and 16 multilingual directions (data sizes
range from 80K to 20M). In addition, we show that DoT complements existing
data manipulation strategies, i.e., curriculum learning, knowledge distillation,
data diversification, bidirectional training, and back-translation.
Encouragingly, we find that DoT outperforms the costly pretrained model mBART in
high-resource settings. Analyses show that DoT is a novel in-domain cross-lingual
pretraining strategy and could offer further improvements with task-relevant
self-supervisions.
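
Since the abstract describes the DoT schedule only in prose, the following is a minimal, hypothetical Python sketch of how such a denoising-then-tuning schedule could be wired up. The token-masking noise function, the masking ratio, the `denoise_fraction` switch point, and the `train_step`/`model` interface are all illustrative assumptions, not details taken from the paper.

```python
import random

# Toy parallel corpus for illustration; in practice this is the real bitext.
PARALLEL_DATA = [
    (["das", "Haus", "ist", "klein"], ["the", "house", "is", "small"]),
    (["ich", "mag", "Kaffee"], ["i", "like", "coffee"]),
]

MASK = "<mask>"


def add_noise(tokens, mask_prob=0.35, rng=random):
    """Corrupt a token sequence by randomly masking tokens.
    The exact noise (masking ratio, span masking, shuffling) is an
    assumption for illustration, not specified by the abstract."""
    return [MASK if rng.random() < mask_prob else t for t in tokens]


def train_step(model, src, tgt):
    """Placeholder for one seq2seq update on (src -> tgt).
    A real implementation would compute the loss and update parameters."""
    pass


def train_with_dot(model, data, total_steps=10_000, denoise_fraction=0.1):
    """Denoising Training (DoT) schedule sketch: the early steps use
    source- and target-side denoising (reconstruct the clean side from
    its noised copy), the remaining steps use the normal translation
    objective. Only the objective of the early steps changes; no extra
    parameters or steps are added. Alternating the two denoising sides
    per step is an assumption made to keep the step count unchanged."""
    denoise_steps = int(total_steps * denoise_fraction)
    for step in range(total_steps):
        src, tgt = random.choice(data)
        if step < denoise_steps:
            if step % 2 == 0:
                # Source-side denoising: noised source -> clean source.
                train_step(model, add_noise(src), src)
            else:
                # Target-side denoising: noised target -> clean target.
                train_step(model, add_noise(tgt), tgt)
        else:
            # Normal NMT training on the same parallel data.
            train_step(model, src, tgt)


if __name__ == "__main__":
    train_with_dot(model=None, data=PARALLEL_DATA, total_steps=100)
```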