Netmarble AI Center's WMT21 Automatic Post-Editing Shared Task Submission
- URL: http://arxiv.org/abs/2109.06515v1
- Date: Tue, 14 Sep 2021 08:21:18 GMT
- Title: Netmarble AI Center's WMT21 Automatic Post-Editing Shared Task Submission
- Authors: Shinhyeok Oh, Sion Jang, Hu Xu, Shounan An, Insoo Oh
- Abstract summary: This paper describes Netmarble's submission to WMT21 Automatic Post-Editing (APE) Shared Task for the English-German language pair.
Facebook FAIR's WMT19 news translation model was chosen as a large and powerful pre-trained neural network.
For better performance, we leverage external translations as augmented machine translation (MT) during the post-training and fine-tuning.
- Score: 6.043109546012043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes Netmarble's submission to WMT21 Automatic Post-Editing
(APE) Shared Task for the English-German language pair. First, we propose a
Curriculum Training Strategy for the training stages. Facebook FAIR's WMT19
news translation model was chosen as a large and powerful pre-trained neural
network. Then, we post-train the translation model with different levels of
data at each training stage. As the training stages go on, we gradually add
extra information so that the system learns to solve multiple tasks. We also
show a way to utilize large volumes of additional data for APE tasks. For
further improvement, we apply a Multi-Task Learning Strategy with Dynamic
Weight Average during the fine-tuning stage. To fine-tune on the APE corpus,
which offers only limited data, we add related subtasks to learn a unified
representation. Finally, for better performance, we leverage external
translations as augmented machine translation (MT) during post-training and
fine-tuning. As the experimental results show, our APE system significantly
improves the provided MT results by -2.848 TER and +3.74 BLEU on the
development dataset. It also demonstrates its effectiveness on the test
dataset, which has higher quality than the development dataset.
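The abstract mentions Multi-Task Learning with Dynamic Weight Average (DWA) during fine-tuning but does not spell out the weighting rule. Below is a minimal sketch of the standard DWA formulation (Liu et al., 2019), which re-weights each task's loss by the ratio of its two most recent epoch losses; the task names, temperature, and loss values are illustrative assumptions, not details taken from the paper.

```python
import math

def dwa_weights(loss_history, temperature=2.0):
    """Dynamic Weight Average (Liu et al., 2019): weight each task by the
    ratio of its two most recent epoch losses, softened by a temperature.

    loss_history: dict mapping task name -> list of per-epoch losses.
    Returns a dict mapping task name -> loss weight for the next epoch.
    """
    tasks = list(loss_history)
    num_tasks = len(tasks)

    # Until two epochs of losses exist, fall back to equal weighting.
    if any(len(losses) < 2 for losses in loss_history.values()):
        return {k: 1.0 for k in tasks}

    # r_k = L_k(t-1) / L_k(t-2): tasks whose loss is falling more slowly
    # receive a larger weight in the next epoch.
    ratios = {k: loss_history[k][-1] / loss_history[k][-2] for k in tasks}

    # Softmax over the ratios, scaled so the weights sum to the number of tasks.
    denom = sum(math.exp(r / temperature) for r in ratios.values())
    return {k: num_tasks * math.exp(r / temperature) / denom
            for k, r in ratios.items()}


# Hypothetical usage: combining an APE loss with two auxiliary subtask losses.
history = {
    "ape": [2.10, 1.80],
    "subtask_a": [1.50, 1.45],
    "subtask_b": [0.90, 0.88],
}
weights = dwa_weights(history)
# total_loss = sum(weights[k] * current_loss[k] for k in weights)
```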
Related papers
- Unified Model Learning for Various Neural Machine Translation [63.320005222549646]
Existing neural machine translation (NMT) studies mainly focus on developing dataset-specific models.
We propose a "versatile" model, i.e., Unified Model Learning for NMT (UMLNMT), that works with data from different tasks.
UMLNMT yields substantial improvements over dataset-specific models with significantly reduced model deployment costs.
arXiv Detail & Related papers (2023-05-04T12:21:52Z) - Improving Neural Machine Translation by Denoising Training [95.96569884410137]
We present a simple and effective pretraining strategy, Denoising Training (DoT), for neural machine translation.
We update the model parameters with source- and target-side denoising tasks at the early stage and then tune the model normally.
Experiments show DoT consistently improves the neural machine translation performance across 12 bilingual and 16 multilingual directions.
arXiv Detail & Related papers (2022-01-19T00:11:38Z) - Improving Multilingual Translation by Representation and Gradient
Regularization [82.42760103045083]
We propose a joint approach to regularize NMT models at both representation-level and gradient-level.
Our results demonstrate that our approach is highly effective in both reducing off-target translation occurrences and improving zero-shot translation performance.
arXiv Detail & Related papers (2021-09-10T10:52:21Z) - The USYD-JD Speech Translation System for IWSLT 2021 [85.64797317290349]
This paper describes the University of Sydney & JD's joint submission to the IWSLT 2021 low-resource speech translation task.
We trained our models with the officially provided ASR and MT datasets.
To achieve better translation performance, we explored the most recent effective strategies, including back translation, knowledge distillation, multi-feature reranking and transductive finetuning.
arXiv Detail & Related papers (2021-07-24T09:53:34Z) - FST: the FAIR Speech Translation System for the IWSLT21 Multilingual
Shared Task [36.51221186190272]
We describe our end-to-end multilingual speech translation system submitted to the IWSLT 2021 evaluation campaign.
Our system is built by leveraging transfer learning across modalities, tasks and languages.
arXiv Detail & Related papers (2021-07-14T19:43:44Z) - Facebook AI's WMT20 News Translation Task Submission [69.92594751788403]
This paper describes Facebook AI's submission to WMT20 shared news translation task.
We focus on the low resource setting and participate in two language pairs, Tamil -> English and Inuktitut -> English.
We approach the low resource problem using two main strategies, leveraging all available data and adapting the system to the target news domain.
arXiv Detail & Related papers (2020-11-16T21:49:00Z) - Multi-task Learning for Multilingual Neural Machine Translation [32.81785430242313]
We propose a multi-task learning framework that jointly trains the model with the translation task on bitext data and two denoising tasks on the monolingual data.
We show that the proposed approach can effectively improve the translation quality for both high-resource and low-resource languages.
arXiv Detail & Related papers (2020-10-06T06:54:12Z) - Balancing Training for Multilingual Neural Machine Translation [130.54253367251738]
Multilingual machine translation (MT) models can translate to/from multiple languages.
Standard practice is to up-sample less resourced languages to increase representation.
We propose a method that instead automatically learns how to weight training data through a data scorer.
arXiv Detail & Related papers (2020-04-14T18:23:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.