Sequence-Level Training for Non-Autoregressive Neural Machine
Translation
- URL: http://arxiv.org/abs/2106.08122v1
- Date: Tue, 15 Jun 2021 13:30:09 GMT
- Title: Sequence-Level Training for Non-Autoregressive Neural Machine
Translation
- Authors: Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Jie Zhou
- Abstract summary: Non-Autoregressive Neural Machine Translation (NAT) removes the autoregressive mechanism and achieves significant decoding speedup.
We propose using sequence-level training objectives to train NAT models, which evaluate the NAT outputs as a whole and correlate well with the real translation quality.
- Score: 33.17341980163439
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Neural Machine Translation (NMT) has achieved notable
results in various translation tasks. However, the word-by-word generation
manner imposed by the autoregressive mechanism leads to high translation
latency in NMT and restricts its use in low-latency applications.
Non-Autoregressive Neural Machine Translation (NAT) removes the autoregressive
mechanism and achieves significant decoding speedup through generating target
words independently and simultaneously. Nevertheless, NAT still takes the
word-level cross-entropy loss as the training objective, which is not optimal
because the output of NAT cannot be properly evaluated due to the multimodality
problem. In this paper, we propose using sequence-level training objectives to
train NAT models, which evaluate the NAT outputs as a whole and correlate well
with the real translation quality. Firstly, we propose training NAT models to
optimize sequence-level evaluation metrics (e.g., BLEU) based on several novel
reinforcement algorithms customized for NAT, which outperform the conventional
method by reducing the variance of gradient estimation. Secondly, we introduce
a novel training objective for NAT models, which aims to minimize the
Bag-of-Ngrams (BoN) difference between the model output and the reference
sentence. The BoN training objective is differentiable and can be calculated
efficiently without any approximation. Finally, we apply a three-stage
training strategy to combine these two methods to train the NAT model. We
validate our approach on four translation tasks (WMT14 En$\leftrightarrow$De,
WMT16 En$\leftrightarrow$Ro); the results show that it largely outperforms NAT
baselines and achieves remarkable performance on all four tasks.
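The abstract's first ingredient, optimizing a sequence-level metric such as BLEU with reinforcement learning, can be illustrated with a minimal PyTorch sketch. This is not the paper's customized estimators: it uses plain REINFORCE with a mean-reward baseline for variance reduction, and it assumes the NAT decoder emits independent per-position logits of shape [T, V] and that `reward_fn` is any user-supplied sentence-level metric (e.g., sentence BLEU against the reference).

```python
# Minimal sketch, not the paper's method: REINFORCE with a mean-reward
# baseline for a NAT decoder whose positions are conditionally independent.
import torch
import torch.nn.functional as F

def reinforce_nat_loss(logits, ref_tokens, reward_fn, n_samples=5):
    """logits: [T, V] per-position NAT logits; ref_tokens: list[int]."""
    log_probs = F.log_softmax(logits, dim=-1)                     # [T, V]
    sample_logps, rewards = [], []
    for _ in range(n_samples):
        # NAT sampling: every target position is drawn independently, in parallel.
        tokens = torch.multinomial(log_probs.exp(), num_samples=1).squeeze(-1)  # [T]
        logp = log_probs.gather(-1, tokens.unsqueeze(-1)).sum()   # log p(y | x)
        sample_logps.append(logp)
        rewards.append(reward_fn(tokens.tolist(), ref_tokens))    # e.g. sentence BLEU
    rewards = torch.tensor(rewards)
    baseline = rewards.mean()                                     # crude variance reduction
    # Policy gradient: maximize E[r(y)]  =>  minimize -(r - b) * log p(y | x).
    return -(torch.stack(sample_logps) * (rewards - baseline)).mean()
```

The paper's contribution lies in reinforcement algorithms tailored to the independence structure of NAT so that the gradient estimate has lower variance than this generic form; the sketch only shows where a sequence-level reward enters the loss.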
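The Bag-of-Ngrams (BoN) objective can likewise be sketched. Because a NAT model factorizes over positions, the expected count of any n-gram in the output has a closed form (a sum over start positions of products of per-position probabilities), which is what makes the objective differentiable and cheap to evaluate. The sketch below computes an L1-style BoN difference restricted to reference n-grams; the exact distance, normalization, and length handling are my simplifications, not necessarily the paper's formulation.

```python
# Simplified sketch of a differentiable Bag-of-n-grams difference for a NAT
# model with independent per-position distributions; assumes len(ref_tokens) >= n.
import torch
import torch.nn.functional as F
from collections import Counter

def bon_l1_loss(logits, ref_tokens, n=2):
    """logits: [T, V] per-position NAT logits; ref_tokens: list[int]."""
    probs = F.softmax(logits, dim=-1)                              # [T, V]
    T = probs.size(0)
    ref_counts = Counter(tuple(ref_tokens[i:i + n]) for i in range(len(ref_tokens) - n + 1))
    matched = probs.new_zeros(())
    for gram, c_ref in ref_counts.items():
        idx = torch.tensor(gram).unsqueeze(-1)                     # [n, 1]
        # Expected count of `gram`: sum over start positions t of
        # p_t(g_1) * p_{t+1}(g_2) * ... * p_{t+n-1}(g_n).
        per_start = [probs[t:t + n].gather(-1, idx).prod() for t in range(T - n + 1)]
        expected = torch.stack(per_start).sum()
        matched = matched + torch.minimum(expected, expected.new_tensor(float(c_ref)))
    # L1 distance between the two bags: total n-gram slots minus twice the overlap.
    return (T - n + 1) + (len(ref_tokens) - n + 1) - 2.0 * matched
```

The loop over reference n-grams stays cheap in practice because a reference sentence contains only on the order of its own length in distinct n-grams.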
Related papers
- Revisiting Non-Autoregressive Translation at Scale [76.93869248715664]
We systematically study the impact of scaling on non-autoregressive translation (NAT) behaviors.
We show that scaling can alleviate the commonly-cited weaknesses of NAT models, resulting in better translation performance.
We establish a new benchmark by validating scaled NAT models on a scaled dataset.
arXiv Detail & Related papers (2023-05-25T15:22:47Z)
- Optimizing Non-Autoregressive Transformers with Contrastive Learning [74.46714706658517]
Non-autoregressive Transformers (NATs) reduce the inference latency of Autoregressive Transformers (ATs) by predicting words all at once rather than in sequential order.
In this paper, we propose to ease the difficulty of modality learning via sampling from the model distribution instead of the data distribution.
arXiv Detail & Related papers (2023-05-23T04:20:13Z)
- Selective Knowledge Distillation for Non-Autoregressive Neural Machine Translation [34.22251326493591]
The Non-Autoregressive Transformer (NAT) achieves great success in neural machine translation tasks.
Existing knowledge distillation has side effects, such as propagating errors from the teacher to NAT students.
We introduce selective knowledge distillation, which uses an NAT model to select NAT-friendly targets that are of high quality and easy to learn.
arXiv Detail & Related papers (2023-03-31T09:16:13Z)
- Rephrasing the Reference for Non-Autoregressive Machine Translation [37.816198073720614]
Non-autoregressive neural machine translation (NAT) models suffer from the multi-modality problem: a source sentence may have multiple possible translations.
We introduce a rephraser to provide a better training target for NAT by rephrasing the reference sentence according to the NAT output.
Our best variant achieves comparable performance to the autoregressive Transformer, while being 14.7 times more efficient in inference.
arXiv Detail & Related papers (2022-11-30T10:05:03Z)
- Rejuvenating Low-Frequency Words: Making the Most of Parallel Data in Non-Autoregressive Translation [98.11249019844281]
Knowledge distillation (KD) is commonly used to construct synthetic data for training non-autoregressive translation (NAT) models.
We propose reverse KD to rejuvenate more alignments for low-frequency target words.
Results demonstrate that the proposed approach can significantly and universally improve translation quality.
arXiv Detail & Related papers (2021-06-02T02:41:40Z)
- Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade [47.97977478431973]
Fully non-autoregressive neural machine translation (NAT) simultaneously predicts all target tokens with a single forward pass of the network.
In this work, we aim to close the performance gap while maintaining the latency advantage.
arXiv Detail & Related papers (2020-12-31T18:52:59Z)
- Understanding and Improving Lexical Choice in Non-Autoregressive Translation [98.11249019844281]
We propose to expose the raw data to NAT models to restore the useful information of low-frequency words.
Our approach pushes the SOTA NAT performance on the WMT14 English-German and WMT16 Romanian-English datasets up to 27.8 and 33.8 BLEU points, respectively.
arXiv Detail & Related papers (2020-12-29T03:18:50Z)
- Task-Level Curriculum Learning for Non-Autoregressive Neural Machine Translation [188.3605563567253]
Non-autoregressive translation (NAT) achieves faster inference but at the cost of worse accuracy compared with autoregressive translation (AT).
We introduce semi-autoregressive translation (SAT) as intermediate tasks; SAT covers AT and NAT as its special cases.
We design curriculum schedules that gradually shift k, the number of tokens generated in parallel, from 1 to N, with different pacing functions and numbers of tasks trained at the same time (a generic pacing sketch follows this list).
Experiments on the IWSLT14 De-En, IWSLT16 En-De, WMT14 En-De and De-En datasets show that TCL-NAT achieves significant accuracy improvements over previous NAT baselines.
arXiv Detail & Related papers (2020-07-17T06:06:54Z)
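As a rough illustration of the pacing idea in the TCL-NAT entry above, the sketch below shifts a semi-autoregressive parallelism parameter k from 1 (autoregressive) toward N (fully non-autoregressive) over training. The schedules are generic examples, not the paper's pacing functions.

```python
# Illustrative curriculum pacing only; not the TCL-NAT paper's schedules.
def linear_pacing(step, total_steps, N):
    """k grows linearly from 1 (AT) to N (NAT) over training."""
    frac = min(step / max(total_steps, 1), 1.0)
    return max(1, round(1 + frac * (N - 1)))

def exponential_pacing(step, total_steps, N):
    """k stays small early in training, then grows quickly toward N."""
    frac = min(step / max(total_steps, 1), 1.0)
    return max(1, round(N ** frac))

# Example with N = 32 target positions and 100k training steps:
# linear_pacing(60_000, 100_000, 32) -> 20, exponential_pacing(60_000, 100_000, 32) -> 8.
```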