Rewriter-Evaluator Architecture for Neural Machine Translation
- URL: http://arxiv.org/abs/2012.05414v4
- Date: Mon, 10 May 2021 02:11:35 GMT
- Title: Rewriter-Evaluator Architecture for Neural Machine Translation
- Authors: Yangming Li, Kaisheng Yao
- Abstract summary: We present a novel architecture, Rewriter-Evaluator, for improving neural machine translation (NMT) models.
It consists of a rewriter and an evaluator. At every pass, the rewriter produces a new translation to improve the past translation, and the evaluator estimates the translation quality to decide whether to terminate the rewriting process.
We conduct extensive experiments on two translation tasks, Chinese-English and English-German, and show that the proposed architecture notably improves the performances of NMT models.
- Score: 17.45780516143211
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Encoder-decoder has been widely used in neural machine translation (NMT). A
few methods have been proposed to improve it with multiple passes of decoding.
However, their full potential is limited by a lack of appropriate termination
policies. To address this issue, we present a novel architecture,
Rewriter-Evaluator. It consists of a rewriter and an evaluator. Translating a
source sentence involves multiple passes. At every pass, the rewriter produces
a new translation to improve the past translation and the evaluator estimates
the translation quality to decide whether to terminate the rewriting process.
We also propose prioritized gradient descent (PGD) that facilitates training
the rewriter and the evaluator jointly. Although it incurs multiple passes
of decoding, Rewriter-Evaluator with the proposed PGD method can be trained
in roughly the same time as an encoder-decoder model. We apply the proposed
architecture to improve general NMT models (e.g., the Transformer). We conduct
extensive experiments on two translation tasks, Chinese-English and
English-German, and show that the proposed architecture notably improves the
performances of NMT models and significantly outperforms previous baselines.
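As a rough illustration of the procedure the abstract describes, here is a minimal Python sketch of the multi-pass inference loop. The callables `rewriter` and `evaluator`, and the stop-when-quality-stops-improving rule, are illustrative assumptions; the abstract only states that the evaluator decides when to terminate.

```python
# Minimal sketch of the Rewriter-Evaluator inference loop.
# `rewriter` and `evaluator` are hypothetical stand-ins for the paper's
# two modules; the termination rule below is one plausible policy.

def translate(source, rewriter, evaluator, max_passes=6):
    translation = rewriter(source, past_translation=None)  # initial pass
    best_score = evaluator(source, translation)
    for _ in range(max_passes - 1):
        candidate = rewriter(source, past_translation=translation)
        score = evaluator(source, candidate)
        if score <= best_score:  # evaluator sees no gain: terminate
            break
        translation, best_score = candidate, score
    return translation
```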
Related papers
- Improving Neural Machine Translation by Multi-Knowledge Integration with Prompting [36.24578487904221]
We focus on how to integrate multi-knowledge, i.e., multiple types of knowledge, into NMT models to enhance performance with prompting.
We propose a unified framework that can effectively integrate multiple types of knowledge, including sentences, terminologies/phrases, and translation templates, into NMT models.
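As a rough picture of what prompting-style knowledge integration could look like, here is a hypothetical sketch that serializes the three knowledge types named above into a single prompted input; the tags and formatting are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch: prepend retrieved knowledge (similar sentence
# pairs, terminology entries, a translation template) to the source
# sentence before feeding it to an NMT model. Tags are invented markers.

def build_prompted_input(source, sentence_pairs, terms, template=None):
    parts = []
    for src, tgt in sentence_pairs:          # similar bilingual sentences
        parts.append(f"<sent> {src} ||| {tgt}")
    for term_src, term_tgt in terms:         # terminology/phrase constraints
        parts.append(f"<term> {term_src} ||| {term_tgt}")
    if template is not None:                 # translation template
        parts.append(f"<tmpl> {template}")
    parts.append(f"<src> {source}")
    return " ".join(parts)
```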
arXiv Detail & Related papers (2023-12-08T02:55:00Z)
- Contextual Refinement of Translations: Large Language Models for Sentence and Document-Level Post-Editing [12.843274390224853]
Large Language Models (LLMs) have demonstrated considerable success in various Natural Language Processing tasks.
We show that they have yet to attain state-of-the-art performance in Neural Machine Translation.
We propose adapting LLMs as Automatic Post-Editors (APE) rather than direct translators.
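A minimal sketch of the APE setup, assuming a generic text-completion interface: `llm_complete` is a hypothetical callable, and the prompt wording is invented for illustration.

```python
# Sketch: use an LLM to post-edit an existing MT draft rather than
# translate from scratch. `llm_complete` is a hypothetical callable that
# maps a prompt string to a completion string.

def post_edit(source, draft, llm_complete):
    prompt = (
        "Improve the following translation so it is fluent and faithful "
        "to the source.\n"
        f"Source: {source}\n"
        f"Draft translation: {draft}\n"
        "Improved translation:"
    )
    return llm_complete(prompt).strip()
```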
arXiv Detail & Related papers (2023-10-23T12:22:15Z)
- On Search Strategies for Document-Level Neural Machine Translation [51.359400776242786]
Document-level neural machine translation (NMT) models produce a more consistent output across a document.
In this work, we aim to answer the question of how best to utilize a context-aware translation model in decoding.
arXiv Detail & Related papers (2023-06-08T11:30:43Z)
- Dual-Alignment Pre-training for Cross-lingual Sentence Embedding [79.98111074307657]
We propose a dual-alignment pre-training (DAP) framework for cross-lingual sentence embedding.
We introduce a novel representation translation learning (RTL) task, where the model learns to use one-side contextualized token representation to reconstruct its translation counterpart.
Our approach can significantly improve sentence embeddings.
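The RTL objective can be pictured roughly as follows, assuming a shared encoder and a small reconstruction decoder; all module names and signatures here are illustrative assumptions rather than the paper's implementation.

```python
# Rough sketch of representation translation learning (RTL): encode one
# side of a translation pair, then train a lightweight decoder to
# reconstruct the other side from those contextualized token states.

import torch.nn as nn

class RTLHead(nn.Module):
    def __init__(self, encoder, decoder, hidden_dim, vocab_size):
        super().__init__()
        self.encoder = encoder        # shared multilingual encoder
        self.decoder = decoder        # small cross-attending decoder
        self.proj = nn.Linear(hidden_dim, vocab_size)
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, src_ids, tgt_ids):
        src_states = self.encoder(src_ids)       # one-side representations
        tgt_states = self.decoder(tgt_ids, src_states)
        logits = self.proj(tgt_states)           # predict counterpart tokens
        return self.loss_fn(logits.view(-1, logits.size(-1)),
                            tgt_ids.view(-1))
```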
arXiv Detail & Related papers (2023-05-16T03:53:30Z)
- Quality-Aware Decoding for Neural Machine Translation [64.24934199944875]
We propose quality-aware decoding for neural machine translation (NMT).
We leverage recent breakthroughs in reference-free and reference-based MT evaluation through various inference methods.
We find that quality-aware decoding consistently outperforms MAP-based decoding according to both state-of-the-art automatic metrics and human assessments.
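One of the simplest strategies in this family, N-best reranking with a reference-free quality-estimation metric, can be sketched as follows; `qe_score` is a hypothetical stand-in for any QE scorer, and the paper studies several other inference methods beyond this one.

```python
# Sketch of one quality-aware decoding strategy: rerank an N-best list
# with a reference-free quality-estimation (QE) scorer instead of
# returning the MAP hypothesis. `qe_score` is a hypothetical callable.

def quality_aware_rerank(source, nbest, qe_score):
    return max(nbest, key=lambda hyp: qe_score(source, hyp))
```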
arXiv Detail & Related papers (2022-05-02T15:26:28Z)
- BitextEdit: Automatic Bitext Editing for Improved Low-Resource Machine Translation [53.55009917938002]
We propose to refine the mined bitexts via automatic editing.
Experiments demonstrate that our approach successfully improves the quality of CCMatrix mined bitext for 5 low-resource language-pairs and 10 translation directions by up to 8 BLEU points.
arXiv Detail & Related papers (2021-11-12T16:00:39Z)
- Learning Kernel-Smoothed Machine Translation with Retrieved Examples [30.17061384497846]
Existing non-parametric approaches that retrieve similar examples from a database to guide the translation process are promising but prone to overfitting the retrieved examples.
We propose to learn Kernel-Smoothed Translation with Example Retrieval (KSTER), an effective approach to adapt neural machine translation models online.
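A rough sketch of kernel-smoothed, example-guided decoding in the spirit of KSTER: build a distribution over target tokens from retrieved examples, weighting each by a kernel on its representation distance, then interpolate with the model's distribution. The Gaussian kernel and the fixed mixing weight are simplifying assumptions, fixed here for illustration.

```python
# Sketch: interpolate the NMT model's next-token distribution with a
# kernel-weighted distribution built from retrieved examples.

import numpy as np

def kernel_smoothed_probs(model_probs, retrieved, query,
                          bandwidth=1.0, mix=0.5):
    # retrieved: list of (key_vector, target_token_id) pairs
    weights = np.array([np.exp(-np.sum((query - key) ** 2) / bandwidth)
                        for key, _ in retrieved])
    weights /= weights.sum()
    example_probs = np.zeros_like(model_probs)
    for w, (_, tok) in zip(weights, retrieved):
        example_probs[tok] += w
    return mix * example_probs + (1.0 - mix) * model_probs
```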
arXiv Detail & Related papers (2021-09-21T06:42:53Z)
- Exploring Unsupervised Pretraining Objectives for Machine Translation [99.5441395624651]
Unsupervised cross-lingual pretraining has achieved strong results in neural machine translation (NMT).
Most approaches adapt masked-language modeling (MLM) to sequence-to-sequence architectures, by masking parts of the input and reconstructing them in the decoder.
We compare masking with alternative objectives that produce inputs resembling real (full) sentences, by reordering and replacing words based on their context.
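The two alternative corruptions mentioned above can be sketched as follows; the window size, replacement probability, and substitute table are assumptions made for illustration.

```python
# Sketch: corrupt an input by locally reordering words or by replacing
# words with plausible substitutes, so inputs still resemble full
# sentences rather than masked ones.

import random

def local_reorder(tokens, window=3):
    out = list(tokens)
    for i in range(0, len(out), window):
        chunk = out[i:i + window]
        random.shuffle(chunk)            # shuffle within a small window
        out[i:i + window] = chunk
    return out

def replace_words(tokens, substitutes, p=0.15):
    # substitutes: dict mapping a token to a list of plausible alternatives
    return [random.choice(substitutes[t])
            if t in substitutes and random.random() < p else t
            for t in tokens]
```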
arXiv Detail & Related papers (2021-06-10T10:18:23Z)
- Source and Target Bidirectional Knowledge Distillation for End-to-end Speech Translation [88.78138830698173]
We focus on sequence-level knowledge distillation (SeqKD) from external text-based NMT models.
We train a bilingual E2E-ST model to predict paraphrased transcriptions as an auxiliary task with a single decoder.
arXiv Detail & Related papers (2021-04-13T19:00:51Z)
- Character-level Transformer-based Neural Machine Translation [5.699756532377753]
We discuss a novel Transformer-based approach that we compare, in both speed and quality, to the Transformer at the subword and character levels.
We evaluate our models on 4 language pairs from WMT'15: DE-EN, CS-EN, FI-EN and RU-EN.
The proposed architecture can be trained on a single GPU and is 34% faster than the character-level Transformer.
arXiv Detail & Related papers (2020-05-22T15:40:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.