CBSiMT: Mitigating Hallucination in Simultaneous Machine Translation
with Weighted Prefix-to-Prefix Training
- URL: http://arxiv.org/abs/2311.03672v1
- Date: Tue, 7 Nov 2023 02:44:45 GMT
- Title: CBSiMT: Mitigating Hallucination in Simultaneous Machine Translation
with Weighted Prefix-to-Prefix Training
- Authors: Mengge Liu, Wen Zhang, Xiang Li, Yanzhi Tian, Yuhang Guo, Jian Luan,
Bin Wang, Shuoying Chen
- Abstract summary: Simultaneous machine translation (SiMT) is a challenging task that requires starting translation before the full source sentence is available.
The prefix-to-prefix framework is often applied to SiMT, which learns to predict target tokens using only a partial source prefix.
We propose a Confidence-Based Simultaneous Machine Translation framework, which uses model confidence to perceive hallucinated tokens.
- Score: 13.462260072313894
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Simultaneous machine translation (SiMT) is a challenging task that requires
starting translation before the full source sentence is available.
The prefix-to-prefix framework is often applied to SiMT, which learns to predict
target tokens using only a partial source prefix. However, due to word-order
differences between languages, misaligned prefix pairs can cause SiMT models to
suffer from serious hallucination problems, i.e. target outputs that are
unfaithful to the source input. Such hallucinations not only produce target tokens
that are unsupported by the source prefix, but also hinder generation of the
correct translation even after more source words are received. In this work, we propose a
Confidence-Based Simultaneous Machine Translation (CBSiMT) framework, which
uses model confidence to perceive hallucinated tokens and mitigates their
negative impact with weighted prefix-to-prefix training. Specifically,
token-level and sentence-level weights are calculated from model confidence
and applied to the loss function. We explicitly quantify the faithfulness of the
generated target tokens with the token-level weight, and use the
sentence-level weight to mitigate the impact of sentence pairs with
severe word-order differences on the model. Experimental results on MuST-C
English-to-Chinese and WMT15 German-to-English SiMT tasks demonstrate that our
method consistently improves translation quality across most latency regimes,
with up to 2 BLEU points of improvement at low latency.
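To make the weighting idea concrete, below is a minimal PyTorch-style sketch of a confidence-weighted prefix-to-prefix loss. It is an illustration under assumptions, not the paper's implementation: the function name, the exponent `alpha`, and the exact form of the token-level and sentence-level weights are hypothetical.

```python
import torch.nn.functional as F


def confidence_weighted_loss(logits, targets, pad_id, alpha=1.0):
    """Sketch of a confidence-weighted prefix-to-prefix loss (illustrative only).

    logits:  (batch, tgt_len, vocab) decoder outputs for a source-prefix / target-prefix pair
    targets: (batch, tgt_len) reference target tokens
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # log p(y_t | y_<t, source prefix) for each reference token
    token_logp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    confidence = token_logp.detach().exp()          # per-token model confidence in [0, 1]
    mask = targets.ne(pad_id).float()

    # Token-level weight: down-weight tokens the model is not confident about,
    # i.e. tokens that are likely hallucinated given only the source prefix.
    token_weight = confidence ** alpha              # alpha is an assumed sharpness hyperparameter

    # Sentence-level weight: down-weight prefix pairs whose average confidence is low,
    # e.g. pairs misaligned by large word-order differences.
    sent_weight = (confidence * mask).sum(-1) / mask.sum(-1).clamp(min=1.0)

    weighted_nll = -(sent_weight.unsqueeze(-1) * token_weight * token_logp * mask)
    return weighted_nll.sum() / mask.sum()
```

In a full SiMT setup, a loss of this kind would be computed for prefix pairs at each simulated latency, so that target tokens the current source prefix cannot yet support contribute less to the gradient.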
Related papers
- Language Model is a Branch Predictor for Simultaneous Machine
Translation [73.82754138171587]
We propose incorporating branch prediction techniques in SiMT tasks to reduce translation latency.
We utilize a language model as a branch predictor to predict potential branch directions.
When the actual source word deviates from the predicted source word, we use the real source word to decode the output again, replacing the predicted output.
arXiv Detail & Related papers (2023-12-22T07:32:47Z)
- Glancing Future for Simultaneous Machine Translation [35.46823126036308]
We propose a novel method to bridge the gap between prefix-to-prefix training and seq2seq training.
We gradually reduce the available source information from the whole sentence to the prefix corresponding to that latency.
Our method is applicable to a wide range of SiMT methods and experiments demonstrate that our method outperforms strong baselines.
arXiv Detail & Related papers (2023-09-12T12:46:20Z)
- Towards Reliable Neural Machine Translation with Consistency-Aware Meta-Learning [24.64700139151659]
Current neural machine translation (NMT) systems suffer from a lack of reliability.
We present a consistency-aware meta-learning (CAML) framework, derived from the model-agnostic meta-learning (MAML) algorithm, to address this issue.
We conduct experiments on the NIST Chinese-to-English task, three WMT translation tasks, and the TED M2O task.
arXiv Detail & Related papers (2023-03-20T09:41:28Z)
- Competency-Aware Neural Machine Translation: Can Machine Translation Know its Own Translation Quality? [61.866103154161884]
Neural machine translation (NMT) is often criticized for failures that occur without the system's awareness.
We propose a novel competency-aware NMT by extending conventional NMT with a self-estimator.
We show that the proposed method delivers outstanding performance on quality estimation.
arXiv Detail & Related papers (2022-11-25T02:39:41Z)
- Understanding and Improving Sequence-to-Sequence Pretraining for Neural Machine Translation [48.50842995206353]
We study the impact of the jointly pretrained decoder, which is the main difference between Seq2Seq pretraining and previous encoder-based pretraining approaches for NMT.
We propose simple and effective strategies, named in-domain pretraining and input adaptation, to remedy the domain and objective discrepancies.
arXiv Detail & Related papers (2022-03-16T07:36:28Z)
- Anticipation-free Training for Simultaneous Translation [70.85761141178597]
Simultaneous translation (SimulMT) speeds up the translation process by starting to translate before the source sentence is completely available.
Existing methods increase latency or introduce adaptive read-write policies for SimulMT models to handle local reordering and improve translation quality.
We propose a new framework that decomposes the translation process into the monotonic translation step and the reordering step.
arXiv Detail & Related papers (2022-01-30T16:29:37Z)
- Exploring Unsupervised Pretraining Objectives for Machine Translation [99.5441395624651]
Unsupervised cross-lingual pretraining has achieved strong results in neural machine translation (NMT).
Most approaches adapt masked-language modeling (MLM) to sequence-to-sequence architectures, by masking parts of the input and reconstructing them in the decoder.
We compare masking with alternative objectives that produce inputs resembling real (full) sentences, by reordering and replacing words based on their context.
arXiv Detail & Related papers (2021-06-10T10:18:23Z)
- It's Easier to Translate out of English than into it: Measuring Neural Translation Difficulty by Cross-Mutual Information [90.35685796083563]
Cross-mutual information (XMI) is an asymmetric information-theoretic metric of machine translation difficulty.
XMI exploits the probabilistic nature of most neural machine translation models.
We present the first systematic and controlled study of cross-lingual translation difficulties using modern neural translation systems.
arXiv Detail & Related papers (2020-05-05T17:38:48Z)
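For context on the last entry, a sketch of how such an asymmetric cross-entropy-based metric can be written is given below; the exact definition is in the cited paper, and the model symbols $q_{\mathrm{LM}}$ and $q_{\mathrm{MT}}$ are assumed names for a target-side language model and the translation model.

$$\mathrm{XMI}(X \rightarrow Y) = H_{q_{\mathrm{LM}}}(Y) - H_{q_{\mathrm{MT}}}(Y \mid X)$$

Here $H_{q_{\mathrm{LM}}}(Y)$ is the cross-entropy of the target text under the language model and $H_{q_{\mathrm{MT}}}(Y \mid X)$ is its cross-entropy under the translation model given the source; because the conditioning on $X$ applies in only one direction, the metric is asymmetric and depends on the translation direction.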
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.