Backtranslation Feedback Improves User Confidence in MT, Not Quality
- URL: http://arxiv.org/abs/2104.05688v1
- Date: Mon, 12 Apr 2021 17:50:24 GMT
- Title: Backtranslation Feedback Improves User Confidence in MT, Not Quality
- Authors: Vilém Zouhar, Michal Novák, Matúš Žilinec, Ondřej Bojar, Mateo Obregón, Robin L. Hill, Frédéric Blain, Marina Fomicheva, Lucia Specia, Lisa Yankovskaya
- Abstract summary: We show three ways in which user confidence in the outbound translation, as well as its overall final quality, can be affected.
In this paper, we describe an experiment on outbound translation from English to Czech and Estonian.
We show that backward translation feedback has a mixed effect on the whole process: it increases user confidence in the produced translation, but not the objective quality.
- Score: 18.282199360280433
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Translating text into a language unknown to the text's author, dubbed
outbound translation, is a modern need for which the user experience has
significant room for improvement, beyond the basic machine translation
facility. We demonstrate this by showing three ways in which user confidence in
the outbound translation, as well as its overall final quality, can be
affected: backward translation, quality estimation (with alignment) and source
paraphrasing. In this paper, we describe an experiment on outbound translation
from English to Czech and Estonian. We examine the effects of each proposed
feedback module and further focus on how the quality of machine translation
systems influences these findings and the user perception of success. We show
that backward translation feedback has a mixed effect on the whole process: it
increases user confidence in the produced translation, but not the objective
quality.
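The round-trip setup behind the backward-translation feedback module can be sketched as follows. This is a minimal illustration only: the toy dictionary-based `translate` function is a hypothetical stand-in for a real English-Czech MT system, not the paper's actual pipeline.

```python
# Toy word-level "MT" tables used purely for illustration; a real setup
# would call an actual NMT system in both directions.
EN_CS = {"hello": "ahoj", "world": "světe"}
CS_EN = {v: k for k, v in EN_CS.items()}

def translate(text: str, table: dict) -> str:
    """Hypothetical word-by-word stand-in for a machine translation call."""
    return " ".join(table.get(w, w) for w in text.split())

def outbound_with_feedback(source: str):
    """Return the forward translation (sent to the recipient) and its
    backtranslation (feedback shown to the source-language author)."""
    forward = translate(source, EN_CS)    # English -> Czech
    backward = translate(forward, CS_EN)  # Czech -> English, shown to the user
    return forward, backward

forward, backward = outbound_with_feedback("hello world")
# The author compares `backward` against the original source to judge
# adequacy; per the abstract, this raises confidence but not objective quality.
```

The author never sees the target language directly; the backtranslation is the only signal they can read, which is why its effect on confidence and on objective quality can diverge.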
Related papers
- xTower: A Multilingual LLM for Explaining and Correcting Translation Errors [22.376508000237042]
xTower is an open large language model (LLM) built on top of TowerBase to provide free-text explanations for translation errors.
We test xTower across various experimental setups in generating translation corrections, demonstrating significant improvements in translation quality.
arXiv Detail & Related papers (2024-06-27T18:51:46Z)
- Understanding and Addressing the Under-Translation Problem from the Perspective of Decoding Objective [72.83966378613238]
Under-translation and over-translation remain two challenging problems in state-of-the-art Neural Machine Translation (NMT) systems.
We conduct an in-depth analysis on the underlying cause of under-translation in NMT, providing an explanation from the perspective of decoding objective.
We propose employing the confidence of predicting End Of Sentence (EOS) as a detector for under-translation, and strengthening the confidence-based penalty to penalize candidates with a high risk of under-translation.
arXiv Detail & Related papers (2024-05-29T09:25:49Z)
- Advancing Translation Preference Modeling with RLHF: A Step Towards Cost-Effective Solution [57.42593422091653]
We explore leveraging reinforcement learning with human feedback to improve translation quality.
A reward model with strong language capabilities can more sensitively learn the subtle differences in translation quality.
arXiv Detail & Related papers (2024-02-18T09:51:49Z)
- Hunayn: Elevating Translation Beyond the Literal [0.0]
This project introduces an advanced English-to-Arabic translator surpassing conventional tools.
Our approach involves fine-tuning on a self-scraped, purely literary Arabic dataset.
Evaluations against Google Translate show consistent outperformance in qualitative assessments.
arXiv Detail & Related papers (2023-10-20T16:03:33Z)
- Competency-Aware Neural Machine Translation: Can Machine Translation Know its Own Translation Quality? [61.866103154161884]
Neural machine translation (NMT) is often criticized for failures that occur without the system being aware of them.
We propose a novel competency-aware NMT by extending conventional NMT with a self-estimator.
We show that the proposed method delivers outstanding performance on quality estimation.
arXiv Detail & Related papers (2022-11-25T02:39:41Z)
- Towards Debiasing Translation Artifacts [15.991970288297443]
We propose a novel approach to reducing translationese by extending an established bias-removal technique.
We use the Iterative Null-space Projection (INLP) algorithm, and show by measuring classification accuracy before and after debiasing, that translationese is reduced at both sentence and word level.
To the best of our knowledge, this is the first study to debias translationese as represented in latent embedding space.
arXiv Detail & Related papers (2022-05-16T21:46:51Z)
- A Bayesian approach to translators' reliability assessment [0.0]
We treat the Translation Quality Assessment (TQA) process as a complex process, approaching it from the perspective of the physics of complex systems.
We build two Bayesian models that parameterise the features involved in the TQA process, namely the translation difficulty and the characteristics of the translators involved both in producing the translation and in assessing its quality.
We show that reviewers' reliability cannot be taken for granted even if they are expert translators.
arXiv Detail & Related papers (2022-03-14T14:29:45Z)
- Comprehension of Subtitles from Re-Translating Simultaneous Speech Translation [0.0]
In simultaneous speech translation, one can vary the size of the output window, the system latency, and sometimes the allowed level of rewriting.
The effect of these properties on readability and comprehensibility has not been tested with modern neural translation systems.
We present a pilot study with 14 users on 2 hours of German documentaries or speeches with online translation into Czech.
arXiv Detail & Related papers (2022-03-04T17:41:39Z)
- ChrEnTranslate: Cherokee-English Machine Translation Demo with Quality Estimation and Corrective Feedback [70.5469946314539]
ChrEnTranslate is an online machine translation demonstration system for translation between English and the endangered language Cherokee.
It supports both statistical and neural translation models and provides quality estimation to inform users of reliability.
arXiv Detail & Related papers (2021-07-30T17:58:54Z)
- Translation Artifacts in Cross-lingual Transfer Learning [51.66536640084888]
We show that machine translation can introduce subtle artifacts that have a notable impact on existing cross-lingual models.
In natural language inference, translating the premise and the hypothesis independently can reduce the lexical overlap between them.
We also improve the state-of-the-art in XNLI for the translate-test and zero-shot approaches by 4.3 and 2.8 points, respectively.
arXiv Detail & Related papers (2020-04-09T17:54:30Z)
- A Set of Recommendations for Assessing Human-Machine Parity in Language Translation [87.72302201375847]
We reassess Hassan et al.'s investigation into Chinese to English news translation.
We show that the professional human translations contained significantly fewer errors.
arXiv Detail & Related papers (2020-04-03T17:49:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.