Variance-Aware Machine Translation Test Sets
- URL: http://arxiv.org/abs/2111.04079v1
- Date: Sun, 7 Nov 2021 13:18:59 GMT
- Title: Variance-Aware Machine Translation Test Sets
- Authors: Runzhe Zhan, Xuebo Liu, Derek F. Wong, Lidia S. Chao
- Abstract summary: We release 70 small and discriminative test sets for machine translation (MT) evaluation called variance-aware test sets (VAT).
VAT is automatically created by a novel variance-aware filtering method that filters the indiscriminative test instances of the current MT test sets without any human labor.
- Score: 19.973201669851626
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We release 70 small and discriminative test sets for machine translation (MT)
evaluation called variance-aware test sets (VAT), covering 35 translation
directions from WMT16 to WMT20 competitions. VAT is automatically created by a
novel variance-aware filtering method that filters the indiscriminative test
instances of the current MT test sets without any human labor. Experimental
results show that VAT outperforms the original WMT test sets in terms of the
correlation with human judgement across mainstream language pairs and test
sets. Further analysis on the properties of VAT reveals the challenging
linguistic features (e.g., translation of low-frequency words and proper nouns)
for competitive MT systems, providing guidance for constructing future MT test
sets. The test sets and the code for preparing variance-aware MT test sets are
freely available at https://github.com/NLP2CT/Variance-Aware-MT-Test-Sets .
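For intuition, the following is a minimal sketch of the variance-aware filtering idea, assuming sentence-level automatic metric scores are available for every test instance across several participating MT systems; the function name, keep ratio, and example scores are illustrative assumptions rather than the authors' released implementation (see the repository above for that).

```python
import numpy as np

def variance_aware_filter(scores, keep_ratio=0.4):
    """Keep the most discriminative test instances.

    scores: array of shape (num_instances, num_systems) holding a
        sentence-level automatic metric score for each test instance
        under each participating MT system.
    keep_ratio: fraction of instances to retain (hypothetical default).
    """
    scores = np.asarray(scores, dtype=float)
    # An instance on which all systems score similarly tells us little about
    # which system is better; high variance across systems signals a
    # discriminative instance.
    instance_variance = scores.var(axis=1)
    num_keep = max(1, int(len(instance_variance) * keep_ratio))
    # Indices of the instances with the largest cross-system variance.
    keep_idx = np.argsort(instance_variance)[::-1][:num_keep]
    return np.sort(keep_idx)

# Example: 5 test instances scored by 3 MT systems.
scores = [
    [0.90, 0.91, 0.89],   # indiscriminative: every system does well
    [0.20, 0.75, 0.55],   # discriminative: systems disagree strongly
    [0.60, 0.61, 0.58],
    [0.10, 0.80, 0.40],
    [0.50, 0.52, 0.49],
]
print(variance_aware_filter(scores, keep_ratio=0.4))  # -> [1 3]
```

In this reading, instances on which all systems score alike are treated as indiscriminative and dropped, so the retained high-variance instances are the ones that actually separate stronger systems from weaker ones.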
Related papers
- Towards Zero-Shot Multimodal Machine Translation [64.9141931372384]
We propose a method to bypass the need for fully supervised data to train multimodal machine translation systems.
Our method, called ZeroMMT, consists in adapting a strong text-only machine translation (MT) model by training it on a mixture of two objectives.
To prove that our method generalizes to languages with no fully supervised training data available, we extend the CoMMuTE evaluation dataset to three new languages: Arabic, Russian and Chinese.
arXiv Detail & Related papers (2024-07-18T15:20:31Z) - Evaluating Automatic Metrics with Incremental Machine Translation Systems [55.78547133890403]
We introduce a dataset comprising commercial machine translations, gathered weekly over six years across 12 translation directions.
We assume commercial systems improve over time, which enables us to evaluate machine translation (MT) metrics based on their preference for more recent translations.
arXiv Detail & Related papers (2024-07-03T17:04:17Z) - OTTAWA: Optimal TransporT Adaptive Word Aligner for Hallucination and Omission Translation Errors Detection [36.59354124910338]
OTTAWA is a word aligner specifically designed to enhance the detection of hallucinations and omissions in Machine Translation systems.
Our approach yields competitive results compared to state-of-the-art methods across 18 language pairs on the HalOmi benchmark.
arXiv Detail & Related papers (2024-06-04T03:00:55Z) - The Case for Evaluating Multimodal Translation Models on Text Datasets [1.6192978014459543]
Multimodal machine translation (MMT) models should be evaluated by measuring their use of visual information and their ability to translate complex sentences.
Most current work in MMT is evaluated against the Multi30k testing sets, which do not measure these properties.
We propose that MMT models be evaluated using 1) the CoMMuTE evaluation framework, which measures the use of visual information by MMT models, 2) the text-only WMT news translation task test sets, which evaluate translation performance on complex sentences, and 3) the Multi30k test sets, which measure MMT model performance on a real MMT dataset.
arXiv Detail & Related papers (2024-03-05T14:49:52Z) - Towards General Error Diagnosis via Behavioral Testing in Machine
Translation [48.108393938462974]
This paper proposes BTPGBT, a new framework for conducting behavioral testing of machine translation (MT) systems.
Its core idea is to employ a novel bilingual translation pair generation approach.
Experimental results on various MT systems demonstrate that BTPGBT could provide comprehensive and accurate behavioral testing results.
arXiv Detail & Related papers (2023-10-20T09:06:41Z) - Automating Behavioral Testing in Machine Translation [9.151054827967933]
We propose to use Large Language Models to generate source sentences tailored to test the behavior of Machine Translation models.
We can then verify whether the MT model exhibits the expected behavior through matching candidate sets.
Our approach aims to make behavioral testing of MT systems practical while requiring only minimal human effort.
arXiv Detail & Related papers (2023-09-05T19:40:45Z) - Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models [57.80514758695275]
Using large language models (LLMs) for assessing the quality of machine translation (MT) achieves state-of-the-art performance at the system level.
We propose a new prompting method called Error Analysis Prompting (EAPrompt).
This technique emulates the commonly accepted human evaluation framework, Multidimensional Quality Metrics (MQM), and produces explainable and reliable MT evaluations at both the system and segment levels.
arXiv Detail & Related papers (2023-03-24T05:05:03Z) - Statistical Machine Translation for Indic Languages [1.8899300124593648]
This paper describes the development of bilingual Statistical Machine Translation (SMT) models.
To create the system, the MOSES open-source SMT toolkit is used.
In our experiment, the quality of the translation is evaluated using standard metrics such as BLEU, METEOR, and RIBES.
arXiv Detail & Related papers (2023-01-02T06:23:12Z) - Extrinsic Evaluation of Machine Translation Metrics [78.75776477562087]
It is unclear if automatic metrics are reliable at distinguishing good translations from bad translations at the sentence level.
We evaluate the segment-level performance of the most widely used MT metrics (chrF, COMET, BERTScore, etc.) on three downstream cross-lingual tasks.
Our experiments demonstrate that all metrics exhibit negligible correlation with the extrinsic evaluation of the downstream outcomes.
arXiv Detail & Related papers (2022-12-20T14:39:58Z) - Measuring Uncertainty in Translation Quality Evaluation (TQE) [62.997667081978825]
This work investigates how to correctly estimate confidence intervals (Brown et al., 2001) for translation quality evaluation depending on the sample size of the translated text.
The methodology applied draws on Bernoulli Statistical Distribution Modelling (BSDM) and Monte Carlo Sampling Analysis (MCSA); a generic sketch of this style of sampling analysis is given after this list.
arXiv Detail & Related papers (2021-11-15T12:09:08Z)