The Case for Evaluating Multimodal Translation Models on Text Datasets
- URL: http://arxiv.org/abs/2403.03014v1
- Date: Tue, 5 Mar 2024 14:49:52 GMT
- Title: The Case for Evaluating Multimodal Translation Models on Text Datasets
- Authors: Vipin Vijayan, Braeden Bowen, Scott Grigsby, Timothy Anderson, and
Jeremy Gwinnup
- Abstract summary: Multimodal machine translation models should be evaluated by measuring their use of visual information and their ability to translate complex sentences.
Most current work in MMT is evaluated against the Multi30k testing sets, which do not measure these properties.
We propose that MMT models be evaluated using 1) the CoMMuTE evaluation framework, which measures the use of visual information by MMT models, 2) the text-only WMT news translation task test sets, which evaluate translation performance on complex sentences, and 3) the Multi30k test sets, which measure MMT model performance on a real MMT dataset.
- Score: 1.6192978014459543
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: A good evaluation framework should evaluate multimodal machine translation
(MMT) models by measuring 1) their use of visual information to aid in the
translation task and 2) their ability to translate complex sentences, as is
done for text-only machine translation. However, most current work in MMT is
evaluated against the Multi30k testing sets, which do not measure these
properties. Namely, the use of visual information by the MMT model cannot be
shown directly from the Multi30k test set results, and the sentences in
Multi30k are image captions, i.e., short, descriptive sentences, as opposed to
the complex sentences that typical text-only machine translation models are
evaluated against.
Therefore, we propose that MMT models be evaluated using 1) the CoMMuTE
evaluation framework, which measures the use of visual information by MMT
models, 2) the text-only WMT news translation task test sets, which evaluate
translation performance on complex sentences, and 3) the Multi30k test
sets, which measure MMT model performance on a real MMT dataset. Finally,
we evaluate recent MMT models trained solely on the Multi30k dataset against
our proposed evaluation framework and demonstrate the dramatic drop in
performance on text-only test sets compared to recent text-only MT
models.
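To make the proposed protocol concrete, the following is a minimal Python sketch of how the three evaluations could be run side by side. The model interface (translate, score) and the dataset objects are hypothetical stand-ins rather than the paper's actual tooling; only the sacrebleu BLEU call is a real library API:

import sacrebleu  # reference BLEU implementation used for WMT-style scoring

def commute_accuracy(model, examples):
    # Each CoMMuTE example pairs an image and an ambiguous source sentence
    # with a correct and an incorrect translation; the model should assign
    # the correct one a higher score when conditioned on the image.
    # `model.score` is a hypothetical hook, not a real API.
    hits = sum(
        model.score(ex.src, ex.correct, image=ex.image)
        > model.score(ex.src, ex.incorrect, image=ex.image)
        for ex in examples
    )
    return hits / len(examples)

def bleu(hypotheses, references):
    # sacrebleu expects a list of reference streams, hence the extra list.
    return sacrebleu.corpus_bleu(hypotheses, [references]).score

def evaluate_mmt(model, commute, wmt, multi30k):
    return {
        # 1) Use of visual information (contrastive disambiguation accuracy).
        "commute_accuracy": commute_accuracy(model, commute),
        # 2) Complex, text-only sentences: translated without any image.
        "wmt_bleu": bleu(model.translate(wmt.sources), wmt.references),
        # 3) A real MMT test set of captions, translated with their images.
        "multi30k_bleu": bleu(
            model.translate(multi30k.sources, images=multi30k.images),
            multi30k.references,
        ),
    }

Reporting all three numbers together is the point of the proposal: overfitting to Multi30k shows up as a gap between multi30k_bleu and wmt_bleu, while commute_accuracy near chance (50% on a binary contrastive set) indicates the images are being ignored.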
Related papers
- Towards Zero-Shot Multimodal Machine Translation [64.9141931372384]
We propose a method to bypass the need for fully supervised data to train multimodal machine translation systems.
Our method, called ZeroMMT, consists of adapting a strong text-only machine translation (MT) model by training it on a mixture of two objectives.
To prove that our method generalizes to languages with no fully supervised training data available, we extend the CoMMuTE evaluation dataset to three new languages: Arabic, Russian and Chinese.
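The summary does not name the two objectives, so the following is only a generic sketch of the mixture idea, with placeholder loss terms and a fixed interpolation weight:

def mixture_loss(loss_translation, loss_auxiliary, weight=0.5):
    # Weighted combination of two training objectives per step; both terms
    # and the weight are placeholders, not ZeroMMT's actual losses.
    return weight * loss_translation + (1.0 - weight) * loss_auxiliary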
arXiv Detail & Related papers (2024-07-18T15:20:31Z) - Evaluating Automatic Metrics with Incremental Machine Translation Systems [55.78547133890403]
We introduce a dataset comprising commercial machine translations, gathered weekly over six years across 12 translation directions.
We assume commercial systems improve over time, which enables us to evaluate machine translation (MT) metrics based on their preference for more recent translations.
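Under that assumption the evaluation reduces to a pairwise preference test. A minimal sketch, with hypothetical inputs (metric scores for the weekly snapshots of one translation direction, oldest first; the summary does not give the paper's exact aggregation):

from itertools import combinations

def recency_preference(weekly_scores):
    # Fraction of snapshot pairs in which the metric under test prefers the
    # later (assumed better) system output; 1.0 means the metric's ranking
    # always agrees with the assumed improvement over time.
    pairs = list(combinations(range(len(weekly_scores)), 2))
    wins = sum(weekly_scores[j] > weekly_scores[i] for i, j in pairs)
    return wins / len(pairs)

# Example with scores that rise over four weeks:
print(recency_preference([21.3, 22.1, 22.8, 23.5]))  # -> 1.0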
arXiv Detail & Related papers (2024-07-03T17:04:17Z) - 3AM: An Ambiguity-Aware Multi-Modal Machine Translation Dataset [90.95948101052073]
We introduce 3AM, an ambiguity-aware MMT dataset comprising 26,000 parallel sentence pairs in English and Chinese.
Our dataset is specifically designed to include more ambiguity and a greater variety of both captions and images than other MMT datasets.
Experimental results show that MMT models trained on our dataset exhibit a greater ability to exploit visual information than those trained on other MMT datasets.
arXiv Detail & Related papers (2024-04-29T04:01:30Z) - Adding Multimodal Capabilities to a Text-only Translation Model [1.6192978014459543]
Current work in multimodal machine translation (MMT) uses the Multi30k dataset for training and evaluation.
We find that the resulting models overfit to the Multi30k dataset to an extreme degree.
To perform well on both Multi30k and typical text-only datasets, we start from a performant text-only machine translation (MT) model.
arXiv Detail & Related papers (2024-03-05T15:28:24Z) - Beyond Triplet: Leveraging the Most Data for Multimodal Machine
Translation [53.342921374639346]
Multimodal machine translation aims to improve translation quality by incorporating information from other modalities, such as vision.
Previous MMT systems mainly focus on better access and use of visual information and tend to validate their methods on image-related datasets.
This paper establishes new methods and new datasets for MMT.
arXiv Detail & Related papers (2022-12-20T15:02:38Z) - Tackling Ambiguity with Images: Improved Multimodal Machine Translation
and Contrastive Evaluation [72.6667341525552]
We present a new MMT approach based on a strong text-only MT model, which uses neural adapters and a novel guided self-attention mechanism.
We also introduce CoMMuTE, a Contrastive Multimodal Translation Evaluation set of ambiguous sentences and their possible translations.
Our approach obtains competitive results compared to strong text-only models on standard English-to-French, English-to-German and English-to-Czech benchmarks.
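The contrastive test itself only requires comparing the model's scores for the two candidate translations of an ambiguous source. A minimal sketch, assuming per-token log-probabilities obtained by force-decoding each candidate with the image-conditioned model (how they are obtained depends on the model and is not shown here):

def sequence_score(token_logprobs):
    # Length-normalized log-probability of one candidate translation;
    # normalization keeps short and long candidates comparable.
    return sum(token_logprobs) / max(len(token_logprobs), 1)

def solves_example(logprobs_correct, logprobs_incorrect):
    # An example counts as solved when the image-conditioned model scores
    # the correct disambiguation above the incorrect one.
    return sequence_score(logprobs_correct) > sequence_score(logprobs_incorrect)

# Example with made-up log-probs for a 3-token and a 4-token candidate:
print(solves_example([-0.2, -0.4, -0.1], [-1.3, -0.9, -1.1, -0.7]))  # -> True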
arXiv Detail & Related papers (2022-12-20T10:18:18Z) - Building Machine Translation Systems for the Next Thousand Languages [102.24310122155073]
We describe results in three research domains: building clean, web-mined datasets for 1500+ languages, developing practical MT models for under-served languages, and studying the limitations of evaluation metrics for these languages.
We hope that our work provides useful insights to practitioners working towards building MT systems for currently understudied languages, and highlights research directions that can complement the weaknesses of massively multilingual models in data-sparse settings.
arXiv Detail & Related papers (2022-05-09T00:24:13Z)