Adding Multimodal Capabilities to a Text-only Translation Model
- URL: http://arxiv.org/abs/2403.03045v1
- Date: Tue, 5 Mar 2024 15:28:24 GMT
- Title: Adding Multimodal Capabilities to a Text-only Translation Model
- Authors: Vipin Vijayan, Braeden Bowen, Scott Grigsby, Timothy Anderson, and
Jeremy Gwinnup
- Abstract summary: Current work in multimodal machine translation (MMT) uses the Multi30k dataset for training and evaluation.
We find that the resulting models overfit to the Multi30k dataset to an extreme degree.
In order to perform well on both Multi30k and typical text-only datasets, we use a performant text-only machine translation (MT) model as the starting point of our MMT model.
- Score: 1.6192978014459543
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: While most current work in multimodal machine translation (MMT) uses the
Multi30k dataset for training and evaluation, we find that the resulting models
overfit to the Multi30k dataset to an extreme degree. Consequently, these
models perform very badly when evaluated against typical text-only testing sets
such as the WMT newstest datasets. In order to perform well on both Multi30k
and typical text-only datasets, we use a performant text-only machine
translation (MT) model as the starting point of our MMT model. We add
vision-text adapter layers connected via gating mechanisms to the MT model, and
incrementally transform the MT model into an MMT model by 1) pre-training using
vision-based masking of the source text and 2) fine-tuning on Multi30k.
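The gated adapter idea in the abstract can be made concrete with a minimal PyTorch sketch. Everything below is illustrative rather than the authors' released code: the class name GatedVisionTextAdapter, the feature dimensions, and the mask_visually_grounded_tokens helper are assumptions; only the overall pattern (cross-attention to image features added through a zero-initialised gate, plus vision-based masking of the source text for pre-training) follows the abstract.
```python
import torch
import torch.nn as nn


class GatedVisionTextAdapter(nn.Module):
    """Vision-text adapter attached to an MT encoder layer.

    Text states cross-attend to projected image features, and the result is
    added back through a learned scalar gate. With the gate initialised at
    zero, the layer starts as an identity, so the pretrained text-only MT
    model is left unchanged at the start of training.
    """

    def __init__(self, d_model: int, d_image: int, n_heads: int = 8):
        super().__init__()
        self.img_proj = nn.Linear(d_image, d_model)  # map image features into the MT space
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.gate = nn.Parameter(torch.zeros(1))     # gate starts closed

    def forward(self, text_states: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # text_states: (batch, src_len, d_model) from the MT encoder
        # image_feats: (batch, n_patches, d_image) from a vision encoder
        img = self.img_proj(image_feats)
        attn_out, _ = self.cross_attn(query=text_states, key=img, value=img)
        # tanh keeps the visual contribution bounded while the gate opens
        return self.norm(text_states + torch.tanh(self.gate) * attn_out)


def mask_visually_grounded_tokens(src_ids: torch.Tensor,
                                  grounded_positions: list[list[int]],
                                  mask_id: int) -> torch.Tensor:
    """Vision-based masking for pre-training: replace source tokens at
    visually grounded positions (e.g. tokens naming objects visible in the
    image) with a mask token, so the model must consult the image."""
    masked = src_ids.clone()
    for b, positions in enumerate(grounded_positions):
        masked[b, positions] = mask_id
    return masked


# Shapes-only usage example (hypothetical dimensions):
# adapter = GatedVisionTextAdapter(d_model=512, d_image=768)
# fused_states = adapter(text_states, image_feats)
```
Because the gate starts at zero, the adapted model initially behaves exactly like the underlying text-only MT model, which is what allows the incremental transformation described in the abstract.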
Related papers
- Towards Zero-Shot Multimodal Machine Translation [64.9141931372384]
We propose a method to bypass the need for fully supervised data to train multimodal machine translation systems.
Our method, called ZeroMMT, adapts a strong text-only machine translation (MT) model by training it on a mixture of two objectives.
To prove that our method generalizes to languages with no fully supervised training data available, we extend the CoMMuTE evaluation dataset to three new languages: Arabic, Russian and Chinese.
arXiv Detail & Related papers (2024-07-18T15:20:31Z)
- 3AM: An Ambiguity-Aware Multi-Modal Machine Translation Dataset [90.95948101052073]
We introduce 3AM, an ambiguity-aware MMT dataset comprising 26,000 parallel sentence pairs in English and Chinese.
Our dataset is specifically designed to include more ambiguity and a greater variety of both captions and images than other MMT datasets.
Experimental results show that MMT models trained on our dataset exhibit a greater ability to exploit visual information than those trained on other MMT datasets.
arXiv Detail & Related papers (2024-04-29T04:01:30Z)
- The Case for Evaluating Multimodal Translation Models on Text Datasets [1.6192978014459543]
Multimodal machine translation (MMT) models should be evaluated by measuring their use of visual information and their ability to translate complex sentences.
Most current work in MMT is evaluated against the Multi30k testing sets, which do not measure these properties.
We propose that MMT models be evaluated using 1) the CoMMuTE evaluation framework, which measures the use of visual information by MMT models, 2) the text-only WMT news translation task test sets, which evaluate translation performance against complex sentences, and 3) the Multi30k test sets, for measuring MMT model performance against a real MMT dataset.
arXiv Detail & Related papers (2024-03-05T14:49:52Z)
- Incorporating Probing Signals into Multimodal Machine Translation via Visual Question-Answering Pairs [45.41083125321069]
Multimodal machine translation (MMT) systems exhibit decreased sensitivity to visual information when text inputs are complete.
A novel approach is proposed to generate parallel Visual Question-Answering (VQA) style pairs from the source text.
An MMT-VQA multitask learning framework is introduced to incorporate explicit probing signals from the dataset into the MMT training process; a minimal sketch of such a multitask loss appears after this list.
arXiv Detail & Related papers (2023-10-26T04:13:49Z)
- Unified Model Learning for Various Neural Machine Translation [63.320005222549646]
Existing neural machine translation (NMT) studies mainly focus on developing dataset-specific models.
We propose a "versatile" model, i.e., the Unified Model Learning for NMT (UMLNMT), that works with data from different tasks.
Our UMLNMT results in substantial improvements over dataset-specific models with significantly reduced model deployment costs.
arXiv Detail & Related papers (2023-05-04T12:21:52Z)
- Beyond Triplet: Leveraging the Most Data for Multimodal Machine Translation [53.342921374639346]
Multimodal machine translation aims to improve translation quality by incorporating information from other modalities, such as vision.
Previous MMT systems mainly focus on better access and use of visual information and tend to validate their methods on image-related datasets.
This paper establishes new methods and new datasets for MMT.
arXiv Detail & Related papers (2022-12-20T15:02:38Z)
- Machine Translation Customization via Automatic Training Data Selection from the Web [97.98885151955467]
We describe an approach for customizing machine translation systems on specific domains.
We select data similar to the target customer data to train neural translation models.
Finally, we train MT models on our automatically selected data, obtaining a system specialized to the target domain; a similarity-based selection sketch appears after this list.
arXiv Detail & Related papers (2021-02-20T03:29:41Z)
- InterBERT: Vision-and-Language Interaction for Multi-modal Pretraining [76.32065400614162]
We propose a novel model, namely InterBERT (BERT for Interaction), which is the first model of our series of multimodal pretraining methods M6.
The model has a strong capability for modeling the interaction between the information flows of different modalities.
We propose a large-scale dataset for multi-modal pretraining in Chinese and develop the Chinese InterBERT, which is the first Chinese multi-modal pretrained model.
arXiv Detail & Related papers (2020-03-30T03:13:22Z)
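For the MMT-VQA multitask framework summarised under "Incorporating Probing Signals into Multimodal Machine Translation via Visual Question-Answering Pairs", a minimal sketch of how a translation loss and a VQA probing loss might be combined. The function name, the 0.5 weight, and the shared pad_id are assumptions, not details from that paper.
```python
import torch
import torch.nn.functional as F


def mmt_vqa_multitask_loss(translation_logits: torch.Tensor, target_ids: torch.Tensor,
                           vqa_logits: torch.Tensor, answer_ids: torch.Tensor,
                           pad_id: int, vqa_weight: float = 0.5) -> torch.Tensor:
    """Combine the usual translation cross-entropy with a cross-entropy over
    answers to VQA-style pairs generated from the source text; the answer
    loss acts as an explicit probing signal that rewards using the image."""
    mt_loss = F.cross_entropy(
        translation_logits.reshape(-1, translation_logits.size(-1)),
        target_ids.reshape(-1),
        ignore_index=pad_id,
    )
    vqa_loss = F.cross_entropy(
        vqa_logits.reshape(-1, vqa_logits.size(-1)),
        answer_ids.reshape(-1),
        ignore_index=pad_id,
    )
    return mt_loss + vqa_weight * vqa_loss
```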
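For "Machine Translation Customization via Automatic Training Data Selection from the Web", a hedged sketch of similarity-based selection: TF-IDF cosine similarity between candidate source sentences and a small sample of customer data stands in for whatever selection criterion the paper actually uses, and the function name and top_k default are illustrative.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def select_similar_pairs(candidate_pairs, customer_sentences, top_k=100_000):
    """Keep the top_k web-crawled (source, target) pairs whose source side is
    most similar to a small sample of the customer's in-domain text."""
    sources = [src for src, _ in candidate_pairs]
    vectorizer = TfidfVectorizer().fit(sources + list(customer_sentences))
    cand_vecs = vectorizer.transform(sources)
    # Crude domain profile: one TF-IDF vector over all customer sentences.
    domain_vec = vectorizer.transform([" ".join(customer_sentences)])
    scores = cosine_similarity(cand_vecs, domain_vec).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [candidate_pairs[i] for i in ranked]
```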