Incorporating Probing Signals into Multimodal Machine Translation via
Visual Question-Answering Pairs
- URL: http://arxiv.org/abs/2310.17133v1
- Date: Thu, 26 Oct 2023 04:13:49 GMT
- Title: Incorporating Probing Signals into Multimodal Machine Translation via
Visual Question-Answering Pairs
- Authors: Yuxin Zuo, Bei Li, Chuanhao Lv, Tong Zheng, Tong Xiao, Jingbo Zhu
- Abstract summary: Multimodal machine translation (MMT) systems exhibit decreased sensitivity to visual information when text inputs are complete.
A novel approach is proposed to generate parallel Visual Question-Answering (VQA) style pairs from the source text.
An MMT-VQA multitask learning framework is introduced to incorporate explicit probing signals from the dataset into the MMT training process.
- Score: 45.41083125321069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents an in-depth study of multimodal machine translation
(MMT), examining the prevailing understanding that MMT systems exhibit
decreased sensitivity to visual information when text inputs are complete.
Instead, we attribute this phenomenon to insufficient cross-modal interaction,
rather than image information redundancy. A novel approach is proposed to
generate parallel Visual Question-Answering (VQA) style pairs from the source
text, fostering more robust cross-modal interaction. Using Large Language
Models (LLMs), we explicitly model the probing signal in MMT to convert it into
VQA-style data to create the Multi30K-VQA dataset. An MMT-VQA multitask
learning framework is introduced to incorporate explicit probing signals from
the dataset into the MMT training process. Experimental results on two
widely-used benchmarks demonstrate the effectiveness of this novel approach.
Our code and data are available at:
\url{https://github.com/libeineu/MMT-VQA}.
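
For orientation, below is a minimal sketch of how an MMT-VQA style multitask objective could be wired up in PyTorch. It is an illustration of the idea described in the abstract (a shared encoder over text and image features, a translation branch, and a VQA probing branch trained jointly), not the authors' released implementation; the module name, feature sizes, pooling choice, and the weight alpha are all assumptions. See the repository linked above for the actual code.

```python
# Hypothetical MMT-VQA multitask sketch: L = L_MT + alpha * L_VQA.
import torch
import torch.nn as nn

class MMTVQASketch(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, n_answers=1000, img_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.img_proj = nn.Linear(img_dim, d_model)  # project image region features
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.mt_head = nn.Linear(d_model, vocab_size)   # translation logits
        self.vqa_head = nn.Linear(d_model, n_answers)   # answer classification

    def forward(self, src_tokens, img_feats, tgt_tokens, question_tokens):
        # Shared encoder over concatenated text and image representations.
        src = torch.cat([self.embed(src_tokens), self.img_proj(img_feats)], dim=1)
        memory = self.encoder(src)
        # Translation branch: causal decoding over the target tokens.
        tgt_emb = self.embed(tgt_tokens)
        size = tgt_emb.size(1)
        causal = torch.triu(
            torch.full((size, size), float("-inf"), device=tgt_emb.device), diagonal=1
        )
        mt_logits = self.mt_head(self.decoder(tgt_emb, memory, tgt_mask=causal))
        # VQA branch: attend the probing question over the same memory and
        # pool it into one vector for answer classification.
        q_states = self.decoder(self.embed(question_tokens), memory)
        vqa_logits = self.vqa_head(q_states.mean(dim=1))
        return mt_logits, vqa_logits

def joint_loss(mt_logits, tgt_tokens, vqa_logits, answer_ids, alpha=0.5):
    # Weighted sum of the two objectives; target shifting is omitted for brevity.
    ce = nn.CrossEntropyLoss()
    l_mt = ce(mt_logits.reshape(-1, mt_logits.size(-1)), tgt_tokens.reshape(-1))
    l_vqa = ce(vqa_logits, answer_ids)
    return l_mt + alpha * l_vqa
```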
Related papers
- Multimodality Helps Few-Shot 3D Point Cloud Semantic Segmentation [61.91492500828508]
Few-shot 3D point cloud segmentation (FS-PCS) aims at generalizing models to segment novel categories with minimal support samples.
We introduce a cost-free multimodal FS-PCS setup, utilizing textual labels and the potentially available 2D image modality.
We propose a simple yet effective Test-time Adaptive Cross-modal Seg (TACC) technique to mitigate training bias.
arXiv Detail & Related papers (2024-10-29T19:28:41Z)
- Towards Zero-Shot Multimodal Machine Translation [64.9141931372384]
We propose a method to bypass the need for fully supervised data to train multimodal machine translation systems.
Our method, called ZeroMMT, adapts a strong text-only machine translation (MT) model by training it on a mixture of two objectives.
To prove that our method generalizes to languages with no fully supervised training data available, we extend the CoMMuTE evaluation dataset to three new languages: Arabic, Russian and Chinese.
arXiv Detail & Related papers (2024-07-18T15:20:31Z)
- 3AM: An Ambiguity-Aware Multi-Modal Machine Translation Dataset [90.95948101052073]
We introduce 3AM, an ambiguity-aware MMT dataset comprising 26,000 parallel sentence pairs in English and Chinese.
Our dataset is specifically designed to include more ambiguity and a greater variety of both captions and images than other MMT datasets.
Experimental results show that MMT models trained on our dataset exhibit a greater ability to exploit visual information than those trained on other MMT datasets.
arXiv Detail & Related papers (2024-04-29T04:01:30Z)
- Cross-Modal Multi-Tasking for Speech-to-Text Translation via Hard Parameter Sharing [72.56219471145232]
We propose a ST/MT multi-tasking framework with hard parameter sharing.
Our method reduces the speech-text modality gap via a pre-processing stage.
We show that our framework improves attentional encoder-decoder, Connectionist Temporal Classification (CTC), transducer, and joint CTC/attention models by an average of +0.5 BLEU.
arXiv Detail & Related papers (2023-09-27T17:48:14Z)
- Beyond Triplet: Leveraging the Most Data for Multimodal Machine Translation [53.342921374639346]
Multimodal machine translation aims to improve translation quality by incorporating information from other modalities, such as vision.
Previous MMT systems mainly focus on better access and use of visual information and tend to validate their methods on image-related datasets.
This paper establishes new methods and new datasets for MMT.
arXiv Detail & Related papers (2022-12-20T15:02:38Z)
- Neural Machine Translation with Phrase-Level Universal Visual Representations [11.13240570688547]
We propose a phrase-level retrieval-based method for MMT to get visual information for the source input from existing sentence-image data sets.
Our method performs retrieval at the phrase level and hence learns visual information from pairs of source phrase and grounded region.
Experiments show that the proposed method significantly outperforms strong baselines on multiple MMT datasets.
arXiv Detail & Related papers (2022-03-19T11:21:13Z)
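
The last entry above describes retrieving visual context at the phrase level from existing sentence-image data. Below is a toy sketch of that general idea under simple assumptions (cosine similarity over precomputed phrase embeddings, NumPy arrays standing in for region features); the class name PhraseVisualIndex and all dimensions are hypothetical and not taken from that paper's code.

```python
# Toy phrase-level visual retrieval: index (phrase embedding, region feature)
# pairs offline, then look up visual features for source phrases at run time.
import numpy as np

class PhraseVisualIndex:
    def __init__(self):
        self.phrase_vecs = []   # unit-normalized text embeddings of indexed phrases
        self.region_feats = []  # visual features of the grounded regions

    def add(self, phrase_vec, region_feat):
        self.phrase_vecs.append(phrase_vec / np.linalg.norm(phrase_vec))
        self.region_feats.append(region_feat)

    def retrieve(self, query_vec, top_k=1):
        # Cosine similarity between the query phrase and every indexed phrase.
        q = query_vec / np.linalg.norm(query_vec)
        sims = np.stack(self.phrase_vecs) @ q
        best = np.argsort(-sims)[:top_k]
        return [self.region_feats[i] for i in best]

# Usage with random placeholders for the phrase embedding and region feature.
index = PhraseVisualIndex()
index.add(np.random.rand(512), np.random.rand(2048))
visual_context = index.retrieve(np.random.rand(512), top_k=1)
```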