Multi-modal Data Spectrum: Multi-modal Datasets are Multi-dimensional
- URL: http://arxiv.org/abs/2509.23499v1
- Date: Sat, 27 Sep 2025 21:13:29 GMT
- Title: Multi-modal Data Spectrum: Multi-modal Datasets are Multi-dimensional
- Authors: Divyam Madaan, Varshan Muhunthan, Kyunghyun Cho, Sumit Chopra
- Abstract summary: We present a large-scale empirical study to quantify dependencies across 23 visual question-answering benchmarks using multi-modal large language models (MLLMs). Our findings show that the reliance on vision, question (text), and their interaction varies significantly, both across and within benchmarks. We discover that numerous benchmarks intended to mitigate text-only biases have inadvertently amplified image-only dependencies. This characterization persists across model sizes, as larger models often use these intra-modality dependencies to achieve high performance that masks an underlying lack of multi-modal reasoning.
- Score: 40.11148315577635
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Understanding the interplay between intra-modality dependencies (the contribution of an individual modality to a target task) and inter-modality dependencies (the relationships between modalities and the target task) is fundamental to advancing multi-modal learning. However, the nature of and interaction between these dependencies within current benchmark evaluations remains poorly characterized. In this work, we present a large-scale empirical study to quantify these dependencies across 23 visual question-answering benchmarks using multi-modal large language models (MLLMs) covering domains such as general and expert knowledge reasoning, optical character recognition, and document understanding. Our findings show that the reliance on vision, question (text), and their interaction varies significantly, both across and within benchmarks. We discover that numerous benchmarks intended to mitigate text-only biases have inadvertently amplified image-only dependencies. This characterization persists across model sizes, as larger models often use these intra-modality dependencies to achieve high performance that masks an underlying lack of multi-modal reasoning. We provide a quantitative characterization of multi-modal datasets, enabling a principled approach to multi-modal benchmark design and evaluation.
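As a minimal, hypothetical sketch of how such dependencies can be probed (the `model` call signature and the scoring below are illustrative assumptions, not the paper's actual protocol): evaluate a benchmark with both modalities, with the image only, and with the question only; the gap between the full-input accuracy and the best single-modality accuracy is one rough proxy for how much the benchmark truly demands multi-modal reasoning.

```python
# Hypothetical sketch: probing intra- vs. inter-modality dependencies on a
# VQA benchmark by ablating one modality at a time. `model` stands in for
# any MLLM inference call; it is an assumption, not the paper's API.
from typing import Callable, Iterable, Optional

def accuracy(model: Callable[[Optional[bytes], Optional[str]], str],
             examples: list[tuple[bytes, str, str]],
             use_image: bool, use_text: bool) -> float:
    """Benchmark accuracy with the deselected modalities masked out."""
    correct = 0
    for image, question, answer in examples:
        pred = model(image if use_image else None,
                     question if use_text else None)
        correct += int(pred.strip().lower() == answer.strip().lower())
    return correct / max(len(examples), 1)

def dependency_profile(model, examples) -> dict[str, float]:
    examples = list(examples)
    both = accuracy(model, examples, use_image=True, use_text=True)
    image_only = accuracy(model, examples, use_image=True, use_text=False)
    text_only = accuracy(model, examples, use_image=False, use_text=True)
    return {
        "both": both,
        "image_only": image_only,   # intra-modality (vision) dependence
        "text_only": text_only,     # intra-modality (text) dependence
        # crude proxy for genuine cross-modal reasoning:
        "interaction_gap": both - max(image_only, text_only),
    }
```

Under this reading, a benchmark where `image_only` or `text_only` already approaches `both` is dominated by intra-modality dependencies, which is exactly the failure mode the abstract describes.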
Related papers
- Structured and Abstractive Reasoning on Multi-modal Relational Knowledge Images [58.553448128258566]
This paper bridges the dual gaps in large-scale high-quality data and capability enhancement methodologies. We introduce STAR-64K, a dataset comprising 64K high-quality multi-modal instruction samples, and conduct experiments across 5 open-source MLLMs.
arXiv Detail & Related papers (2025-10-22T02:23:40Z) - Explaining multimodal LLMs via intra-modal token interactions [55.27436637894534]
Multimodal Large Language Models (MLLMs) have achieved remarkable success across diverse vision-language tasks, yet their internal decision-making mechanisms remain insufficiently understood. We propose enhancing interpretability by leveraging intra-modal token interactions.
arXiv Detail & Related papers (2025-09-26T14:39:13Z) - Diagnosing and Mitigating Modality Interference in Multimodal Large Language Models [26.005367102695317]
Multimodal Large Language Models can exhibit difficulty in distinguishing task-relevant from irrelevant signals. We show that spurious information from irrelevant modalities often leads to significant performance degradation. We propose a novel framework for finetuning MLLMs, including perturbation-based data augmentation with both heuristic and adversarial perturbations.
arXiv Detail & Related papers (2025-05-26T07:31:32Z) - The Multimodal Paradox: How Added and Missing Modalities Shape Bias and Performance in Multimodal AI [0.0]
Multimodal learning has proven superior to unimodal counterparts in high-stakes decision-making. While performance gains remain the gold standard for evaluating multimodal systems, concerns around bias and robustness are frequently overlooked.
arXiv Detail & Related papers (2025-05-05T20:42:44Z) - Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning [42.16496299814368]
We argue that conventional approaches that rely solely on either inter- or intra-modality dependencies may not be optimal in general. We propose the inter- & intra-modality modeling (I2M2) framework, which captures and integrates both inter- and intra-modality dependencies. We evaluate our approach on real-world healthcare and vision-and-language datasets with state-of-the-art models.
arXiv Detail & Related papers (2024-05-27T19:22:41Z) - Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications [90.6849884683226]
We study the challenge of interaction quantification in a semi-supervised setting with only labeled unimodal data.
Using a precise information-theoretic definition of interactions, our key contribution is the derivation of lower and upper bounds.
We show how these theoretical results can be used to estimate multimodal model performance, guide data collection, and select appropriate multimodal models for various tasks.
arXiv Detail & Related papers (2023-06-07T15:44:53Z) - Robustness of Fusion-based Multimodal Classifiers to Cross-Modal Content Dilutions [27.983902791798965]
We develop a model that generates dilution text that maintains relevance and topical coherence with the image and existing text.
We find that the performance of task-specific fusion-based multimodal classifiers drops by 23.3% and 22.5%, respectively, on the two evaluated tasks in the presence of dilutions generated by our model.
Our work aims to highlight and encourage further research on the robustness of deep multimodal models to realistic variations.
arXiv Detail & Related papers (2022-11-04T17:58:02Z) - Multi-modal Contrastive Representation Learning for Entity Alignment [57.92705405276161]
Multi-modal entity alignment aims to identify equivalent entities between two different multi-modal knowledge graphs.
We propose MCLEA, a Multi-modal Contrastive Learning based Entity Alignment model.
In particular, MCLEA first learns individual representations from each modality, then performs contrastive learning to jointly model intra-modal and inter-modal interactions (a minimal sketch of this contrastive pattern appears after this list).
arXiv Detail & Related papers (2022-09-02T08:59:57Z) - High-Modality Multimodal Transformer: Quantifying Modality & Interaction Heterogeneity for High-Modality Representation Learning [112.51498431119616]
This paper studies efficient representation learning for high-modality scenarios involving a large set of diverse modalities.
A single model, HighMMT, scales up to 10 modalities (text, image, audio, video, sensors, proprioception, speech, time-series, sets, and tables) and 15 tasks from 5 research areas.
arXiv Detail & Related papers (2022-03-02T18:56:20Z) - Relating by Contrasting: A Data-efficient Framework for Multimodal Generative Models [86.9292779620645]
We develop a contrastive framework for generative model learning, allowing the model to be trained not just on the commonality between modalities, but also on the distinction between "related" and "unrelated" multimodal data.
Under our proposed framework, the generative model can accurately distinguish related samples from unrelated ones, making it possible to exploit plentiful unlabeled, unpaired multimodal data.
arXiv Detail & Related papers (2020-07-02T15:08:11Z)
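For the contrastive entries above (MCLEA and the data-efficient generative framework), a minimal InfoNCE-style sketch illustrates the shared pattern of pulling paired cross-modal embeddings together while pushing unpaired ones apart; the embedding dimensions and temperature below are illustrative assumptions, not either paper's implementation.

```python
# Illustrative InfoNCE-style cross-modal contrastive loss, the common core
# behind methods like MCLEA. Inputs and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def cross_modal_info_nce(img_emb: torch.Tensor,
                         txt_emb: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """img_emb, txt_emb: (batch, dim) embeddings of paired samples,
    where row i of each tensor describes the same underlying entity."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature        # (batch, batch) similarities
    targets = torch.arange(img.size(0), device=img.device)
    # Symmetric loss: match image->text and text->image. Diagonal entries
    # act as "related" pairs, off-diagonal entries as "unrelated" negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with random stand-in embeddings:
loss = cross_modal_info_nce(torch.randn(8, 256), torch.randn(8, 256))
```

Because every off-diagonal pair in the batch serves as an "unrelated" negative, objectives of this shape can exploit unpaired multimodal data rather than requiring explicit negative labels.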
This list is automatically generated from the titles and abstracts of the papers in this site.