MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models
- URL: http://arxiv.org/abs/2503.14827v1
- Date: Wed, 19 Mar 2025 01:59:44 GMT
- Title: MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models
- Authors: Chejian Xu, Jiawei Zhang, Zhaorun Chen, Chulin Xie, Mintong Kang, Yujin Potter, Zhun Wang, Zhuowen Yuan, Alexander Xiong, Zidi Xiong, Chenhui Zhang, Lingzhi Yuan, Yi Zeng, Peiyang Xu, Chengquan Guo, Andy Zhou, Jeffrey Ziwei Tan, Xuandong Zhao, Francesco Pinto, Zhen Xiang, Yu Gai, Zinan Lin, Dan Hendrycks, Bo Li, Dawn Song
- Abstract summary: Multimodal foundation models (MMFMs) play a crucial role in various applications, including autonomous driving, healthcare, and virtual assistants. Existing benchmarks on multimodal models either predominantly assess the helpfulness of these models, or only focus on limited perspectives such as fairness and privacy. We present the first unified platform, MMDT (Multimodal DecodingTrust), designed to provide a comprehensive safety and trustworthiness evaluation for MMFMs.
- Score: 101.70140132374307
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multimodal foundation models (MMFMs) play a crucial role in various applications, including autonomous driving, healthcare, and virtual assistants. However, several studies have revealed vulnerabilities in these models, such as generating unsafe content by text-to-image models. Existing benchmarks on multimodal models either predominantly assess the helpfulness of these models, or only focus on limited perspectives such as fairness and privacy. In this paper, we present the first unified platform, MMDT (Multimodal DecodingTrust), designed to provide a comprehensive safety and trustworthiness evaluation for MMFMs. Our platform assesses models from multiple perspectives, including safety, hallucination, fairness/bias, privacy, adversarial robustness, and out-of-distribution (OOD) generalization. We have designed various evaluation scenarios and red teaming algorithms under different tasks for each perspective to generate challenging data, forming a high-quality benchmark. We evaluate a range of multimodal models using MMDT, and our findings reveal a series of vulnerabilities and areas for improvement across these perspectives. This work introduces the first comprehensive and unique safety and trustworthiness evaluation platform for MMFMs, paving the way for developing safer and more reliable MMFMs and systems. Our platform and benchmark are available at https://mmdecodingtrust.github.io/.
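To make the workflow concrete, the sketch below shows how such a multi-perspective evaluation harness might be structured. The perspective names follow the abstract; `Example`, `evaluate_model`, and the pass criterion are illustrative assumptions, not MMDT's actual API.

```python
# Hypothetical sketch of a multi-perspective evaluation loop in the spirit of
# MMDT. The perspective list comes from the abstract; everything else
# (function names, data format, scoring) is an illustrative assumption.
from dataclasses import dataclass
from typing import Callable

PERSPECTIVES = [
    "safety", "hallucination", "fairness", "privacy",
    "adversarial_robustness", "ood_generalization",
]

@dataclass
class Example:
    prompt: str          # text prompt (and/or image reference) fed to the model
    perspective: str     # which trustworthiness perspective this example probes

def evaluate_model(model: Callable[[str], str],
                   benchmark: list[Example]) -> dict[str, float]:
    """Run the model on every example and report a pass rate per perspective."""
    totals: dict[str, int] = {p: 0 for p in PERSPECTIVES}
    passes: dict[str, int] = {p: 0 for p in PERSPECTIVES}
    for ex in benchmark:
        output = model(ex.prompt)
        totals[ex.perspective] += 1
        # A real platform would use perspective-specific judges (e.g., an
        # unsafe-content classifier for safety); a substring check is a stand-in.
        if "refused" in output.lower():
            passes[ex.perspective] += 1
    return {p: passes[p] / totals[p] for p in PERSPECTIVES if totals[p]}

# Toy usage with a stub model that refuses everything.
if __name__ == "__main__":
    bench = [Example("draw X", "safety"), Example("who is Y?", "privacy")]
    print(evaluate_model(lambda p: "Refused.", bench))
```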
Related papers
- aiXamine: Simplified LLM Safety and Security [7.933485586826888]
We present aiXamine, a comprehensive black-box evaluation platform for safety and security.
aiXamine integrates over 40 tests (i.e., benchmarks) organized into eight key services, each targeting a specific dimension of safety and security.
The platform aggregates the evaluation results into a single detailed report per model, providing a breakdown of model performance, test examples, and rich visualizations.
arXiv Detail & Related papers (2025-04-21T09:26:05Z)
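As a rough picture of the per-model report aggregation described above, here is a minimal sketch; the service names, result schema, and `build_report` helper are invented for illustration and are not aiXamine's actual format.

```python
# Hypothetical aggregation of individual test results into one report per
# model, loosely mirroring aiXamine's described output; the schema is assumed.
from collections import defaultdict
from statistics import mean

def build_report(model_name: str, results: list[dict]) -> dict:
    """results: [{"service": ..., "test": ..., "score": float}, ...]"""
    by_service: dict[str, list[float]] = defaultdict(list)
    for r in results:
        by_service[r["service"]].append(r["score"])
    return {
        "model": model_name,
        "overall": mean(s for scores in by_service.values() for s in scores),
        "per_service": {svc: mean(scores) for svc, scores in by_service.items()},
    }

print(build_report("demo-model", [
    {"service": "jailbreak", "test": "t1", "score": 0.8},
    {"service": "toxicity", "test": "t2", "score": 0.6},
]))
```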
- Benchmarking Multi-modal Semantic Segmentation under Sensor Failures: Missing and Noisy Modality Robustness [61.87055159919641]
Multi-modal semantic segmentation (MMSS) addresses the limitations of single-modality data by integrating complementary information across modalities.
Despite notable progress, a significant gap persists between research and real-world deployment due to variability and uncertainty in multi-modal data quality.
We introduce a robustness benchmark that evaluates MMSS models under three scenarios: Entire-Missing Modality (EMM), Random-Missing Modality (RMM), and Noisy Modality (NM).
arXiv Detail & Related papers (2025-03-24T08:46:52Z)
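The three scenarios are easy to make concrete: assuming each sample is a dict of per-modality arrays, a benchmark in this spirit might corrupt inputs roughly as below. Zero-filling for missing modalities and Gaussian noise are assumptions, not the paper's exact protocol.

```python
# Illustrative simulation of the three robustness scenarios described above:
# Entire-Missing Modality (EMM), Random-Missing Modality (RMM), and Noisy
# Modality (NM). Zero-filling and Gaussian noise are simplifying assumptions.
import random
import numpy as np

def corrupt(sample: dict[str, np.ndarray], scenario: str,
            drop: str = "depth", p: float = 0.5, sigma: float = 0.1):
    out = dict(sample)
    if scenario == "EMM":
        # One designated modality is missing for the entire evaluation.
        out[drop] = np.zeros_like(out[drop])
    elif scenario == "RMM":
        # Each modality is independently dropped with probability p.
        for name in out:
            if random.random() < p:
                out[name] = np.zeros_like(out[name])
    elif scenario == "NM":
        # All modalities remain but are perturbed with Gaussian noise.
        for name in out:
            out[name] = out[name] + sigma * np.random.randn(*out[name].shape)
    return out

sample = {"rgb": np.ones((3, 4, 4)), "depth": np.ones((1, 4, 4))}
print(corrupt(sample, "EMM")["depth"].sum())  # 0.0
```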
- SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach [58.93030774141753]
Multimodal foundation models (MFMs) represent a significant advancement in artificial intelligence.
This paper conceptualizes cybersafety and cybersecurity in the context of multimodal learning.
We present a comprehensive Systematization of Knowledge (SoK) to unify these concepts in MFMs, identifying key threats.
arXiv Detail & Related papers (2024-11-17T23:06:20Z)
- VHELM: A Holistic Evaluation of Vision Language Models [75.88987277686914]
We present the Holistic Evaluation of Vision Language Models (VHELM).
VHELM aggregates various datasets to cover one or more of the 9 aspects: visual perception, knowledge, reasoning, bias, fairness, multilinguality, robustness, toxicity, and safety.
Our framework is designed to be lightweight and automatic so that evaluation runs are cheap and fast.
arXiv Detail & Related papers (2024-10-09T17:46:34Z)
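One way to read "aggregates various datasets to cover one or more of the 9 aspects" is as a declarative aspect-to-dataset manifest; the sketch below is purely illustrative, and the dataset names are placeholders rather than VHELM's actual components.

```python
# Hypothetical aspect-to-dataset manifest in the spirit of VHELM's design;
# the dataset names are placeholders, not the benchmark's real components.
ASPECTS = {
    "visual_perception": ["placeholder_vqa"],
    "knowledge":         ["placeholder_knowledge_qa"],
    "reasoning":         ["placeholder_chart_qa"],
    "bias":              ["placeholder_bias_probe"],
    "fairness":          ["placeholder_fairness_set"],
    "multilinguality":   ["placeholder_multilingual_vqa"],
    "robustness":        ["placeholder_corrupted_images"],
    "toxicity":          ["placeholder_toxicity_prompts"],
    "safety":            ["placeholder_unsafe_requests"],
}

def coverage(dataset: str) -> list[str]:
    """Which aspects a given dataset contributes to (one or more)."""
    return [a for a, ds in ASPECTS.items() if dataset in ds]

print(coverage("placeholder_vqa"))  # ['visual_perception']
```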
- MultiTrust: A Comprehensive Benchmark Towards Trustworthy Multimodal Large Language Models [51.19622266249408]
MultiTrust is the first comprehensive and unified benchmark on the trustworthiness of MLLMs.
Our benchmark employs a rigorous evaluation strategy that addresses both multimodal risks and cross-modal impacts.
Extensive experiments with 21 modern MLLMs reveal some previously unexplored trustworthiness issues and risks.
arXiv Detail & Related papers (2024-06-11T08:38:13Z)
- MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models [34.802736332993994]
We propose MMCert, the first certified defense against adversarial attacks on multi-modal models.
We evaluate our MMCert using two benchmark datasets: one for the multi-modal road segmentation task and the other for the multi-modal emotion recognition task.
arXiv Detail & Related papers (2024-03-28T01:05:06Z)
- COMMIT: Certifying Robustness of Multi-Sensor Fusion Systems against Semantic Attacks [24.37030085306459]
We propose COMMIT, the first robustness certification framework to certify the robustness of multi-sensor fusion systems against semantic attacks.
In particular, we propose a practical anisotropic noise mechanism that leverages randomized smoothing with multi-modal data.
We show that the certification for MSF models is up to 48.39% higher than that of single-modal models, which validates the advantage of MSF models.
arXiv Detail & Related papers (2024-03-04T18:57:11Z)
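For intuition, anisotropic randomized smoothing over multi-modal inputs can be pictured as a Monte Carlo majority vote with a separate noise scale per modality. The sketch below shows only that sampling side; the certified-bound derivation is omitted, and all names are illustrative rather than COMMIT's code.

```python
# Schematic anisotropic randomized smoothing for a two-modality classifier:
# each modality gets its own noise scale, and the smoothed prediction is a
# majority vote over noisy copies. The certified-radius math from the paper
# is omitted; this only illustrates the sampling side.
from collections import Counter
import numpy as np

def smoothed_predict(classifier, camera: np.ndarray, lidar: np.ndarray,
                     sigma_cam: float = 0.25, sigma_lidar: float = 0.05,
                     n_samples: int = 100, seed: int = 0) -> int:
    rng = np.random.default_rng(seed)
    votes = Counter()
    for _ in range(n_samples):
        cam_noisy = camera + sigma_cam * rng.standard_normal(camera.shape)
        lid_noisy = lidar + sigma_lidar * rng.standard_normal(lidar.shape)
        votes[classifier(cam_noisy, lid_noisy)] += 1
    return votes.most_common(1)[0][0]

# Toy fusion "classifier": thresholds the mean of both modalities.
toy = lambda c, l: int(c.mean() + l.mean() > 1.0)
print(smoothed_predict(toy, np.ones((2, 2)), np.ones((2, 2))))  # 1
```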
- FM-ViT: Flexible Modal Vision Transformers for Face Anti-Spoofing [88.6654909354382]
We present a pure transformer-based framework, dubbed the Flexible Modal Vision Transformer (FM-ViT), for face anti-spoofing.
FM-ViT can flexibly target any single-modal (i.e., RGB) attack scenarios with the help of available multi-modal data.
Experiments demonstrate that a single model trained with FM-ViT can not only flexibly handle samples from different modalities but also outperforms existing single-modal frameworks by a large margin.
arXiv Detail & Related papers (2023-05-05T04:28:48Z)
- MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks [20.902155496422417]
Vision and language models often exploit non-robust indicators in individual modalities instead of focusing on the relevant information in each modality.
We propose MM-SHAP, a performance-agnostic multimodality score based on Shapley values.
arXiv Detail & Related papers (2022-12-15T21:41:06Z)
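Treating the two modalities as players in a cooperative game, the Shapley value has a closed form, and normalizing the per-modality contributions yields a degree-of-multimodality score. The sketch below illustrates that idea at whole-modality granularity, a simplification of the paper's token-level masking.

```python
# Two-player Shapley decomposition of a model score into text and image
# contributions, in the spirit of MM-SHAP. The paper works at token level;
# masking whole modalities here is a simplifying assumption.
def shapley_two_players(value) -> dict[str, float]:
    """value(use_text: bool, use_image: bool) -> model score."""
    v = {s: value("text" in s, "image" in s)
         for s in [(), ("text",), ("image",), ("text", "image")]}
    # Average marginal contribution over both orderings of the two players.
    phi_text = 0.5 * ((v[("text",)] - v[()]) +
                      (v[("text", "image")] - v[("image",)]))
    phi_image = 0.5 * ((v[("image",)] - v[()]) +
                       (v[("text", "image")] - v[("text",)]))
    total = abs(phi_text) + abs(phi_image)
    return {"text_share": abs(phi_text) / total,
            "image_share": abs(phi_image) / total}

# Toy value function where text contributes twice as much as the image.
print(shapley_two_players(lambda t, i: 2.0 * t + 1.0 * i))
# -> {'text_share': 0.666..., 'image_share': 0.333...}
```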