Robustness of Fusion-based Multimodal Classifiers to Cross-Modal Content
Dilutions
- URL: http://arxiv.org/abs/2211.02646v1
- Date: Fri, 4 Nov 2022 17:58:02 GMT
- Title: Robustness of Fusion-based Multimodal Classifiers to Cross-Modal Content
Dilutions
- Authors: Gaurav Verma, Vishwa Vinay, Ryan A. Rossi, Srijan Kumar
- Abstract summary: We develop a model that generates dilution text that maintains relevance and topical coherence with the image and existing text.
We find that the performance of task-specific fusion-based multimodal classifiers drops by 23.3% and 22.5% on the Crisis Humanitarianism and Sentiment Detection tasks, respectively, in the presence of dilutions generated by our model.
Our work aims to highlight and encourage further research on the robustness of deep multimodal models to realistic variations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As multimodal learning finds applications in a wide variety of high-stakes
societal tasks, investigating the robustness of these models becomes important. Existing work
has focused on understanding the robustness of vision-and-language models to
imperceptible variations on benchmark tasks. In this work, we investigate the
robustness of multimodal classifiers to cross-modal dilutions - a plausible
variation. We develop a model that, given a multimodal (image + text) input,
generates additional dilution text that (a) maintains relevance and topical
coherence with the image and existing text, and (b) when added to the original
text, leads to misclassification of the multimodal input. Via experiments on
Crisis Humanitarianism and Sentiment Detection tasks, we find that the
performance of task-specific fusion-based multimodal classifiers drops by 23.3%
and 22.5%, respectively, in the presence of dilutions generated by our model.
Metric-based comparisons with several baselines and human evaluations indicate
that our dilutions show higher relevance and topical coherence, while
simultaneously being more effective at demonstrating the brittleness of the
multimodal classifiers. Our work aims to highlight and encourage further
research on the robustness of deep multimodal models to realistic variations,
especially in human-facing societal applications. The code and other resources
are available at https://claws-lab.github.io/multimodal-robustness/.
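The evaluation protocol the abstract describes — append generated dilution text to the original text modality and measure the resulting performance drop — can be sketched as follows. This is an illustrative toy, not the authors' implementation: the classifier, data, and dilution generators below are hypothetical stand-ins.

```python
# Illustrative sketch of robustness evaluation under cross-modal text
# dilution. All names here (toy_clf, toy_data, toy_gen) are hypothetical
# stand-ins, not part of the paper's released code.

def accuracy(classifier, samples):
    """Fraction of (image, text, label) samples classified correctly."""
    correct = sum(1 for img, txt, y in samples if classifier(img, txt) == y)
    return correct / len(samples)

def dilute(samples, generate_dilution):
    """Append generated dilution text to each sample's text modality."""
    return [(img, txt + " " + generate_dilution(img, txt), y)
            for img, txt, y in samples]

def relative_drop(classifier, samples, generate_dilution):
    """Relative performance drop (%) caused by the dilution text."""
    clean = accuracy(classifier, samples)
    diluted = accuracy(classifier, dilute(samples, generate_dilution))
    return 100.0 * (clean - diluted) / clean

# Toy demo: a "classifier" that keys on the word "flood" in the text.
toy_clf = lambda img, txt: 1 if "flood" in txt else 0
toy_data = [(None, "flood damage reported", 1),
            (None, "sunny day at the park", 0)]

# Benign filler leaves the prediction unchanged...
benign_gen = lambda img, txt: "people gathered downtown"
print(relative_drop(toy_clf, toy_data, benign_gen))   # 0.0

# ...while fluent, topically plausible dilution can flip it.
fooling_gen = lambda img, txt: "a flood of replies online"
print(relative_drop(toy_clf, toy_data, fooling_gen))  # 50.0
```

A real evaluation would substitute a trained fusion-based classifier and the paper's learned dilution generator; the point of the sketch is only that the attack leaves the image untouched and perturbs the text modality additively.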