WisdoM: Improving Multimodal Sentiment Analysis by Fusing Contextual World Knowledge
- URL: http://arxiv.org/abs/2401.06659v2
- Date: Tue, 20 Feb 2024 09:28:50 GMT
- Title: WisdoM: Improving Multimodal Sentiment Analysis by Fusing Contextual World Knowledge
- Authors: Wenbin Wang, Liang Ding, Li Shen, Yong Luo, Han Hu, Dacheng Tao
- Abstract summary: We propose a plug-in framework named WisdoM to leverage contextual world knowledge induced from large vision-language models (LVLMs) for enhanced multimodal sentiment analysis.
We show that our approach yields substantial improvements over several state-of-the-art methods.
- Score: 73.76722241704488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sentiment analysis is rapidly advancing by utilizing various data modalities (e.g., text, image). However, most previous works relied on superficial information, neglecting the incorporation of contextual world knowledge (e.g., background information derived from but beyond the given image and text pairs), thereby restricting their ability to achieve better multimodal sentiment analysis (MSA). In this paper, we propose a plug-in framework named WisdoM to leverage the contextual world knowledge induced from large vision-language models (LVLMs) for enhanced MSA. WisdoM utilizes LVLMs to comprehensively analyze both images and corresponding texts, simultaneously generating pertinent context. To reduce the noise in the context, we also introduce a training-free contextual fusion mechanism. Experiments across diverse granularities of MSA tasks consistently demonstrate that our approach brings substantial improvements (an average +1.96% F1 score across five advanced methods) over several state-of-the-art methods.
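To make the plug-in idea concrete, below is a minimal Python sketch of how an LVLM-generated context could be folded into an existing sentiment classifier. The helper callables `generate_context` and `classify`, as well as the single fusion weight `alpha`, are assumptions for illustration only; the simple convex combination stands in for WisdoM's training-free, noise-aware fusion mechanism, which is not detailed in this summary.

```python
# Minimal sketch of a WisdoM-style plug-in step (hypothetical helper names; the
# paper's exact prompting and fusion rule are not reproduced in this summary).
from typing import Callable

import numpy as np


def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()


def predict_with_context(
    text: str,
    image_path: str,
    generate_context: Callable[[str, str], str],  # LVLM call: (image, text) -> context paragraph
    classify: Callable[[str, str], np.ndarray],   # any MSA backbone: (text, image) -> sentiment logits
    alpha: float = 0.5,                           # assumed fusion weight (no training involved)
) -> int:
    """Blend sentiment predictions made with and without LVLM-generated context."""
    context = generate_context(image_path, text)

    p_plain = softmax(classify(text, image_path))
    p_context = softmax(classify(context + "\n" + text, image_path))

    # Training-free fusion: a simple convex combination of the two distributions.
    # WisdoM's actual mechanism is noise-aware; this line is only illustrative.
    p_final = (1.0 - alpha) * p_plain + alpha * p_context
    return int(p_final.argmax())
```

Any LVLM can be dropped in for `generate_context` and any existing sentiment model for `classify`, which is the sense in which such a framework acts as a plug-in rather than a new architecture.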
Related papers
- Leveraging Entity Information for Cross-Modality Correlation Learning: The Entity-Guided Multimodal Summarization [49.08348604716746]
Multimodal Summarization with Multimodal Output (MSMO) aims to produce a multimodal summary that integrates both text and relevant images.
In this paper, we propose an Entity-Guided Multimodal Summarization model (EGMS).
Our model, building on BART, utilizes dual multimodal encoders with shared weights to process text-image and entity-image information concurrently.
arXiv Detail & Related papers (2024-08-06T12:45:56Z)
- mTREE: Multi-Level Text-Guided Representation End-to-End Learning for Whole Slide Image Analysis [16.472295458683696]
Multi-modal learning adeptly integrates visual and textual data, but its application to histopathology image and text analysis remains challenging.
We introduce Multi-Level Text-Guided Representation End-to-End Learning (mTREE).
This novel text-guided approach effectively captures multi-scale Whole Slide Images (WSIs) by utilizing accompanying textual pathology information.
arXiv Detail & Related papers (2024-05-28T04:47:44Z)
- TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding [91.30065932213758]
Large Multimodal Models (LMMs) have sparked a surge in research aimed at harnessing their remarkable reasoning abilities.
We propose TextCoT, a novel Chain-of-Thought framework for text-rich image understanding.
Our method is free of extra training, offering immediate plug-and-play functionality.
arXiv Detail & Related papers (2024-04-15T13:54:35Z)
- TCAN: Text-oriented Cross Attention Network for Multimodal Sentiment Analysis [34.28164104577455]
Multimodal Sentiment Analysis (MSA) endeavors to understand human sentiment by leveraging language, visual, and acoustic modalities.
Past research predominantly focused on improving representation learning techniques and feature fusion strategies.
We introduce a Text-oriented Cross-Attention Network (TCAN) emphasizing the predominant role of the text modality in MSA.
arXiv Detail & Related papers (2024-04-06T07:56:09Z)
- Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models [52.24001776263608]
This comprehensive survey delves into recent strides in hate speech (HS) moderation.
We highlight the burgeoning role of large language models (LLMs) and large multimodal models (LMMs).
We identify existing gaps in research, particularly in the context of underrepresented languages and cultures.
arXiv Detail & Related papers (2024-01-30T03:51:44Z)
- From Text to Pixels: A Context-Aware Semantic Synergy Solution for Infrared and Visible Image Fusion [66.33467192279514]
We introduce a text-guided multi-modality image fusion method that leverages the high-level semantics from textual descriptions to integrate semantics from infrared and visible images.
Our method not only produces visually superior fusion results but also achieves a higher detection mAP than existing methods, setting a new state of the art.
arXiv Detail & Related papers (2023-12-31T08:13:47Z)
- Exploring Multi-Modal Contextual Knowledge for Open-Vocabulary Object Detection [72.36017150922504]
We propose a multi-modal contextual knowledge distillation framework, MMC-Det, to transfer the learned contextual knowledge from a teacher fusion transformer to a student detector.
Diverse multi-modal masked language modeling is realized by imposing an object divergence constraint on traditional multi-modal masked language modeling (MLM).
arXiv Detail & Related papers (2023-08-30T08:33:13Z)
- A Novel Context-Aware Multimodal Framework for Persian Sentiment Analysis [19.783517380422854]
We present a first-of-its-kind Persian multimodal dataset comprising more than 800 utterances.
We present a novel context-aware multimodal sentiment analysis framework.
We employ both decision-level (late) and feature-level (early) fusion methods to integrate affective cross-modal information (a generic sketch of both fusion styles follows this list).
arXiv Detail & Related papers (2021-03-03T19:09:01Z)
- An AutoML-based Approach to Multimodal Image Sentiment Analysis [1.0499611180329804]
We propose a method that combines individual textual and image sentiment analyses into a final fused classification based on AutoML.
Our method achieved state-of-the-art performance in the B-T4SA dataset, with 95.19% accuracy.
arXiv Detail & Related papers (2021-02-16T11:28:50Z)
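The Persian MSA entry above contrasts decision-level (late) and feature-level (early) fusion. The sketch below, referenced from that entry, illustrates the difference with hypothetical toy feature and logit arrays rather than any cited framework's actual models.

```python
# Generic sketch of early vs. late fusion (not the cited framework's code).
import numpy as np


def early_fusion(text_feat: np.ndarray, image_feat: np.ndarray,
                 w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Feature-level (early) fusion: concatenate modality features, classify once."""
    joint = np.concatenate([text_feat, image_feat])
    return joint @ w + b  # logits over sentiment classes


def late_fusion(text_logits: np.ndarray, image_logits: np.ndarray,
                text_weight: float = 0.5) -> np.ndarray:
    """Decision-level (late) fusion: classify each modality separately, then blend."""
    return text_weight * text_logits + (1.0 - text_weight) * image_logits


# Toy example: 3 sentiment classes, 8-d text features, 4-d image features.
rng = np.random.default_rng(0)
text_feat, image_feat = rng.normal(size=8), rng.normal(size=4)
w, b = rng.normal(size=(12, 3)), np.zeros(3)
print(early_fusion(text_feat, image_feat, w, b))
print(late_fusion(rng.normal(size=3), rng.normal(size=3)))
```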