Exploiting Diverse Feature for Multimodal Sentiment Analysis
- URL: http://arxiv.org/abs/2308.13421v1
- Date: Fri, 25 Aug 2023 15:06:14 GMT
- Title: Exploiting Diverse Feature for Multimodal Sentiment Analysis
- Authors: Jia Li, Wei Qian, Kun Li, Qi Li, Dan Guo, Meng Wang
- Abstract summary: We present our solution to the MuSe-Personalisation sub-challenge in the MuSe 2023 Multimodal Sentiment Analysis Challenge.
Because different people have distinct personal characteristics, the main challenge of this task is building robust feature representations for sentiment prediction.
- Score: 40.39627083212711
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present our solution to the MuSe-Personalisation
sub-challenge of the MuSe 2023 Multimodal Sentiment Analysis Challenge. The
MuSe-Personalisation task aims to predict the continuous arousal and valence
values of a participant from their audio-visual, language, and physiological
signal modalities. Because different people have distinct personal
characteristics, the main challenge of this task is building robust feature
representations for sentiment prediction. To address this issue, we propose
exploiting diverse features: a series of feature extraction methods that build
a robust representation, combined with a model ensemble. We empirically
evaluate the performance of these methods on the officially provided dataset.
As a result, we achieved 3rd place in the MuSe-Personalisation sub-challenge,
with CCC scores of 0.8492 for arousal and 0.8439 for valence.
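For reference, the Concordance Correlation Coefficient (CCC) used to score the sub-challenge rewards both correlation and agreement in mean and scale between predicted and gold annotations. Below is a minimal NumPy sketch of the standard definition (Lin, 1989); it is an illustration of the metric, not the authors' evaluation code:

```python
import numpy as np

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Concordance Correlation Coefficient:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    """
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()  # population variance
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

# Perfect agreement gives 1.0; unlike Pearson r, any shift in
# location or scale is penalised.
t = np.array([0.1, 0.4, 0.3, 0.8])
print(ccc(t, t))        # 1.0
print(ccc(t, t * 0.5))  # < 1.0 despite perfect correlation
```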
Related papers
- The MuSe 2024 Multimodal Sentiment Analysis Challenge: Social Perception and Humor Recognition [64.5207572897806]
The Multimodal Sentiment Analysis Challenge (MuSe) 2024 addresses two contemporary multimodal affect and sentiment analysis problems.
In the Social Perception Sub-Challenge (MuSe-Perception), participants will predict 16 different social attributes of individuals.
The Cross-Cultural Humor Detection Sub-Challenge (MuSe-Humor) dataset expands upon the Passau Spontaneous Football Coach Humor dataset.
arXiv Detail & Related papers (2024-06-11T22:26:20Z)
- PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z)
- Exploiting Modality-Specific Features For Multi-Modal Manipulation Detection And Grounding [54.49214267905562]
We construct a transformer-based framework for multi-modal manipulation detection and grounding tasks.
Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment.
We propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality.
arXiv Detail & Related papers (2023-09-22T06:55:41Z)
- The MuSe 2023 Multimodal Sentiment Analysis Challenge: Mimicked Emotions, Cross-Cultural Humour, and Personalisation [69.13075715686622]
MuSe 2023 is a set of shared tasks addressing three different contemporary multimodal affect and sentiment analysis problems.
MuSe 2023 seeks to bring together a broad audience from different research communities.
arXiv Detail & Related papers (2023-05-05T08:53:57Z)
- Hybrid Multimodal Feature Extraction, Mining and Fusion for Sentiment Analysis [31.097398034974436]
We present our solutions for the Multimodal Sentiment Analysis Challenge (MuSe) 2022, which includes MuSe-Humor, MuSe-Reaction and MuSe-Stress Sub-challenges.
MuSe 2022 focuses on humor detection, emotional reactions, and multimodal emotional stress, utilising different modalities and data sets.
arXiv Detail & Related papers (2022-08-05T09:07:58Z)
- The Multimodal Sentiment Analysis in Car Reviews (MuSe-CaR) Dataset: Collection, Insights and Improvements [14.707930573950787]
We present MuSe-CaR, a first-of-its-kind multimodal dataset.
The data is publicly available as it recently served as the testing bed for the 1st Multimodal Sentiment Analysis Challenge.
arXiv Detail & Related papers (2021-01-15T10:40:37Z)
- Group-Level Emotion Recognition Using a Unimodal Privacy-Safe Non-Individual Approach [0.0]
This article presents our unimodal, privacy-safe, and non-individual proposal for the audio-video group emotion recognition subtask at the Emotion Recognition in the Wild (EmotiW) Challenge 2020.
arXiv Detail & Related papers (2020-09-15T12:25:33Z)
- MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis [48.776247141839875]
We propose a novel framework, MISA, which projects each modality to two distinct subspaces.
The first subspace is modality-invariant, where the representations across modalities learn their commonalities and reduce the modality gap.
Our experiments on popular sentiment analysis benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art models.
arXiv Detail & Related papers (2020-05-07T15:13:23Z)
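To make the two-subspace idea in the MISA summary concrete, here is a minimal PyTorch sketch of projecting one modality's features into a shared (modality-invariant) and a private (modality-specific) subspace. The module, dimensions, and penalty are illustrative assumptions, not MISA's actual implementation:

```python
import torch
import torch.nn as nn

class TwoSubspaceProjector(nn.Module):
    """Projects one modality's features into a shared subspace
    (aligned across modalities) and a private subspace (kept distinct)."""
    def __init__(self, in_dim: int, hidden: int = 128):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.private = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())

    def forward(self, x: torch.Tensor):
        return self.shared(x), self.private(x)

# One projector per modality (hypothetical input sizes, e.g. BERT/eGeMAPS).
text = TwoSubspaceProjector(in_dim=768)
audio = TwoSubspaceProjector(in_dim=88)
ht_s, ht_p = text(torch.randn(4, 768))
ha_s, ha_p = audio(torch.randn(4, 88))

# Illustrative penalty: a similarity loss would pull ht_s and ha_s together,
# while an orthogonality-style term keeps shared and private parts apart.
ortho = (ht_s * ht_p).sum(dim=1).pow(2).mean()
```

The design intent, per the summary above, is that the shared representations reduce the modality gap while the private ones preserve each modality's distinctive cues before fusion.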