Modality-Collaborative Transformer with Hybrid Feature Reconstruction
for Robust Emotion Recognition
- URL: http://arxiv.org/abs/2312.15848v1
- Date: Tue, 26 Dec 2023 01:59:23 GMT
- Title: Modality-Collaborative Transformer with Hybrid Feature Reconstruction
for Robust Emotion Recognition
- Authors: Chengxin Chen, Pengyuan Zhang
- Abstract summary: We propose a unified framework, Modality-Collaborative Transformer with Hybrid Feature Reconstruction (MCT-HFR)
At the core of MCT-HFR is a novel attention-based encoder which concurrently extracts and dynamically balances the intra- and inter-modality relations.
During model training, LFI leverages complete features as supervisory signals to recover local missing features, while GFA is designed to reduce the global semantic gap between pairwise complete and incomplete representations.
- Score: 35.15390769958969
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a vital aspect of affective computing, Multimodal Emotion Recognition has
been an active research area in the multimedia community. Despite recent
progress, this field still confronts two major challenges in real-world
applications: 1) improving the efficiency of constructing joint representations
from unaligned multimodal features, and 2) alleviating the performance decline
caused by randomly missing modality features. In this paper, we propose a unified
framework, Modality-Collaborative Transformer with Hybrid Feature
Reconstruction (MCT-HFR), to address these issues. The crucial component of MCT
is a novel attention-based encoder which concurrently extracts and dynamically
balances the intra- and inter-modality relations for all associated modalities.
With additional modality-wise parameter sharing, a more compact representation
can be encoded with less time and space complexity. To improve the robustness
of MCT, we further introduce HFR which consists of two modules: Local Feature
Imagination (LFI) and Global Feature Alignment (GFA). During model training,
LFI leverages complete features as supervisory signals to recover local missing
features, while GFA is designed to reduce the global semantic gap between
pairwise complete and incomplete representations. Experimental evaluations on
two popular benchmark datasets demonstrate that our proposed method
consistently outperforms advanced baselines in both complete and incomplete
data scenarios.
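The abstract describes the two HFR objectives only at a high level. As a rough illustration, below is a minimal PyTorch sketch of how a Local Feature Imagination and a Global Feature Alignment loss could look; the module name, decoder size, masked smooth-L1 reconstruction, and cosine-based alignment are assumptions made for illustration and are not taken from the paper.
```python
# Minimal sketch (not the authors' code) of the two HFR objectives described in the
# abstract. Assumptions: PyTorch; `enc_incomplete` is the encoder output of the
# incomplete view, `complete` holds the complete features used as supervision, and
# `missing_mask` marks the randomly dropped time steps (1 = missing). The decoder
# size, smooth-L1 reconstruction, and cosine alignment are illustrative choices only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridFeatureReconstruction(nn.Module):
    """Illustrative Local Feature Imagination (LFI) + Global Feature Alignment (GFA)."""

    def __init__(self, dim: int = 128):
        super().__init__()
        # LFI: a small decoder that tries to recover the complete features
        # from the encoder output of the incomplete view.
        self.lfi_decoder = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, enc_incomplete, complete, missing_mask, utt_complete, utt_incomplete):
        # ----- Local Feature Imagination (LFI) -----
        # Reconstruct only the positions that were dropped, supervised by the
        # complete features (per the abstract's description).
        recon = self.lfi_decoder(enc_incomplete)               # (B, T, D)
        mask = missing_mask.unsqueeze(-1).float()              # (B, T, 1)
        per_elem = F.smooth_l1_loss(recon, complete, reduction="none")
        lfi_loss = (per_elem * mask).sum() / (mask.sum() * recon.size(-1)).clamp(min=1.0)

        # ----- Global Feature Alignment (GFA) -----
        # Pull the utterance-level representations of the complete and incomplete
        # views together to shrink their global semantic gap.
        gfa_loss = 1.0 - F.cosine_similarity(utt_complete, utt_incomplete, dim=-1).mean()

        return lfi_loss, gfa_loss


if __name__ == "__main__":
    B, T, D = 4, 20, 128
    hfr = HybridFeatureReconstruction(dim=D)
    enc_inc = torch.randn(B, T, D)                      # incomplete-view encoder output
    full = torch.randn(B, T, D)                         # complete features (supervision)
    miss = torch.bernoulli(torch.full((B, T), 0.3))     # random missing pattern
    utt_c, utt_i = torch.randn(B, D), torch.randn(B, D) # utterance-level representations
    lfi, gfa = hfr(enc_inc, full, miss, utt_c, utt_i)
    print(float(lfi), float(gfa))
```
In this sketch, LFI is supervised only at positions marked as missing, mirroring the description of complete features as supervisory signals, while GFA simply pulls the utterance-level views of the complete and incomplete inputs together; how these terms are weighted against the emotion-classification objective is not specified here.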
Related papers
- Accelerated Multi-Contrast MRI Reconstruction via Frequency and Spatial Mutual Learning [50.74383395813782]
We propose a novel Frequency and Spatial Mutual Learning Network (FSMNet) to explore global dependencies across different modalities.
The proposed FSMNet achieves state-of-the-art performance for the Multi-Contrast MR Reconstruction task with different acceleration factors.
arXiv Detail & Related papers (2024-09-21T12:02:47Z)
- MMR-Mamba: Multi-Modal MRI Reconstruction with Mamba and Spatial-Frequency Information Fusion [17.084083262801737]
We propose MMR-Mamba, a novel framework that thoroughly and efficiently integrates multi-modal features for MRI reconstruction.
Specifically, we first design a Target modality-guided Cross Mamba (TCM) module in the spatial domain.
Then, we introduce a Selective Frequency Fusion (SFF) module to efficiently integrate global information in the Fourier domain.
arXiv Detail & Related papers (2024-06-27T07:30:54Z)
- Modality Prompts for Arbitrary Modality Salient Object Detection [57.610000247519196]
This paper delves into the task of arbitrary modality salient object detection (AM SOD).
It aims to detect salient objects from arbitrary modalities, e.g., RGB images, RGB-D images, and RGB-D-T images.
A novel modality-adaptive Transformer (MAT) is proposed to investigate two fundamental challenges of AM SOD.
arXiv Detail & Related papers (2024-05-06T11:02:02Z)
- Deep Common Feature Mining for Efficient Video Semantic Segmentation [29.054945307605816]
We present Deep Common Feature Mining (DCFM) for video semantic segmentation.
DCFM explicitly decomposes features into two complementary components.
We show that our method has a superior balance between accuracy and efficiency.
arXiv Detail & Related papers (2024-03-05T06:17:59Z)
- Exploiting modality-invariant feature for robust multimodal emotion recognition with missing modalities [76.08541852988536]
We propose to use invariant features for a missing modality imagination network (IF-MMIN)
We show that the proposed model outperforms all baselines and invariantly improves the overall emotion recognition performance under uncertain missing-modality conditions.
arXiv Detail & Related papers (2022-10-27T12:16:25Z)
- Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis [47.29528724322795]
Multimodal Sentiment Analysis (MSA) has attracted increasing attention recently.
Despite significant progress, there are still two major challenges on the way towards robust MSA.
We propose a generic and unified framework to address them, named Efficient Multimodal Transformer with Dual-Level Feature Restoration (EMT-DLFR)
arXiv Detail & Related papers (2022-08-16T08:02:30Z)
- Transformer-based Context Condensation for Boosting Feature Pyramids in Object Detection [77.50110439560152]
Current object detectors typically have a feature pyramid (FP) module for multi-level feature fusion (MFF)
We propose a novel and efficient context modeling mechanism that can help existing FPs deliver better MFF results.
In particular, we introduce a novel insight that comprehensive contexts can be decomposed and condensed into two types of representations for higher efficiency.
arXiv Detail & Related papers (2022-07-14T01:45:03Z)
- MSO: Multi-Feature Space Joint Optimization Network for RGB-Infrared Person Re-Identification [35.97494894205023]
The RGB-infrared cross-modality person re-identification (ReID) task aims to recognize images of the same identity across the visible and infrared modalities.
Existing methods mainly use a two-stream architecture to eliminate the discrepancy between the two modalities in the final common feature space.
We present a novel multi-feature space joint optimization (MSO) network, which can learn modality-sharable features in both the single-modality space and the common space.
arXiv Detail & Related papers (2021-10-21T16:45:23Z)
- Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis [96.46952672172021]
The Bi-Bimodal Fusion Network (BBFN) is a novel end-to-end network that performs fusion on pairwise modality representations.
The model takes two bimodal pairs as input due to the known information imbalance among modalities.
arXiv Detail & Related papers (2021-07-28T23:33:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.