Improving Multimodal fusion via Mutual Dependency Maximisation
- URL: http://arxiv.org/abs/2109.00922v1
- Date: Tue, 31 Aug 2021 06:26:26 GMT
- Title: Improving Multimodal fusion via Mutual Dependency Maximisation
- Authors: Pierre Colombo, Emile Chapuis, Matthieu Labeau, Chloe Clavel
- Abstract summary: Multimodal sentiment analysis is a trending area of research, and multimodal fusion is one of its most active topics.
In this work, we investigate unexplored penalties and propose a set of new objectives that measure the dependency between modalities.
We demonstrate that our new penalties lead to a consistent improvement (up to $4.3$ accuracy points) across a large variety of state-of-the-art models.
- Score: 5.73995120847626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multimodal sentiment analysis is a trending area of research, and
multimodal fusion is one of its most active topics. Acknowledging that humans
communicate through a variety of channels (i.e., visual, acoustic, linguistic),
multimodal systems aim at integrating different unimodal representations into a
synthetic one. So far, considerable effort has been made on developing complex
architectures allowing the fusion of these modalities. However, such systems
are mainly trained by minimising simple losses such as $L_1$ or cross-entropy.
In this work, we investigate unexplored penalties and propose a set of new
objectives that measure the dependency between modalities. We demonstrate that
our new penalties lead to a consistent improvement (up to $4.3$ accuracy
points) across a large variety of state-of-the-art models on two well-known
sentiment analysis datasets: \texttt{CMU-MOSI} and \texttt{CMU-MOSEI}. Our
method not only achieves a new SOTA on both datasets but also produces
representations that are more robust to modality drops. Finally, a by-product
of our methods includes a statistical network which can be used to interpret
the high dimensional representations learnt by the model.
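The abstract does not spell out how a dependency penalty enters training, so the following is a minimal, hypothetical sketch (assuming PyTorch) of one such objective: a Donsker-Varadhan-style lower bound on the mutual information between two modality representations, estimated with a small statistics network and combined with the usual task loss. The network architecture, variable names, and the weight `lambda_mi` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a mutual-dependency penalty for multimodal fusion.
# Assumes PyTorch; the statistics network and all names are illustrative.
import math
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    """Scores a pair of modality representations; trained to score paired
    (joint) samples higher than mismatched (marginal) ones."""
    def __init__(self, dim_a: int, dim_b: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_a + dim_b, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z_a, z_b], dim=-1)).squeeze(-1)

def mutual_dependency_bound(t_net: StatisticsNetwork,
                            z_a: torch.Tensor,
                            z_b: torch.Tensor) -> torch.Tensor:
    """Donsker-Varadhan lower bound on I(Z_a; Z_b):
    E_joint[T] - log E_marginal[exp(T)], estimated on a mini-batch."""
    joint = t_net(z_a, z_b)                       # aligned pairs ~ joint distribution
    shuffled = z_b[torch.randperm(z_b.size(0))]   # shuffle to mimic product of marginals
    marginal = t_net(z_a, shuffled)
    return joint.mean() - (torch.logsumexp(marginal, dim=0) - math.log(marginal.size(0)))

# Hypothetical training step: minimise the task loss while maximising
# the dependency between, e.g., text and audio representations.
# z_text, z_audio = text_encoder(batch), audio_encoder(batch)
# prediction = fusion_head(z_text, z_audio)
# task_loss = nn.functional.l1_loss(prediction, targets)
# loss = task_loss - lambda_mi * mutual_dependency_bound(t_net, z_text, z_audio)
# loss.backward()
```

Under this reading, maximising the bound keeps the unimodal representations mutually predictive during fusion, and the trained statistics network is what the abstract refers to as a by-product usable for interpreting the learnt high-dimensional representations.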
Related papers
- U3M: Unbiased Multiscale Modal Fusion Model for Multimodal Semantic Segmentation [63.31007867379312]
We introduce U3M: An Unbiased Multiscale Modal Fusion Model for Multimodal Semantic Segmentation.
We employ feature fusion at multiple scales to ensure the effective extraction and integration of both global and local features.
Experimental results demonstrate that our approach achieves superior performance across multiple datasets.
arXiv Detail & Related papers (2024-05-24T08:58:48Z)
- Unified Multi-modal Unsupervised Representation Learning for Skeleton-based Action Understanding [62.70450216120704]
Unsupervised pre-training has shown great success in skeleton-based action understanding.
We propose a Unified Multimodal Unsupervised Representation Learning framework, called UmURL.
UmURL exploits an efficient early-fusion strategy to jointly encode the multi-modal features in a single-stream manner.
arXiv Detail & Related papers (2023-11-06T13:56:57Z)
- IMF: Interactive Multimodal Fusion Model for Link Prediction [13.766345726697404]
We introduce a novel Interactive Multimodal Fusion (IMF) model to integrate knowledge from different modalities.
Our approach has been demonstrated to be effective through empirical evaluations on several real-world datasets.
arXiv Detail & Related papers (2023-03-20T01:20:02Z)
- Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis [47.29528724322795]
Multimodal Sentiment Analysis (MSA) has attracted increasing attention recently.
Despite significant progress, there are still two major challenges on the way towards robust MSA.
We propose a generic and unified framework to address them, named Efficient Multimodal Transformer with Dual-Level Feature Restoration (EMT-DLFR).
arXiv Detail & Related papers (2022-08-16T08:02:30Z)
- A Study of Syntactic Multi-Modality in Non-Autoregressive Machine Translation [144.55713938260828]
It is difficult for non-autoregressive translation models to capture the multi-modal distribution of target translations.
We decompose it into short- and long-range syntactic multi-modalities and evaluate several recent NAT algorithms with advanced loss functions.
We design a new loss function to better handle the complicated syntactic multi-modality in real-world datasets.
arXiv Detail & Related papers (2022-07-09T06:48:10Z)
- Multi-scale Cooperative Multimodal Transformers for Multimodal Sentiment Analysis in Videos [58.93586436289648]
We propose a multi-scale cooperative multimodal transformer (MCMulT) architecture for multimodal sentiment analysis.
Our model outperforms existing approaches on unaligned multimodal sequences and has strong performance on aligned multimodal sequences.
arXiv Detail & Related papers (2022-06-16T07:47:57Z)
- Multimodal Representations Learning Based on Mutual Information Maximization and Minimization and Identity Embedding for Multimodal Sentiment Analysis [33.73730195500633]
We propose a multimodal representation model based on Mutual information Maximization and Identity Embedding.
Experimental results on two public datasets demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2022-01-10T01:41:39Z)
- Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis [96.46952672172021]
Bi-Bimodal Fusion Network (BBFN) is a novel end-to-end network that performs fusion on pairwise modality representations.
The model takes two bimodal pairs as input due to the known information imbalance among modalities.
arXiv Detail & Related papers (2021-07-28T23:33:42Z)
- MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis [48.776247141839875]
We propose a novel framework, MISA, which projects each modality to two distinct subspaces.
The first subspace is modality-invariant, where the representations across modalities learn their commonalities and reduce the modality gap.
Our experiments on popular sentiment analysis benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art models.
arXiv Detail & Related papers (2020-05-07T15:13:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.