Rethinking the constraints of multimodal fusion: case study in Weakly-Supervised Audio-Visual Video Parsing
- URL: http://arxiv.org/abs/2105.14430v1
- Date: Sun, 30 May 2021 05:13:30 GMT
- Title: Rethinking the constraints of multimodal fusion: case study in Weakly-Supervised Audio-Visual Video Parsing
- Authors: Jianning Wu, Zhuqing Jiang, Shiping Wen, Aidong Men, Haiying Wang
- Abstract summary: We show that selecting the optimal collocation of feature extraction networks is an important subproblem in multimodal tasks. A novel method converts this optimization problem into a comparison of upper bounds, following the general mathematical practice of extreme-value conversion.
- Score: 5.395800183719964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For multimodal tasks, a good feature extraction network should extract as much information as possible while ensuring that its feature embeddings and those of the other modalities understand each other well. The latter is often more critical to feature fusion than the former. Selecting the optimal collocation of feature extraction networks is therefore an important subproblem in multimodal tasks. Most existing studies either ignore this problem or solve it exhaustively. This paper models it as an optimization problem and proposes a novel method that converts the optimization into a comparison of upper bounds, following the general mathematical practice of extreme-value conversion. Compared with the exhaustive approach, this reduces the time cost.
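The abstract does not spell out the bound construction. As a minimal sketch of the idea, assuming a cheap linear-probe score as the upper-bound proxy and hypothetical backbone names, selecting a collocation by comparing bounds instead of fully training every pairing might look like this:

```python
from itertools import product

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical candidate extractors per modality; each name would map to a
# pretrained backbone whose frozen embeddings are precomputed once.
AUDIO_BACKBONES = ["vggish", "openl3"]
VISUAL_BACKBONES = ["resnet152", "clip_vit"]

def probe_upper_bound(emb_a, emb_v, labels):
    """Cheap proxy for a collocation's attainable performance: linear-probe
    accuracy on concatenated frozen embeddings. The paper's actual bound
    construction is not given in the abstract; any monotone proxy slots in."""
    x = np.concatenate([emb_a, emb_v], axis=1)
    return cross_val_score(LogisticRegression(max_iter=1000), x, labels, cv=3).mean()

def select_collocation(embeddings, labels):
    """embeddings: dict mapping backbone name -> (n_samples, dim) array."""
    scores = {
        (a, v): probe_upper_bound(embeddings[a], embeddings[v], labels)
        for a, v in product(AUDIO_BACKBONES, VISUAL_BACKBONES)
    }
    # Only the argmax pair is trained to convergence, instead of all
    # |A| x |V| pairings as in the exhaustive (ergodic) approach.
    return max(scores, key=scores.get)
```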
Meanwhile, to address the common problem in multimodal time-series tasks that feature similarity is not aligned with feature semantic similarity, we draw on the idea of contrastive learning and propose a multimodal time-series contrastive loss (MTSC).
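The abstract does not give MTSC's exact formulation; below is a minimal sketch of a symmetric InfoNCE-style loss over temporally aligned audio-visual segments, in the spirit described. The tensor shapes, temperature, and function name are assumptions.

```python
import torch
import torch.nn.functional as F

def mtsc_loss(audio: torch.Tensor, visual: torch.Tensor, tau: float = 0.07):
    """Sketch of a multimodal time-series contrastive loss.

    audio, visual: (B, T, D) segment-level embeddings of the two streams.
    The temporally aligned (same clip, same segment) pair is the positive;
    every other segment in the batch is a negative, pushing feature
    similarity toward semantic/temporal alignment.
    """
    B, T, D = audio.shape
    a = F.normalize(audio.reshape(B * T, D), dim=-1)
    v = F.normalize(visual.reshape(B * T, D), dim=-1)
    logits = a @ v.t() / tau                      # (B*T, B*T) similarities
    targets = torch.arange(B * T, device=a.device)
    # Symmetric InfoNCE: audio->visual and visual->audio retrieval.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

Such a term would be added to the weakly supervised parsing objective alongside the classification loss.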
Building on the above, we demonstrate the feasibility of our approach on the audio-visual video parsing task. Extensive analyses verify that our methods promote the fusion of features from different modalities.
Related papers
- Multi-granularity Contrastive Cross-modal Collaborative Generation for End-to-End Long-term Video Question Answering [53.39158264785098]
Long-term Video Question Answering (VideoQA) is a challenging vision-and-language bridging task.
We present an entirely end-to-end solution for VideoQA: a Multi-granularity Contrastive cross-modal collaborative Generation model.
arXiv Detail & Related papers (2024-10-12T06:21:58Z)
- U3M: Unbiased Multiscale Modal Fusion Model for Multimodal Semantic Segmentation [63.31007867379312]
We introduce U3M, an Unbiased Multiscale Modal Fusion Model for Multimodal Semantic Segmentation.
We employ feature fusion at multiple scales to ensure the effective extraction and integration of both global and local features.
Experimental results demonstrate that our approach achieves superior performance across multiple datasets.
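As an illustration of the multiscale-fusion pattern this entry describes, here is a generic sketch (not U3M's actual architecture) that fuses two modalities at several pooled scales and merges the results:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleFusion(nn.Module):
    """Fuse two modality feature maps at several spatial scales and sum the
    upsampled results, so both global context and local detail survive."""
    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.fuse = nn.ModuleList(
            [nn.Conv2d(2 * channels, channels, kernel_size=1) for _ in scales])

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        h, w = rgb.shape[-2:]
        outs = []
        for s, fuse in zip(self.scales, self.fuse):
            a = F.adaptive_avg_pool2d(rgb, (h // s, w // s))
            b = F.adaptive_avg_pool2d(depth, (h // s, w // s))
            f = fuse(torch.cat([a, b], dim=1))        # fuse at this scale
            outs.append(F.interpolate(f, size=(h, w), mode="bilinear",
                                      align_corners=False))
        return torch.stack(outs, dim=0).sum(dim=0)    # merge across scales
```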
arXiv Detail & Related papers (2024-05-24T08:58:48Z)
- Multi-Task Learning with Multi-Task Optimization [31.518330903602095]
We show that a set of optimized yet well-distributed models embodies different trade-offs in one algorithmic pass.
We investigate the proposed multi-task learning with multi-task optimization for solving various problem settings.
arXiv Detail & Related papers (2024-03-24T14:04:40Z)
- Multimodal Representation Learning by Alternating Unimodal Adaptation [73.15829571740866]
We propose MLA (Multimodal Learning with Alternating Unimodal Adaptation) to overcome challenges where some modalities appear more dominant than others during multimodal learning.
MLA reframes the conventional joint multimodal learning process by transforming it into an alternating unimodal learning process.
It captures cross-modal interactions through a shared head, which undergoes continuous optimization across different modalities.
Experiments are conducted on five diverse datasets, encompassing scenarios with complete modalities and scenarios with missing modalities.
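As a rough sketch of the alternating scheme described above, assuming simple linear encoders and a classification setting (not the paper's actual code):

```python
import torch
import torch.nn as nn

class AlternatingModel(nn.Module):
    """Schematic of MLA-style alternating unimodal adaptation: per-modality
    encoders feed one shared head, and each optimization step is unimodal."""
    def __init__(self, dims: dict, hidden: int, n_classes: int):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {m: nn.Linear(d, hidden) for m, d in dims.items()})
        # The shared head is updated by every modality, so it keeps absorbing
        # cross-modal structure while no single modality dominates a step.
        self.shared_head = nn.Linear(hidden, n_classes)

    def forward(self, modality: str, x: torch.Tensor) -> torch.Tensor:
        return self.shared_head(torch.relu(self.encoders[modality](x)))

def train_step(model, opt, batch: dict, labels: torch.Tensor):
    loss_fn = nn.CrossEntropyLoss()
    for modality, x in batch.items():   # alternate: one unimodal pass at a time
        opt.zero_grad()
        loss_fn(model(modality, x), labels).backward()
        opt.step()
```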
arXiv Detail & Related papers (2023-11-17T18:57:40Z)
- Learning Unseen Modality Interaction [54.23533023883659]
Multimodal learning assumes all modality combinations of interest are available during training to learn cross-modal correspondences.
We pose the problem of unseen modality interaction and introduce a first solution.
It exploits a module that projects the multidimensional features of different modalities into a common space while preserving rich information.
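A minimal sketch of such a common-space projection, with hypothetical modality names and dimensions (the paper's module is richer than this):

```python
import torch
import torch.nn as nn

class CommonSpaceProjector(nn.Module):
    """Project variable-dimension unimodal features into one shared space,
    so modality combinations unseen at training time can still be fused."""
    def __init__(self, dims: dict, common_dim: int = 256):
        super().__init__()
        self.proj = nn.ModuleDict(
            {m: nn.Sequential(nn.Linear(d, common_dim), nn.LayerNorm(common_dim))
             for m, d in dims.items()})

    def forward(self, feats: dict) -> torch.Tensor:
        # Average the projections of whichever modalities are present.
        z = [self.proj[m](x) for m, x in feats.items()]
        return torch.stack(z, dim=0).mean(dim=0)

# e.g. CommonSpaceProjector({"audio": 128, "video": 512, "text": 768})
# can then fuse any subset of these modalities at inference time.
```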
arXiv Detail & Related papers (2023-06-22T10:53:10Z)
- Revisiting Modality Imbalance In Multimodal Pedestrian Detection [6.7841188753203046]
We introduce a novel training setup with a regularizer in the multimodal architecture to resolve the disparity between the modalities.
Specifically, our regularizer term makes the feature fusion method more robust by treating both feature extractors as equally important during training.
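The entry does not state the exact regularizer; one simple stand-in that encodes "treat both extractors as equally important" is to penalize the gap between the unimodal branch losses:

```python
import torch

def balance_regularizer(loss_rgb: torch.Tensor, loss_thermal: torch.Tensor,
                        weight: float = 0.1) -> torch.Tensor:
    """Hypothetical balancing term: penalizing the gap between the unimodal
    branch losses discourages the fusion model from leaning on whichever
    feature extractor happens to train faster."""
    return weight * (loss_rgb - loss_thermal).abs()

# total = fusion_loss + balance_regularizer(loss_rgb, loss_thermal)
```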
arXiv Detail & Related papers (2023-02-24T11:56:57Z)
- Generalizing Multimodal Variational Methods to Sets [35.69942798534849]
This paper presents a novel variational method on sets called the Set Multimodal VAE (SMVAE) for learning a multimodal latent space.
By modeling the joint-modality posterior distribution directly, the proposed SMVAE learns to exchange information between multiple modalities and compensate for the drawbacks caused by factorization.
arXiv Detail & Related papers (2022-12-19T23:50:19Z)
- Adaptive Contrastive Learning on Multimodal Transformer for Review Helpfulness Predictions [40.70793282367128]
We propose Multimodal Contrastive Learning for the Multimodal Review Helpfulness Prediction (MRHP) problem.
In addition, we introduce an Adaptive Weighting scheme for our contrastive learning approach.
Finally, we propose a Multimodal Interaction module to address the unaligned nature of multimodal data.
arXiv Detail & Related papers (2022-11-07T13:05:56Z)
- Mitigating Modality Collapse in Multimodal VAEs via Impartial Optimization [7.4262579052708535]
We argue that modality collapse is a consequence of conflicting gradients during multimodal VAE training.
We show how to detect the sub-graphs in the computational graphs where gradients conflict.
We empirically show that our framework significantly improves the reconstruction performance, conditional generation, and coherence of the latent space across modalities.
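A simple stand-in for the gradient-conflict detection described here, flagging parameters where two modalities' gradients point in opposing directions (the paper works at the finer level of computational-graph sub-graphs):

```python
import torch
import torch.nn.functional as F

def conflicting_parameters(model, loss_a, loss_b):
    """Return parameters whose per-modality gradients have negative cosine
    similarity, i.e. where the two losses pull in conflicting directions."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads_a = torch.autograd.grad(loss_a, params, retain_graph=True,
                                  allow_unused=True)
    grads_b = torch.autograd.grad(loss_b, params, retain_graph=True,
                                  allow_unused=True)
    conflicts = []
    for p, ga, gb in zip(params, grads_a, grads_b):
        if ga is None or gb is None:      # parameter unused by one modality
            continue
        cos = F.cosine_similarity(ga.flatten(), gb.flatten(), dim=0)
        if cos < 0:
            conflicts.append((p, cos.item()))
    return conflicts
```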
arXiv Detail & Related papers (2022-06-09T13:29:25Z)
- Self-attention fusion for audiovisual emotion recognition with incomplete data [103.70855797025689]
We consider the problem of multimodal data analysis with a use case of audiovisual emotion recognition.
We propose an architecture capable of learning from raw data and describe three variants of it with distinct modality fusion mechanisms.
arXiv Detail & Related papers (2022-01-26T18:04:29Z) - Attention Bottlenecks for Multimodal Fusion [90.75885715478054]
Machine perception models are typically modality-specific and optimised for unimodal benchmarks.
We introduce a novel transformer-based architecture that uses 'fusion bottlenecks' for modality fusion at multiple layers.
We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks.
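A compact sketch of the fusion-bottleneck idea, in which all cross-modal exchange is squeezed through a few shared tokens; the layer choices and the averaging update rule here are assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class BottleneckFusion(nn.Module):
    """Fusion through a few shared 'bottleneck' tokens: each modality attends
    only to its own tokens plus the bottlenecks, so cross-modal information
    must pass through them."""
    def __init__(self, dim: int = 256, n_bottleneck: int = 4, n_heads: int = 8):
        super().__init__()
        self.bottleneck = nn.Parameter(torch.randn(1, n_bottleneck, dim))
        self.audio_layer = nn.TransformerEncoderLayer(dim, n_heads,
                                                      batch_first=True)
        self.video_layer = nn.TransformerEncoderLayer(dim, n_heads,
                                                      batch_first=True)

    def forward(self, audio: torch.Tensor, video: torch.Tensor):
        B, nb = audio.size(0), self.bottleneck.size(1)
        z = self.bottleneck.expand(B, -1, -1)
        a = self.audio_layer(torch.cat([audio, z], dim=1))
        v = self.video_layer(torch.cat([video, z], dim=1))
        z = 0.5 * (a[:, -nb:] + v[:, -nb:])   # the only cross-modal route
        return a[:, :-nb], v[:, :-nb], z
```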
arXiv Detail & Related papers (2021-06-30T22:44:12Z)