Preserving Modality Structure Improves Multi-Modal Learning
- URL: http://arxiv.org/abs/2308.13077v1
- Date: Thu, 24 Aug 2023 20:46:48 GMT
- Title: Preserving Modality Structure Improves Multi-Modal Learning
- Authors: Swetha Sirnam, Mamshad Nayeem Rizve, Nina Shvetsova, Hilde Kuehne,
Mubarak Shah
- Abstract summary: Self-supervised learning on large-scale multi-modal datasets allows learning semantically meaningful embeddings without relying on human annotations.
These methods often struggle to generalize well on out-of-domain data as they ignore the semantic structure present in modality-specific embeddings.
We propose a novel Semantic-Structure-Preserving Consistency approach to improve generalizability by preserving the modality-specific relationships in the joint embedding space.
- Score: 64.10085674834252
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Self-supervised learning on large-scale multi-modal datasets allows learning
semantically meaningful embeddings in a joint multi-modal representation space
without relying on human annotations. These joint embeddings enable zero-shot
cross-modal tasks like retrieval and classification. However, these methods
often struggle to generalize well on out-of-domain data as they ignore the
semantic structure present in modality-specific embeddings. In this context, we
propose a novel Semantic-Structure-Preserving Consistency approach to improve
generalizability by preserving the modality-specific relationships in the joint
embedding space. To capture modality-specific semantic relationships between
samples, we propose to learn multiple anchors and represent the multifaceted
relationship between samples with respect to their relationship with these
anchors. To assign multiple anchors to each sample, we propose a novel
Multi-Assignment Sinkhorn-Knopp algorithm. Our experimentation demonstrates
that our proposed approach learns semantically meaningful anchors in a
self-supervised manner. Furthermore, our evaluation on MSR-VTT and YouCook2
datasets demonstrates that our proposed multi-anchor assignment based solution
achieves state-of-the-art performance and generalizes to both in- and
out-of-domain datasets. Code: https://github.com/Swetha5/Multi_Sinkhorn_Knopp
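The key algorithmic ingredient in the abstract is the Multi-Assignment Sinkhorn-Knopp step that softly assigns each sample to several anchors. The sketch below is an assumed illustration of that general mechanism (iteratively enforcing a row constraint of roughly `n_assign` anchors per sample and a balanced column constraint per anchor); it is not the authors' implementation, which is available at the repository linked above.

```python
# Illustrative sketch of a multi-assignment Sinkhorn-Knopp-style normalization.
# An assumption about the general mechanism, not the authors' code; see
# https://github.com/Swetha5/Multi_Sinkhorn_Knopp for the official implementation.
import numpy as np

def multi_assignment_sinkhorn(scores, n_assign=2, epsilon=0.05, n_iters=50):
    """Convert sample-to-anchor similarity scores into a soft assignment matrix
    in which each sample carries ~n_assign units of assignment mass (row sums)
    and every anchor receives an equal share of the batch (column sums)."""
    n_samples, n_anchors = scores.shape
    q = np.exp((scores - scores.max()) / epsilon)     # temperature-scaled, stabilized
    for _ in range(n_iters):
        # Column constraint: balance the total mass assigned to each anchor.
        q *= (n_samples * n_assign / n_anchors) / (q.sum(axis=0, keepdims=True) + 1e-8)
        # Row constraint: each sample distributes n_assign units of mass.
        q *= n_assign / (q.sum(axis=1, keepdims=True) + 1e-8)
    return q

# Toy usage: similarities between a batch of 8 embeddings and 4 learned anchors.
rng = np.random.default_rng(0)
assignments = multi_assignment_sinkhorn(rng.normal(size=(8, 4)), n_assign=2)
print(assignments.sum(axis=1))                        # each row sums to ~2 anchors
```

With n_assign=1 this reduces to the balanced single-assignment Sinkhorn-Knopp normalization familiar from clustering-based self-supervised methods; the multi-assignment row constraint is what lets each sample express relationships to several anchors at once.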
Related papers
- Anchors Aweigh! Sail for Optimal Unified Multi-Modal Representations [16.036997801745905]
Multimodal learning plays a crucial role in enabling machine learning models to fuse and utilize diverse data sources.
Recent binding methods, such as ImageBind, typically use a fixed anchor modality to align multimodal data in the embedding space of that anchor modality.
We propose CentroBind, a simple yet powerful approach that eliminates the need for a fixed anchor.
arXiv Detail & Related papers (2024-10-02T23:19:23Z)
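As a contrast with the fixed-anchor binding summarized in the entry above, the sketch below illustrates a centroid-style anchor: each modality is aligned to the mean of all per-sample modality embeddings rather than to one designated anchor modality. The function name and shapes are illustrative assumptions, not the CentroBind implementation.

```python
# Minimal sketch of a centroid-style anchor: instead of binding all modalities to one
# fixed anchor modality, align each modality to the (normalized) mean of the per-sample
# modality embeddings. Names and shapes are assumptions, not CentroBind itself.
import torch
import torch.nn.functional as F

def centroid_anchor(modality_embs: list[torch.Tensor]) -> torch.Tensor:
    """modality_embs: list of (batch, dim) tensors, one per modality."""
    stacked = torch.stack([F.normalize(e, dim=-1) for e in modality_embs])  # (M, B, D)
    return F.normalize(stacked.mean(dim=0), dim=-1)                         # (B, D)

# Each modality can then be trained against this dynamic anchor, e.g. with a
# contrastive loss between modality_embs[i] and centroid_anchor(modality_embs).
```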
- Unsupervised Multimodal Clustering for Semantics Discovery in Multimodal Utterances [24.142013877384603]
This paper introduces a novel unsupervised multimodal clustering method (UMC), making a pioneering contribution to this field.
UMC introduces a unique approach to constructing augmentation views for multimodal data, which are then used to perform pre-training.
We show remarkable improvements of 2-6% in clustering metrics over state-of-the-art methods, marking the first successful endeavor in this domain.
arXiv Detail & Related papers (2024-05-21T13:24:07Z)
- Multi-modal Semantic Understanding with Contrastive Cross-modal Feature Alignment [11.897888221717245]
This paper proposes a novel CLIP-guided contrastive-learning-based architecture to perform multi-modal feature alignment.
Our model is simple to implement, requires no task-specific external knowledge, and thus transfers easily to other multi-modal tasks.
arXiv Detail & Related papers (2024-03-11T01:07:36Z)
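As a concrete illustration of the contrastive cross-modal feature alignment summarized in the entry above, the sketch below shows a standard symmetric InfoNCE loss between two modalities' embeddings. The paper's CLIP-guided architecture is not reproduced here; names and hyperparameters are assumptions.

```python
# Generic symmetric cross-modal contrastive (InfoNCE) alignment loss, the common
# mechanism behind CLIP-style feature alignment. Hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    """emb_a, emb_b: (batch, dim) embeddings of paired samples from two modalities."""
    emb_a = F.normalize(emb_a, dim=-1)
    emb_b = F.normalize(emb_b, dim=-1)
    logits = emb_a @ emb_b.t() / temperature                       # pairwise similarities
    targets = torch.arange(emb_a.size(0), device=emb_a.device)     # matches on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```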
- Unity by Diversity: Improved Representation Learning in Multimodal VAEs [24.691068754720106]
We show that a better latent representation can be obtained by replacing hard constraints with a soft constraint.
We show improved learned latent representations and imputation of missing data modalities compared to existing methods.
arXiv Detail & Related papers (2024-03-08T13:29:46Z)
- Support-set based Multi-modal Representation Enhancement for Video Captioning [121.70886789958799]
We propose a Support-set based Multi-modal Representation Enhancement (SMRE) model to mine rich information in a semantic subspace shared between samples.
Specifically, we propose a Support-set Construction (SC) module to construct a support-set to learn underlying connections between samples and obtain semantic-related visual elements.
During this process, we design a Semantic Space Transformation (SST) module to constrain relative distances and manage multi-modal interactions in a self-supervised way.
arXiv Detail & Related papers (2022-05-19T03:40:29Z)
- HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression [53.90578309960526]
Large pre-trained language models (PLMs) have shown overwhelming performance compared with traditional neural network methods.
We propose a hierarchical relational knowledge distillation (HRKD) method to capture both hierarchical and domain relational information.
arXiv Detail & Related papers (2021-10-16T11:23:02Z)
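For context on the "relational" part of the HRKD entry above, a plain relational distillation term matches pairwise sample similarities between teacher and student representations. HRKD organizes such relational signals hierarchically and across domains; this sketch only shows the generic building block, with names chosen for illustration.

```python
# Illustrative relational distillation term: match pairwise cosine similarities of
# teacher and student representations. Generic building block only, not the
# hierarchical/cross-domain scheme proposed by HRKD.
import torch
import torch.nn.functional as F

def relational_distillation_loss(student: torch.Tensor, teacher: torch.Tensor) -> torch.Tensor:
    """student, teacher: (batch, dim) representations of the same batch."""
    s_rel = F.normalize(student, dim=-1) @ F.normalize(student, dim=-1).t()
    t_rel = F.normalize(teacher, dim=-1) @ F.normalize(teacher, dim=-1).t()
    return F.mse_loss(s_rel, t_rel)   # penalize differences in pairwise structure
```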
- Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos [69.61522804742427]
This paper proposes a self-supervised training framework that learns a common multimodal embedding space.
We extend the concept of instance-level contrastive learning with a multimodal clustering step to capture semantic similarities across modalities.
The resulting embedding space enables retrieval of samples across all modalities, even from unseen datasets and different domains.
arXiv Detail & Related papers (2021-04-26T15:55:01Z)
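One simple way to picture the multimodal clustering step described in the entry above is a cross-modal cluster-prediction objective: assign each sample to a shared centroid using one modality's embedding and ask the other modality to predict the same cluster. The centroids, temperature, and loss form below are assumptions, not the paper's implementation.

```python
# Rough sketch of a cross-modal cluster-prediction step layered on top of
# instance-level contrastive learning. Centroids, temperature, and the exact
# objective are assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def cross_modal_cluster_loss(video_emb, text_emb, centroids, temperature=0.1):
    """video_emb, text_emb: (batch, dim); centroids: (n_clusters, dim)."""
    video_emb = F.normalize(video_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    centroids = F.normalize(centroids, dim=-1)
    # Hard cluster assignment from one modality...
    targets = (video_emb @ centroids.t()).argmax(dim=-1)
    # ...predicted from the other modality's soft cluster scores.
    logits = text_emb @ centroids.t() / temperature
    return F.cross_entropy(logits, targets)
```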
- Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation [56.694330303488435]
We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed over the prototypes of various domains to enable information propagation among semantically adjacent representations.
Our approach outperforms existing methods by a remarkable margin.
arXiv Detail & Related papers (2020-07-17T07:52:44Z)
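The information propagation over a prototype graph mentioned in the entry above can be pictured as a single step of normalized graph propagation. The adjacency construction and GNN details of LtC-MSDA are not reproduced, so treat the sketch below as an assumed illustration.

```python
# Illustrative single propagation step over a graph of class prototypes from several
# domains: build an affinity matrix from prototype similarities, row-normalize it, and
# mix neighbouring prototypes. Details (similarity measure, self-loops) are assumptions.
import torch
import torch.nn.functional as F

def propagate_prototypes(prototypes: torch.Tensor) -> torch.Tensor:
    """prototypes: (n_prototypes, dim), e.g. one prototype per class per domain."""
    sim = F.normalize(prototypes, dim=-1) @ F.normalize(prototypes, dim=-1).t()
    adj = F.softmax(sim, dim=-1)              # row-normalized affinities (with self-loops)
    return adj @ prototypes                   # each prototype aggregates its neighbours
```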
- Adversarial Continual Learning [99.56738010842301]
We propose a hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features.
Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills.
arXiv Detail & Related papers (2020-03-21T02:08:17Z)