Self-Supervised Multimodal Learning: A Survey
- URL: http://arxiv.org/abs/2304.01008v2
- Date: Fri, 4 Aug 2023 21:17:28 GMT
- Title: Self-Supervised Multimodal Learning: A Survey
- Authors: Yongshuo Zong, Oisin Mac Aodha, Timothy Hospedales
- Abstract summary: Multimodal learning aims to understand and analyze information from multiple modalities.
The heavy dependence on data paired with expensive human annotations impedes scaling up models.
Given the availability of large-scale unannotated data in the wild, self-supervised learning has become an attractive strategy to alleviate the annotation bottleneck.
- Score: 10.134639792276847
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multimodal learning, which aims to understand and analyze information from
multiple modalities, has achieved substantial progress in the supervised regime
in recent years. However, the heavy dependence on data paired with expensive
human annotations impedes scaling up models. Meanwhile, given the availability
of large-scale unannotated data in the wild, self-supervised learning has
become an attractive strategy to alleviate the annotation bottleneck. Building
on these two directions, self-supervised multimodal learning (SSML) provides
ways to learn from raw multimodal data. In this survey, we provide a
comprehensive review of the state-of-the-art in SSML, in which we elucidate
three major challenges intrinsic to self-supervised learning with multimodal
data: (1) learning representations from multimodal data without labels, (2)
fusion of different modalities, and (3) learning with unaligned data. We then
detail existing solutions to these challenges. Specifically, we consider (1)
objectives for learning from multimodal unlabeled data via self-supervision,
(2) model architectures from the perspective of different multimodal fusion
strategies, and (3) pair-free learning strategies for coarse-grained and
fine-grained alignment. We also review real-world applications of SSML
algorithms in diverse fields such as healthcare, remote sensing, and machine
translation. Finally, we discuss challenges and future directions for SSML. A
collection of related resources can be found at:
https://github.com/ys-zong/awesome-self-supervised-multimodal-learning.
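To make the first class of solutions concrete, below is a minimal sketch of one common family of objectives the survey covers: a CLIP-style cross-modal contrastive (InfoNCE) loss that aligns paired embeddings produced by two modality-specific encoders. The encoder outputs, batch size, and embedding dimension are illustrative assumptions, not code from the survey.

```python
# Minimal sketch of a CLIP-style cross-modal contrastive objective.
# Embeddings are placeholders standing in for modality-specific encoder outputs.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings."""
    # Normalize so dot products become cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)   # i-th image pairs with i-th text
    loss_i2t = F.cross_entropy(logits, targets)                    # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)                # text -> image direction
    return 0.5 * (loss_i2t + loss_t2i)

# Usage with random placeholder embeddings from two hypothetical encoders.
image_emb = torch.randn(8, 256)   # e.g. output of a vision encoder
text_emb = torch.randn(8, 256)    # e.g. output of a text encoder
loss = cross_modal_contrastive_loss(image_emb, text_emb)
```

Each sample is trained to be most similar to its own cross-modal pair within the batch, which requires no labels beyond the pairing itself.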
Related papers
- Beyond Unimodal Learning: The Importance of Integrating Multiple Modalities for Lifelong Learning [23.035725779568587]
We study the role and interactions of multiple modalities in mitigating forgetting in deep neural networks (DNNs).
Our findings demonstrate that leveraging multiple views and complementary information from multiple modalities enables the model to learn more accurate and robust representations.
We propose a method for integrating and aligning the information from different modalities by utilizing the relational structural similarities between the data points in each modality.
arXiv Detail & Related papers (2024-05-04T22:02:58Z)
- Generative Multi-Modal Knowledge Retrieval with Large Language Models [75.70313858231833]
We propose an innovative end-to-end generative framework for multi-modal knowledge retrieval.
Our framework takes advantage of the fact that large language models (LLMs) can effectively serve as virtual knowledge bases.
We demonstrate significant improvements ranging from 3.0% to 14.6% across all evaluation metrics when compared to strong baselines.
arXiv Detail & Related papers (2024-01-16T08:44:29Z)
- Multimodal Representation Learning by Alternating Unimodal Adaptation [73.15829571740866]
We propose MLA (Multimodal Learning with Alternating Unimodal Adaptation) to overcome the challenge that some modalities appear more dominant than others during multimodal learning.
MLA reframes the conventional joint multimodal learning process by transforming it into an alternating unimodal learning process.
It captures cross-modal interactions through a shared head, which undergoes continuous optimization across different modalities.
Experiments are conducted on five diverse datasets, encompassing scenarios with complete modalities and scenarios with missing modalities.
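As a rough illustration of the alternating scheme described above (not the authors' implementation), the sketch below updates one unimodal encoder at a time while a shared head receives gradients at every step; the module names, dimensions, and the classification loss are placeholder assumptions.

```python
# Sketch of alternating unimodal adaptation with a shared head.
import torch
import torch.nn as nn

encoders = nn.ModuleDict({
    "image": nn.Linear(512, 128),   # placeholder image encoder
    "audio": nn.Linear(128, 128),   # placeholder audio encoder
})
shared_head = nn.Linear(128, 10)    # shared head capturing cross-modal interactions

optimizer = torch.optim.Adam(
    list(encoders.parameters()) + list(shared_head.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def training_step(modality, inputs, labels):
    """One alternating step: only `modality`'s encoder and the shared head get gradients."""
    for name, enc in encoders.items():
        enc.requires_grad_(name == modality)
    logits = shared_head(encoders[modality](inputs))
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Alternate over modalities across steps with random placeholder batches.
for modality, dim in [("image", 512), ("audio", 128)]:
    x = torch.randn(16, dim)
    y = torch.randint(0, 10, (16,))
    training_step(modality, x, y)
```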
arXiv Detail & Related papers (2023-11-17T18:57:40Z)
- Multimodal Graph Learning for Generative Tasks [89.44810441463652]
Multimodal learning combines multiple data modalities, broadening the types and complexity of data our models can utilize.
We propose Multimodal Graph Learning (MMGL), a framework for capturing information from multiple multimodal neighbors with relational structures among them.
arXiv Detail & Related papers (2023-10-11T13:25:03Z)
- MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning [53.90744622542961]
Reasoning in mathematical domains remains a significant challenge for small language models (LMs).
We introduce a new method that exploits existing mathematical problem datasets with diverse annotation styles.
Experimental results show that our strategy enables a LLaMA-7B model to outperform prior approaches.
arXiv Detail & Related papers (2023-07-16T05:41:53Z)
- LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark [81.42376626294812]
We present the Language-Assisted Multi-Modal (LAMM) instruction-tuning dataset, framework, and benchmark.
Our aim is to establish LAMM as a growing ecosystem for training and evaluating MLLMs.
We present a comprehensive dataset and benchmark, which cover a wide range of vision tasks for 2D and 3D vision.
arXiv Detail & Related papers (2023-06-11T14:01:17Z)
- Shared and Private Information Learning in Multimodal Sentiment Analysis with Deep Modal Alignment and Self-supervised Multi-Task Learning [8.868945335907867]
We propose a deep modal shared information learning module to capture the shared information between modalities.
We also use a label generation module based on a self-supervised learning strategy to capture the private information of the modalities.
Our approach outperforms current state-of-the-art methods on most of the metrics of the three public datasets.
arXiv Detail & Related papers (2023-05-15T09:24:48Z)
- Beyond Just Vision: A Review on Self-Supervised Representation Learning on Multimodal and Temporal Data [10.006890915441987]
The popularity of self-supervised learning is driven by the fact that traditional models typically require a huge amount of well-annotated data for training.
Self-supervised methods have been introduced to improve training-data efficiency through discriminative pre-training of models.
We aim to provide the first comprehensive review of multimodal self-supervised learning methods for temporal data.
arXiv Detail & Related papers (2022-06-06T04:59:44Z)
- Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos [69.61522804742427]
This paper proposes a self-supervised training framework that learns a common multimodal embedding space.
We extend the concept of instance-level contrastive learning with a multimodal clustering step to capture semantic similarities across modalities.
The resulting embedding space enables retrieval of samples across all modalities, even from unseen datasets and different domains.
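A loose sketch of the clustering step described above: embeddings from different modalities are clustered jointly in the shared space, and each sample is pulled toward its joint-cluster centroid so that semantically similar samples align across modalities. The encoder outputs, the k-means call, and the centroid-attraction loss are illustrative assumptions rather than the paper's method.

```python
# Sketch of a cross-modal clustering step on top of a shared embedding space.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def cluster_alignment_loss(video_emb, audio_emb, num_clusters=4):
    """Assign joint clusters, then pull each modality toward its cluster centroid."""
    video_emb = F.normalize(video_emb, dim=-1)
    audio_emb = F.normalize(audio_emb, dim=-1)
    # Cluster the pooled embeddings from both modalities in the shared space.
    pooled = torch.cat([video_emb, audio_emb], dim=0).detach().cpu().numpy()
    kmeans = KMeans(n_clusters=num_clusters, n_init=10).fit(pooled)
    centroids = torch.tensor(kmeans.cluster_centers_, dtype=torch.float32)
    labels = torch.tensor(kmeans.labels_)
    n = video_emb.size(0)
    # Each sample is attracted to the centroid of its assigned joint cluster.
    loss_v = F.mse_loss(video_emb, centroids[labels[:n]])
    loss_a = F.mse_loss(audio_emb, centroids[labels[n:]])
    return loss_v + loss_a

video_emb = torch.randn(32, 64)   # placeholder video-encoder outputs
audio_emb = torch.randn(32, 64)   # placeholder audio-encoder outputs
loss = cluster_alignment_loss(video_emb, audio_emb)
```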
arXiv Detail & Related papers (2021-04-26T15:55:01Z)
- Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis [11.368438990334397]
We develop a self-supervised learning strategy to acquire independent unimodal supervisions.
We conduct extensive experiments on three public multimodal baseline datasets.
Our method achieves performance comparable to that obtained with human-annotated unimodal labels.
arXiv Detail & Related papers (2021-02-09T14:05:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.