TCGM: An Information-Theoretic Framework for Semi-Supervised
Multi-Modality Learning
- URL: http://arxiv.org/abs/2007.06793v1
- Date: Tue, 14 Jul 2020 03:32:03 GMT
- Title: TCGM: An Information-Theoretic Framework for Semi-Supervised
Multi-Modality Learning
- Authors: Xinwei Sun, Yilun Xu, Peng Cao, Yuqing Kong, Lingjing Hu, Shanghang
Zhang, Yizhou Wang
- Abstract summary: We propose a novel information-theoretic approach, namely Total Correlation Gain Maximization (TCGM), for semi-supervised multi-modal learning.
We apply our method to various tasks, including news classification, emotion recognition, and disease prediction, and achieve state-of-the-art results.
- Score: 35.76792527025377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fusing data from multiple modalities provides more information to train
machine learning systems. However, it is prohibitively expensive and
time-consuming to label each modality with a large amount of data, which makes
semi-supervised multi-modal learning a crucial problem. Existing methods
suffer from either ineffective fusion across modalities or lack of theoretical
guarantees under proper assumptions. In this paper, we propose a novel
information-theoretic approach, namely \textbf{T}otal \textbf{C}orrelation
\textbf{G}ain \textbf{M}aximization (TCGM), for semi-supervised multi-modal
learning, which is endowed with promising properties: (i) it can effectively
utilize the information across different modalities of unlabeled data points
to facilitate training classifiers of each modality; (ii) it has a theoretical
guarantee to identify Bayesian classifiers, i.e., the ground-truth posteriors
of all modalities. Specifically, by maximizing the TC-induced loss (namely,
the TC gain) over the classifiers of all modalities, these classifiers can
cooperatively discover the equivalence class of ground-truth classifiers, and
then identify the unique ones by leveraging a limited percentage of labeled
data. We apply our method to various tasks, including news classification,
emotion recognition, and disease prediction, and achieve state-of-the-art
results.
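
For reference, the total correlation (TC) behind the TC gain is the standard
multivariate generalization of mutual information,

    TC(X_1, ..., X_K) = \sum_{k=1}^{K} H(X_k) - H(X_1, ..., X_K),

which is zero exactly when the modalities are independent and grows as they
share information. The precise TC-gain loss is defined in the paper; the
snippet below is only a minimal co-training-style surrogate for two
modalities (the function name and the normalization are illustrative
assumptions, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def tc_style_agreement(logits_a, logits_b, eps=1e-8):
    """Toy agreement score between two modality classifiers on one
    unlabeled batch: reward posteriors that co-predict the same classes
    more often than the batch marginals would suggest by chance.
    This is a surrogate, not the exact TC gain of the paper."""
    p_a = F.softmax(logits_a, dim=1)           # (B, C) posterior, modality A
    p_b = F.softmax(logits_b, dim=1)           # (B, C) posterior, modality B
    joint = (p_a * p_b).mean(dim=0)            # per-class co-prediction rate
    marg = p_a.mean(dim=0) * p_b.mean(dim=0)   # product of batch marginals
    return (joint * torch.log(joint / (marg + eps) + eps)).sum()
```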
Related papers
- An Information Criterion for Controlled Disentanglement of Multimodal Data [39.601584166020274]
Multimodal representation learning seeks to relate and decompose information inherent in multiple modalities.
Disentangled Self-Supervised Learning (DisentangledSSL) is a novel self-supervised approach for learning disentangled representations.
arXiv Detail & Related papers (2024-10-31T14:57:31Z)
- Cross-Modality Clustering-based Self-Labeling for Multimodal Data Classification [2.666791490663749]
Cross-Modality Clustering-based Self-Labeling (CMCSL) groups instances belonging to each modality in the deep feature space and then propagates known labels within the resulting clusters.
Experimental evaluation is conducted on 20 datasets derived from the MM-IMDb dataset.
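A minimal sketch of the cluster-then-propagate step for a single modality
(the helper name and the k-means choice are assumptions for illustration; the
paper clusters in the deep feature space of each modality):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_self_label(features, labels, n_clusters):
    """Cluster deep features, then propagate known labels within clusters.

    features : (N, D) deep features for one modality
    labels   : (N,) integer labels, with -1 marking unlabeled instances
    Returns a copy of `labels` with cluster-majority labels filled in.
    """
    assign = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    out = labels.copy()
    for c in range(n_clusters):
        members = np.where(assign == c)[0]
        known = labels[members][labels[members] >= 0]
        if known.size:  # majority vote among the cluster's labeled members
            vals, counts = np.unique(known, return_counts=True)
            out[members[labels[members] < 0]] = vals[np.argmax(counts)]
    return out
```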
arXiv Detail & Related papers (2024-08-05T15:43:56Z)
- FedMM: Federated Multi-Modal Learning with Modality Heterogeneity in Computational Pathology [3.802258033231335]
Federated Multi-Modal (FedMM) is a learning framework that trains multiple single-modal feature extractors to enhance subsequent classification performance.
FedMM notably outperforms two baselines in accuracy and AUC metrics.
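For intuition, the federated part of such frameworks typically aggregates
client updates per modality with a FedAvg-style weighted average (a generic
sketch with hypothetical names, not FedMM's exact aggregation rule):

```python
import torch

def fedavg(state_dicts, weights):
    """Weighted average of client parameters for one modality's feature
    extractor; `weights` are typically the client dataset sizes."""
    total = float(sum(weights))
    return {k: sum(w * sd[k].float() for sd, w in zip(state_dicts, weights)) / total
            for k in state_dicts[0]}
```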
arXiv Detail & Related papers (2024-02-24T16:58:42Z)
- Multi-Scale Cross Contrastive Learning for Semi-Supervised Medical Image Segmentation [14.536384387956527]
We develop a novel Multi-Scale Cross Supervised Contrastive Learning framework to segment structures in medical images.
Our approach contrasts multi-scale features based on ground-truth and cross-predicted labels, in order to extract robust feature representations.
It outperforms state-of-the-art semi-supervised methods by more than 3.0% in Dice.
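To illustrate the label-guided contrastive idea (a generic supervised
contrastive loss, not the authors' multi-scale cross-supervised formulation):

```python
import torch
import torch.nn.functional as F

def supervised_contrastive(feats, labels, tau=0.1):
    """Features sharing a (ground-truth or cross-predicted) label attract,
    all others repel. feats: (N, D), labels: (N,)."""
    z = F.normalize(feats, dim=1)
    sim = z @ z.t() / tau                                # cosine similarities
    eye = torch.eye(len(z), dtype=torch.bool)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float('-inf')),
                                     dim=1, keepdim=True)
    return -((log_prob * pos.float()).sum(1) / pos.sum(1).clamp(min=1)).mean()
```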
arXiv Detail & Related papers (2023-06-25T16:55:32Z)
- CLCLSA: Cross-omics Linked embedding with Contrastive Learning and Self Attention for multi-omics integration with incomplete multi-omics data [47.2764293508916]
Integration of heterogeneous and high-dimensional multi-omics data is becoming increasingly important in understanding genetic data.
One obstacle faced when performing multi-omics data integration is the existence of unpaired multi-omics data due to instrument sensitivity and cost.
We propose a deep learning method for multi-omics integration with incomplete data via Cross-omics Linked unified embedding with Contrastive Learning and Self-Attention (CLCLSA).
arXiv Detail & Related papers (2023-04-12T00:22:18Z)
- CLIP-Driven Fine-grained Text-Image Person Re-identification [50.94827165464813]
Text-image person re-identification (TIReID) aims to retrieve the image corresponding to a given text query from a pool of candidate images.
We propose a CLIP-driven Fine-grained information excavation framework (CFine) to fully utilize the powerful knowledge of CLIP for TIReID.
arXiv Detail & Related papers (2022-10-19T03:43:12Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under a rigorous theoretical guarantee, our approach enables the IB to capture the intrinsic correlation between observations and semantic labels.
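For context, the information bottleneck (IB) objective that such variational
bounds build on trades compression of the input against relevance to the
label,

    \max_{p(z|x)} \; I(Z; Y) - \beta I(Z; X),

where Z is the learned representation; the paper designs several multi-view
variational versions of this trade-off.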
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Leveraging Ensembles and Self-Supervised Learning for Fully-Unsupervised Person Re-Identification and Text Authorship Attribution [77.85461690214551]
Learning from fully-unlabeled data is challenging in Multimedia Forensics problems, such as Person Re-Identification and Text Authorship Attribution.
Recent self-supervised learning methods have been shown to be effective when dealing with fully-unlabeled data in cases where the underlying classes have significant semantic differences.
We propose a strategy to tackle Person Re-Identification and Text Authorship Attribution by enabling learning from unlabeled data even when samples from different classes are not prominently diverse.
arXiv Detail & Related papers (2022-02-07T13:08:11Z)
- Dual-Teacher: Integrating Intra-domain and Inter-domain Teachers for Annotation-efficient Cardiac Segmentation [65.81546955181781]
We propose a novel semi-supervised domain adaptation approach, namely Dual-Teacher.
The student model learns from unlabeled target data and labeled source data via two teacher models.
We demonstrate that our approach is able to concurrently utilize unlabeled data and cross-modality data with superior performance.
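A rough sketch of how the two teacher signals can be combined for the student
(term names and weights are illustrative; the paper's exact losses may
differ):

```python
import torch
import torch.nn.functional as F

def dual_teacher_loss(student_logits, intra_logits, inter_logits,
                      w_intra=1.0, w_inter=1.0):
    """Consistency with the intra-domain teacher (unlabeled target data)
    plus distillation from the inter-domain teacher (labeled source data)."""
    consistency = F.mse_loss(student_logits.softmax(dim=1),
                             intra_logits.softmax(dim=1).detach())
    distillation = F.kl_div(student_logits.log_softmax(dim=1),
                            inter_logits.softmax(dim=1).detach(),
                            reduction='batchmean')
    return w_intra * consistency + w_inter * distillation
```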
arXiv Detail & Related papers (2020-07-13T10:00:44Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
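A minimal sketch of cross-modality kernel sharing (one common realization is
shared convolutions with per-modality normalization; the paper's actual
architecture details may differ):

```python
import torch
import torch.nn as nn

class SharedConvBlock(nn.Module):
    """Convolution weights shared across CT and MRI; one BatchNorm per
    modality absorbs modality-specific feature statistics."""
    def __init__(self, in_ch, out_ch, n_modalities=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.norms = nn.ModuleList(
            nn.BatchNorm2d(out_ch) for _ in range(n_modalities))

    def forward(self, x, modality):
        # e.g., modality=0 for CT, modality=1 for MRI (illustrative convention)
        return torch.relu(self.norms[modality](self.conv(x)))
```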
arXiv Detail & Related papers (2020-01-06T20:03:17Z)