Gaussian Mixture Variational Autoencoder with Contrastive Learning for
Multi-Label Classification
- URL: http://arxiv.org/abs/2112.00976v1
- Date: Thu, 2 Dec 2021 04:23:34 GMT
- Title: Gaussian Mixture Variational Autoencoder with Contrastive Learning for
Multi-Label Classification
- Authors: Junwen Bai, Shufeng Kong, Carla P. Gomes
- Abstract summary: We propose a novel contrastive learning boosted multi-label prediction model.
By using contrastive learning in the supervised setting, we can exploit label information effectively.
We show that the learnt embeddings provide insights into the interpretation of label-label interactions.
- Score: 27.043136219527767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-label classification (MLC) is a prediction task where each sample can
have more than one label. We propose a novel contrastive learning boosted
multi-label prediction model based on a Gaussian mixture variational
autoencoder (C-GMVAE), which learns a multimodal prior space and employs a
contrastive loss. Many existing methods introduce extra complex neural modules
to capture the label correlations, in addition to the prediction modules. We
found that by using contrastive learning in the supervised setting, we can
exploit label information effectively, and learn meaningful feature and label
embeddings capturing both the label correlations and predictive power, without
extra neural modules. Our method also adopts the idea of learning and aligning
latent spaces for both features and labels. C-GMVAE imposes a Gaussian mixture
structure on the latent space, to alleviate posterior collapse and
over-regularization issues, in contrast to previous works based on a unimodal
prior. C-GMVAE outperforms existing methods on multiple public datasets and can
often match other models' full performance with only 50% of the training data.
Furthermore, we show that the learnt embeddings provide insights into the
interpretation of label-label interactions.
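The abstract couples two mechanisms: a supervised contrastive loss that aligns each sample's feature embedding with the embeddings of its positive labels, and a Gaussian mixture prior over the latent space whose modes are given by the label embeddings. Below is a minimal PyTorch sketch of both loss terms, written only from the description above; the function names, the uniform mixture weighting over positive labels, the unit-variance components, the temperature tau, and the 0.1 KL weight are illustrative assumptions, not the authors' released implementation.
```python
import math
import torch
import torch.nn.functional as F

def supervised_label_contrastive_loss(feat_emb, label_emb, y, tau=0.1):
    """InfoNCE-style loss: each sample's feature embedding is pulled toward
    the embeddings of its positive labels and pushed away from the rest.
    feat_emb: (B, d), label_emb: (L, d), y: (B, L) binary targets."""
    f = F.normalize(feat_emb, dim=-1)
    e = F.normalize(label_emb, dim=-1)
    logits = f @ e.t() / tau                                   # (B, L) similarities
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos = y.sum(dim=1).clamp(min=1)                            # positives per sample
    return -((log_prob * y).sum(dim=1) / pos).mean()

def gaussian_mixture_kl(mu_q, logvar_q, label_emb, y):
    """One-sample Monte Carlo estimate of KL(q(z|x) || p(z|y)), where the
    prior is a uniform mixture of unit-variance Gaussians centered at the
    positive labels' embeddings -- a stand-in for the multimodal prior the
    abstract describes. Assumes every sample has at least one positive label."""
    d = mu_q.shape[1]
    std = torch.exp(0.5 * logvar_q)
    z = mu_q + std * torch.randn_like(std)                     # reparameterization
    log_q = (-0.5 * (math.log(2 * math.pi) + logvar_q
                     + (z - mu_q) ** 2 / logvar_q.exp())).sum(dim=1)
    diff = z.unsqueeze(1) - label_emb.unsqueeze(0)             # (B, L, d)
    log_comp = -0.5 * (d * math.log(2 * math.pi) + (diff ** 2).sum(dim=-1))
    log_w = torch.log(y / y.sum(dim=1, keepdim=True))          # -inf on negatives
    log_p = torch.logsumexp(log_comp + log_w, dim=1)
    return (log_q - log_p).mean()

# Toy usage: 4 samples, 5 labels, 8-dimensional latent space.
B, L, d = 4, 5, 8
y = (torch.rand(B, L) < 0.4).float()
y[y.sum(1) == 0, 0] = 1.0                                      # ensure a positive label
feat_emb, label_emb = torch.randn(B, d), torch.randn(L, d, requires_grad=True)
mu_q, logvar_q = torch.randn(B, d), torch.zeros(B, d)
loss = (supervised_label_contrastive_loss(feat_emb, label_emb, y)
        + 0.1 * gaussian_mixture_kl(mu_q, logvar_q, label_emb, y))
loss.backward()
```
The logsumexp over mixture components keeps the multimodal prior's log-density numerically stable; negative labels receive mixture weight zero (log-weight -inf), so they drop out of the prior entirely.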
Related papers
- Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the expensive cost of collecting precise multi-label annotations.
Unlike in standard semi-supervised learning, one cannot simply select the most probable label as the pseudo-label in SSMLL, since a single instance can contain multiple semantics.
We propose a dual-perspective method to generate high-quality pseudo-labels.
arXiv Detail & Related papers (2024-07-26T09:33:53Z)
- Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z) - ProbMCL: Simple Probabilistic Contrastive Learning for Multi-label Visual Classification [16.415582577355536]
Multi-label image classification presents a challenging task in many domains, including computer vision and medical imaging.
Recent advancements have introduced graph-based and transformer-based methods to improve performance and capture label dependencies.
We propose Probabilistic Multi-label Contrastive Learning (ProbMCL), a novel framework to address these challenges.
arXiv Detail & Related papers (2024-01-02T22:15:20Z)
- Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks [75.42002070547267]
We propose a self-evolution learning (SE) based mixup approach for data augmentation in text classification.
We introduce a novel instance-specific label smoothing approach, which linearly interpolates the model's output and the one-hot labels of the original samples to generate new soft labels for mixup.
arXiv Detail & Related papers (2023-05-22T23:43:23Z)
- Asymmetric Co-teaching with Multi-view Consensus for Noisy Label Learning [15.690502285538411]
We introduce a noisy-label learning approach called Asymmetric Co-teaching (AsyCo).
AsyCo trains its co-teaching models asymmetrically so that they produce more consistently divergent results.
Experiments on synthetic and real-world noisy-label datasets show that AsyCo improves over current SOTA methods.
arXiv Detail & Related papers (2023-01-01T04:10:03Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, Single-positive MultI-label learning with Label Enhancement (SMILE), is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Evolving Multi-Label Fuzzy Classifier [5.53329677986653]
Multi-label classification has attracted much attention in the machine learning community, as it addresses the problem of assigning a single sample to more than one class at the same time.
We propose an evolving multi-label fuzzy classifier (EFC-ML) which is able to self-adapt and self-evolve its structure with new incoming multi-label samples in an incremental, single-pass manner.
arXiv Detail & Related papers (2022-03-29T08:01:03Z)
- Gated recurrent units and temporal convolutional network for multilabel classification [122.84638446560663]
This work proposes a new ensemble method for managing multilabel classification.
The core of the proposed approach combines a set of gated recurrent units and temporal convolutional neural networks trained with variants of the Adam optimizer.
arXiv Detail & Related papers (2021-10-09T00:00:16Z)
- GuidedMix-Net: Learning to Improve Pseudo Masks Using Labeled Images as Reference [153.354332374204]
We propose a novel method for semi-supervised semantic segmentation named GuidedMix-Net.
We first introduce a feature alignment objective between labeled and unlabeled data to capture potentially similar image pairs.
MITrans is shown to be a powerful knowledge module for progressively refining the features of unlabeled data.
Along with supervised learning for labeled data, the prediction of unlabeled data is jointly learned with the generated pseudo masks.
arXiv Detail & Related papers (2021-06-29T02:48:45Z)
- Disentangled Variational Autoencoder based Multi-Label Classification with Covariance-Aware Multivariate Probit Model [10.004081409670516]
Multi-label classification is the challenging task of predicting the presence and absence of multiple targets.
We propose a novel framework for multi-label classification that effectively learns latent embedding spaces as well as label correlations.
arXiv Detail & Related papers (2020-07-12T23:08:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.