End-to-End Supervised Multilabel Contrastive Learning
- URL: http://arxiv.org/abs/2307.03967v1
- Date: Sat, 8 Jul 2023 12:46:57 GMT
- Title: End-to-End Supervised Multilabel Contrastive Learning
- Authors: Ahmad Sajedi, Samir Khaki, Konstantinos N. Plataniotis, and Mahdi S. Hosseini
- Abstract summary: Multilabel representation learning is recognized as a challenging problem that can be associated with either label dependencies between object categories or data-related issues.
Recent advances address these challenges from model- and data-centric viewpoints.
We propose a new end-to-end training framework -- dubbed KMCL -- to address the shortcomings of both model- and data-centric designs.
- Score: 38.26579519598804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multilabel representation learning is recognized as a challenging problem
that can be associated with either label dependencies between object categories
or data-related issues such as the inherent imbalance of positive/negative
samples. Recent advances address these challenges from model- and data-centric
viewpoints. In model-centric approaches, label correlations are captured by an
external model design (e.g., a graph CNN) that incorporates an inductive bias
into training. However, such approaches lack an end-to-end training framework
and incur high computational complexity. In contrast, data-centric approaches
account for the realistic nature of the dataset to improve classification but
ignore label dependencies. In this paper, we propose a new end-to-end
training framework -- dubbed KMCL (Kernel-based Multilabel Contrastive
Learning) -- to address the shortcomings of both model- and data-centric
designs. The KMCL first transforms the embedded features into a mixture of
exponential kernels in a Gaussian RKHS. It then encodes an objective loss
comprising (a) a reconstruction loss to reconstruct the kernel representation,
(b) an asymmetric classification loss to address the inherent imbalance
problem, and (c) a contrastive loss to capture label
correlation. The KMCL models the uncertainty of the feature encoder while
maintaining a low computational footprint. Extensive experiments are conducted
on image classification tasks to showcase the consistent improvements of KMCL
over state-of-the-art (SOTA) methods. A PyTorch implementation is provided at
\url{https://github.com/mahdihosseini/KMCL}.
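The three-part loss described in the abstract can be sketched in simplified form. The following is an illustrative plain-Python sketch, not the paper's actual implementation (which operates on a mixture of exponential kernels in an RKHS over batch features): the function names, the per-pair contrastive form, the reconstruction-as-MSE stand-in, and all hyperparameter defaults are assumptions for illustration only.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel similarity between two feature vectors."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2 * sigma ** 2))

def asymmetric_bce(p, target, gamma_pos=0.0, gamma_neg=4.0, eps=1e-8):
    """Asymmetric binary loss: focuses on positives by down-weighting
    easy negatives (gamma_neg > gamma_pos), easing label imbalance."""
    if target == 1:
        return -((1 - p) ** gamma_pos) * math.log(p + eps)
    return -(p ** gamma_neg) * math.log(1 - p + eps)

def kmcl_style_loss(features, recons, probs, labels,
                    lam_rec=0.1, lam_con=0.5, sigma=1.0):
    """Combine (a) reconstruction, (b) asymmetric classification, and
    (c) label-aware contrastive terms for one batch."""
    n = len(features)
    # (a) reconstruction: mean squared error between features and
    # their reconstructions (a stand-in for the kernel reconstruction)
    rec = sum(sum((a - b) ** 2 for a, b in zip(f, r))
              for f, r in zip(features, recons)) / n
    # (b) asymmetric classification loss over every sample/class pair
    cls = sum(asymmetric_bce(p, t)
              for prob_row, lab_row in zip(probs, labels)
              for p, t in zip(prob_row, lab_row)) / n
    # (c) contrastive: pull pairs sharing a label together (kernel -> 1),
    # push pairs with disjoint label sets apart (kernel -> 0)
    con, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            k = gaussian_kernel(features[i], features[j], sigma)
            shared = any(a and b for a, b in zip(labels[i], labels[j]))
            con += (1 - k) if shared else k
            pairs += 1
    con /= max(pairs, 1)
    return cls + lam_rec * rec + lam_con * con
```

The asymmetric term mirrors the idea behind asymmetric losses for multilabel imbalance (few positives, many negatives per image), while the kernel-based pair term is one way the label correlations in (c) could enter a contrastive objective.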
Related papers
- Covariance-corrected Whitening Alleviates Network Degeneration on Imbalanced Classification [6.197116272789107]
Class imbalance is a critical issue in image classification that significantly affects the performance of deep recognition models.
We propose a novel framework called Whitening-Net to mitigate the degenerate solutions.
In scenarios with extreme class imbalance, the batch covariance statistic exhibits significant fluctuations, impeding the convergence of the whitening operation.
arXiv Detail & Related papers (2024-08-30T10:49:33Z)
- Revisiting the Disequilibrium Issues in Tackling Heart Disease Classification Tasks [5.834731599084117]
Two primary obstacles arise in the field of heart disease classification.
Electrocardiogram (ECG) datasets consistently demonstrate imbalances and biases across various modalities.
We propose a Channel-wise Magnitude Equalizer (CME) on signal-encoded images.
We also propose the Inverted Weight Logarithmic Loss (IWL) to alleviate imbalances among the data.
arXiv Detail & Related papers (2024-07-19T09:50:49Z)
- Robust Pseudo-label Learning with Neighbor Relation for Unsupervised Visible-Infrared Person Re-Identification [33.50249784731248]
Unsupervised Visible-Infrared Person Re-identification (USVI-ReID) aims to match pedestrian images across visible and infrared modalities without any annotations.
Recently, clustered pseudo-label methods have become predominant in USVI-ReID, although the inherent noise in pseudo-labels presents a significant obstacle.
We design a Robust Pseudo-label Learning with Neighbor Relation (RPNR) framework to correct noisy pseudo-labels.
Comprehensive experiments conducted on two widely recognized benchmarks, SYSU-MM01 and RegDB, demonstrate that RPNR outperforms the current state-of-the-art GUR with an average
arXiv Detail & Related papers (2024-05-09T08:17:06Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method can outperform multiple baselines with clear margins in broad noise levels and enjoy great scalability.
arXiv Detail & Related papers (2023-12-13T17:59:07Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment [73.61888777504377]
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference from a pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it encouraging to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we suggest to incorporate semi-supervised and positive-unlabeled (PU) learning for exploiting unlabeled data while mitigating the adverse effect of outliers.
arXiv Detail & Related papers (2022-04-19T09:10:06Z)
- Correct-N-Contrast: A Contrastive Approach for Improving Robustness to Spurious Correlations [59.24031936150582]
Spurious correlations pose a major challenge for robust machine learning.
Models trained with empirical risk minimization (ERM) may learn to rely on correlations between class labels and spurious attributes.
We propose Correct-N-Contrast (CNC), a contrastive approach to directly learn representations robust to spurious correlations.
arXiv Detail & Related papers (2022-03-03T05:03:28Z)
- Mind Your Clever Neighbours: Unsupervised Person Re-identification via Adaptive Clustering Relationship Modeling [19.532602887109668]
Unsupervised person re-identification (Re-ID) attracts increasing attention due to its potential to resolve the scalability problem of supervised Re-ID models.
Most existing unsupervised methods adopt an iterative clustering mechanism, where the network is trained on pseudo-labels generated by unsupervised clustering.
To generate high-quality pseudo-labels and mitigate the impact of clustering errors, we propose a novel clustering relationship modeling framework for unsupervised person Re-ID.
arXiv Detail & Related papers (2021-12-03T10:55:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.