An Asymmetric Contrastive Loss for Handling Imbalanced Datasets
- URL: http://arxiv.org/abs/2207.07080v1
- Date: Thu, 14 Jul 2022 17:30:13 GMT
- Title: An Asymmetric Contrastive Loss for Handling Imbalanced Datasets
- Authors: Valentino Vito and Lim Yohanes Stefanus
- Abstract summary: We introduce an asymmetric version of CL, referred to as ACL, to address the problem of class imbalance.
In addition, we propose the asymmetric focal contrastive loss (AFCL) as a further generalization of both ACL and focal contrastive loss.
Results on the FMNIST and ISIC 2018 imbalanced datasets show that AFCL is capable of outperforming CL and FCL in terms of both weighted and unweighted classification accuracies.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning is a representation learning method performed by
contrasting a sample to other similar samples so that they are brought close
together, forming clusters in the feature space. The learning process is
typically conducted using a two-stage training architecture, and it utilizes
the contrastive loss (CL) for its feature learning. Contrastive learning has
been shown to be quite successful in handling imbalanced datasets, in which
some classes are overrepresented while others are underrepresented.
However, previous studies have not specifically modified CL for imbalanced
datasets. In this work, we introduce an asymmetric version of CL, referred to
as ACL, in order to directly address the problem of class imbalance. In
addition, we propose the asymmetric focal contrastive loss (AFCL) as a further
generalization of both ACL and focal contrastive loss (FCL). Results on the
FMNIST and ISIC 2018 imbalanced datasets show that AFCL is capable of
outperforming CL and FCL in terms of both weighted and unweighted
classification accuracies. In the appendix, we provide a full axiomatic
treatment on entropy, along with complete proofs.
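Since this page carries only the abstract and not the paper's formulas, the following PyTorch sketch is a rough illustration, under stated assumptions, of the ingredients the abstract names: a supervised contrastive loss, a focal modulating factor of the form (1 - p)^gamma as in focal losses, and a class-frequency-dependent weight controlled by a hypothetical asymmetry parameter eta. The function focal_supcon_loss and every hyperparameter below are illustrative assumptions, not the paper's ACL/AFCL definitions.

    import torch

    def focal_supcon_loss(features, labels, temperature=0.1, gamma=2.0, eta=1.0):
        # features: (N, D) L2-normalized embeddings; labels: (N,) integer class ids.
        # gamma is the focal exponent; eta is a hypothetical asymmetry knob that sharpens
        # or softens the inverse-frequency class weighting (eta = 0 disables it,
        # and gamma = 0 disables the focal modulation).
        n = features.size(0)
        self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
        sim = (features @ features.T) / temperature
        sim = sim.masked_fill(self_mask, -1e9)                      # exclude self-contrast
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)  # log-softmax over the rest
        pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        # focal modulation: down-weight easy positive pairs (already high probability)
        focal = (1.0 - log_prob.exp()).clamp(min=0.0).pow(gamma)
        # asymmetric class weighting: anchors from rare classes count more as eta grows
        counts = torch.bincount(labels, minlength=int(labels.max()) + 1).float().clamp(min=1)
        class_w = (counts.sum() / (counts * counts.numel())).pow(eta)
        anchor_w = class_w[labels]
        per_pair = -(focal * log_prob) * pos_mask
        per_anchor = per_pair.sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
        return (anchor_w * per_anchor).sum() / anchor_w.sum()

Calling this on a batch of L2-normalized embeddings and their labels returns a scalar loss; with gamma = 0 and eta = 0 it reduces to a standard supervised contrastive loss, which is consistent with the abstract's claim that AFCL generalizes both CL and FCL, though the paper's actual parameterization of the asymmetry may differ.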
Related papers
- Decoupled Contrastive Learning for Long-Tailed Recognition [58.255966442426484]
Supervised Contrastive Loss (SCL) is popular in visual representation learning.
In long-tailed recognition, where the number of samples per class is imbalanced, treating the two types of positive samples equally leads to biased optimization of the intra-category distance.
We propose a patch-based self distillation to transfer knowledge from head to tail classes to relieve the under-representation of tail classes.
arXiv Detail & Related papers (2024-03-10T09:46:28Z)
- Hard-Negative Sampling for Contrastive Learning: Optimal Representation Geometry and Neural- vs Dimensional-Collapse [16.42457033976047]
We prove that the losses of Supervised Contrastive Learning (SCL), Hard-SCL (HSCL), and Unsupervised Contrastive Learning (UCL) are minimized by representations that exhibit Neural-Collapse (NC).
We also prove that for any representation mapping, the HSCL and Hard-UCL (HUCL) losses are lower bounded by the corresponding SCL and UCL losses.
arXiv Detail & Related papers (2023-11-09T04:40:32Z)
- Uncertainty-guided Boundary Learning for Imbalanced Social Event Detection [64.4350027428928]
We propose a novel uncertainty-guided class imbalance learning framework for imbalanced social event detection tasks.
Our model significantly improves social event representation and classification in almost all classes, especially the uncertain ones.
arXiv Detail & Related papers (2023-10-30T03:32:04Z)
- ECL: Class-Enhancement Contrastive Learning for Long-tailed Skin Lesion Classification [7.7379419801373475]
Skin image datasets often suffer from imbalanced data distribution, exacerbating the difficulty of computer-aided skin disease diagnosis.
We propose class-Enhancement Contrastive Learning (ECL), which enriches the information of minority classes and treats different classes equally.
arXiv Detail & Related papers (2023-07-09T09:29:15Z)
- Symmetric Neural-Collapse Representations with Supervised Contrastive Loss: The Impact of ReLU and Batching [26.994954303270575]
Supervised contrastive loss (SCL) is a competitive and often superior alternative to the cross-entropy loss for classification.
While prior studies have demonstrated that both losses yield symmetric training representations under balanced data, this symmetry breaks under class imbalances.
This paper presents an intriguing discovery: the introduction of a ReLU activation at the final layer effectively restores the symmetry in SCL-learned representations.
arXiv Detail & Related papers (2023-06-13T17:55:39Z)
- Supervised Contrastive Learning with Hard Negative Samples [16.42457033976047]
Contrastive learning (CL) learns a useful representation function by pulling positive samples close to each other.
In absence of class information, negative samples are chosen randomly and independently of the anchor.
Supervised CL (SCL) avoids this class collision by conditioning the negative sampling distribution to samples having labels different from that of the anchor.
arXiv Detail & Related papers (2022-08-31T19:20:04Z)
- Hierarchical Semi-Supervised Contrastive Learning for Contamination-Resistant Anomaly Detection [81.07346419422605]
Anomaly detection aims at identifying deviant samples from the normal data distribution.
Contrastive learning has provided a successful way to learn sample representations that enable effective discrimination of anomalies.
We propose a novel hierarchical semi-supervised contrastive learning framework for contamination-resistant anomaly detection.
arXiv Detail & Related papers (2022-07-24T18:49:26Z)
- Balanced Contrastive Learning for Long-Tailed Visual Recognition [32.789465918318925]
Real-world data typically follow a long-tailed distribution, where a few majority categories occupy most of the data.
In this paper, we focus on representation learning for imbalanced data.
We propose a novel loss for balanced contrastive learning (BCL).
arXiv Detail & Related papers (2022-07-19T03:48:59Z)
- Adversarial Contrastive Learning via Asymmetric InfoNCE [64.42740292752069]
We propose to treat adversarial samples unequally when contrasted with an asymmetric InfoNCE objective.
In this asymmetric fashion, the adverse impacts of conflicting objectives between CL and adversarial learning can be effectively mitigated.
Experiments show that our approach consistently outperforms existing Adversarial CL methods.
arXiv Detail & Related papers (2022-07-18T04:14:36Z)
- Semi-supervised Contrastive Learning with Similarity Co-calibration [72.38187308270135]
We propose a novel training strategy, termed Semi-supervised Contrastive Learning (SsCL).
SsCL combines the well-known contrastive loss in self-supervised learning with the cross entropy loss in semi-supervised learning.
We show that SsCL produces more discriminative representations and is beneficial to few-shot learning.
arXiv Detail & Related papers (2021-05-16T09:13:56Z)
- Contrastive Learning with Adversarial Examples [79.39156814887133]
Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations.
This paper introduces a new family of adversarial examples for contrastive learning and uses these examples to define a new adversarial training algorithm for SSL, denoted as CLAE.
arXiv Detail & Related papers (2020-10-22T20:45:10Z)