Parametric Contrastive Learning
- URL: http://arxiv.org/abs/2107.12028v1
- Date: Mon, 26 Jul 2021 08:37:23 GMT
- Title: Parametric Contrastive Learning
- Authors: Jiequan Cui, Zhisheng Zhong, Shu Liu, Bei Yu, Jiaya Jia
- Abstract summary: We propose Parametric Contrastive Learning (PaCo) to tackle long-tailed recognition.
PaCo can adaptively enhance the intensity of pushing samples of the same class closer together.
Experiments on long-tailed CIFAR, ImageNet, Places, and iNaturalist 2018 establish a new state-of-the-art for long-tailed recognition.
- Score: 65.70554597097248
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we propose Parametric Contrastive Learning (PaCo) to tackle
long-tailed recognition. Based on theoretical analysis, we observe that the
supervised contrastive loss tends to be biased toward high-frequency classes,
which increases the difficulty of imbalanced learning. We introduce a set of parametric class-wise
learnable centers to rebalance from an optimization perspective. Further, we
analyze our PaCo loss under a balanced setting. Our analysis demonstrates that
PaCo can adaptively enhance the intensity of pushing samples of the same class
closer together as more samples are pulled toward their corresponding centers,
which benefits hard example learning. Experiments on long-tailed CIFAR,
ImageNet, Places, and iNaturalist 2018 establish a new state-of-the-art for
long-tailed recognition. On full ImageNet, models trained with the PaCo loss surpass supervised
contrastive learning across various ResNet backbones. Our code is available at
\url{https://github.com/jiequancui/Parametric-Contrastive-Learning}.
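Because the core contribution is a loss function, the minimal PyTorch sketch below illustrates the central idea of augmenting a supervised contrastive loss with parametric, learnable class-wise centers. It is not the authors' implementation (see the repository linked above): the class name PaCoStyleLoss, the alpha weighting, and the hyperparameter values are assumptions made for illustration, and the MoCo-style momentum encoder and feature queue used in the paper's training setup are omitted.

```python
# Minimal sketch of a PaCo-style loss: supervised contrastive learning with
# learnable class-wise centers. Names and hyperparameters are assumptions,
# not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PaCoStyleLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int,
                 temperature: float = 0.07, alpha: float = 0.05):
        super().__init__()
        # Parametric, learnable centers: one per class (the rebalancing part).
        self.centers = nn.Parameter(0.02 * torch.randn(num_classes, feat_dim))
        self.temperature = temperature
        # alpha down-weights sample-to-sample positives relative to the
        # sample-to-center positive (an assumed rebalancing knob).
        self.alpha = alpha

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # features: (B, D) embeddings; labels: (B,) integer class ids.
        features = F.normalize(features, dim=1)
        centers = F.normalize(self.centers, dim=1)

        # Similarities to the other samples in the batch and to all centers.
        sim_samples = features @ features.t() / self.temperature   # (B, B)
        sim_centers = features @ centers.t() / self.temperature    # (B, K)

        # Exclude self-similarity from the sample-sample block.
        b = features.size(0)
        eye = torch.eye(b, dtype=torch.bool, device=features.device)
        sim_samples = sim_samples.masked_fill(eye, -1e9)

        logits = torch.cat([sim_samples, sim_centers], dim=1)      # (B, B+K)
        log_prob = F.log_softmax(logits, dim=1)

        # Positives: same-class batch samples (weight alpha) and the
        # learnable center of the sample's own class (weight 1).
        same_class = labels.unsqueeze(1).eq(labels.unsqueeze(0)) & ~eye
        pos_weight = torch.cat(
            [self.alpha * same_class.float(),
             F.one_hot(labels, num_classes=centers.size(0)).float()], dim=1)

        # Weighted average of positive log-probabilities; the center term
        # guarantees at least one positive per sample.
        loss = -(pos_weight * log_prob).sum(dim=1) / pos_weight.sum(dim=1)
        return loss.mean()


# Usage sketch:
# loss_fn = PaCoStyleLoss(num_classes=100, feat_dim=128)
# loss = loss_fn(torch.randn(32, 128), torch.randint(0, 100, (32,)))
```

In this sketch the centers act much like classifier weights embedded in the contrastive objective: every sample always has its own class center as a positive, which rebalances the supervision across classes while same-class samples are still pulled together.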
Related papers
- Rethinking Class-Incremental Learning from a Dynamic Imbalanced Learning Perspective [5.170794699087535]
Deep neural networks suffer from catastrophic forgetting when continually learning new concepts.
We argue that the imbalance between old-task and new-task data contributes to forgetting of the old tasks.
We propose Uniform Prototype Contrastive Learning (UPCL), where uniform and compact features are learned.
arXiv Detail & Related papers (2024-05-24T02:26:37Z)
- Tuned Contrastive Learning [77.67209954169593]
We propose a novel contrastive loss function -- Tuned Contrastive Learning (TCL) loss.
TCL generalizes to multiple positives and negatives in a batch and offers parameters to tune and improve the gradient responses from hard positives and hard negatives.
We show how to extend TCL to the self-supervised setting and empirically compare it with various SOTA self-supervised learning methods.
arXiv Detail & Related papers (2023-05-18T03:26:37Z)
- Generalized Parametric Contrastive Learning [60.62901294843829]
Generalized Parametric Contrastive Learning (GPaCo/PaCo) works well on both imbalanced and balanced data.
Experiments on long-tailed benchmarks establish a new state-of-the-art for long-tailed recognition.
arXiv Detail & Related papers (2022-09-26T03:49:28Z)
- Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap [64.60460828425502]
We propose a new guarantee on the downstream performance of contrastive learning.
Our new theory hinges on the insight that the support of different intra-class samples will become more overlapped under aggressive data augmentations.
We propose an unsupervised model selection metric ARC that aligns well with downstream accuracy.
arXiv Detail & Related papers (2022-03-25T05:36:26Z)
- Rebalanced Siamese Contrastive Mining for Long-Tailed Recognition [120.80038161330623]
We show that supervised contrastive learning suffers from a dual class-imbalance problem at both the original batch and Siamese batch levels.
We propose supervised hard positive and negative pair mining to select informative pairs for contrastive computation and improve representation learning.
arXiv Detail & Related papers (2022-03-22T07:30:38Z)
- Holistic Deep Learning [3.718942345103135]
This paper presents a novel holistic deep learning framework that addresses the challenges of vulnerability to input perturbations, overparametrization, and performance instability.
The proposed framework holistically improves accuracy, robustness, sparsity, and stability over standard deep learning models.
arXiv Detail & Related papers (2021-10-29T14:46:32Z)
- Self-Damaging Contrastive Learning [92.34124578823977]
Real-world unlabeled data is commonly imbalanced and follows a long-tail distribution.
This paper proposes a principled framework called Self-Damaging Contrastive Learning (SDCLR) to automatically balance representation learning without knowing the classes.
Our experiments show that SDCLR significantly improves not only overall accuracies but also balancedness.
arXiv Detail & Related papers (2021-06-06T00:04:49Z)