Contrastive Learning based Hybrid Networks for Long-Tailed Image
Classification
- URL: http://arxiv.org/abs/2103.14267v1
- Date: Fri, 26 Mar 2021 05:22:36 GMT
- Title: Contrastive Learning based Hybrid Networks for Long-Tailed Image
Classification
- Authors: Peng Wang, Kai Han, Xiu-Shen Wei, Lei Zhang, Lei Wang
- Abstract summary: We propose a novel hybrid network structure composed of a supervised contrastive loss to learn image representations and a cross-entropy loss to learn classifiers.
Experiments on three long-tailed classification datasets demonstrate the advantage of the proposed contrastive learning based hybrid networks in long-tailed classification.
- Score: 31.647639786095993
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Learning discriminative image representations plays a vital role in
long-tailed image classification because it eases classifier learning under
class imbalance. Given the promising performance contrastive learning has
recently shown in representation learning, in this work we explore effective
supervised contrastive learning strategies and tailor them to learn better
image representations from imbalanced data, in order to boost classification
accuracy. Specifically, we propose a novel hybrid network structure composed
of a supervised contrastive loss to learn image representations and a
cross-entropy loss to learn classifiers, where training progressively
transitions from feature learning to classifier learning to embody the idea
that better features make better classifiers. We explore two variants of the
contrastive loss for feature learning, which differ in form but share the
common idea of pulling samples from the same class together in the normalized
embedding space and pushing samples from different classes apart. One is the
recently proposed supervised contrastive (SC) loss, which builds on the
state-of-the-art unsupervised contrastive loss by incorporating positive
samples from the same class. The other is a prototypical supervised
contrastive (PSC) learning strategy, which addresses the intensive memory
consumption of the standard SC loss and thus shows more promise under a
limited memory budget. Extensive experiments on three long-tailed
classification datasets demonstrate the advantage of the proposed contrastive
learning based hybrid networks in long-tailed classification.
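To make the recipe concrete, below is a minimal PyTorch sketch of the hybrid objective the abstract describes. The function names, the prototype handling, and the parabolic weighting schedule are illustrative assumptions, not the authors' released code; the only elements taken from the abstract are the SC/PSC losses and the progressive shift from feature learning to classifier learning.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """SC loss: pull same-class embeddings together, push all others apart.

    z: (N, D) L2-normalized embeddings; labels: (N,) integer class ids.
    """
    sim = z @ z.t() / temperature                        # pairwise similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    logits = sim.masked_fill(eye, float('-inf'))         # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos.sum(1)
    valid = pos_counts > 0                               # anchors with >=1 positive
    loss = -log_prob.masked_fill(~pos, 0.0).sum(1)[valid] / pos_counts[valid]
    return loss.mean()

def prototypical_sc_loss(z, labels, prototypes, temperature=0.1):
    """PSC variant: contrast each sample against one prototype per class,
    avoiding the SC loss's need for many in-batch or memory-bank positives."""
    logits = z @ F.normalize(prototypes, dim=1).t() / temperature  # (N, C)
    return F.cross_entropy(logits, labels)

def alpha_schedule(epoch, total_epochs):
    """Progressive transition: the weight on the contrastive term decays from
    1 to 0 so training shifts from feature to classifier learning.
    (A parabolic decay is one plausible choice.)"""
    return 1.0 - (epoch / total_epochs) ** 2

# Per batch, with backbone features h, projected embeddings
# z = F.normalize(projection_head(h), dim=1), and logits = classifier(h):
#   alpha = alpha_schedule(epoch, total_epochs)
#   loss = alpha * prototypical_sc_loss(z, y, prototypes) \
#        + (1 - alpha) * F.cross_entropy(logits, y)
```

With class prototypes, each sample contrasts against C fixed targets instead of all other samples, which is what lets PSC sidestep the SC loss's memory cost.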
Related papers
- Bayesian Learning-driven Prototypical Contrastive Loss for Class-Incremental Learning [42.14439854721613]
We propose a prototypical network with a Bayesian learning-driven contrastive loss (BLCL) tailored specifically for class-incremental learning scenarios.
Our approach dynamically adapts the balance between the cross-entropy and contrastive loss functions with a Bayesian learning technique.
arXiv Detail & Related papers (2024-05-17T19:49:02Z)
- LeOCLR: Leveraging Original Images for Contrastive Learning of Visual Representations [4.680881326162484]
Contrastive instance discrimination methods outperform supervised learning in downstream tasks such as image classification and object detection.
A common augmentation technique in contrastive learning is random cropping followed by resizing.
We introduce LeOCLR, a framework that employs a novel instance discrimination approach and an adapted loss function.
arXiv Detail & Related papers (2024-03-11T15:33:32Z)
- Unsupervised Feature Clustering Improves Contrastive Representation Learning for Medical Image Segmentation [18.75543045234889]
Self-supervised instance discrimination is an effective contrastive pretext task to learn feature representations and address limited medical image annotations.
We propose a new self-supervised contrastive learning method that uses unsupervised feature clustering to better select positive and negative image samples.
Our method outperforms state-of-the-art self-supervised contrastive techniques on these tasks.
arXiv Detail & Related papers (2022-11-15T22:54:29Z)
- Joint Debiased Representation and Image Clustering Learning with Self-Supervision [3.1806743741013657]
We develop a novel joint clustering and contrastive learning framework.
We adapt the debiased contrastive loss to avoid under-clustering minority classes of imbalanced datasets.
arXiv Detail & Related papers (2022-09-14T21:23:41Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL).
WCL achieves 65% and 72% ImageNet Top-1 accuracy using ResNet50, higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Improving Music Performance Assessment with Contrastive Learning [78.8942067357231]
This study investigates contrastive learning as a potential method to improve existing MPA systems.
We introduce a weighted contrastive loss suitable for regression tasks applied to a convolutional neural network.
Our results show that contrastive methods match and exceed state-of-the-art performance on MPA regression tasks.
arXiv Detail & Related papers (2021-08-03T19:24:25Z)
- Solving Inefficiency of Self-supervised Representation Learning [87.30876679780532]
Existing contrastive learning methods suffer from very low learning efficiency.
Under-clustering and over-clustering problems are major obstacles to learning efficiency.
We propose a novel self-supervised learning framework using a median triplet loss.
arXiv Detail & Related papers (2021-04-18T07:47:10Z)
- Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning [94.35586521144117]
We investigate whether applying contrastive learning to fine-tuning would bring further benefits.
We propose Contrast-regularized tuning (Core-tuning), a novel approach for fine-tuning contrastive self-supervised visual models.
arXiv Detail & Related papers (2021-02-12T16:31:24Z)
- Fully Unsupervised Person Re-identification via Selective Contrastive Learning [58.5284246878277]
Person re-identification (ReID) aims to match images of the same person captured by different cameras.
We propose a novel selective contrastive learning framework for unsupervised feature learning.
Experimental results demonstrate the superiority of our method in unsupervised person ReID compared with state-of-the-art methods.
arXiv Detail & Related papers (2020-10-15T09:09:23Z)
- Hard Negative Mixing for Contrastive Learning [29.91220669060252]
We argue that an important aspect of contrastive learning, namely the effect of hard negatives, has so far been neglected.
We propose hard negative mixing strategies at the feature level that can be computed on the fly with minimal computational overhead (see the sketch after this list).
arXiv Detail & Related papers (2020-10-02T14:34:58Z)
- ReMarNet: Conjoint Relation and Margin Learning for Small-Sample Image Classification [49.87503122462432]
We introduce a novel neural network termed Relation-and-Margin learning Network (ReMarNet).
Our method assembles two networks with different backbones to learn features that perform well under both classification mechanisms.
Experiments on four image datasets demonstrate that our approach is effective in learning discriminative features from a small set of labeled samples.
arXiv Detail & Related papers (2020-06-27T13:50:20Z)
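As a companion to the Hard Negative Mixing entry above, here is a rough sketch of what feature-level hard negative mixing can look like. The function name, the queue-based setup, and the random mixing coefficients are assumptions for illustration, not the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def mix_hard_negatives(query, queue, num_hard=64, num_synth=32):
    """Synthesize extra negatives on the fly as convex combinations of the
    hardest existing ones, entirely at the feature level.

    query: (D,) L2-normalized query embedding.
    queue: (K, D) L2-normalized negatives (e.g. a momentum-encoder queue).
    Returns (num_synth, D) synthetic hard negatives.
    """
    sims = queue @ query                        # similarity to the query
    hard = queue[sims.topk(num_hard).indices]   # hardest = most similar
    i = torch.randint(num_hard, (num_synth,))   # random pairs of hard negatives
    j = torch.randint(num_hard, (num_synth,))
    lam = torch.rand(num_synth, 1)              # random mixing coefficients
    mixed = lam * hard[i] + (1 - lam) * hard[j]
    return F.normalize(mixed, dim=1)            # project back onto the sphere
```

The synthetic points lie between real hard negatives in embedding space, so appending them to the negative set sharpens the contrastive task without any extra forward passes through the encoder.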