CoNe: Contrast Your Neighbours for Supervised Image Classification
- URL: http://arxiv.org/abs/2308.10761v1
- Date: Mon, 21 Aug 2023 14:49:37 GMT
- Title: CoNe: Contrast Your Neighbours for Supervised Image Classification
- Authors: Mingkai Zheng, Shan You, Lang Huang, Xiu Su, Fei Wang, Chen Qian,
Xiaogang Wang, Chang Xu
- Abstract summary: Contrast Your Neighbours (CoNe) is a learning framework for supervised image classification.
CoNe employs the features of its similar neighbors as anchors to generate more adaptive and refined targets.
Our CoNe achieves 80.8% Top-1 accuracy on ImageNet with ResNet-50, which surpasses the recent Timm training recipe.
- Score: 62.12074282211957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image classification is a longstanding problem in computer vision and machine
learning research. Most recent works (e.g., SupCon, Triplet, and max-margin)
mainly focus on grouping the intra-class samples aggressively and compactly,
with the assumption that all intra-class samples should be pulled tightly
towards their class centers. However, such an objective will be very hard to
achieve since it ignores the intra-class variance in the dataset (i.e.,
different instances from the same class can have significant differences).
Thus, such a monotonous objective is not sufficient. To provide a more
informative objective, we introduce Contrast Your Neighbours (CoNe) - a simple
yet practical learning framework for supervised image classification.
Specifically, in CoNe, each sample is not only supervised by its class center
but also directly employs the features of its similar neighbors as anchors to
generate more adaptive and refined targets. Moreover, to further boost the
performance, we propose "distributional consistency" as a more informative
regularization to encourage similar instances to have similar probability
distributions. Extensive experimental results demonstrate that CoNe achieves
state-of-the-art performance across different benchmark datasets, network
architectures, and settings. Notably, even without a complicated training
recipe, our CoNe achieves 80.8% Top-1 accuracy on ImageNet with ResNet-50,
which surpasses the recent Timm training recipe (80.4%). Code and pre-trained
models are available at https://github.com/mingkai-zheng/CoNe.
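As a rough, illustrative sketch of the ideas described in the abstract (not the authors' released implementation), the PyTorch snippet below combines three terms: a standard cross-entropy against the class centers, a supervised-contrastive-style term that pulls each sample towards its same-class neighbours within the batch, and a symmetric-KL "distributional consistency" term between each sample and its nearest neighbour. The function name, the nearest-neighbour choice, the temperature, and the exact loss forms are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F


def cone_style_losses(features, logits, labels, temperature=0.1):
    """Hypothetical sketch of CoNe-like loss terms (not the official code).

    features : (N, D) embeddings from the backbone
    logits   : (N, C) classifier outputs for the same batch
    labels   : (N,)   ground-truth class indices
    """
    n = features.size(0)
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t() / temperature              # pairwise similarities
    eye = torch.eye(n, dtype=torch.bool, device=feats.device)

    # (1) Neighbour-anchored term: treat same-class samples in the batch as
    #     positives, so each sample is pulled towards its similar neighbours
    #     rather than only towards a single class centre.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = F.log_softmax(sim.masked_fill(eye, float("-inf")), dim=1)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    neighbour_loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
                       / pos_counts).mean()

    # (2) "Distributional consistency": similar instances should produce
    #     similar predicted distributions, sketched here as a symmetric KL
    #     between each sample and its nearest neighbour in feature space.
    nearest = sim.masked_fill(eye, float("-inf")).argmax(dim=1)
    log_p = F.log_softmax(logits, dim=1)
    log_q = F.log_softmax(logits[nearest], dim=1)
    consistency = 0.5 * (
        F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
        + F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
    )

    # (3) Ordinary cross-entropy with the class centres as the base objective.
    ce = F.cross_entropy(logits, labels)
    return ce, neighbour_loss, consistency
```

In a training loop the three terms would typically be combined with weighting coefficients, e.g. `loss = ce + lam1 * neighbour_loss + lam2 * consistency`; the weights here are hypothetical and not taken from the paper.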
Related papers
- Meta Co-Training: Two Views are Better than One [4.050257210426548]
We present Meta Co-Training, an extension of the successful Meta Pseudo Labels approach to two views.
Our method achieves new state-of-the-art performance on ImageNet-10% with very few training resources.
arXiv Detail & Related papers (2023-11-29T21:11:58Z)
- Adaptive Prototypical Networks [2.964978357715084]
A prototypical network for few-shot learning tries to learn an embedding function in the encoder that embeds images with similar features close to one another.
We propose an approach that intuitively pushes the embeddings of each of the classes away from the others in the meta-testing phase.
This is achieved by training the encoder network for classification using the support set samples and labels of the new task.
arXiv Detail & Related papers (2022-11-22T18:45:58Z)
- Masked Unsupervised Self-training for Zero-shot Image Classification [98.23094305347709]
Masked Unsupervised Self-Training (MUST) is a new approach that leverages two different and complementary sources of supervision: pseudo-labels and raw images.
MUST improves upon CLIP by a large margin and narrows the performance gap between unsupervised and supervised classification.
arXiv Detail & Related papers (2022-06-07T02:03:06Z)
- Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap [64.60460828425502]
We propose a new guarantee on the downstream performance of contrastive learning.
Our new theory hinges on the insight that the support of different intra-class samples will become more overlapped under aggressive data augmentations.
We propose an unsupervised model selection metric ARC that aligns well with downstream accuracy.
arXiv Detail & Related papers (2022-03-25T05:36:26Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL) to tackle this issue.
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Bag of Instances Aggregation Boosts Self-supervised Learning [122.61914701794296]
We propose a simple but effective distillation strategy for unsupervised learning.
Our method, termed BINGO, aims to transfer the relationship learned by the teacher to the student.
BINGO achieves new state-of-the-art performance on small-scale models.
arXiv Detail & Related papers (2021-07-04T17:33:59Z)
- SimPLE: Similar Pseudo Label Exploitation for Semi-Supervised Classification [24.386165255835063]
A common situation in classification tasks is that a large amount of data is available for training, but only a small portion of it has class labels.
The goal of semi-supervised training, in this context, is to improve classification accuracy by leveraging information from the large amount of unlabeled data.
We propose a novel unsupervised objective that focuses on the less-studied relationship between high-confidence unlabeled data points that are similar to each other.
Our proposed SimPLE algorithm shows significant performance gains over previous algorithms on CIFAR-100 and Mini-ImageNet, and is on par with state-of-the-art methods.
arXiv Detail & Related papers (2021-03-30T23:48:06Z)
- Self-Supervised Classification Network [3.8073142980733]
A self-supervised end-to-end classification neural network learns labels and representations simultaneously.
It is the first unsupervised end-to-end classification network to perform well on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2021-03-19T19:29:42Z)
- Unsupervised Feature Learning by Cross-Level Instance-Group Discrimination [68.83098015578874]
We integrate between-instance similarity into contrastive learning, not directly by instance grouping, but by cross-level discrimination.
CLD effectively brings unsupervised learning closer to natural data and real-world applications.
CLD sets a new state-of-the-art on self-supervision, semi-supervision, and transfer learning benchmarks, and beats MoCo v2 and SimCLR on every reported benchmark.
arXiv Detail & Related papers (2020-08-09T21:13:13Z)