Enhancing Contrastive Learning with Efficient Combinatorial Positive
Pairing
- URL: http://arxiv.org/abs/2401.05730v1
- Date: Thu, 11 Jan 2024 08:18:30 GMT
- Title: Enhancing Contrastive Learning with Efficient Combinatorial Positive
Pairing
- Authors: Jaeill Kim, Duhun Hwang, Eunjung Lee, Jangwon Suh, Jimyeong Kim,
Wonjong Rhee
- Abstract summary: We propose a general multi-view strategy that can improve learning speed and performance of any contrastive or non-contrastive method.
In the case of ImageNet-100, ECPP-boosted SimCLR outperforms supervised learning.
- Score: 2.7961972519572442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the past few years, contrastive learning has played a central role for the
success of visual unsupervised representation learning. Around the same time,
high-performance non-contrastive learning methods have been developed as well.
existing multi-view methods. While most works utilize only two views, we propose a
general multi-view strategy that can
improve the learning speed and performance of any contrastive or non-contrastive
method. We first analyze CMC's full-graph paradigm and empirically show that
the learning speed of $K$-views can be increased by ${}_{K}\mathrm{C}_{2}$ times
for small learning rates and early training. Then, we upgrade CMC's full-graph
by mixing views created by a crop-only augmentation, adopting small-size views
as in SwAV multi-crop, and modifying the negative sampling. The resulting
multi-view strategy is called ECPP (Efficient Combinatorial Positive Pairing).
We investigate the effectiveness of ECPP by applying it to SimCLR and assessing
the linear evaluation performance for CIFAR-10 and ImageNet-100. For each
benchmark, we achieve state-of-the-art performance. In the case of ImageNet-100,
ECPP-boosted SimCLR outperforms supervised learning.
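The combinatorial pairing idea in the abstract can be sketched concretely: with $K$ augmented views per image, every one of the ${}_{K}\mathrm{C}_{2} = K(K-1)/2$ view pairs contributes a positive pair to an InfoNCE-style loss, instead of the single pair used by two-view SimCLR. The NumPy sketch below is illustrative only (function names are mine, not the authors'); it omits ECPP's crop-only view mixing, small-size views, and modified negative sampling.

```python
import itertools
import math
import numpy as np

def info_nce(z_a, z_b, temperature=0.5):
    """InfoNCE loss between two batches of L2-normalized view embeddings."""
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    idx = np.arange(len(z_a))                    # positives sit on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[idx, idx].mean()

def combinatorial_pairing_loss(views, temperature=0.5):
    """Average InfoNCE over all C(K, 2) pairs of the K views."""
    pairs = list(itertools.combinations(range(len(views)), 2))
    assert len(pairs) == math.comb(len(views), 2)
    return sum(info_nce(views[i], views[j], temperature)
               for i, j in pairs) / len(pairs)

# Example: K = 4 views of a batch of N = 8 images -> 6 positive pairs each.
rng = np.random.default_rng(0)
K, N, D = 4, 8, 16
views = [v / np.linalg.norm(v, axis=1, keepdims=True)
         for v in rng.normal(size=(K, N, D))]
loss = combinatorial_pairing_loss(views)
```

Since every gradient step sees $K(K-1)/2$ positive pairs instead of one, this is the source of the claimed learning-speed multiplier at small learning rates, where the per-pair gradients approximately add up.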
Related papers
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can be used to mitigate the data-hungry training needs of Vision Transformer networks.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- ScoreCL: Augmentation-Adaptive Contrastive Learning via Score-Matching Function [14.857965612960475]
Self-supervised contrastive learning (CL) has achieved state-of-the-art performance in representation learning.
We show the generality of our method, referred to as ScoreCL, by consistently improving various CL methods.
arXiv Detail & Related papers (2023-06-07T05:59:20Z)
- Weighted Ensemble Self-Supervised Learning [67.24482854208783]
Ensembling has proven to be a powerful technique for boosting model performance.
We develop a framework that permits data-dependent weighted cross-entropy losses.
Our method outperforms the baselines on multiple evaluation metrics on ImageNet-1K.
arXiv Detail & Related papers (2022-11-18T02:00:17Z)
- Crafting Better Contrastive Views for Siamese Representation Learning [20.552194081238248]
We propose ContrastiveCrop, which effectively generates better crops for Siamese representation learning.
A semantic-aware object localization strategy is proposed within the training process in a fully unsupervised manner.
As a plug-and-play and framework-agnostic module, ContrastiveCrop consistently improves SimCLR, MoCo, BYOL, and SimSiam by 0.4% to 2.0% in classification accuracy.
arXiv Detail & Related papers (2022-02-07T15:09:00Z)
- QK Iteration: A Self-Supervised Representation Learning Algorithm for Image Similarity [0.0]
We present a new contrastive self-supervised representation learning algorithm in the context of Copy Detection in the 2021 Image Similarity Challenge hosted by Facebook AI Research.
Our algorithms achieved a micro-AP score of 0.3401 on the Phase 1 leaderboard, significantly improving over the baseline $\mu$AP of 0.1556.
arXiv Detail & Related papers (2021-11-15T18:01:05Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL) to tackle this issue.
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- Beyond Single Instance Multi-view Unsupervised Representation Learning [21.449132256091662]
We impose more accurate instance discrimination capability by measuring the joint similarity between two randomly sampled instances.
We believe that learning joint similarity helps to improve the performance when encoded features are distributed more evenly in the latent space.
arXiv Detail & Related papers (2020-11-26T15:43:27Z)
- Dense Contrastive Learning for Self-Supervised Visual Pre-Training [102.15325936477362]
We present dense contrastive learning, which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images.
Compared to the baseline method MoCo-v2, our method introduces negligible computation overhead (only 1% slower).
arXiv Detail & Related papers (2020-11-18T08:42:32Z)
- A Simple Framework for Contrastive Learning of Visual Representations [116.37752766922407]
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
We show that composition of data augmentations plays a critical role in defining effective predictive tasks.
We are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet.
arXiv Detail & Related papers (2020-02-13T18:50:45Z)
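For reference, the two-view SimCLR loss that ECPP generalizes is NT-Xent: each view's positive is its augmented counterpart, while the other 2N - 2 embeddings in the batch serve as in-batch negatives. A minimal NumPy sketch (illustrative names; not the reference implementation):

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent over a batch: concatenate both views, exclude self-similarity,
    and contrast each embedding against its counterpart vs. all others."""
    z = np.concatenate([z1, z2])                     # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature                      # (2N, 2N) similarities
    np.fill_diagonal(sim, -np.inf)                   # drop self-pairs
    n = len(z1)
    positives = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(2 * n), positives].mean()

# Example: batch of N = 8 images, embedding dimension 16.
rng = np.random.default_rng(1)
z1, z2 = rng.normal(size=(2, 8, 16))
loss = nt_xent(z1, z2)
```

The combinatorial strategy of the main paper applies this same loss over every pair drawn from K views rather than a single fixed pair.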
This list is automatically generated from the titles and abstracts of the papers in this site.