Revisiting Contrastive Learning through the Lens of Neighborhood
Component Analysis: an Integrated Framework
- URL: http://arxiv.org/abs/2112.04468v1
- Date: Wed, 8 Dec 2021 18:54:11 GMT
- Title: Revisiting Contrastive Learning through the Lens of Neighborhood
Component Analysis: an Integrated Framework
- Authors: Ching-Yun Ko, Jeet Mohapatra, Sijia Liu, Pin-Yu Chen, Luca Daniel,
Lily Weng
- Abstract summary: We show a new methodology to design integrated contrastive losses that could simultaneously achieve good accuracy and robustness on downstream tasks.
With the integrated framework, we achieve up to a 6% improvement in standard accuracy and a 17% improvement in adversarial accuracy.
- Score: 70.84906094606072
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a seminal tool in self-supervised representation learning, contrastive
learning has gained unprecedented attention in recent years. In essence,
contrastive learning aims to leverage pairs of positive and negative samples
for representation learning, which relates to exploiting neighborhood
information in a feature space. By investigating the connection between
contrastive learning and neighborhood component analysis (NCA), we provide a
novel stochastic nearest neighbor viewpoint of contrastive learning and
subsequently propose a series of contrastive losses that outperform the
existing ones. Under our proposed framework, we show a new methodology to
design integrated contrastive losses that could simultaneously achieve good
accuracy and robustness on downstream tasks. With the integrated framework, we
achieve up to a 6% improvement in standard accuracy and a 17% improvement in
adversarial accuracy.
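The abstract's stochastic nearest-neighbor viewpoint can be made concrete: NCA defines, for each point, a softmax distribution over candidate neighbors based on feature similarity, and a contrastive loss then rewards probability mass placed on positives. The sketch below is only an illustration of that connection, not the paper's proposed loss; the function names and the dot-product similarity are our own choices.

```python
import numpy as np

def nca_neighbor_probs(z, i, temperature=1.0):
    """Stochastic nearest-neighbor distribution of NCA: the probability
    that point i selects each other point j as its neighbor, given by a
    softmax over feature-space similarities."""
    sims = z @ z[i] / temperature              # similarity of i to all points
    sims[i] = -np.inf                          # a point never selects itself
    exp = np.exp(sims - np.max(sims[np.isfinite(sims)]))  # stable softmax
    return exp / exp.sum()

def contrastive_nca_loss(z, i, positives):
    """NCA-style contrastive loss for anchor i: the negative log of the
    probability mass assigned to its positive neighbors."""
    p = nca_neighbor_probs(z, i)
    return -np.log(p[positives].sum())
```

Minimizing this loss pulls positives toward the anchor and pushes all other (negative) points away, which is exactly the positive/negative pair mechanism described above, rephrased as a neighbor-selection probability.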
Related papers
- Time-Series Contrastive Learning against False Negatives and Class Imbalance [17.43801009251228]
We conduct a theoretical analysis and find that prior work has overlooked two fundamental issues inherent in the InfoNCE loss-based framework: false negatives and class imbalance.
We introduce a straightforward modification, grounded in the SimCLR framework, that applies universally to models engaged in the instance discrimination task.
We perform semi-supervised consistency classification and enhance the representative ability of minority classes.
arXiv Detail & Related papers (2023-12-19T08:38:03Z) - Learning Transferable Adversarial Robust Representations via Multi-view
Consistency [57.73073964318167]
We propose a novel meta-adversarial multi-view representation learning framework with dual encoders.
We demonstrate the effectiveness of our framework on few-shot learning tasks from unseen domains.
arXiv Detail & Related papers (2022-10-19T11:48:01Z) - Contrastive Bayesian Analysis for Deep Metric Learning [30.21464199249958]
We develop a contrastive Bayesian analysis to characterize and model the posterior probabilities of image labels conditioned on their feature similarity.
This contrastive Bayesian analysis leads to a new loss function for deep metric learning.
Our experimental results and ablation studies demonstrate that the proposed contrastive Bayesian metric learning method significantly improves the performance of deep metric learning.
arXiv Detail & Related papers (2022-10-10T02:24:21Z) - PointACL: Adversarial Contrastive Learning for Robust Point Clouds
Representation under Adversarial Attack [73.3371797787823]
Adversarial contrastive learning (ACL) is considered an effective way to improve the robustness of pre-trained models.
We present a robustness-aware loss function to adversarially train a self-supervised contrastive learning framework.
We validate our method, PointACL, on downstream tasks, including 3D classification and 3D segmentation, with multiple datasets.
arXiv Detail & Related papers (2022-09-14T22:58:31Z) - Robust Contrastive Learning against Noisy Views [79.71880076439297]
We propose a new contrastive loss function that is robust against noisy views.
We show that our approach provides consistent improvements over the state-of-the-art image, video, and graph contrastive learning benchmarks.
arXiv Detail & Related papers (2022-01-12T05:24:29Z) - Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection method for self-supervised contrastive learning.
We discuss two strategies for explicitly removing the detected false negatives during contrastive learning.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks within a limited compute budget.
arXiv Detail & Related papers (2021-06-07T15:29:14Z) - Contrastive Domain Adaptation [4.822598110892847]
We propose to extend contrastive learning to a new domain adaptation setting.
Contrastive learning learns by comparing and contrasting positive and negative pairs of samples in an unsupervised setting.
We have developed a variation of a recently proposed contrastive learning framework that helps tackle the domain adaptation problem.
arXiv Detail & Related papers (2021-03-26T13:55:19Z) - Towards Robust Graph Contrastive Learning [7.193373053157517]
We introduce a new method that increases the adversarial robustness of the learned representations.
We evaluate the learned representations in a preliminary set of experiments, obtaining promising results.
arXiv Detail & Related papers (2021-02-25T18:55:15Z) - Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z) - On Mutual Information in Contrastive Learning for Visual Representations [19.136685699971864]
Unsupervised "contrastive" learning algorithms in vision have been shown to learn representations that perform remarkably well on transfer tasks.
We show that this family of algorithms maximizes a lower bound on the mutual information between two or more "views" of an image.
We find that the choice of negative samples and views is critical to the success of these algorithms.
arXiv Detail & Related papers (2020-05-27T04:21:53Z)
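The mutual-information bound referenced in the last entry above can be stated concretely: for a batch of N paired views trained with InfoNCE, I(view1; view2) >= log N - L_InfoNCE, so minimizing the loss tightens a lower bound on the mutual information. The following NumPy sketch is illustrative only; the function names and the temperature value are our own, not from any of the listed papers.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss over a batch of paired views: z1[i] and z2[i] embed two
    views of the same image, and every other pair in the batch is a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # cosine similarity
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                      # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                   # positives on diagonal

def mi_lower_bound(z1, z2, temperature=0.1):
    """The InfoNCE bound: I(view1; view2) >= log N - L_InfoNCE."""
    n = len(z1)
    return np.log(n) - info_nce(z1, z2, temperature)
```

Because the loss is always non-negative, the bound can never exceed log N, which is the well-known saturation limit of InfoNCE-style mutual-information estimators.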
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.