Robust Contrastive Learning Using Negative Samples with Diminished Semantics
- URL: http://arxiv.org/abs/2110.14189v1
- Date: Wed, 27 Oct 2021 05:38:00 GMT
- Title: Robust Contrastive Learning Using Negative Samples with Diminished Semantics
- Authors: Songwei Ge, Shlok Mishra, Haohan Wang, Chun-Liang Li, David Jacobs
- Abstract summary: We show that by generating carefully designed negative samples, contrastive learning can learn more robust representations.
We develop two methods, texture-based and patch-based augmentations, to generate negative samples.
We also analyze our method and the generated texture-based samples, showing that texture features are indispensable in classifying particular ImageNet classes.
- Score: 23.38896719740166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised learning has recently made exceptional progress because of the
development of more effective contrastive learning methods. However, CNNs are
prone to depend on low-level features that humans deem non-semantic. This
dependency has been conjectured to induce a lack of robustness to image
perturbations or domain shift. In this paper, we show that by generating
carefully designed negative samples, contrastive learning can learn more robust
representations with less dependence on such features. Contrastive learning
utilizes positive pairs that preserve semantic information while perturbing
superficial features in the training images. Similarly, we propose to generate
negative samples in a reversed way, where only the superfluous instead of the
semantic features are preserved. We develop two methods, texture-based and
patch-based augmentations, to generate negative samples. These samples achieve
better generalization, especially under out-of-domain settings. We also analyze
our method and the generated texture-based samples, showing that texture
features are indispensable in classifying particular ImageNet classes and
especially finer classes. We also show that model bias favors texture and shape
features differently under different test settings. Our code, trained models,
and ImageNet-Texture dataset can be found at
https://github.com/SongweiGe/Contrastive-Learning-with-Non-Semantic-Negatives.
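To make the construction concrete, here is a minimal sketch of the patch-based idea: shuffling an image's patch grid destroys its global semantics while preserving its local texture statistics, and the embedding of the shuffled view can be appended to the negatives of an InfoNCE-style loss. The patch size, temperature, and function names below are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

```python
import torch
import torch.nn.functional as F

def patch_shuffle(images: torch.Tensor, patch_size: int = 32) -> torch.Tensor:
    """Randomly permute the patch grid of each image (assumes H and W are
    divisible by patch_size). Global semantics are destroyed; local texture survives."""
    b, c, h, w = images.shape
    gh, gw = h // patch_size, w // patch_size
    patches = (images
               .unfold(2, patch_size, patch_size)   # (B, C, gh, W, ps)
               .unfold(3, patch_size, patch_size)   # (B, C, gh, gw, ps, ps)
               .permute(0, 2, 3, 1, 4, 5)
               .reshape(b, gh * gw, c, patch_size, patch_size))
    perm = torch.randperm(gh * gw, device=images.device)
    patches = patches[:, perm]                      # shuffle the grid
    return (patches
            .reshape(b, gh, gw, c, patch_size, patch_size)
            .permute(0, 3, 1, 4, 2, 5)
            .reshape(b, c, h, w))

def loss_with_nonsemantic_negatives(q, k_pos, k_shuf, queue, t=0.2):
    """InfoNCE with the patch-shuffled view's embedding (k_shuf) appended to
    the negatives. q, k_pos, k_shuf: (B, D); queue: (K, D); all L2-normalized."""
    l_pos = (q * k_pos).sum(-1, keepdim=True)                    # (B, 1)
    l_neg = torch.cat([q @ queue.t(),
                       (q * k_shuf).sum(-1, keepdim=True)], dim=1)
    logits = torch.cat([l_pos, l_neg], dim=1) / t
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```

The shuffled view of the same image is a hard negative precisely because it shares the anchor's low-level statistics, so pushing it away discourages the encoder from relying on texture shortcuts.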
Related papers
- Generating Enhanced Negatives for Training Language-Based Object Detectors [86.1914216335631]
We propose to leverage the vast knowledge built into modern generative models to automatically build negatives that are more relevant to the original data.
Specifically, we use large language models to generate negative text descriptions, and text-to-image diffusion models to generate corresponding negative images.
Our experimental analysis confirms the relevance of the generated negative data, and its use in language-based detectors improves performance on two complex benchmarks.
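As a hedged sketch of this recipe (the model choice and the caption-editing step are assumptions for illustration, not the paper's exact pipeline):

```python
from diffusers import StableDiffusionPipeline

def make_negative_image(negative_caption: str):
    """Render an image for an LLM-proposed negative caption, e.g. one that
    swaps the object in "a red car" to yield "a red bus"."""
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    return pipe(negative_caption).images[0]
```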
arXiv Detail & Related papers (2023-12-29T23:04:00Z)
- Unrestricted Adversarial Samples Based on Non-semantic Feature Clusters Substitution [1.8782750537161608]
We introduce "unrestricted" perturbations that create adversarial samples by using spurious relations learned by model training.
Specifically, we find feature clusters in non-semantic features that are strongly correlated with model judgment results.
We create adversarial samples by using them to replace the corresponding feature clusters in the target image.
arXiv Detail & Related papers (2022-08-31T07:42:36Z)
- Exploring Negatives in Contrastive Learning for Unpaired Image-to-Image Translation [12.754320302262533]
We introduce a new negative Pruning technique for Unpaired image-to-image Translation (PUT) by sparsifying and ranking the patches.
The proposed algorithm is efficient, flexible, and enables the model to stably learn essential information between corresponding patches.
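As an illustration of pruning by ranking (the keep-top-k rule below is a hypothetical simplification, not PUT's actual selection criterion):

```python
import torch

def prune_patch_negatives(query: torch.Tensor, negatives: torch.Tensor, k: int = 64):
    """query: (D,) anchor-patch embedding; negatives: (N, D) candidate patches.
    Rank candidates by similarity and keep only the k most informative ones."""
    scores = negatives @ query
    keep = scores.topk(min(k, negatives.size(0))).indices
    return negatives[keep]                 # sparsified negative set
```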
arXiv Detail & Related papers (2022-04-23T08:31:18Z)
- Negative Evidence Matters in Interpretable Histology Image Classification [22.709305584896295]
Weakly-supervised learning methods allow CNN classifiers to jointly classify an image and yield the regions of interest associated with the predicted class.
This problem is known to be more challenging with histology images than with natural ones.
We propose a simple yet efficient method based on a composite loss function that leverages information from the fully negative samples.
arXiv Detail & Related papers (2022-01-07T13:26:18Z)
- Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection for self-supervised contrastive learning.
We discuss two strategies to explicitly remove the detected false negatives during contrastive learning.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks within a limited compute budget.
arXiv Detail & Related papers (2021-06-07T15:29:14Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses successfully constrain the clustering results of mini-batch samples at both the sample and class levels.
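A rough sketch of such a doubly contrastive objective (shapes, temperature, and normalization are assumed; this is not the paper's exact formulation). Rows of the B x C prediction matrix give the sample view; columns give the class view:

```python
import torch
import torch.nn.functional as F

def doubly_contrastive_loss(p1: torch.Tensor, p2: torch.Tensor, t: float = 0.5):
    """p1, p2: (B, C) soft class distributions for two views of one mini-batch."""
    # Sample view: row i of p1 and row i of p2 come from the same image;
    # all other rows act as negatives.
    z1, z2 = F.normalize(p1, dim=1), F.normalize(p2, dim=1)
    loss_sample = F.cross_entropy(z1 @ z2.t() / t,
                                  torch.arange(p1.size(0), device=p1.device))
    # Class view: column j collects the batch's evidence for class j;
    # the other columns act as negatives.
    c1, c2 = F.normalize(p1.t(), dim=1), F.normalize(p2.t(), dim=1)
    loss_class = F.cross_entropy(c1 @ c2.t() / t,
                                 torch.arange(p1.size(1), device=p1.device))
    return loss_sample + loss_class
```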
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- An Empirical Study of the Collapsing Problem in Semi-Supervised 2D Human Pose Estimation [80.02124918255059]
Semi-supervised learning aims to boost the accuracy of a model by exploring unlabeled images.
We train two networks that mutually teach each other.
The more reliable predictions on easy images in each network are used to teach the other network to learn about the corresponding hard images.
arXiv Detail & Related papers (2020-11-25T03:29:52Z)
- Boosting Contrastive Self-Supervised Learning with False Negative Cancellation [40.71224235172881]
A fundamental problem in contrastive learning is mitigating the effects of false negatives.
We propose novel approaches to identify false negatives, as well as two strategies to mitigate their effect.
Our method exhibits consistent improvements over existing contrastive learning-based methods.
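One simple way to realize cancellation (the similarity threshold below is an assumed identification heuristic, not the paper's actual strategy): suspected false negatives are masked out of the InfoNCE denominator.

```python
import torch
import torch.nn.functional as F

def nce_cancel_false_negatives(q, k_pos, negatives, t=0.1, thresh=0.7):
    """q, k_pos: (B, D); negatives: (K, D); all L2-normalized. Negatives too
    similar to the anchor are suspected false negatives and masked out."""
    l_pos = (q * k_pos).sum(-1, keepdim=True)            # (B, 1)
    sim = q @ negatives.t()                              # (B, K)
    # -inf logits contribute exp(-inf) = 0 to the softmax denominator
    l_neg = sim.masked_fill(sim >= thresh, float('-inf'))
    logits = torch.cat([l_pos, l_neg], dim=1) / t
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```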
arXiv Detail & Related papers (2020-11-23T22:17:21Z)
- Shape-Texture Debiased Neural Network Training [50.6178024087048]
Convolutional Neural Networks are often biased towards either texture or shape, depending on the training dataset.
We develop an algorithm for shape-texture debiased learning.
Experiments show that our method successfully improves model performance on several image recognition benchmarks.
arXiv Detail & Related papers (2020-10-12T19:16:12Z)
- Delving into Inter-Image Invariance for Unsupervised Visual Representations [108.33534231219464]
We present a study to better understand the role of inter-image invariance learning.
Online labels converge faster than offline labels.
Semi-hard negative samples are more reliable and unbiased than hard negative samples.
arXiv Detail & Related papers (2020-08-26T17:44:23Z)
- SCE: Scalable Network Embedding from Sparsest Cut [20.08464038805681]
Large-scale network embedding learns a latent representation for each node in an unsupervised manner.
A key to the success of such contrastive learning methods is how positive and negative samples are drawn.
In this paper, we propose SCE for unsupervised network embedding only using negative samples for training.
arXiv Detail & Related papers (2020-06-30T03:18:15Z)