Intriguing Properties of Contrastive Losses
- URL: http://arxiv.org/abs/2011.02803v3
- Date: Sat, 23 Oct 2021 18:25:17 GMT
- Title: Intriguing Properties of Contrastive Losses
- Authors: Ting Chen and Calvin Luo and Lala Li
- Abstract summary: We study three intriguing properties of contrastive learning.
We study whether instance-based contrastive learning can learn well on images with multiple objects present.
We show that, for contrastive learning, a few bits of easy-to-learn shared features can suppress, and even fully prevent, the learning of other sets of competing features.
- Score: 12.953112189125411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study three intriguing properties of contrastive learning. First, we
generalize the standard contrastive loss to a broader family of losses, and we
find that various instantiations of the generalized loss perform similarly
under the presence of a multi-layer non-linear projection head. Second, we
study whether instance-based contrastive learning (with a global image
representation) can learn well on images with multiple objects present. We find
that meaningful hierarchical local features can be learned despite the fact
that these objectives operate on global instance-level features. Finally, we
study the phenomenon of feature suppression among competing features shared
across augmented views, such as "color distribution" vs "object class". We
construct datasets with explicit and controllable competing features, and show
that, for contrastive learning, a few bits of easy-to-learn shared features can
suppress, and even fully prevent, the learning of other sets of competing
features. When multiple objects are present in an image, the dominant object can
suppress the learning of smaller objects. Existing contrastive learning methods
critically rely on data augmentation to favor certain sets of features over
others, and can suffer from learning saturation in scenarios where existing
augmentations cannot fully address the feature suppression. This poses open
challenges to existing contrastive learning techniques.
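The first of these properties concerns the standard contrastive objective paired with a multi-layer non-linear projection head. As a point of reference, the sketch below shows that standard setup in PyTorch: an NT-Xent (normalized temperature-scaled cross entropy) loss over two augmented views, plus a two-layer non-linear head. The feature dimensions, temperature, and module names are illustrative assumptions, not the paper's generalized loss family.

```python
# Minimal sketch of the standard contrastive setup the paper generalizes:
# a non-linear projection head plus an NT-Xent loss over two augmented views.
# Feature sizes and the temperature below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Two-layer non-linear MLP applied on top of the encoder output."""
    def __init__(self, in_dim=2048, hidden_dim=2048, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, h):
        return self.net(h)

def nt_xent_loss(z1, z2, temperature=0.1):
    """Standard contrastive (NT-Xent) loss for two augmented views.

    z1, z2: (N, D) projected features of the two views of the same N images.
    Each example's positive is its other view; the remaining 2N - 2 examples
    in the batch serve as negatives.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # (2N, 2N) similarities
    n = z1.shape[0]
    # An example must never be its own negative, so mask the diagonal.
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))
    # The positive of row i is row i + n (first half) or i - n (second half).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

Per the abstract, it is the presence of such a non-linear head that makes the various instantiations of the generalized loss perform similarly.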
Related papers
- High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning [54.86882315023791]
We propose an innovative approach called High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning (HDAFL).
HDAFL utilizes multiple convolutional kernels to automatically learn discriminative regions highly correlated with attributes in images.
We also introduce a Transformer-based attribute discrimination encoder to enhance the discriminative capability among attributes.
arXiv Detail & Related papers (2024-04-07T13:17:47Z) - Matching Multiple Perspectives for Efficient Representation Learning [0.0]
We present an approach that combines self-supervised learning with a multi-perspective matching technique.
We show that the availability of multiple views of the same object combined with a variety of self-supervised pretraining algorithms can lead to improved object classification performance.
arXiv Detail & Related papers (2022-08-16T10:33:13Z) - Self-Supervised Visual Representation Learning with Semantic Grouping [50.14703605659837]
We tackle the problem of learning visual representations from unlabeled scene-centric data.
We propose contrastive learning from data-driven semantic slots, namely SlotCon, for joint semantic grouping and representation learning.
arXiv Detail & Related papers (2022-05-30T17:50:59Z) - Towards Self-Supervised Learning of Global and Object-Centric
Representations [4.36572039512405]
We discuss key aspects of learning structured object-centric representations with self-supervision.
We validate our insights through several experiments on the CLEVR dataset.
arXiv Detail & Related papers (2022-03-11T15:18:47Z) - Object-aware Contrastive Learning for Debiased Scene Representation [74.30741492814327]
We develop a novel object-aware contrastive learning framework that localizes objects in a self-supervised manner.
We also introduce two data augmentations based on ContraCAM, object-aware random crop and background mixup, which reduce contextual and background biases during contrastive self-supervised learning.
arXiv Detail & Related papers (2021-07-30T19:24:07Z) - Contrastive Learning based Hybrid Networks for Long-Tailed Image
Classification [31.647639786095993]
We propose a novel hybrid network structure composed of a supervised contrastive loss to learn image representations and a cross-entropy loss to learn classifiers.
Experiments on three long-tailed classification datasets demonstrate the advantage of the proposed contrastive learning based hybrid networks in long-tailed classification.
arXiv Detail & Related papers (2021-03-26T05:22:36Z) - Hard Negative Mixing for Contrastive Learning [29.91220669060252]
We argue that an important aspect of contrastive learning, i.e., the effect of hard negatives, has so far been neglected.
We propose hard negative mixing strategies at the feature level, that can be computed on-the-fly with a minimal computational overhead.
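The feature-level mixing described above can be pictured with a rough sketch (my own simplification under assumed names and sizes, not the authors' exact recipe): synthetic negatives are created on the fly by convexly combining the hardest negatives for each query.

```python
# Rough sketch of feature-level hard negative mixing (a simplification,
# not the authors' exact method). `queue` stands in for a memory bank of
# L2-normalized negative features; sizes are illustrative assumptions.
import torch
import torch.nn.functional as F

def mix_hard_negatives(query, queue, num_hard=64, num_synthetic=32):
    """Synthesize extra negatives for one L2-normalized query vector.

    query: (D,) query feature; queue: (K, D) bank of negative features.
    """
    # 1. Rank the bank by similarity to the query and keep the hardest entries.
    sims = queue @ query                       # (K,)
    hard = queue[sims.topk(num_hard).indices]  # (num_hard, D)

    # 2. Convexly mix random pairs of hard negatives with random weights.
    i = torch.randint(0, num_hard, (num_synthetic,))
    j = torch.randint(0, num_hard, (num_synthetic,))
    alpha = torch.rand(num_synthetic, 1)
    mixed = alpha * hard[i] + (1.0 - alpha) * hard[j]

    # 3. Re-normalize so the synthetic negatives lie on the unit sphere.
    return F.normalize(mixed, dim=1)
```

The synthetic negatives are simply appended to the real ones when forming the contrastive logits, which is why the overhead stays minimal.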
arXiv Detail & Related papers (2020-10-02T14:34:58Z) - What Should Not Be Contrastive in Contrastive Learning [110.14159883496859]
We introduce a contrastive learning framework which does not require prior knowledge of specific, task-dependent invariances.
Our model learns to capture varying and invariant factors for visual representations by constructing separate embedding spaces.
We use a multi-head network with a shared backbone which captures information across each augmentation and alone outperforms all baselines on downstream tasks.
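A rough sketch of such a multi-head design, under my own assumptions about the head structure and the augmentation families (not the authors' exact architecture):

```python
# Sketch of a shared backbone with one projection head per augmentation family,
# so each embedding space can stay sensitive to one factor (e.g. color) while
# being invariant to the others. Names and sizes are illustrative assumptions.
import torch.nn as nn

class MultiHeadEmbedder(nn.Module):
    def __init__(self, backbone, feat_dim=2048, embed_dim=128,
                 aug_names=("crop", "color", "rotation")):
        super().__init__()
        self.backbone = backbone  # any encoder producing (N, feat_dim) features
        self.heads = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(feat_dim, feat_dim),
                nn.ReLU(inplace=True),
                nn.Linear(feat_dim, embed_dim),
            )
            for name in aug_names
        })

    def forward(self, x):
        h = self.backbone(x)
        # Shared feature plus one embedding per augmentation-specific space.
        return h, {name: head(h) for name, head in self.heads.items()}
```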
arXiv Detail & Related papers (2020-08-13T03:02:32Z) - Learning the Redundancy-free Features for Generalized Zero-Shot Object
Recognition [28.08885682748527]
Zero-shot object recognition aims to transfer object recognition ability across semantically related categories.
In this paper, we learn redundancy-free features for generalized zero-shot learning.
The results show that our redundancy-free feature based generalized zero-shot learning (RFF-GZSL) approach achieves competitive results compared with state-of-the-art methods.
arXiv Detail & Related papers (2020-06-16T05:53:25Z) - Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z) - Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)