Adaptive Soft Contrastive Learning
- URL: http://arxiv.org/abs/2207.11163v1
- Date: Fri, 22 Jul 2022 16:01:07 GMT
- Title: Adaptive Soft Contrastive Learning
- Authors: Chen Feng, Ioannis Patras
- Abstract summary: This paper proposes an adaptive method that introduces soft inter-sample relations, namely Adaptive Soft Contrastive Learning (ASCL).
As an effective and concise plug-in module for existing self-supervised learning frameworks, ASCL achieves the best performance on several benchmarks.
- Score: 19.45520684918576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning has recently achieved great success in
representation learning without human annotations. The dominant method,
contrastive learning, is generally based on instance discrimination tasks,
i.e., individual samples are treated as independent categories. However,
presuming all the samples are different contradicts the natural grouping of
similar samples in common visual datasets, e.g., multiple views of the same
dog. To bridge the gap, this paper proposes an adaptive method that introduces
soft inter-sample relations, namely Adaptive Soft Contrastive Learning (ASCL).
More specifically, ASCL transforms the original instance discrimination task
into a multi-instance soft discrimination task, and adaptively introduces
inter-sample relations. As an effective and concise plug-in module for existing
self-supervised learning frameworks, ASCL achieves the best results on several
benchmarks in terms of both performance and efficiency. Code is
available at https://github.com/MrChenFeng/ASCL_ICPR2022.
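For a concrete picture of the idea, below is a minimal PyTorch sketch of a soft-target contrastive loss in the spirit of ASCL: instance-discrimination targets are softened with an inter-sample relation distribution, weighted by how confident that distribution is. The function name, temperatures, and the entropy-based confidence weighting are illustrative assumptions, not the paper's exact formulation; the authors' official implementation is at the repository above.
```python
# Illustrative sketch only; not the official ASCL implementation.
import torch
import torch.nn.functional as F


def soft_contrastive_loss(query, key, temperature=0.1, relation_temperature=0.05):
    """Cross-entropy between view similarities and adaptively softened targets.

    query, key: embeddings of two augmented views of the same batch, shape (N, D).
    """
    q = F.normalize(query, dim=1)
    k = F.normalize(key, dim=1)

    # Instance-discrimination logits: each query should match its own key.
    logits = q @ k.t() / temperature                      # (N, N)
    hard_targets = torch.eye(q.size(0), device=q.device)  # one-hot labels

    # Inter-sample relations estimated from key-key similarities
    # (detached so the soft targets receive no gradient).
    with torch.no_grad():
        rel = (k @ k.t()) / relation_temperature
        rel.fill_diagonal_(float("-inf"))   # exclude self-similarity
        rel = rel.softmax(dim=1)            # neighbour distribution per sample

        # Adaptive weight: trust the neighbour distribution more when it is
        # confident (low entropy). This is a plausible proxy, assumed here
        # for illustration rather than taken from the paper.
        entropy = -(rel * rel.clamp_min(1e-12).log()).sum(dim=1)
        max_entropy = torch.log(torch.tensor(rel.size(1) - 1.0, device=q.device))
        confidence = (1.0 - entropy / max_entropy).clamp(0.0, 1.0)  # (N,)

        soft_targets = (1 - confidence).unsqueeze(1) * hard_targets \
            + confidence.unsqueeze(1) * rel

    log_prob = logits.log_softmax(dim=1)
    return -(soft_targets * log_prob).sum(dim=1).mean()


# Minimal usage example with random embeddings standing in for encoder outputs.
if __name__ == "__main__":
    q = torch.randn(8, 128)
    k = torch.randn(8, 128)
    print(soft_contrastive_loss(q, k))
```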
Related papers
- Active Learning Principles for In-Context Learning with Large Language Models [65.09970281795769]
This paper investigates how Active Learning algorithms can serve as effective demonstration selection methods for in-context learning.
We show that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.
arXiv Detail & Related papers (2023-05-23T17:16:04Z)
- Compositional Exemplars for In-context Learning [21.961094715261133]
Large pretrained language models (LMs) have shown impressive In-Context Learning (ICL) ability.
We propose CEIL (Compositional Exemplars for In-context Learning) to model the interaction between the given input and in-context examples.
We validate CEIL on 12 classification and generation datasets from 7 distinct NLP tasks, including sentiment analysis, paraphrase detection, natural language inference, commonsense reasoning, open-domain question answering, code generation, and semantic parsing.
arXiv Detail & Related papers (2023-02-11T14:02:08Z)
- Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering [15.3566963926257]
This paper advocates a new principle for in-context learning (ICL): self-adaptive in-context learning.
The self-adaption mechanism is introduced to help each sample find an in-context example permutation that can derive the correct prediction.
Our self-adaptive ICL method achieves a 40% relative improvement over the common practice setting.
arXiv Detail & Related papers (2022-12-20T15:55:21Z)
- A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
arXiv Detail & Related papers (2022-02-08T19:18:49Z)
- Contrastive Learning with Adversarial Examples [79.39156814887133]
Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations.
This paper introduces a new family of adversarial examples for contrastive learning and uses these examples to define a new adversarial training algorithm for SSL, denoted as CLAE.
arXiv Detail & Related papers (2020-10-22T20:45:10Z)
- Unsupervised Feature Learning by Cross-Level Instance-Group Discrimination [68.83098015578874]
We integrate between-instance similarity into contrastive learning, not directly by instance grouping, but by cross-level discrimination.
CLD effectively brings unsupervised learning closer to natural data and real-world applications.
CLD sets a new state of the art on self-supervision, semi-supervision, and transfer learning benchmarks, and beats MoCo v2 and SimCLR in every reported comparison.
arXiv Detail & Related papers (2020-08-09T21:13:13Z)
- Boosting Few-Shot Learning With Adaptive Margin Loss [109.03665126222619]
This paper proposes an adaptive margin principle to improve the generalization ability of metric-based meta-learning approaches for few-shot learning problems.
Extensive experiments demonstrate that the proposed method can boost the performance of current metric-based meta-learning approaches.
arXiv Detail & Related papers (2020-05-28T07:58:41Z)
- DiVA: Diverse Visual Feature Aggregation for Deep Metric Learning [83.48587570246231]
Visual Similarity plays an important role in many computer vision applications.
Deep metric learning (DML) is a powerful framework for learning such similarities.
We propose and study multiple complementary learning tasks, targeting conceptually different data relationships.
We learn a single model to aggregate their training signals, resulting in strong generalization and state-of-the-art performance.
arXiv Detail & Related papers (2020-04-28T12:26:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.