K-Shot Contrastive Learning of Visual Features with Multiple Instance Augmentations
- URL: http://arxiv.org/abs/2007.13310v2
- Date: Mon, 1 Feb 2021 08:00:58 GMT
- Title: K-Shot Contrastive Learning of Visual Features with Multiple Instance Augmentations
- Authors: Haohang Xu, Hongkai Xiong, Guo-Jun Qi
- Abstract summary: $K$-Shot Contrastive Learning is proposed to investigate sample variations within individual instances.
It combines the advantages of inter-instance discrimination, learning discriminative features that distinguish between different instances, with intra-instance variations captured by multiple augmentations.
Experiment results demonstrate that the proposed $K$-shot contrastive learning achieves superior performance to state-of-the-art unsupervised methods.
- Score: 67.46036826589467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose the $K$-Shot Contrastive Learning (KSCL) of visual
features by applying multiple augmentations to investigate the sample
variations within individual instances. It aims to combine the advantages of
inter-instance discrimination by learning discriminative features to
distinguish between different instances, as well as intra-instance variations
by matching queries against the variants of augmented samples over instances.
Particularly, for each instance, it constructs an instance subspace to model
the configuration of how the significant factors of variations in $K$-shot
augmentations can be combined to form the variants of augmentations. Given a
query, the most relevant variant of instances is then retrieved by projecting
the query onto their subspaces to predict the positive instance class. This
generalizes the existing contrastive learning that can be viewed as a special
one-shot case. An eigenvalue decomposition is performed to configure instance
subspaces, and the embedding network can be trained end-to-end through the
differentiable subspace configuration. Experimental results demonstrate that the
proposed $K$-shot contrastive learning achieves superior performance to
state-of-the-art unsupervised methods.
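The abstract describes a concrete mechanism: the $K$ augmented embeddings of each instance are decomposed (e.g., via SVD/eigendecomposition) into an orthonormal basis spanning an instance subspace, and a query scores each candidate instance by the norm of its projection onto that subspace, which then drives a contrastive (softmax-style) objective over instances. The sketch below is a minimal illustration of that projection-based scoring, not the authors' released implementation; the helper names (`subspace_bases`, `kscl_logits`), the chosen subspace rank, and the temperature are illustrative assumptions.

```python
# Minimal sketch of the K-shot subspace scoring described in the abstract.
# Assumptions (not from the paper's code): helper names, rank, temperature.
import torch
import torch.nn.functional as F

def subspace_bases(keys: torch.Tensor, rank: int) -> torch.Tensor:
    """keys: (N, K, D) L2-normalized embeddings of K augmentations per instance.
    Returns (N, D, rank) orthonormal bases spanning each instance subspace."""
    # SVD of each K x D matrix of augmented views; the leading right singular
    # vectors span the instance subspace.  torch.linalg.svd is differentiable,
    # so the encoder can be trained end-to-end through this step.
    _, _, Vh = torch.linalg.svd(keys, full_matrices=False)  # Vh: (N, K, D)
    return Vh[:, :rank, :].transpose(1, 2)                  # (N, D, rank)

def kscl_logits(query: torch.Tensor, bases: torch.Tensor,
                temperature: float = 0.1) -> torch.Tensor:
    """query: (B, D) L2-normalized queries; bases: (N, D, rank).
    Returns (B, N) logits: squared norm of each query's projection onto each
    instance subspace, scaled by a temperature."""
    coeff = torch.einsum('bd,ndr->bnr', query, bases)  # projection coefficients
    return (coeff ** 2).sum(dim=-1) / temperature      # (B, N)

# Toy usage: 8 queries, 256 candidate instances, K = 4 augmentations, D = 128.
B, N, K, D, rank = 8, 256, 4, 128, 3
query = F.normalize(torch.randn(B, D), dim=-1)
keys = F.normalize(torch.randn(N, K, D), dim=-1)
logits = kscl_logits(query, subspace_bases(keys, rank))
loss = F.cross_entropy(logits, torch.arange(B))  # positive of query i is instance i
```

With $K = 1$ a single normalized view spans a one-dimensional subspace and the projection norm reduces to the (absolute) cosine similarity with that view, which is one way to read the abstract's claim that ordinary contrastive learning is the special one-shot case.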
Related papers
- CLIP Adaptation by Intra-modal Overlap Reduction [1.2277343096128712]
We analyse the intra-modal overlap in image space in terms of embedding representation.
We train a lightweight adapter on a generic set of samples from the Google Open Images dataset.
arXiv Detail & Related papers (2024-09-17T16:40:58Z) - Contributing Dimension Structure of Deep Feature for Coreset Selection [26.759457501199822]
Coreset selection seeks to choose a subset of crucial training samples for efficient learning.
Sample selection hinges on two main aspects: how well a sample's representation enhances performance and the role of sample diversity in averting overfitting.
Existing methods typically measure both the representation and diversity of data based on similarity metrics.
arXiv Detail & Related papers (2024-01-29T14:47:26Z) - An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z) - Explicitly Modeling the Discriminability for Instance-Aware Visual
Object Tracking [13.311777431243296]
We propose a novel Instance-Aware Tracker (IAT) to excavate the discriminability of feature representations.
We implement two variants of the proposed IAT, including a video-level one and an object-level one.
Both versions achieve leading results against state-of-the-art methods while running at 30 FPS.
arXiv Detail & Related papers (2021-10-28T11:24:01Z) - Episode Adaptive Embedding Networks for Few-shot Learning [4.328767711595871]
We propose the Episode Adaptive Embedding Network (EAEN) to learn episode-specific embeddings of instances.
EAEN significantly improves classification accuracy, by about 10% to 20% across different settings, over state-of-the-art methods.
arXiv Detail & Related papers (2021-06-17T11:29:33Z) - Toward Scalable and Unified Example-based Explanation and Outlier
Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks beyond similarity kernels deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z) - Contrastive Learning with Adversarial Examples [79.39156814887133]
Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations.
This paper introduces a new family of adversarial examples for contrastive learning and uses these examples to define a new adversarial training algorithm for SSL, denoted CLAE.
arXiv Detail & Related papers (2020-10-22T20:45:10Z) - Joint Contrastive Learning with Infinite Possibilities [114.45811348666898]
This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling.
We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL).
arXiv Detail & Related papers (2020-09-30T16:24:21Z) - Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.