Exemplar-Based Contrastive Self-Supervised Learning with Few-Shot Class
Incremental Learning
- URL: http://arxiv.org/abs/2202.02601v1
- Date: Sat, 5 Feb 2022 17:14:07 GMT
- Title: Exemplar-Based Contrastive Self-Supervised Learning with Few-Shot Class
Incremental Learning
- Authors: Daniel T. Chang
- Abstract summary: In human learning, supervised learning of concepts based on exemplars takes place within the larger context of contrastive self-supervised learning (CSSL) based on unlabeled and labeled data.
A major benefit of the extensions is that exemplar-based CSSL, with supervised finetuning, supports few-shot class incremental learning (CIL).
- Score: 0.8722210937404288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans are capable of learning new concepts from only a few (labeled)
exemplars, incrementally and continually. This happens within the context that
we can differentiate among the exemplars, and between the exemplars and large
amounts of other data (unlabeled and labeled). This suggests, in human
learning, supervised learning of concepts based on exemplars takes place within
the larger context of contrastive self-supervised learning (CSSL) based on
unlabeled and labeled data. We discuss extending CSSL (1) to be based mainly on
exemplars and only secondarily on data augmentation, and (2) to apply to both
unlabeled data (a large amount is available in general) and labeled data (a few
exemplars can be obtained with valuable supervised knowledge). A major benefit
of the extensions is that exemplar-based CSSL, with supervised finetuning,
supports few-shot class incremental learning (CIL). Specifically, we discuss
exemplar-based CSSL including: nearest-neighbor CSSL, neighborhood CSSL with
supervised pretraining, and exemplar CSSL with supervised finetuning. We
further discuss using exemplar-based CSSL to facilitate few-shot learning and,
in particular, few-shot CIL.
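The abstract does not give an implementation, but the nearest-neighbor CSSL idea it describes can be sketched with a minimal NumPy example, in the spirit of NNCLR-style methods: each sample's positive is swapped for its nearest neighbor in an exemplar bank before computing an InfoNCE loss. The function name `nn_contrastive_loss`, the exemplar bank, and all shapes here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def _logsumexp(x, axis=1):
    # Numerically stable log-sum-exp along the given axis.
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def nn_contrastive_loss(queries, keys, exemplars, temperature=0.1):
    """InfoNCE-style loss where each query's positive is the nearest
    exemplar to its paired key, rather than the key itself.

    queries, keys: (N, D) L2-normalized embeddings of two augmented views.
    exemplars:     (M, D) L2-normalized exemplar embeddings (support set).
    """
    # Swap each key for its nearest neighbor in the exemplar bank.
    nn_idx = (keys @ exemplars.T).argmax(axis=1)      # (N,)
    positives = exemplars[nn_idx]                     # (N, D)

    # Contrast every query against all positives in the batch;
    # the matching (diagonal) entry is the correct class.
    logits = queries @ positives.T / temperature      # (N, N)
    log_probs = logits - _logsumexp(logits, axis=1)
    n = len(queries)
    return float(-log_probs[np.arange(n), np.arange(n)].mean())

# Toy usage with random normalized embeddings.
rng = np.random.default_rng(0)
def _norm(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

q = _norm(rng.normal(size=(8, 16)))       # view 1 of a batch of 8
k = _norm(rng.normal(size=(8, 16)))       # view 2 of the same batch
bank = _norm(rng.normal(size=(32, 16)))   # hypothetical exemplar bank
loss = nn_contrastive_loss(q, k, bank)
```

Using exemplars (rather than only augmentations) as positives is what lets the same loss consume a few labeled exemplars alongside large amounts of unlabeled data, which is the abstract's motivation for few-shot CIL.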
Related papers
- Learning at a Glance: Towards Interpretable Data-limited Continual Semantic Segmentation via Semantic-Invariance Modelling [21.114359437665364]
Continual semantic segmentation (CSS) based on incremental learning (IL) is a great endeavour in developing human-like segmentation models.
Current CSS approaches encounter challenges in the trade-off between preserving old knowledge and acquiring new knowledge.
We present Learning at a Glance (LAG), an efficient, robust, human-like and interpretable approach for CSS.
arXiv Detail & Related papers (2024-07-22T07:17:52Z)
- Multi-Label Knowledge Distillation [86.03990467785312]
We propose a novel multi-label knowledge distillation method.
On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems.
On the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings.
arXiv Detail & Related papers (2023-08-12T03:19:08Z)
- Active Self-Supervised Learning: A Few Low-Cost Relationships Are All You Need [34.013568381942775]
Self-Supervised Learning (SSL) has emerged as the solution of choice to learn transferable representations from unlabeled data.
In this work, we formalize and generalize this principle through Positive Active Learning (PAL) where an oracle queries semantic relationships between samples.
First, it unveils a theoretically grounded learning framework beyond SSL, based on similarity graphs, that can be extended to tackle supervised and semi-supervised learning depending on the employed oracle.
Second, it provides a consistent algorithm to embed a priori knowledge, e.g. some observed labels, into any SSL losses without any change in the training pipeline.
arXiv Detail & Related papers (2023-03-27T14:44:39Z)
- Larger language models do in-context learning differently [93.90674531127559]
In-context learning (ICL) in language models is affected by semantic priors versus input-label mappings.
We investigate two setups: ICL with flipped labels and ICL with semantically-unrelated labels.
arXiv Detail & Related papers (2023-03-07T12:24:17Z)
- Learning with Partial Labels from Semi-supervised Perspective [28.735185883881172]
Partial Label (PL) learning refers to the task of learning from partially labeled data.
We propose a novel PL learning method, namely Partial Label learning with Semi-Supervised Perspective (PLSP).
PLSP significantly outperforms the existing PL baseline methods, especially on high ambiguity levels.
arXiv Detail & Related papers (2022-11-24T15:12:16Z)
- Multiple Instance Learning via Iterative Self-Paced Supervised Contrastive Learning [22.07044031105496]
Learning representations for individual instances when only bag-level labels are available is a challenge in multiple instance learning (MIL).
We propose a novel framework, Iterative Self-paced Supervised Contrastive Learning for MIL Representations (ItS2CLR).
It improves the learned representation by exploiting instance-level pseudo labels derived from the bag-level labels.
arXiv Detail & Related papers (2022-10-17T21:43:32Z)
- Concept Representation Learning with Contrastive Self-Supervised Learning [0.6091702876917281]
Concept-oriented deep learning (CODL) is a general approach to meet the future challenges for deep learning.
We discuss major aspects of concept representation learning using Contrastive Self-supervised Learning (CSSL).
arXiv Detail & Related papers (2021-12-10T17:16:23Z)
- CoDiM: Learning with Noisy Labels via Contrastive Semi-Supervised Learning [58.107679606345165]
Noisy label learning, semi-supervised learning, and contrastive learning are three different strategies for designing learning processes requiring less annotation cost.
We propose CSSL, a unified Contrastive Semi-Supervised Learning algorithm, and CoDiM, a novel algorithm for learning with noisy labels.
arXiv Detail & Related papers (2021-11-23T04:56:40Z)
- Rich Semantics Improve Few-shot Learning [49.11659525563236]
We show that by using 'class-level' language descriptions, which can be acquired at minimal annotation cost, we can improve few-shot learning performance.
We develop a Transformer based forward and backward encoding mechanism to relate visual and semantic tokens.
arXiv Detail & Related papers (2021-04-26T16:48:27Z)
- Graph-based Semi-supervised Learning: A Comprehensive Review [51.26862262550445]
Semi-supervised learning (SSL) has tremendous value in practice due to its ability to utilize both labeled and unlabeled data.
An important class of SSL methods naturally represents data as graphs, corresponding to graph-based semi-supervised learning (GSSL) methods.
GSSL methods have demonstrated their advantages in various domains due to their uniqueness of structure, the universality of their applications, and their scalability to large-scale data.
arXiv Detail & Related papers (2021-02-26T05:11:09Z)
- Isometric Propagation Network for Generalized Zero-shot Learning [72.02404519815663]
A popular strategy is to learn a mapping between the semantic space of class attributes and the visual space of images based on the seen classes and their data.
We propose Isometric Propagation Network (IPN), which learns to strengthen the relation between classes within each space and align the class dependency across the two spaces.
IPN achieves state-of-the-art performance on three popular zero-shot learning benchmarks.
arXiv Detail & Related papers (2021-02-03T12:45:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.