Isolation and Impartial Aggregation: A Paradigm of Incremental Learning
without Interference
- URL: http://arxiv.org/abs/2211.15969v1
- Date: Tue, 29 Nov 2022 06:57:48 GMT
- Title: Isolation and Impartial Aggregation: A Paradigm of Incremental Learning
without Interference
- Authors: Yabin Wang and Zhiheng Ma and Zhiwu Huang and Yaowei Wang and Zhou Su
and Xiaopeng Hong
- Abstract summary: This paper focuses on the prevalent performance imbalance in the stages of incremental learning.
We propose a stage-isolation based incremental learning framework.
We evaluate the proposed method on four large benchmarks.
- Score: 61.11137714507445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on the prevalent performance imbalance in the stages of
incremental learning. To avoid obvious stage learning bottlenecks, we propose a
brand-new stage-isolation based incremental learning framework, which leverages
a series of stage-isolated classifiers to perform the learning task of each
stage without interference from the others. Concretely, to impartially
aggregate the stage classifiers into a single unified classifier, we first introduce a
temperature-controlled energy metric for indicating the confidence score levels
of the stage classifiers. We then propose an anchor-based energy
self-normalization strategy to ensure the stage classifiers work at the same
energy level. Finally, we design a voting-based inference augmentation strategy
for robust inference. The proposed method is rehearsal-free and works in
almost all continual learning scenarios. We evaluate the proposed method on
four large benchmarks, and extensive results demonstrate its superiority,
setting new state-of-the-art overall performance.
Code is available at https://github.com/iamwangyabin/ESN.
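As a rough illustration of the energy-based aggregation described above, the sketch below assumes the standard free-energy form E(x) = -T * logsumexp(f(x)/T) for the temperature-controlled metric. The anchor-based bias and the lowest-energy stage selection are simplified stand-ins for the paper's self-normalization and voting strategies (see the linked repository for the authors' implementation), and the names energy_score, anchor_bias, and aggregate_stages are illustrative, not taken from that code.

import torch


def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Temperature-controlled energy of a batch of logits; lower energy is
    # read as higher confidence for the stage classifier that produced them.
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)


def anchor_bias(anchor_logits: torch.Tensor, target_energy: float = 0.0,
                temperature: float = 1.0) -> torch.Tensor:
    # Hypothetical anchor-based normalization: choose a per-stage additive
    # bias so that the stage's mean energy on a small anchor batch matches a
    # shared target level (adding b to every logit shifts the energy by -b).
    return energy_score(anchor_logits, temperature).mean() - target_energy


def aggregate_stages(stage_logits, stage_offsets, stage_biases,
                     temperature: float = 1.0) -> torch.Tensor:
    # For each sample, trust the stage classifier with the lowest normalized
    # energy and map its local prediction into the global label space.
    energies = torch.stack(
        [energy_score(logits + bias, temperature)
         for logits, bias in zip(stage_logits, stage_biases)], dim=0)  # [stages, batch]
    best_stage = energies.argmin(dim=0)                                # [batch]
    preds = []
    for i, s in enumerate(best_stage.tolist()):
        preds.append(stage_logits[s][i].argmax() + stage_offsets[s])
    return torch.stack(preds)

Because adding a constant to every logit of a stage shifts that stage's energy without changing its local argmax, a bias of this kind can align the confidence levels of stage-isolated classifiers while leaving each stage's own decision intact.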
Related papers
- Maximally Separated Active Learning [32.98415531556376]
We propose an active learning method that utilizes fixed equiangular hyperspherical points as class prototypes.
We demonstrate strong performance over existing active learning techniques across five benchmark datasets.
arXiv Detail & Related papers (2024-11-26T14:02:43Z)
- BECLR: Batch Enhanced Contrastive Few-Shot Learning [1.450405446885067]
Unsupervised few-shot learning aspires to bridge this gap by discarding the reliance on annotations at training time.
We propose a novel Dynamic Clustered mEmory (DyCE) module to promote a highly separable latent representation space.
We then tackle the somewhat overlooked yet critical issue of sample bias at the few-shot inference stage.
arXiv Detail & Related papers (2024-02-04T10:52:43Z)
- Class-Incremental Mixture of Gaussians for Deep Continual Learning [15.49323098362628]
We propose end-to-end incorporation of the mixture of Gaussians model into the continual learning framework.
We show that our model can effectively learn in memory-free scenarios with fixed extractors.
arXiv Detail & Related papers (2023-07-09T04:33:19Z)
- Iterative Forward Tuning Boosts In-Context Learning in Language Models [88.25013390669845]
In this study, we introduce a novel two-stage framework to boost in-context learning in large language models (LLMs).
Specifically, our framework delineates the ICL process into two distinct stages: a Deep-Thinking stage and a test stage.
The Deep-Thinking stage incorporates a unique attention mechanism, i.e., iterative enhanced attention, which enables multiple rounds of information accumulation.
arXiv Detail & Related papers (2023-05-22T13:18:17Z)
- Tackling Online One-Class Incremental Learning by Removing Negative Contrasts [12.048166025000976]
Distinct from other continual learning settings, the learner is presented with new samples only once.
ER-AML achieved strong performance in this setting by applying an asymmetric loss based on contrastive learning to the incoming data and replayed data.
We adapt a recently proposed approach from self-supervised learning to the supervised learning setting, lifting the constraint on contrasts.
arXiv Detail & Related papers (2022-03-24T19:17:29Z)
- Hybrid Dynamic Contrast and Probability Distillation for Unsupervised Person Re-Id [109.1730454118532]
Unsupervised person re-identification (Re-Id) has attracted increasing attention due to its practical applications in real-world video surveillance systems.
We present the hybrid dynamic cluster contrast and probability distillation algorithm.
It formulates the unsupervised Re-Id problem as a unified local-to-global dynamic contrastive learning and self-supervised probability distillation framework.
arXiv Detail & Related papers (2021-09-29T02:56:45Z)
- A Framework using Contrastive Learning for Classification with Noisy Labels [1.2891210250935146]
We propose a framework using contrastive learning as a pre-training task to perform image classification in the presence of noisy labels.
Recent strategies such as pseudo-labeling, sample selection with Gaussian Mixture models, and weighted supervised contrastive learning have been combined into a fine-tuning phase following the pre-training.
arXiv Detail & Related papers (2021-04-19T18:51:22Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
The Prototype-centered Attentive Learning (PAL) model is composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- Self-supervised Text-independent Speaker Verification using Prototypical Momentum Contrastive Learning [58.14807331265752]
We show that better speaker embeddings can be learned by momentum contrastive learning.
We generalize the self-supervised framework to a semi-supervised scenario where only a small portion of the data is labeled.
arXiv Detail & Related papers (2020-12-13T23:23:39Z)
- Robust Imitation Learning from Noisy Demonstrations [81.67837507534001]
We show that robust imitation learning can be achieved by optimizing a classification risk with a symmetric loss.
We propose a new imitation learning method that effectively combines pseudo-labeling with co-training.
Experimental results on continuous-control benchmarks show that our method is more robust compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-20T10:41:37Z)