Tackling Online One-Class Incremental Learning by Removing Negative
Contrasts
- URL: http://arxiv.org/abs/2203.13307v1
- Date: Thu, 24 Mar 2022 19:17:29 GMT
- Title: Tackling Online One-Class Incremental Learning by Removing Negative
Contrasts
- Authors: Nader Asadi, Sudhir Mudur, Eugene Belilovsky
- Abstract summary: Distinct from other continual learning settings, the learner is presented new samples only once.
ER-AML achieved strong performance in this setting by applying an asymmetric loss based on contrastive learning to the incoming data and replayed data.
We adapt a recently proposed approach from self-supervised learning (BYOL) to the supervised setting, removing the constraint on negative contrasts.
- Score: 12.048166025000976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work studies the supervised online continual learning setting where a
learner receives a stream of data whose class distribution changes over time.
Distinct from other continual learning settings, the learner is presented new
samples only once and must distinguish between all seen classes. A number of
successful methods in this setting focus on storing and replaying a subset of
samples alongside incoming data in a computationally efficient manner. One
recent proposal ER-AML achieved strong performance in this setting by applying
an asymmetric loss based on contrastive learning to the incoming data and
replayed data. However, a key ingredient of the proposed method is avoiding
contrasts between incoming data and stored data, which makes it impractical for
the setting where only one new class is introduced in each phase of the stream.
In this work we adapt a recently proposed approach (BYOL) from
self-supervised learning to the supervised learning setting, unlocking the
constraint on contrasts. We then show that supplementing this with additional
regularization on class prototypes yields a new method that achieves strong
performance in the one-class incremental learning setting and is competitive
with the top performing methods in the multi-class incremental setting.
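The abstract describes two ingredients: a BYOL-style loss that aligns an online prediction with a target projection without any negative pairs, and an additional regularizer on class prototypes. A minimal numpy sketch of how such losses could be shaped is below; the function names, the cosine-based form of the prototype regularizer, and the combination weight are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def byol_style_loss(online_pred, target_proj):
    """BYOL-style alignment loss: negative cosine similarity between the
    online network's prediction and the (stop-gradient) target projection.
    No negative pairs are contrasted, which is the point of the approach."""
    p = l2_normalize(online_pred)
    z = l2_normalize(target_proj)
    return float(np.mean(2.0 - 2.0 * np.sum(p * z, axis=-1)))

def prototype_regularizer(embeddings, labels, prototypes):
    """Hypothetical prototype regularizer: pull each embedding toward its
    class prototype via cosine distance."""
    z = l2_normalize(embeddings)
    c = l2_normalize(prototypes[labels])
    return float(np.mean(1.0 - np.sum(z * c, axis=-1)))

# A training step would then combine the two terms, e.g.
# total = byol_style_loss(pred, target) + lam * prototype_regularizer(z, y, protos)
# where lam is a hyperparameter (assumed here, not taken from the paper).
```

Because the alignment term has no negatives, it remains well defined even when a stream phase contains a single new class, which is the failure mode of contrastive losses noted in the abstract.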
Related papers
- Covariance-based Space Regularization for Few-shot Class Incremental Learning [25.435192867105552]
Few-shot Class Incremental Learning (FSCIL) requires the model to continually learn new classes with limited labeled data.
Due to the limited data in incremental sessions, models are prone to overfitting new classes and suffering catastrophic forgetting of base classes.
Recent advancements resort to prototype-based approaches to constrain the base class distribution and learn discriminative representations of new classes.
arXiv Detail & Related papers (2024-11-02T08:03:04Z) - Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets.
arXiv Detail & Related papers (2024-05-23T08:43:09Z) - Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) without the requirement of prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z) - Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods propose to replay the data of experienced tasks when learning new tasks.
However, storing raw data is often impractical due to memory constraints or data-privacy concerns.
As a replacement, data-free data replay methods are proposed by inverting samples from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z) - Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z) - Prototype-Sample Relation Distillation: Towards Replay-Free Continual
Learning [14.462797749666992]
We propose a holistic approach to jointly learn the representation and class prototypes.
We propose a novel distillation loss that constrains class prototypes to maintain relative similarities as compared to new task data.
This method yields state-of-the-art performance in the task-incremental setting.
arXiv Detail & Related papers (2023-03-26T16:35:45Z) - Continual Learning with Optimal Transport based Mixture Model [17.398605698033656]
We propose an online mixture model learning approach based on nice properties of the mature optimal transport theory (OT-MM).
Our proposed method can significantly outperform the current state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-30T06:40:29Z) - Lightweight Conditional Model Extrapolation for Streaming Data under
Class-Prior Shift [27.806085423595334]
We introduce LIMES, a new method for learning with non-stationary streaming data.
We learn a single set of model parameters from which a specific classifier for any specific data distribution is derived.
Experiments on a set of exemplary tasks using Twitter data show that LIMES achieves higher accuracy than alternative approaches.
arXiv Detail & Related papers (2022-06-10T15:19:52Z) - Class Incremental Online Streaming Learning [40.97848249237289]
We propose a novel approach for class-incremental learning in an online streaming setting to address these challenges.
The proposed approach leverages implicit and explicit dual weight regularization and experience replay.
Also, we propose an efficient online memory replay and replacement buffer strategy that significantly boosts the model's performance.
arXiv Detail & Related papers (2021-10-20T19:24:31Z) - Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z) - Contrastive Prototype Learning with Augmented Embeddings for Few-Shot
Learning [58.2091760793799]
We propose a novel contrastive prototype learning with augmented embeddings (CPLAE) model.
With a class prototype as an anchor, CPL aims to pull the query samples of the same class closer and those of different classes further away.
Extensive experiments on several benchmarks demonstrate that our proposed CPLAE achieves new state-of-the-art.
arXiv Detail & Related papers (2021-01-23T13:22:44Z)
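The prototype-as-anchor idea in the CPLAE summary above (pull same-class queries toward a class prototype, push other classes away) can be sketched generically as a softmax cross-entropy over query-prototype similarities. This is a standard prototype-contrast form used for illustration, not the exact CPLAE loss; the temperature value is an assumption.

```python
import numpy as np

def prototype_contrastive_loss(queries, labels, prototypes, temperature=0.1):
    """Score each query against every class prototype by cosine similarity,
    then apply softmax cross-entropy so queries are pulled toward their own
    class prototype and pushed away from the others."""
    q = queries / (np.linalg.norm(queries, axis=1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    logits = q @ p.T / temperature               # (num_queries, num_classes)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(log_probs[np.arange(len(labels)), labels]))
```

Queries that already sit near their class prototype incur a low loss, while queries closer to a different prototype are penalized, which realizes the pull-closer / push-away behavior described in the summary.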
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.