Class Incremental Online Streaming Learning
- URL: http://arxiv.org/abs/2110.10741v1
- Date: Wed, 20 Oct 2021 19:24:31 GMT
- Title: Class Incremental Online Streaming Learning
- Authors: Soumya Banerjee, Vinay Kumar Verma, Toufiq Parag, Maneesh Singh, Vinay
P. Namboodiri
- Abstract summary: We propose a novel approach for class-incremental learning in an online streaming setting to address these challenges.
The proposed approach leverages implicit and explicit dual weight regularization and experience replay.
Also, we propose an efficient online memory replay and replacement buffer strategy that significantly boosts the model's performance.
- Score: 40.97848249237289
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A wide variety of methods have been developed to enable lifelong learning in
conventional deep neural networks. However, to succeed, these methods require a
`batch' of samples to be available and visited multiple times during training.
While this works well in a static setting, these methods continue to suffer in
a more realistic situation where data arrives in \emph{online streaming
manner}. We empirically demonstrate that the performance of current approaches
degrades if the input is obtained as a stream of data with the following
restrictions: $(i)$ each instance comes one at a time and can be seen only
once, and $(ii)$ the input data violates the i.i.d. assumption, i.e., there can
be a class-based correlation. We propose a novel approach (CIOSL) for
class-incremental learning in an \emph{online streaming setting} to address
these challenges. The proposed approach leverages implicit and explicit dual
weight regularization and experience replay. The implicit regularization is
leveraged via knowledge distillation, while the explicit regularization
incorporates a novel approach for parameter regularization by learning the
joint distribution of the buffer replay and the current sample. Also, we
propose an efficient online memory replay and replacement buffer strategy that
significantly boosts the model's performance. Extensive experiments and
ablation on challenging datasets show the efficacy of the proposed method.
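To make the pieces described in the abstract concrete, below is a minimal sketch of a single-pass streaming update that mixes the incoming sample with samples drawn from a small replay buffer and adds a knowledge-distillation term against a snapshot of the model. The reservoir-style replacement policy, replay batch size, and loss weights are illustrative assumptions, and the explicit parameter regularizer over the joint distribution of buffer and current samples is omitted; this is a sketch of the general setup, not the exact CIOSL procedure.

```python
# Minimal sketch (PyTorch) of a single-pass streaming update with experience
# replay and knowledge distillation. Buffer policy and hyperparameters are
# assumptions for illustration, not the paper's exact CIOSL algorithm.
import random
import torch
import torch.nn.functional as F


class ReplayBuffer:
    """Fixed-capacity buffer with reservoir-style replacement (assumed policy)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []          # list of (x, y) pairs
        self.seen = 0           # number of stream samples observed so far

    def maybe_add(self, x: torch.Tensor, y: int) -> None:
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)     # reservoir sampling
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k: int):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.tensor(ys)


def streaming_step(model, prev_model, optimizer, buffer, x, y,
                   replay_size=16, distill_weight=1.0, temperature=2.0):
    """One update on a single incoming sample (x, y): single pass, no revisits."""
    model.train()
    xs, ys = x.unsqueeze(0), torch.tensor([y])
    if buffer.data:                              # mix in replayed samples
        rx, ry = buffer.sample(replay_size)
        xs, ys = torch.cat([xs, rx]), torch.cat([ys, ry])

    logits = model(xs)
    loss = F.cross_entropy(logits, ys)           # fit current + replayed data

    if prev_model is not None:                   # implicit regularization via
        with torch.no_grad():                    # distillation from a snapshot
            old_logits = prev_model(xs)
        kd = F.kl_div(F.log_softmax(logits / temperature, dim=1),
                      F.softmax(old_logits / temperature, dim=1),
                      reduction="batchmean") * temperature ** 2
        loss = loss + distill_weight * kd

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    buffer.maybe_add(x.detach(), y)              # buffer replacement after use
    return loss.item()
```

In the protocol the abstract describes, `streaming_step` would be called exactly once per incoming sample, with `prev_model` periodically refreshed to a frozen copy of `model` so the distillation target tracks recently acquired knowledge.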
Related papers
- Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning [13.836798036474143]
Key challenge in Federated Class Continual Learning is catastrophic forgetting.
We propose a novel method of data replay based on diffusion models.
Our method significantly outperforms existing baselines.
arXiv Detail & Related papers (2024-09-02T10:07:24Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple Logits Retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, one line of methods replays data from previously experienced tasks when learning new tasks.
However, storing and replaying such data is often impractical due to memory constraints or data privacy issues.
As a replacement, data-free replay methods have been proposed that invert (synthesize) samples from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
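A minimal illustration of the data-free replay idea from the entry above: pseudo-samples for previously seen classes are synthesized by optimizing random inputs against a frozen classifier. The loss terms, step count, and smoothness prior below are assumptions for illustration, not that paper's exact procedure.

```python
# Illustrative sketch of data-free replay via model inversion: synthesize
# pseudo-samples for previously seen classes from a frozen classifier.
import torch
import torch.nn.functional as F


def invert_samples(classifier, target_classes, image_shape=(3, 32, 32),
                   steps=200, lr=0.1, tv_weight=1e-3):
    """Return one synthetic input per target class, plus the class labels."""
    classifier.eval()
    targets = torch.tensor(target_classes)
    x = torch.randn(len(target_classes), *image_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = classifier(x)
        ce = F.cross_entropy(logits, targets)        # push x toward its class
        tv = ((x[..., 1:, :] - x[..., :-1, :]).abs().mean()
              + (x[..., :, 1:] - x[..., :, :-1]).abs().mean())  # smoothness prior
        (ce + tv_weight * tv).backward()
        opt.step()
    return x.detach(), targets
```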
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to construct in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Offline RL with No OOD Actions: In-Sample Learning via Implicit Value Regularization [90.9780151608281]
In-sample learning (e.g., IQL) improves the policy via expectile regression using only data samples.
We make a key finding that the in-sample learning paradigm arises under the Implicit Value Regularization (IVR) framework.
We propose two practical algorithms, Sparse $Q$-learning (SQL) and Exponential $Q$-learning (EQL), which adopt the same value regularization used in existing works.
arXiv Detail & Related papers (2023-03-28T08:30:01Z)
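For context on the entry above, the expectile-regression value objective that in-sample methods such as IQL build on can be written in a few lines. This is only the generic in-sample value loss, not the SQL/EQL regularizers derived in that paper.

```python
# Generic in-sample (expectile) value regression used by IQL-style methods.
import torch


def expectile_loss(value_pred: torch.Tensor, q_target: torch.Tensor,
                   tau: float = 0.7) -> torch.Tensor:
    """Asymmetric L2 loss: over-estimates are weighted by (1 - tau) and
    under-estimates by tau, so V(s) tracks an upper expectile of in-sample
    Q-values without querying out-of-distribution actions."""
    diff = q_target - value_pred
    weight = torch.where(diff > 0, tau, 1.0 - tau)
    return (weight * diff.pow(2)).mean()
```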
- Real-Time Evaluation in Online Continual Learning: A New Hope [104.53052316526546]
We evaluate current Continual Learning (CL) methods with respect to their computational costs.
A simple baseline outperforms state-of-the-art CL methods under this evaluation.
This surprisingly suggests that the majority of existing CL literature is tailored to a specific class of streams that is not practical.
arXiv Detail & Related papers (2023-02-02T12:21:10Z)
- Streaming LifeLong Learning With Any-Time Inference [36.3326483579511]
We propose a novel lifelong learning approach that is streaming (a single input sample arrives at each time step), single-pass, class-incremental, and can be evaluated at any moment.
We additionally propose an implicit regularizer in the form of snapshot self-distillation, which further reduces forgetting.
Our empirical evaluations and ablations demonstrate that the proposed method outperforms the prior works by large margins.
arXiv Detail & Related papers (2023-01-27T18:09:19Z)
- Tackling Online One-Class Incremental Learning by Removing Negative Contrasts [12.048166025000976]
Distinct from other continual learning settings, the learner is presented new samples only once.
ER-AML achieved strong performance in this setting by applying an asymmetric loss based on contrastive learning to the incoming data and replayed data.
We adapt a recently proposed approach from self-supervised learning to the supervised learning setting, unlocking the constraint on contrasts.
arXiv Detail & Related papers (2022-03-24T19:17:29Z)
- The Challenges of Continuous Self-Supervised Learning [40.941767578622745]
Self-supervised learning (SSL) aims to eliminate one of the major bottlenecks in representation learning - the need for human annotations.
We show that a direct application of current methods to such a continuous setup is inefficient both computationally and in the amount of data required.
We propose the use of replay buffers as an approach to alleviate the issues of inefficiency and temporal correlations.
arXiv Detail & Related papers (2022-03-23T20:05:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.