Online Continual Learning via Multiple Deep Metric Learning and
Uncertainty-guided Episodic Memory Replay -- 3rd Place Solution for ICCV 2021
Workshop SSLAD Track 3A Continual Object Classification
- URL: http://arxiv.org/abs/2111.02757v1
- Date: Thu, 4 Nov 2021 11:16:42 GMT
- Title: Online Continual Learning via Multiple Deep Metric Learning and
Uncertainty-guided Episodic Memory Replay -- 3rd Place Solution for ICCV 2021
Workshop SSLAD Track 3A Continual Object Classification
- Authors: Muhammad Rifki Kurniawan, Xing Wei, Yihong Gong
- Abstract summary: Non-stationarity in online continual learning potentially brings about catastrophic forgetting in neural networks.
Our proposed method achieves considerable generalization, with an average mean class accuracy (AMCA) of 64.01% on the validation set and 64.53% on the test set.
- Score: 41.35216156491142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online continual learning in the wild is a very difficult task in machine
learning. Non-stationarity in online continual learning potentially brings
about catastrophic forgetting in neural networks. Specifically, online
continual learning for autonomous driving with the SODA10M dataset poses
additional problems: an extremely long-tailed class distribution combined with
continuous distribution shift. To address these problems, we propose multiple
deep metric representation learning via both contrastive and supervised
contrastive learning, alongside soft-label distillation, to improve model
generalization. Moreover, we employ a modified class-balanced focal loss for
sensitive penalization of class-imbalanced and hard versus easy samples. We
also store samples for rehearsal under the guidance of an uncertainty metric
and perform online and periodic memory updates. Our proposed method achieves
considerable generalization, with an average mean class accuracy (AMCA) of
64.01% on the validation set and 64.53% on the test set.
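The multiple deep metric learning component combines contrastive and supervised contrastive objectives. The abstract gives no code, so the following is a minimal PyTorch sketch of the standard supervised contrastive loss (Khosla et al., 2020) that such a component typically builds on; the function name, temperature value, and batch conventions are illustrative assumptions, not the authors' implementation.

```python
import torch


def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss (Khosla et al., 2020) over one batch.

    features: (N, D) L2-normalized embeddings; labels: (N,) integer class ids.
    """
    n = features.size(0)
    sim = features @ features.t() / temperature                # pairwise similarities
    logits_mask = ~torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & logits_mask

    # Log-softmax over all non-anchor samples, with max subtraction for stability.
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()
    exp_sim = torch.exp(sim) * logits_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)

    # Average log-likelihood over each anchor's positives; skip anchors with none.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    mean_log_prob_pos = (pos_mask * log_prob).sum(dim=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()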
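Soft-label distillation here refers to matching the current model's predictions to softened targets from a previous (teacher) model. As a hedged illustration, this is the standard temperature-scaled KL distillation term (Hinton et al., 2015); the temperature and any loss weighting are assumed values and may differ from the authors' setup.

```python
import torch.nn.functional as F


def soft_label_distillation(student_logits, teacher_logits, temperature=2.0):
    """Temperature-scaled KL divergence between teacher and student predictions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2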
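The "modified class-balanced focal loss" is described only at a high level in the abstract. The sketch below shows the unmodified baseline it presumably departs from: effective-number class weighting (Cui et al., 2019) combined with the focal term (Lin et al., 2017). The values of beta and gamma and the samples_per_class input are assumptions.

```python
import torch
import torch.nn.functional as F


def class_balanced_focal_loss(logits, targets, samples_per_class,
                              beta=0.9999, gamma=2.0):
    """Class-balanced focal loss baseline (Cui et al., 2019; Lin et al., 2017).

    logits: (N, C); targets: (N,) long; samples_per_class: (C,) training counts.
    """
    # Effective-number class weights: (1 - beta) / (1 - beta ** n_c),
    # normalized so the weights sum to the number of classes.
    effective_num = 1.0 - torch.pow(beta, samples_per_class.float())
    weights = (1.0 - beta) / effective_num
    weights = weights / weights.sum() * samples_per_class.numel()

    log_probs = F.log_softmax(logits, dim=1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p(true class)
    pt = log_pt.exp()

    # Focal term down-weights easy samples; class weights rebalance rare classes.
    focal = (1.0 - pt).pow(gamma) * (-log_pt)
    return (weights.to(logits.device)[targets] * focal).mean()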
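Finally, the uncertainty-guided episodic memory stores rehearsal samples according to an uncertainty score and is refreshed both online and periodically. The exact metric and schedule are not given in this summary, so the sketch below uses softmax entropy and a per-class "keep the most uncertain" buffer purely as placeholder choices; the class structure, buffer size, and update/sample API are assumptions.

```python
import torch
import torch.nn.functional as F


def predictive_entropy(logits):
    """Softmax entropy per sample, used here as the uncertainty score."""
    p = F.softmax(logits, dim=1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=1)


class UncertaintyMemory:
    """Episodic memory that keeps the most uncertain samples per class."""

    def __init__(self, per_class_size=50):
        self.per_class_size = per_class_size
        self.slots = {}  # class id -> list of (score, input, label)

    def update(self, x, y, logits):
        """Online update: insert a batch, then trim each class to the top-k uncertain."""
        scores = predictive_entropy(logits)
        for xi, yi, si in zip(x, y, scores):
            bucket = self.slots.setdefault(int(yi), [])
            bucket.append((float(si), xi.detach().cpu(), int(yi)))
            bucket.sort(key=lambda item: item[0], reverse=True)
            del bucket[self.per_class_size:]

    def sample(self, batch_size):
        """Draw a rehearsal mini-batch uniformly from the stored samples."""
        items = [item for bucket in self.slots.values() for item in bucket]
        if not items:
            return None, None
        idx = torch.randperm(len(items))[:batch_size].tolist()
        xs = torch.stack([items[i][1] for i in idx])
        ys = torch.tensor([items[i][2] for i in idx])
        return xs, ys
```

In use, `update` would be called on each incoming mini-batch (the online update) and the buffer could additionally be re-scored with the current model at task boundaries (the periodic update); `sample` supplies the replay batch mixed into each training step.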
Related papers
- Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models [79.28821338925947]
Domain-Class Incremental Learning is a realistic but challenging continual learning scenario.
To handle these diverse tasks, pre-trained Vision-Language Models (VLMs) are introduced for their strong generalizability.
This incurs a new problem: the knowledge encoded in the pre-trained VLMs may be disturbed when adapting to new tasks, compromising their inherent zero-shot ability.
Existing methods tackle it by tuning VLMs with knowledge distillation on extra datasets, which demands heavy overhead.
We propose the Distribution-aware Interference-free Knowledge Integration (DIKI) framework, retaining pre-trained knowledge of
arXiv Detail & Related papers (2024-07-07T12:19:37Z)
- New metrics for analyzing continual learners [27.868967961503962]
Continual Learning (CL) poses challenges to standard learning algorithms.
This stability-plasticity dilemma remains central to CL and multiple metrics have been proposed to adequately measure stability and plasticity separately.
We propose new metrics that account for the task's increasing difficulty.
arXiv Detail & Related papers (2023-09-01T13:53:33Z)
- CBA: Improving Online Continual Learning via Continual Bias Adaptor [44.1816716207484]
We propose a Continual Bias Adaptor to augment the classifier network to adapt to catastrophic distribution change during training.
In the testing stage, CBA can be removed, introducing no additional cost or memory overhead.
We theoretically reveal the reason why the proposed method can effectively alleviate catastrophic distribution shifts.
arXiv Detail & Related papers (2023-08-14T04:03:51Z)
- Kaizen: Practical Self-supervised Continual Learning with Continual Fine-tuning [21.36130180647864]
Retraining a model from scratch to adapt to newly generated data is time-consuming and inefficient.
We introduce a training architecture that is able to mitigate catastrophic forgetting.
Kaizen significantly outperforms previous SSL models in competitive vision benchmarks.
arXiv Detail & Related papers (2023-03-30T09:08:57Z)
- Scalable Adversarial Online Continual Learning [11.6720677621333]
This paper proposes a scalable adversarial continual learning (SCALE) method.
It puts forward a parameter generator that transforms common features into task-specific features, and a single discriminator in the adversarial game to induce common features.
It outperforms prominent baselines with noticeable margins in both accuracy and execution time.
arXiv Detail & Related papers (2022-09-04T08:05:40Z)
- Online Continual Learning on a Contaminated Data Stream with Blurry Task Boundaries [17.43350151320054]
A large body of continual learning (CL) methods assumes data streams with clean labels, while online learning scenarios under noisy data streams remain underexplored.
We consider a more practical CL task setup of an online learning from blurry data stream with corrupted labels, where existing CL methods struggle.
We propose a novel strategy to manage and use the memory by a unified approach of label noise aware diverse sampling and robust learning with semi-supervised learning.
arXiv Detail & Related papers (2022-03-29T08:52:45Z)
- Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking [58.14267480293575]
We propose a simple yet effective online learning approach for few-shot online adaptation without requiring offline training.
It includes a built-in memory retention mechanism that lets the model remember knowledge about objects seen before.
We evaluate our approach based on two networks in the online learning families for tracking, i.e., multi-layer perceptrons in RT-MDNet and convolutional neural networks in DiMP.
arXiv Detail & Related papers (2021-12-28T06:51:18Z)
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
- Task-agnostic Continual Learning with Hybrid Probabilistic Models [75.01205414507243]
We propose HCL, a Hybrid generative-discriminative approach to Continual Learning for classification.
The flow is used to learn the data distribution, perform classification, identify task changes, and avoid forgetting.
We demonstrate the strong performance of HCL on a range of continual learning benchmarks such as split-MNIST, split-CIFAR, and SVHN-MNIST.
arXiv Detail & Related papers (2021-06-24T05:19:26Z)
- Self-Damaging Contrastive Learning [92.34124578823977]
Unlabeled data in reality is commonly imbalanced and shows a long-tail distribution.
This paper proposes a principled framework called Self-Damaging Contrastive Learning to automatically balance the representation learning without knowing the classes.
Our experiments show that SDCLR significantly improves not only overall accuracies but also balancedness.
arXiv Detail & Related papers (2021-06-06T00:04:49Z)