RECALL+: Adversarial Web-based Replay for Continual Learning in Semantic
Segmentation
- URL: http://arxiv.org/abs/2309.10479v2
- Date: Sat, 17 Feb 2024 02:05:51 GMT
- Title: RECALL+: Adversarial Web-based Replay for Continual Learning in Semantic
Segmentation
- Authors: Chang Liu, Giulia Rizzoli, Francesco Barbato, Andrea Maracani, Marco
Toldo, Umberto Michieli, Yi Niu and Pietro Zanuttigh
- Abstract summary: We extend our previous approach (RECALL) and tackle forgetting by exploiting unsupervised web-crawled data.
Experimental results show that this enhanced approach achieves remarkable results, particularly when the incremental scenario spans multiple steps.
- Score: 27.308426315113707
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Catastrophic forgetting of previous knowledge is a critical issue in
continual learning typically handled through various regularization strategies.
However, existing methods struggle especially when several incremental steps
are performed. In this paper, we extend our previous approach (RECALL) and
tackle forgetting by exploiting unsupervised web-crawled data to retrieve
examples of old classes from online databases. In contrast to the original
methodology, which did not incorporate an assessment of web-based data, the
present work proposes two advanced techniques: an adversarial approach and an
adaptive threshold strategy. These methods are utilized to meticulously choose
samples from web data that exhibit strong statistical congruence with the no
longer available training data. Furthermore, we improve the pseudo-labeling
scheme to achieve a more accurate labeling of web data that also considers
classes being learned in the current step. Experimental results show that this
enhanced approach achieves remarkable results, particularly when the
incremental scenario spans multiple steps.
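The selection mechanism described in the abstract can be pictured as follows: a small discriminator learns to tell features of the original training data from features of web-crawled images, and an adaptive threshold on its score decides which web samples are kept as replay data. The sketch below is a minimal illustration of that idea, not the authors' released implementation; all names (Discriminator, select_web_samples, min_keep) are assumptions.

```python
# Minimal sketch (assumed names, not the authors' code): a discriminator
# scores web-crawled features for statistical congruence with the original
# training data, and an adaptive threshold keeps enough samples per class.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Outputs a score in [0, 1]; higher means 'looks like the original data'."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats).squeeze(-1)

@torch.no_grad()
def select_web_samples(disc: Discriminator, feats: torch.Tensor,
                       init_threshold: float = 0.9, min_keep: int = 100):
    """Keep web samples scoring above a threshold that is relaxed adaptively,
    so that at least `min_keep` replay examples survive for the class."""
    scores = disc(feats)
    tau = init_threshold
    keep = scores >= tau
    while keep.sum().item() < min_keep and tau > 0.0:
        tau -= 0.05                      # adaptive relaxation of the threshold
        keep = scores >= tau
    return keep                          # boolean mask over the web samples

# Usage sketch: mask = select_web_samples(disc, feature_extractor(web_images))
```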
Related papers
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and Imagenet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
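For intuition only: one generic way to counteract the recency bias that ARC targets is to down-weight, at inference time, the logits of classes introduced in the current task. This is a common post-hoc remedy sketched under assumed names; it is not the ARC algorithm itself.

```python
# Generic post-hoc correction of recency bias (illustrative, NOT the ARC
# method): penalize logits of classes that belong to the most recent task.
import torch

def rebalance_logits(logits: torch.Tensor, task_of_class: torch.Tensor,
                     current_task: int, penalty: float = 0.5) -> torch.Tensor:
    """logits: (B, C); task_of_class: (C,) task index each class was learned in."""
    bias = (task_of_class == current_task).float() * penalty
    return logits - bias  # broadcasts over the batch dimension

# Usage sketch: preds = rebalance_logits(model(x), task_of_class, t).argmax(-1)
```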
- Continual Learning with Pre-Trained Models: A Survey [61.97613090666247]
Continual Learning aims to overcome catastrophic forgetting of former knowledge when learning new tasks.
This paper presents a comprehensive survey of the latest advancements in PTM-based CL.
arXiv Detail & Related papers (2024-01-29T18:27:52Z)
- Learning to Learn for Few-shot Continual Active Learning [9.283518682371756]
Continual learning strives to ensure stability in solving previously seen tasks while demonstrating plasticity in a novel domain.
Recent advances in continual learning are mostly confined to a supervised learning setting, especially in the NLP domain.
We exploit meta-learning and propose a method called Meta-Continual Active Learning.
arXiv Detail & Related papers (2023-11-07T05:22:11Z)
- Label-efficient Time Series Representation Learning: A Review [19.218833228063392]
Label-efficient time series representation learning is crucial for deploying deep learning models in real-world applications.
To address the scarcity of labeled time series data, various strategies, e.g., transfer learning, self-supervised learning, and semi-supervised learning, have been developed.
We introduce a novel taxonomy that categorizes existing approaches as in-domain or cross-domain, based on their reliance on external data sources.
arXiv Detail & Related papers (2023-02-13T15:12:15Z)
- SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning [168.89470249446023]
We present SURF, a semi-supervised reward learning framework that utilizes a large amount of unlabeled samples with data augmentation.
In order to leverage unlabeled samples for reward learning, we infer pseudo-labels of the unlabeled samples based on the confidence of the preference predictor.
Our experiments demonstrate that our approach significantly improves the feedback-efficiency of the preference-based method on a variety of locomotion and robotic manipulation tasks.
arXiv Detail & Related papers (2022-03-18T16:50:38Z)
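The pseudo-labeling step described in the SURF entry can be sketched as confidence filtering: the current preference predictor labels unlabeled segment pairs, and only pairs whose predicted preference is sufficiently confident enter the reward-learning loss. A minimal sketch, with assumed names (preference_probs, kappa):

```python
# Confidence-based pseudo-labeling of unlabeled preference pairs, sketched
# under assumed names; the threshold and shapes are illustrative.
import torch

@torch.no_grad()
def pseudo_label_pairs(preference_probs: torch.Tensor, kappa: float = 0.9):
    """preference_probs: (N, 2) predictor probabilities that segment 0 or 1
    of each pair is preferred. Returns indices and labels of confident pairs."""
    conf, labels = preference_probs.max(dim=-1)
    keep = conf >= kappa                 # keep only confident predictions
    idx = keep.nonzero(as_tuple=True)[0]
    return idx, labels[idx]
```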
- Reinforced Meta Active Learning [11.913086438671357]
We present an online stream-based meta active learning method which learns on the fly an informativeness measure directly from the data.
The method is based on reinforcement learning and combines episodic policy search and a contextual bandits approach.
We demonstrate on several real datasets that this method learns to select training samples more efficiently than existing state-of-the-art methods.
arXiv Detail & Related papers (2022-03-09T08:36:54Z)
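As a rough picture of a stream-based selector like the one above, the sketch below implements a plain epsilon-greedy contextual bandit that learns an informativeness score online; the paper's actual combination of episodic policy search and contextual bandits is more involved, and all names here are assumptions.

```python
# Plain epsilon-greedy contextual bandit for stream-based active learning
# (a simplified stand-in for the paper's policy, with assumed names).
import numpy as np

class BanditSelector:
    def __init__(self, dim: int, lr: float = 0.01, eps: float = 0.1):
        self.w = np.zeros(dim)   # linear informativeness model
        self.lr, self.eps = lr, eps

    def decide(self, context: np.ndarray) -> bool:
        """Return True to request a label for the incoming sample."""
        if np.random.rand() < self.eps:        # explore occasionally
            return bool(np.random.rand() < 0.5)
        return float(context @ self.w) > 0.0   # exploit the learned score

    def update(self, context: np.ndarray, reward: float) -> None:
        """Regression step toward the observed reward (e.g., loss reduction)."""
        self.w += self.lr * (reward - context @ self.w) * context
```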
- On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z)
- Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking [58.14267480293575]
We propose a simple yet effective online learning approach for few-shot online adaptation without requiring offline training.
It provides a built-in memory retention mechanism that allows the model to remember knowledge about objects seen before.
We evaluate our approach based on two networks in the online learning families for tracking, i.e., multi-layer perceptrons in RT-MDNet and convolutional neural networks in DiMP.
arXiv Detail & Related papers (2021-12-28T06:51:18Z)
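The recursive least-squares estimator named in the tracking paper's title has a standard closed-form online update; the sketch below shows that update with a forgetting factor, independent of how the paper wires it into RT-MDNet or DiMP.

```python
# Standard recursive least-squares (RLS) update with forgetting factor `lam`;
# how the paper couples this to RT-MDNet/DiMP is not reproduced here.
import numpy as np

class RLSEstimator:
    """Online linear model y ~ w.x, updated in O(d^2) per sample."""
    def __init__(self, dim: int, lam: float = 0.99, delta: float = 1e3):
        self.w = np.zeros(dim)
        self.P = np.eye(dim) * delta   # (scaled) inverse covariance estimate
        self.lam = lam

    def update(self, x: np.ndarray, y: float) -> None:
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)             # gain vector
        self.w += k * (y - self.w @ x)           # correct by prediction error
        self.P = (self.P - np.outer(k, Px)) / self.lam
```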
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
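The protocol in the entry above is concrete enough to sketch directly: each incoming batch is evaluated before it is trained on, so the reported accuracy reflects truly online adaptation. A minimal PyTorch-style loop, with the stream and model interfaces assumed:

```python
# "Test-then-train" online continual learning loop: every batch is scored
# first, then used for a training step. Stream/model interfaces are assumed.
def online_continual_eval(model, optimizer, loss_fn, stream):
    correct, total = 0, 0
    for x, y in stream:                        # batches arrive once, in order
        preds = model(x).argmax(dim=-1)        # 1) test on the incoming batch
        correct += (preds == y).sum().item()
        total += y.numel()
        optimizer.zero_grad()                  # 2) then train on the same batch
        loss_fn(model(x), y).backward()
        optimizer.step()
    return correct / max(total, 1)             # online accuracy over the stream
```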
- An EM Framework for Online Incremental Learning of Semantic Segmentation [37.94734474090863]
We propose an incremental learning strategy that can adapt deep segmentation models without catastrophic forgetting, using streaming input data with pixel annotations for the novel classes only.
We validate our approach on the PASCAL VOC 2012 and ADE20K datasets, and the results demonstrate its superior performance over the existing incremental methods.
arXiv Detail & Related papers (2021-08-08T11:30:09Z)
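One way to read the setting above: since pixel annotations cover only the novel classes, a frozen copy of the previous model can fill in labels for old-class pixels. The sketch below shows that label-completion step in isolation; it is an illustration of the idea, not the paper's EM procedure, and the names are assumed.

```python
# Label completion for incremental segmentation (illustrative, assumed names):
# unannotated pixels inherit old-class predictions from the frozen old model.
import torch

@torch.no_grad()
def complete_labels(prev_model, image: torch.Tensor, new_label: torch.Tensor,
                    ignore_index: int = 255) -> torch.Tensor:
    """new_label: (B, H, W) with novel-class ids and `ignore_index` elsewhere."""
    old_pred = prev_model(image).argmax(dim=1)   # (B, H, W) old-class map
    merged = new_label.clone()
    unlabeled = new_label == ignore_index
    merged[unlabeled] = old_pred[unlabeled]      # pseudo-label old classes
    return merged
```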
- Ask-n-Learn: Active Learning via Reliable Gradient Representations for Image Classification [29.43017692274488]
Deep predictive models rely on human supervision in the form of labeled training data.
We propose Ask-n-Learn, an active learning approach based on gradient embeddings obtained using the pseudo-labels estimated in each iteration of the algorithm.
arXiv Detail & Related papers (2020-09-30T05:19:56Z)
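Gradient embeddings built from pseudo-labels, as in this line of active-learning work, are typically the gradient of the cross-entropy loss with respect to the last-layer weights, evaluated at the model's own prediction. A minimal sketch of that common construction (shapes and names assumed, not necessarily Ask-n-Learn's exact variant):

```python
# Gradient embeddings at pseudo-labels (a common construction in this line of
# work): grad of cross-entropy w.r.t. last-layer weights at the predicted label.
import torch
import torch.nn.functional as F

@torch.no_grad()
def gradient_embeddings(features: torch.Tensor, logits: torch.Tensor):
    """features: (N, D) penultimate activations; logits: (N, C).
    Returns (N, C*D): (softmax - onehot(pseudo_label)) outer features."""
    probs = F.softmax(logits, dim=-1)
    pseudo = probs.argmax(dim=-1)                      # pseudo-labels
    onehot = F.one_hot(pseudo, logits.size(-1)).float()
    g = probs - onehot                                 # dLoss/dlogits at pseudo
    return torch.einsum('nc,nd->ncd', g, features).flatten(1)
```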
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.