Online Class-Incremental Continual Learning with Adversarial Shapley Value
- URL: http://arxiv.org/abs/2009.00093v4
- Date: Mon, 22 Mar 2021 20:03:10 GMT
- Title: Online Class-Incremental Continual Learning with Adversarial Shapley Value
- Authors: Dongsub Shim, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim,
Jongseong Jang
- Abstract summary: In this paper, we focus on the online class-incremental setting where a model needs to learn new classes continually from an online data stream.
To this end, we contribute a novel Adversarial Shapley value scoring method that scores memory data samples according to their ability to preserve latent decision boundaries.
Overall, we observe that our proposed ASER method provides competitive or improved performance compared to state-of-the-art replay-based continual learning methods on a variety of datasets.
- Score: 28.921534209869105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As image-based deep learning becomes pervasive on every device, from cell
phones to smart watches, there is a growing need to develop methods that
continually learn from data while minimizing memory footprint and power
consumption. While memory replay techniques have shown exceptional promise for
this task of continual learning, the best method for selecting which buffered
images to replay is still an open question. In this paper, we specifically
focus on the online class-incremental setting where a model needs to learn new
classes continually from an online data stream. To this end, we contribute a
novel Adversarial Shapley value scoring method that scores memory data samples
according to their ability to preserve latent decision boundaries for
previously observed classes (to maintain learning stability and avoid
forgetting) while interfering with latent decision boundaries of current
classes being learned (to encourage plasticity and optimal learning of new
class boundaries). Overall, we observe that our proposed ASER method provides
competitive or improved performance compared to state-of-the-art replay-based
continual learning methods on a variety of datasets.
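The scoring builds on closed-form Shapley values of a k-nearest-neighbour classifier (Jia et al., 2019), computed in the model's latent feature space. Below is a minimal sketch of that computation, together with a mean-aggregated cooperative-minus-adversarial combination in the spirit of the paper's ASER_mu variant; the feature extractor `feat`, the choice K=3, Euclidean distance, and the mean aggregation are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def knn_shapley(z_cand, y_cand, z_eval, y_eval, K=3):
    """Closed-form Shapley values of a KNN classifier (Jia et al., 2019).

    Scores each candidate point's contribution to classifying each
    evaluation point in latent space; returns an (n_eval, n_cand) matrix."""
    n = len(y_cand)
    sv = np.zeros((len(y_eval), n))
    for e, (ze, ye) in enumerate(zip(z_eval, y_eval)):
        order = np.argsort(np.linalg.norm(z_cand - ze, axis=1))  # nearest first
        match = (y_cand[order] == ye).astype(float)
        s = np.zeros(n)
        s[order[-1]] = match[-1] / n                  # farthest candidate
        for i in range(n - 2, -1, -1):                # recurse toward nearest
            rank = i + 1                              # 1-based rank by distance
            s[order[i]] = (s[order[i + 1]]
                           + (match[i] - match[i + 1]) / K * min(K, rank) / rank)
        sv[e] = s
    return sv

def adversarial_shapley_score(feat, cand_x, cand_y, mem_x, mem_y,
                              cur_x, cur_y, K=3):
    """Score memory candidates: value w.r.t. stored old-class data
    (cooperative: preserve their latent boundaries) minus value w.r.t. the
    current batch (adversarial: interfere with new-class boundaries)."""
    z_cand, z_mem, z_cur = feat(cand_x), feat(mem_x), feat(cur_x)
    coop = knn_shapley(z_cand, cand_y, z_mem, mem_y, K).mean(axis=0)
    adv = knn_shapley(z_cand, cand_y, z_cur, cur_y, K).mean(axis=0)
    return coop - adv
```

In a replay step, one would score the buffered candidates this way and replay those with the highest scores.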
Related papers
- VERSE: Virtual-Gradient Aware Streaming Lifelong Learning with Anytime Inference [36.61783715563126]
Streaming lifelong learning is a challenging variant of lifelong learning whose goal is continuous learning without forgetting.
We introduce a novel approach to lifelong learning that is streaming: each training example is observed only once.
We propose a novel virtual-gradient-based approach for continual representation learning that adapts to each new example while also generalizing well on past data to prevent catastrophic forgetting.
arXiv Detail & Related papers (2023-09-15T07:54:49Z)
- Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z)
- A baseline on continual learning methods for video action recognition [15.157938674002793]
Continual learning aims to overcome long-standing limitations of classic supervised models.
We present a benchmark of state-of-the-art continual learning methods on video action recognition.
arXiv Detail & Related papers (2023-04-20T14:20:43Z)
- PIVOT: Prompting for Video Continual Learning [50.80141083993668]
We introduce PIVOT, a novel method that leverages extensive knowledge in pre-trained models from the image domain.
Our experiments show that PIVOT improves state-of-the-art methods by a significant 27% on the 20-task ActivityNet setup.
arXiv Detail & Related papers (2022-12-09T13:22:27Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing approaches is an exemplar memory, which counters catastrophic forgetting by saving a subset of past data into a memory bank and replaying it when training on future tasks (a minimal buffer sketch follows below).
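As a concrete illustration of the exemplar-memory mechanism this summary describes (not the Memory Transformer Network itself), here is a minimal fixed-size buffer maintained by reservoir sampling; the class name and interface are illustrative assumptions.

```python
import random

class ExemplarMemory:
    """Fixed-size memory bank filled by reservoir sampling, so every
    stream example has equal probability of being retained."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []          # list of (x, y) exemplars
        self.n_seen = 0

    def add(self, x, y):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))          # fill until full
        else:
            j = random.randrange(self.n_seen)  # keep with prob capacity/n_seen
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        """Draw up to k stored exemplars to replay alongside new data."""
        return random.sample(self.data, min(k, len(self.data)))
```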
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
- Bypassing Logits Bias in Online Class-Incremental Learning with a Generative Framework [15.345043222622158]
We focus on online class-incremental learning setting in which new classes emerge over time.
Almost all existing methods are replay-based with a softmax classifier.
We propose a novel generative framework based on the feature space.
arXiv Detail & Related papers (2022-05-19T06:54:20Z)
- vCLIMB: A Novel Video Class Incremental Learning Benchmark [53.90485760679411]
We introduce vCLIMB, a novel video continual learning benchmark.
vCLIMB is a standardized test-bed to analyze catastrophic forgetting of deep models in video continual learning.
We propose a temporal consistency regularization that can be applied on top of memory-based continual learning methods.
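The summary does not spell out the paper's exact regularizer; as a generic illustration of temporal consistency on top of replay, one can penalize prediction drift between consecutive frames of a stored clip. The model interface and the squared-error form below are assumptions, not vCLIMB's precise loss.

```python
import torch

def temporal_consistency_loss(model, clip):
    """Generic temporal-consistency penalty: nearby frames of one video
    should receive similar predictions. `clip` has shape (T, C, H, W)."""
    probs = torch.softmax(model(clip), dim=1)        # (T, num_classes)
    return ((probs[1:] - probs[:-1]) ** 2).mean()    # adjacent-frame drift

# Typically combined with the usual replay objective, e.g.:
#   loss = ce_loss + lam * temporal_consistency_loss(model, replayed_clip)
```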
arXiv Detail & Related papers (2022-01-23T22:14:17Z)
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online (see the protocol sketch below).
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
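The test-then-train protocol is easy to misread, so here is a minimal sketch of one pass over the stream; the model, optimizer, criterion, and stream interfaces are illustrative assumptions.

```python
import torch

def test_then_train(model, optimizer, criterion, stream):
    """Online protocol: each incoming batch is evaluated before the model
    may train on it, so accuracy measures true online learning efficacy."""
    correct = total = 0
    for x, y in stream:                    # each batch arrives exactly once
        model.eval()
        with torch.no_grad():              # test first ...
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
        model.train()                      # ... then train on the same batch
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    return correct / total                 # online accuracy over the stream
```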
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
- Incremental Learning from Low-labelled Stream Data in Open-Set Video Face Recognition [0.0]
We propose a novel incremental learning approach that combines a deep feature encoder with an Open-Set Dynamic Ensemble of SVMs.
Our method can use unsupervised operational data to enhance recognition.
Results show an F1-score improvement of up to 15% with respect to non-adaptive state-of-the-art methods.
arXiv Detail & Related papers (2020-12-17T13:28:13Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new-classes and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)