Gradient-Matching Coresets for Rehearsal-Based Continual Learning
- URL: http://arxiv.org/abs/2203.14544v1
- Date: Mon, 28 Mar 2022 07:37:17 GMT
- Title: Gradient-Matching Coresets for Rehearsal-Based Continual Learning
- Authors: Lukas Balles, Giovanni Zappella, Cédric Archambeau
- Abstract summary: The goal of continual learning (CL) is to efficiently update a machine learning model with new data without forgetting previously-learned knowledge.
Most widely-used CL methods rely on a rehearsal memory of data points to be reused while training on new data.
We devise a coreset selection method for rehearsal-based continual learning.
- Score: 6.243028964381449
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of continual learning (CL) is to efficiently update a machine
learning model with new data without forgetting previously-learned knowledge.
Most widely-used CL methods rely on a rehearsal memory of data points to be
reused while training on new data. Curating such a rehearsal memory to maintain
a small, informative subset of all the data seen so far is crucial to the
success of these methods. We devise a coreset selection method for
rehearsal-based continual learning. Our method is based on the idea of gradient
matching: The gradients induced by the coreset should match, as closely as
possible, those induced by the original training dataset. Inspired by the
neural tangent kernel theory, we perform this gradient matching across the
model's initialization distribution, allowing us to extract a coreset without
having to train the model first. We evaluate the method on a wide range of
continual learning scenarios and demonstrate that it improves the performance
of rehearsal-based CL methods compared to competing memory management
strategies such as reservoir sampling.
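To make the selection criterion concrete, the sketch below illustrates gradient matching over the initialization distribution in PyTorch. It is a minimal illustration under stated assumptions, not the paper's algorithm: the greedy matching-pursuit-style selection, the `per_example_grads` helper, the `make_model` factory, and the number of sampled initializations are all illustrative choices.

```python
# Minimal sketch (not the authors' exact algorithm): greedily pick points whose
# per-example gradients, averaged over a few random initializations, best match
# the mean gradient of the full dataset. `make_model`, `per_example_grads`, and
# `n_inits` are illustrative assumptions.
import torch

def per_example_grads(model, loss_fn, xs, ys):
    """Return an (n, num_params) matrix of flattened per-example gradients."""
    grads = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads.append(torch.cat([p.grad.detach().flatten() for p in model.parameters()]))
    return torch.stack(grads)

def gradient_matching_coreset(make_model, loss_fn, xs, ys, k, n_inits=4):
    """Pick k indices whose averaged gradient approximates the full-data gradient,
    evaluated at freshly initialized (untrained) models."""
    num_params = sum(p.numel() for p in make_model().parameters())
    G = torch.zeros(len(xs), num_params)
    for _ in range(n_inits):                      # average over the initialization distribution
        G += per_example_grads(make_model(), loss_fn, xs, ys) / n_inits
    target = G.mean(dim=0)                        # mean gradient of the full dataset
    selected, residual = [], target.clone()
    for _ in range(k):                            # greedy, matching-pursuit-style picks
        scores = G @ residual
        if selected:                              # never pick the same point twice
            scores[torch.tensor(selected)] = -float("inf")
        selected.append(int(scores.argmax()))
        residual = target - G[selected].mean(dim=0)
    return selected
```

A call such as `gradient_matching_coreset(make_model, torch.nn.CrossEntropyLoss(), xs, ys, k=200)` would return the indices to keep in the rehearsal memory. The baseline named in the abstract, reservoir sampling, ignores gradient information entirely and keeps a uniform random subset of the stream; a sketch of that baseline and of the rehearsal step itself appears after the related-papers list below.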
Related papers
- An Effective Dynamic Gradient Calibration Method for Continual Learning [11.555822066922508]
Continual learning (CL) is a fundamental topic in machine learning, where the goal is to train a model with continuously incoming data and tasks.
Due to the memory limit, we cannot store all the historical data, and therefore confront the "catastrophic forgetting" problem.
We develop an effective algorithm to calibrate the gradient in each updating step of the model.
arXiv Detail & Related papers (2024-07-30T16:30:09Z)
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and Imagenet-R datasets.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- DST-Det: Simple Dynamic Self-Training for Open-Vocabulary Object Detection [72.25697820290502]
This work introduces a straightforward and efficient strategy to identify potential novel classes through zero-shot classification.
We refer to this approach as the self-training strategy, which enhances recall and accuracy for novel classes without requiring extra annotations, datasets, or re-training.
Empirical evaluations on three datasets, including LVIS, V3Det, and COCO, demonstrate significant improvements over the baseline performance.
arXiv Detail & Related papers (2023-10-02T17:52:24Z)
- Active Learning Guided by Efficient Surrogate Learners [25.52920030051264]
Re-training a deep learning model each time a single data point receives a new label is impractical.
We introduce a new active learning algorithm that harnesses the power of a Gaussian process surrogate in conjunction with the neural network principal learner.
Our proposed model adeptly updates the surrogate learner for every new data instance, enabling it to emulate and capitalize on the continuous learning dynamics of the neural network.
arXiv Detail & Related papers (2023-01-07T01:35:25Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing approaches saves a subset of past data as exemplars in a memory bank and replays it when training on future tasks, thereby preventing catastrophic forgetting (a minimal rehearsal step is sketched after this list).
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
- Making Look-Ahead Active Learning Strategies Feasible with Neural Tangent Kernels [6.372625755672473]
We propose a new method for approximating active learning acquisition strategies that are based on retraining with hypothetically-labeled candidate data points.
Although this is usually infeasible with deep networks, we use the neural tangent kernel to approximate the result of retraining.
arXiv Detail & Related papers (2022-06-25T06:13:27Z)
- A Framework and Benchmark for Deep Batch Active Learning for Regression [2.093287944284448]
We study active learning methods that adaptively select batches of unlabeled data for labeling.
We present a framework for constructing such methods out of (network-dependent) base kernels, kernel transformations, and selection methods.
Our proposed method outperforms the state-of-the-art on our benchmark, scales to large data sets, and works out-of-the-box without adjusting the network architecture or training code.
arXiv Detail & Related papers (2022-03-17T16:11:36Z)
- Gradient-matching coresets for continual learning [8.525080112374374]
We devise a coreset selection method based on the idea of gradient matching.
We evaluate the method in the context of continual learning, where it can be used to curate a rehearsal memory.
arXiv Detail & Related papers (2021-12-09T16:34:44Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
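Several of the entries above, like the abstract itself, revolve around how a rehearsal memory is filled and then replayed. The sketch below shows the two pieces in their simplest form, assuming PyTorch and a plain list-based buffer; the equal weighting of the two loss terms is an illustrative choice, and `reservoir_update` implements the textbook reservoir-sampling rule named in the abstract rather than any specific paper's recipe.

```python
# Minimal sketch, assuming PyTorch: one rehearsal training step plus the
# reservoir-sampling baseline mentioned in the abstract above.
import random
import torch

def rehearsal_step(model, optimizer, loss_fn, new_x, new_y, mem_x, mem_y):
    """One update on current-task data mixed with replayed memory samples."""
    optimizer.zero_grad()
    loss = loss_fn(model(new_x), new_y) + loss_fn(model(mem_x), mem_y)
    loss.backward()                      # the replay term counteracts forgetting
    optimizer.step()
    return float(loss)

def reservoir_update(memory, capacity, item, n_seen):
    """Reservoir sampling: keep a uniform random subset of the stream.
    n_seen counts all items observed so far, including `item`."""
    if len(memory) < capacity:
        memory.append(item)
    else:
        j = random.randrange(n_seen)     # uniform in [0, n_seen)
        if j < capacity:
            memory[j] = item             # replace with probability capacity / n_seen
```

Any curation strategy, whether gradient-matching coresets or reservoir sampling, only changes how the `(mem_x, mem_y)` buffer is populated; the replay step itself stays the same.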
This list is automatically generated from the titles and abstracts of the papers in this site.