On the Soft-Subnetwork for Few-shot Class Incremental Learning
- URL: http://arxiv.org/abs/2209.07529v1
- Date: Thu, 15 Sep 2022 04:54:02 GMT
- Title: On the Soft-Subnetwork for Few-shot Class Incremental Learning
- Authors: Haeyong Kang, Jaehong Yoon, Sultan Rizky Hikmawan Madjid, Sung Ju
Hwang, Chang D. Yoo
- Abstract summary: We propose a few-shot class incremental learning (FSCIL) method referred to as Soft-SubNetworks (SoftNet).
Our objective is to learn a sequence of sessions incrementally, where each session only includes a few training instances per class while preserving the knowledge of the previously learned ones.
We provide comprehensive empirical validations demonstrating that our SoftNet effectively tackles the few-shot incremental learning problem by surpassing the performance of state-of-the-art baselines over benchmark datasets.
- Score: 67.0373924836107
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inspired by Regularized Lottery Ticket Hypothesis (RLTH), which hypothesizes
that there exist smooth (non-binary) subnetworks within a dense network that
achieve the competitive performance of the dense network, we propose a few-shot
class incremental learning (FSCIL) method referred to as \emph{Soft-SubNetworks
(SoftNet)}. Our objective is to learn a sequence of sessions incrementally,
where each session only includes a few training instances per class while
preserving the knowledge of the previously learned ones. SoftNet jointly learns
the model weights and adaptive non-binary soft masks at a base training session
in which each mask consists of the major and minor subnetwork; the former aims
to minimize catastrophic forgetting during training, and the latter aims to
avoid overfitting to a few samples in each new training session. We provide
comprehensive empirical validations demonstrating that our SoftNet effectively
tackles the few-shot incremental learning problem by surpassing the performance
of state-of-the-art baselines over benchmark datasets.
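To make the soft-masking idea above concrete, the following is a minimal PyTorch-style sketch, not the authors' released code: the top-scoring weights form the binary major subnetwork (mask value 1) and the remaining minor weights receive uniform random values in (0, 1). The class name SoftMaskedLinear, the sparsity argument, and the U(0, 1) draw for the minor part are assumptions made here for illustration; the straight-through estimator needed to train the scores is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftMaskedLinear(nn.Module):
    """Illustrative linear layer gated by a soft (non-binary) subnetwork mask.

    Weights ranked in the top `sparsity` fraction by a learnable score form the
    "major" subnetwork (mask value 1); the rest form the "minor" subnetwork and
    receive uniform random values in (0, 1), so they stay partially active.
    """

    def __init__(self, in_features, out_features, sparsity=0.7):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.score = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        nn.init.kaiming_uniform_(self.score, a=5 ** 0.5)
        self.sparsity = sparsity  # fraction of weights assigned to the major subnetwork

    def soft_mask(self):
        k = max(1, int(self.score.numel() * self.sparsity))
        threshold = torch.topk(self.score.flatten(), k).values.min()
        major = (self.score >= threshold).float()             # 1 for the top-k scores
        minor = torch.rand_like(self.score) * (1.0 - major)   # U(0, 1) values elsewhere
        return major + minor

    def forward(self, x):
        # Forward pass uses the soft-masked weights; a full implementation would
        # also route gradients to `score`, e.g. via a straight-through estimator.
        return F.linear(x, self.weight * self.soft_mask(), self.bias)
```

Read against the abstract, freezing the major part across incremental sessions would limit forgetting, while the partially active minor weights are the ones adapted to the few new samples.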
Related papers
- Soft-TransFormers for Continual Learning [27.95463327680678]
We propose a novel fully fine-tuned continual learning (CL) method referred to as Soft-TransFormers (Soft-TF).
Soft-TF sequentially learns and selects an optimal soft-network or subnetwork for each task.
In inference, the identified task-adaptive network of Soft-TF masks the parameters of the pre-trained network.
arXiv Detail & Related papers (2024-11-25T03:52:47Z)
- Continual Learning: Forget-free Winning Subnetworks for Video Representations [75.40220771931132]
A Winning Subnetwork (WSN), selected in terms of task performance, is considered for various continual learning tasks.
It leverages pre-existing weights from dense networks to achieve efficient learning in Task Incremental Learning (TIL) and Task-agnostic Incremental Learning (TaIL) scenarios.
The use of the Fourier Subneural Operator (FSO) within WSN is considered for Video Incremental Learning (VIL).
arXiv Detail & Related papers (2023-12-19T09:11:49Z)
- Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- Forget-free Continual Learning with Soft-Winning SubNetworks [67.0373924836107]
We investigate two proposed continual learning methods which sequentially learn and select adaptive binary- (WSN) and non-binary Soft-Subnetworks (SoftNet) for each task.
WSN and SoftNet jointly learn the regularized model weights and task-adaptive non-binary masks of subnetworks associated with each task.
In Task Incremental Learning (TIL), the binary masks spawned per winning ticket are encoded into one N-bit binary digit mask and then compressed using Huffman coding, yielding a sub-linear increase in network capacity with respect to the number of tasks (an illustrative sketch of this encoding appears after this related-papers list).
arXiv Detail & Related papers (2023-03-27T07:53:23Z)
- Training Your Sparse Neural Network Better with Any Mask [106.134361318518]
Pruning large neural networks to create high-quality, independently trainable sparse masks is desirable.
In this paper we demonstrate an alternative opportunity: one can customize the sparse training techniques to deviate from the default dense network training protocols.
Our new sparse training recipe is generally applicable to improving training from scratch with various sparse masks.
arXiv Detail & Related papers (2022-06-26T00:37:33Z)
- Self-Supervised Learning for Binary Networks by Joint Classifier Training [11.612308609123566]
We propose a self-supervised learning method for binary networks.
For better training of the binary network, we propose a feature similarity loss, a dynamic balancing scheme of loss terms, and modified multi-stage training.
Our empirical validations show that BSSL outperforms self-supervised learning baselines for binary networks in various downstream tasks and outperforms supervised pretraining in certain tasks.
arXiv Detail & Related papers (2021-10-17T15:38:39Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z)
- Semantic Drift Compensation for Class-Incremental Learning [48.749630494026086]
Class-incremental learning of deep networks sequentially increases the number of classes to be classified.
We propose a new method to estimate the drift, called semantic drift, of features and compensate for it without the need of any exemplars.
arXiv Detail & Related papers (2020-04-01T13:31:19Z)
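As a concrete reading of the N-bit mask encoding mentioned in the Soft-Winning SubNetworks entry above, here is a generic Python sketch rather than the authors' implementation: the per-task binary masks are packed into one N-bit integer code per parameter, and the resulting symbol stream is Huffman-coded. The helper names pack_task_masks and huffman_code and the random example masks are illustrative assumptions.

```python
import heapq
from collections import Counter

import numpy as np


def pack_task_masks(masks):
    """Pack N per-task binary masks into one N-bit integer code per parameter."""
    bits = np.stack([m.astype(np.uint64).ravel() for m in masks])             # (N, P)
    weights = np.array([1 << t for t in range(len(masks))], dtype=np.uint64)  # bit t <- task t
    return (bits * weights[:, None]).sum(axis=0)                              # (P,) N-bit codes


def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from symbol frequencies."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                                     # degenerate single-symbol stream
        return {sym: "0" for sym in heap[0][2]}
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}    # left branch of the merged node
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return heap[0][2]


# Hypothetical example: three tasks, 3-bit codes per parameter, Huffman-compressed.
masks = [np.random.rand(4, 4) > 0.5 for _ in range(3)]
codes = pack_task_masks(masks)
table = huffman_code(codes.tolist())
compressed = "".join(table[c] for c in codes.tolist())
print(f"{len(compressed)} coded bits vs. {codes.size * len(masks)} raw mask bits")
```

With random masks the gain is small; the sub-linear growth described in that entry relies on masks overlapping heavily across tasks, which skews the symbol distribution in Huffman coding's favor.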
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.