RSAC: Regularized Subspace Approximation Classifier for Lightweight
Continuous Learning
- URL: http://arxiv.org/abs/2007.01480v1
- Date: Fri, 3 Jul 2020 03:38:06 GMT
- Title: RSAC: Regularized Subspace Approximation Classifier for Lightweight
Continuous Learning
- Authors: Chih-Hsing Ho, Shang-Ho (Lawrence) Tsai
- Abstract summary: Continuous learning seeks to learn from data that arrives over time.
In this work, a novel training algorithm, the regularized subspace approximation classifier (RSAC), is proposed to achieve lightweight continuous learning.
Extensive experiments show that RSAC is more efficient than prior continuous learning works and outperforms them under various experimental settings.
- Score: 0.9137554315375922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continuous learning seeks to learn from data that arrives
over time. While prior works have demonstrated several possible
solutions, these approaches require excessive training time as well as memory
usage. This is impractical for applications where time and storage are
constrained, such as edge computing. In this work, a novel training algorithm,
regularized subspace approximation classifier (RSAC), is proposed to achieve
lightweight continuous learning. RSAC contains a feature reduction module and
classifier module with regularization. Extensive experiments show that RSAC is
more efficient than prior continuous learning works and outperforms them under
various experimental settings.
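The abstract names only two components, a feature reduction module and a classifier module with regularization, so the following is a minimal, hypothetical sketch of that recipe rather than the authors' implementation: a PCA-style subspace projection paired with a ridge-regularized least-squares classifier whose running statistics are updated task by task, so no raw past data needs to be kept. The class name, the choice of PCA, and the ridge solver are all illustrative assumptions.

```python
import numpy as np

class SubspaceRidgeClassifier:
    """Hypothetical sketch of a 'feature reduction + regularized classifier'
    pipeline for lightweight continual learning (not the authors' code)."""

    def __init__(self, n_components=64, reg=1e-2):
        self.k = n_components          # subspace dimension (feature reduction)
        self.reg = reg                 # ridge regularization strength
        self.proj = None               # (d, k) subspace basis
        self.classes_ = np.array([], dtype=int)
        self.ZtZ = None                # (k, k) running scatter in the subspace
        self.ZtY = None                # (k, C) running class statistics

    def _fit_subspace(self, X):
        # PCA-style basis estimated from the first batch; an illustrative choice.
        Xc = X - X.mean(axis=0, keepdims=True)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        self.proj = Vt[: self.k].T
        self.k = self.proj.shape[1]    # guard against small first batches

    def partial_fit(self, X, y):
        X = np.asarray(X, dtype=float)
        y = np.asarray(y)
        if self.proj is None:
            self._fit_subspace(X)
            self.ZtZ = np.zeros((self.k, self.k))
            self.ZtY = np.zeros((self.k, 0))
        Z = X @ self.proj                              # reduced features
        new_classes = np.setdiff1d(np.unique(y), self.classes_)
        if new_classes.size:                           # grow targets for unseen classes
            self.classes_ = np.concatenate([self.classes_, new_classes])
            self.ZtY = np.hstack([self.ZtY, np.zeros((self.k, new_classes.size))])
        Y = (y[:, None] == self.classes_[None, :]).astype(float)
        self.ZtZ += Z.T @ Z                            # accumulate sufficient statistics
        self.ZtY += Z.T @ Y                            # instead of storing raw data
        return self

    def predict(self, X):
        # Ridge-regularized least-squares weights from the accumulated statistics.
        W = np.linalg.solve(self.ZtZ + self.reg * np.eye(self.k), self.ZtY)
        return self.classes_[(np.asarray(X, dtype=float) @ self.proj @ W).argmax(axis=1)]
```

Under these assumptions, a typical usage pattern would call partial_fit once per incoming task and predict at any time; memory stays proportional to the subspace dimension and the number of classes rather than to the length of the data stream.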
Related papers
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP (a minimal GDA sketch appears after this list).
arXiv Detail & Related papers (2024-02-06T15:45:27Z)
- RLSAC: Reinforcement Learning enhanced Sample Consensus for End-to-End Robust Estimation [74.47709320443998]
We propose RLSAC, a novel Reinforcement Learning enhanced SAmple Consensus framework for end-to-end robust estimation.
RLSAC employs a graph neural network to utilize both data and memory features to guide exploration directions for sampling the next minimum set.
Our experimental results demonstrate that RLSAC can learn from features to gradually explore a better hypothesis.
arXiv Detail & Related papers (2023-08-10T03:14:19Z)
- Novel Batch Active Learning Approach and Its Application to Synthetic Aperture Radar Datasets [7.381841249558068]
Recent gains have been made using sequential active learning for synthetic aperture radar (SAR) data (arXiv:2204.00005).
We developed a novel, two-part approach for batch active learning: Dijkstra's Annulus Core-Set (DAC) for core-set generation and LocalMax for batch sampling.
The batch active learning process that combines DAC and LocalMax achieves nearly identical accuracy to sequential active learning while being more efficient by a factor proportional to the batch size.
arXiv Detail & Related papers (2023-07-19T23:25:21Z)
- Computationally Budgeted Continual Learning: What Does Matter? [128.0827987414154]
Continual Learning (CL) aims to sequentially train models on streams of incoming data that vary in distribution by preserving previous knowledge while adapting to new data.
Current CL literature focuses on restricted access to previously seen data, while imposing no constraints on the computational budget for training.
We revisit this problem with a large-scale benchmark and analyze the performance of traditional CL approaches in a compute-constrained setting.
arXiv Detail & Related papers (2023-03-20T14:50:27Z)
- SimCS: Simulation for Domain Incremental Online Continual Segmentation [60.18777113752866]
Existing continual learning approaches mostly focus on image classification in the class-incremental setup.
We propose SimCS, a parameter-free method complementary to existing ones that uses simulated data to regularize continual learning.
arXiv Detail & Related papers (2022-11-29T14:17:33Z)
- Continuous Episodic Control [7.021281655855703]
This paper introduces Continuous Episodic Control (CEC), a novel non-parametric episodic memory algorithm for sequential decision making in problems with a continuous action space.
Results on several sparse-reward continuous control environments show that our proposed method learns faster than state-of-the-art model-free RL and memory-augmented RL algorithms, while maintaining good long-run performance as well.
arXiv Detail & Related papers (2022-11-28T09:48:42Z)
- AdaS: Adaptive Scheduling of Stochastic Gradients [50.80697760166045]
We introduce the notions of "knowledge gain" and "mapping condition" and propose a new algorithm called Adaptive Scheduling (AdaS).
Experimentation reveals that, using the derived metrics, AdaS exhibits: (a) faster convergence and superior generalization over existing adaptive learning methods; and (b) lack of dependence on a validation set to determine when to stop training.
arXiv Detail & Related papers (2020-06-11T16:36:31Z)
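For the "Hard-to-Beat Baseline" entry above, which applies Gaussian Discriminant Analysis to CLIP's downstream classification, here is a minimal illustration of GDA with a shared, shrinkage-regularized covariance fit on pre-extracted image features. The function names, the uniform-prior assumption, and the shrinkage value are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def fit_gda(features, labels, shrinkage=1e-3):
    """Generic GDA/LDA sketch: class means plus a shared (shrunk) covariance
    turned into linear discriminants. Hyperparameters are illustrative."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    d = features.shape[1]
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    cov = np.zeros((d, d))
    for c, mu in zip(classes, means):
        Xc = features[labels == c] - mu
        cov += Xc.T @ Xc                        # pooled within-class scatter
    cov /= len(features)
    cov += shrinkage * np.eye(d)                # regularize the covariance estimate
    prec = np.linalg.inv(cov)
    W = means @ prec                            # (C, d) linear weights
    b = -0.5 * np.einsum('cd,cd->c', W, means)  # bias term, uniform class priors assumed
    return classes, W, b

def predict_gda(classes, W, b, features):
    scores = np.asarray(features, dtype=float) @ W.T + b
    return classes[scores.argmax(axis=1)]
```

Under these assumptions the classifier is training-free in the deep-learning sense: it needs only one pass over cached image features to compute per-class means and one shared covariance.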
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.