Continual Learning For On-Device Environmental Sound Classification
- URL: http://arxiv.org/abs/2207.07429v2
- Date: Mon, 18 Jul 2022 07:54:14 GMT
- Title: Continual Learning For On-Device Environmental Sound Classification
- Authors: Yang Xiao, Xubo Liu, James King, Arshdeep Singh, Eng Siong Chng, Mark D. Plumbley, Wenwu Wang
- Abstract summary: We propose a simple and efficient continual learning method for on-device environmental sound classification.
Our method selects historical data for training by measuring per-sample classification uncertainty.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continuously learning new classes without catastrophic forgetting is a
challenging problem for on-device environmental sound classification given the
restrictions on computation resources (e.g., model size, running memory). To
address this issue, we propose a simple and efficient continual learning
method. Our method selects historical data for training by measuring
per-sample classification uncertainty. Specifically, we measure the
uncertainty by observing how a sample's classification probability fluctuates
under parallel perturbations added to the classifier embedding. In this
way, the computation cost can be significantly reduced compared with adding
perturbations to the raw data. Experimental results on the DCASE 2019 Task 1
and ESC-50 datasets show that our proposed method outperforms baseline
continual learning methods in classification accuracy and computational
efficiency, indicating that our method can efficiently and incrementally learn new classes
without the catastrophic forgetting problem for on-device environmental sound
classification.
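To make the selection step concrete, here is a minimal sketch of the idea, not the authors' implementation: the head `classifier_head`, the noise scale `sigma`, the number of parallel perturbations `n_perturb`, and the use of the spread of class probabilities as the uncertainty score are all illustrative assumptions.

```python
import torch

def embedding_uncertainty(classifier_head, embeddings, n_perturb=8, sigma=0.1):
    """Score per-sample uncertainty by perturbing classifier embeddings.

    classifier_head: final layer mapping embeddings (N, D) to logits (N, C).
    All names and defaults here are illustrative, not taken from the paper.
    """
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(
                classifier_head(embeddings + sigma * torch.randn_like(embeddings)),
                dim=-1)
            for _ in range(n_perturb)
        ])                                            # (n_perturb, N, C)
        # Fluctuation of class probabilities across the parallel perturbations:
        # a larger spread means a less stable prediction, i.e. higher uncertainty.
        return probs.std(dim=0).max(dim=-1).values    # (N,)

# Hypothetical usage: keep the most uncertain historical samples for replay.
head = torch.nn.Linear(128, 10)
feats = torch.randn(256, 128)
scores = embedding_uncertainty(head, feats)
replay_idx = scores.topk(32).indices
```

Because the perturbations act on the low-dimensional embedding rather than on the raw audio, each extra perturbation only re-runs the final layer, which is where the claimed computational saving over input-space perturbation comes from.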
Related papers
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- An Adaptive Cost-Sensitive Learning and Recursive Denoising Framework for Imbalanced SVM Classification [12.986535715303331]
Category imbalance is one of the most common and important issues in classification.
Emotion classification models trained on imbalanced datasets easily produce unreliable predictions.
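As a point of reference, a basic cost-sensitive SVM can be set up in a few lines; the sketch below uses scikit-learn's per-class penalty weighting and does not reproduce the paper's adaptive weighting or recursive denoising.

```python
from sklearn.svm import SVC

# 'balanced' rescales the penalty C inversely to class frequency, so errors
# on the minority class cost more; the paper's adaptive scheme goes further.
clf = SVC(kernel='rbf', class_weight='balanced')
# clf.fit(X_train, y_train); clf.predict(X_test)
```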
arXiv Detail & Related papers (2024-03-13T09:43:14Z)
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP.
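A generic GDA head over frozen features is easy to sketch: fit class means and a shared covariance, then classify with the resulting linear rule. The regularizer `eps`, uniform class priors, and the omission of CLIP-specific details (e.g., combining with the zero-shot head) are assumptions here.

```python
import numpy as np

def fit_gda_head(features, labels, eps=1e-4):
    """Training-free linear head via Gaussian Discriminant Analysis.

    Assumes integer labels 0..C-1, a shared covariance, and uniform class
    priors; how the paper applies this to CLIP is not reproduced here.
    """
    C = labels.max() + 1
    means = np.stack([features[labels == c].mean(axis=0) for c in range(C)])
    centered = features - means[labels]            # subtract each class mean
    cov = centered.T @ centered / len(features) + eps * np.eye(features.shape[1])
    precision = np.linalg.inv(cov)
    W = means @ precision                          # (C, D) discriminant weights
    b = -0.5 * np.einsum('cd,cd->c', W, means)     # (C,) discriminant biases
    return W, b

# Prediction: preds = np.argmax(features @ W.T + b, axis=1)
```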
arXiv Detail & Related papers (2024-02-06T15:45:27Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods propose to replay the data of experienced tasks when learning new tasks.
However, storing such data is often infeasible in practice due to memory constraints or data privacy issues.
As an alternative, data-free replay methods synthesize samples by inverting the classification model.
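The inversion idea can be sketched generically: optimize free inputs so that a frozen old model assigns them to a stored class. The objective, step count, and absence of the image-prior regularizers that practical methods rely on are all simplifications here.

```python
import torch
import torch.nn.functional as F

def invert_samples(frozen_model, target_class, shape, steps=200, lr=0.1):
    """Synthesize pseudo-replay data from a frozen classifier.

    A bare-bones inversion sketch; practical data-free replay adds
    regularizers (e.g., feature-statistics matching) omitted here.
    """
    for p in frozen_model.parameters():
        p.requires_grad_(False)                     # keep the old model frozen
    x = torch.randn(shape, requires_grad=True)      # inputs are the variables
    opt = torch.optim.Adam([x], lr=lr)
    target = torch.full((shape[0],), target_class, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(frozen_model(x), target)
        loss.backward()
        opt.step()
    return x.detach()
```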
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Complementary Labels Learning with Augmented Classes [22.460256396941528]
Complementary Labels Learning (CLL) arises in many real-world tasks such as private question classification and online learning.
We propose a novel problem setting called Complementary Labels Learning with Augmented Classes (CLLAC).
By using unlabeled data, we propose an unbiased estimator of the classification risk for CLLAC, which is provably consistent.
arXiv Detail & Related papers (2022-11-19T13:55:27Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are commonly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
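The core component can be pictured as a tiny network that maps a per-sample loss to a weight; the sketch below shows that mapping only, leaving out CMW-Net's class-aware grouping and the bilevel meta-training that actually fits it.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Maps each sample's loss value to a weight in (0, 1).

    A Meta-Weight-Net-style module; in the actual method it is trained on
    a small clean meta-set, which is not shown here.
    """
    def __init__(self, hidden=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, losses):                  # losses: (N,)
        return self.net(losses.unsqueeze(-1)).squeeze(-1)

# Hypothetical usage inside a training step:
# per_sample = F.cross_entropy(logits, y, reduction='none')
# loss = (weight_net(per_sample.detach()) * per_sample).mean()
```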
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
- Self-Certifying Classification by Linearized Deep Assignment [65.0100925582087]
We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
arXiv Detail & Related papers (2022-01-26T19:59:14Z)
- Active Weighted Aging Ensemble for Drifted Data Stream Classification [2.277447144331876]
Concept drift destabilizes the performance of the classification model and seriously degrades its quality.
The proposed method has been evaluated in computer experiments using both real and generated data streams.
The results confirm that the proposed algorithm outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-12-19T13:52:53Z)
- Few-Shot Incremental Learning with Continually Evolved Classifiers [46.278573301326276]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points.
The difficulty is that limited data from new classes not only leads to significant overfitting but also exacerbates the notorious catastrophic forgetting problem.
We propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
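One way to picture "propagating context between classifiers" is a single attention step over the classifier weight vectors; the sketch below is a generic stand-in, and the similarity-softmax adjacency is an assumption rather than CEC's actual graph design.

```python
import torch
import torch.nn.functional as F

def propagate_classifiers(W, temperature=1.0):
    """Mix context between classifier weight vectors W of shape (C, D).

    A generic one-step graph-attention stand-in; CEC's real graph model
    and its pseudo-episode training are not reproduced here.
    """
    attn = F.softmax(W @ W.t() / temperature, dim=-1)   # (C, C) soft adjacency
    return attn @ W                                     # context-adapted classifiers
```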
arXiv Detail & Related papers (2021-04-07T10:54:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.