DeCoR: Defy Knowledge Forgetting by Predicting Earlier Audio Codes
- URL: http://arxiv.org/abs/2305.18441v1
- Date: Mon, 29 May 2023 02:25:03 GMT
- Title: DeCoR: Defy Knowledge Forgetting by Predicting Earlier Audio Codes
- Authors: Xilin Jiang, Yinghao Aaron Li, Nima Mesgarani
- Abstract summary: Lifelong audio feature extraction involves learning new sound classes incrementally.
However, optimizing the model only on new data can lead to catastrophic forgetting of previously learned tasks.
This paper introduces a new approach to continual audio representation learning called DeCoR.
- Score: 16.96483269023065
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lifelong audio feature extraction involves learning new sound classes
incrementally, which is essential for adapting to new data distributions over
time. However, optimizing the model only on new data can lead to catastrophic
forgetting of previously learned tasks, which undermines the model's ability to
perform well over the long term. This paper introduces a new approach to
continual audio representation learning called DeCoR. Unlike other methods that
store previous data, features, or models, DeCoR indirectly distills knowledge
from an earlier model to the latest by predicting quantization indices from a
delayed codebook. We demonstrate that DeCoR improves acoustic scene
classification accuracy and integrates well with continual self-supervised
representation learning. Our approach introduces minimal storage and
computation overhead, making it a lightweight and efficient solution for
continual learning.
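
The core mechanism described in the abstract, predicting quantization indices from a delayed codebook as an auxiliary term alongside the loss on new data, can be sketched roughly as follows. This is a minimal PyTorch illustration under stated assumptions: the toy `AudioEncoder`, the nearest-neighbour `quantize` helper, the linear `code_head`, and the random stand-in codebook are all hypothetical, and the exact way the paper constructs the delayed codebook and its prediction targets may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioEncoder(nn.Module):
    """Toy stand-in for an audio feature extractor (illustrative only)."""

    def __init__(self, in_dim=64, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )

    def forward(self, x):
        return self.net(x)


@torch.no_grad()
def quantize(features, codebook):
    """Return the index of the nearest codebook entry for each feature vector."""
    # features: (B, D), codebook: (K, D) -> (B,) integer code indices
    return torch.cdist(features, codebook).argmin(dim=-1)


def delayed_code_prediction_loss(encoder, code_head, delayed_codebook, audio):
    """Cross-entropy for predicting the indices assigned by the delayed codebook."""
    feats = encoder(audio)                        # (B, D) current features
    targets = quantize(feats, delayed_codebook)   # (B,) gradient-free target codes
    logits = code_head(feats)                     # (B, K) predicted code logits
    return F.cross_entropy(logits, targets)


# Usage sketch: add the code-prediction term to whatever loss trains on the new data
# (e.g. a self-supervised or classification objective).
feat_dim, num_codes = 256, 512
encoder = AudioEncoder(feat_dim=feat_dim)            # model being updated
code_head = nn.Linear(feat_dim, num_codes)           # predicts delayed-code indices
delayed_codebook = torch.randn(num_codes, feat_dim)  # frozen stand-in for the delayed codebook

audio = torch.randn(8, 64)                           # dummy batch of audio features
new_task_loss = torch.tensor(0.0)                    # placeholder for the new-data loss
loss = new_task_loss + delayed_code_prediction_loss(encoder, code_head, delayed_codebook, audio)
loss.backward()
```

Because the codebook is kept fixed from an earlier training stage, the auxiliary term only requires storing the codebook itself plus a small prediction head, which is consistent with the abstract's claim of minimal storage and computation overhead.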
Related papers
- Learning to Learn without Forgetting using Attention [5.6739565497512405]
Continual learning (CL) refers to the ability to continually learn over time by accommodating new knowledge while retaining previously learned experience.
Current machine learning methods are highly prone to overwrite previously learned patterns and thus forget past experience.
Since hand-crafting effective update mechanisms is difficult, we propose meta-learning a transformer-based update mechanism to enhance CL.
arXiv Detail & Related papers (2024-08-06T14:25:23Z)
- Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning [22.13331870720021]
We propose an approach that goes beyond prompt learning for the rehearsal-free continual learning (RFCL) task, called Continual Adapter (C-ADA).
C-ADA flexibly extends specific weights in CAL to learn new knowledge for each task and freezes old weights to preserve prior knowledge.
Our approach achieves significantly improved performance and training speed, outperforming the current state-of-the-art (SOTA) method.
arXiv Detail & Related papers (2024-07-14T17:40:40Z)
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and Imagenet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- Class incremental learning with probability dampening and cascaded gated classifier [4.285597067389559]
We propose a novel incremental regularisation approach called Margin Dampening and Cascaded Scaling.
The first combines a soft constraint and a knowledge distillation approach to preserve past knowledge while still allowing new patterns to be learned.
We empirically show that our approach performs well on multiple benchmarks against well-established baselines.
arXiv Detail & Related papers (2024-02-02T09:33:07Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods propose to replay the data of experienced tasks when learning new tasks.
However, storing such data is often infeasible in practice due to memory constraints or data privacy concerns.
As a replacement, data-free data replay methods are proposed by inverting samples from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method, Bayesian Adaptive Moment Regularization (BAdam), that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- Knowledge Diffusion for Distillation [53.908314960324915]
The representation gap between teacher and student is an emerging topic in knowledge distillation (KD).
We state that the essence of these methods is to discard the noisy information and distill the valuable information in the feature.
We propose a novel KD method dubbed DiffKD, to explicitly denoise and match features using diffusion models.
arXiv Detail & Related papers (2023-05-25T04:49:34Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods has been the use of a memory of exemplars, which overcomes the issue of catastrophic forgetting by saving a subset of past data into a memory bank and utilizing it to prevent forgetting when training future tasks.
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
- Class-Incremental Learning by Knowledge Distillation with Adaptive Feature Consolidation [39.97128550414934]
We present a novel class incremental learning approach based on deep neural networks.
It continually learns new tasks with limited memory for storing examples from previous tasks.
Our algorithm is based on knowledge distillation and provides a principled way to maintain the representations of old models.
arXiv Detail & Related papers (2022-04-02T16:30:04Z)
- Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning [73.24988226158497]
We consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL).
We propose a novel incremental distillation strategy for DFCIL, contributing a modified cross-entropy training and importance-weighted feature distillation.
Our method results in up to a 25.1% increase in final task accuracy (absolute difference) compared to SOTA DFCIL methods for common class-incremental benchmarks.
arXiv Detail & Related papers (2021-06-17T17:56:08Z)
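
Several of the entries above regularize continual learning with a feature-level distillation term against an older model; the last entry additionally weights that term by importance. The snippet below is a minimal, generic sketch of such an importance-weighted feature-distillation loss. The function name, the (B, D) feature shapes, and the per-dimension `importance` weights are illustrative assumptions, not the specific formulation of any listed paper.

```python
import torch


def weighted_feature_distillation(student_feats: torch.Tensor,
                                  teacher_feats: torch.Tensor,
                                  importance: torch.Tensor) -> torch.Tensor:
    """Squared-error feature distillation, weighted per feature dimension."""
    # student_feats, teacher_feats: (B, D); importance: (D,) non-negative weights
    sq_err = (student_feats - teacher_feats.detach()).pow(2)  # no gradients to the old model
    return (sq_err * importance).mean()


# Usage with dummy tensors standing in for old-model and new-model features.
B, D = 16, 128
student = torch.randn(B, D, requires_grad=True)   # features of the model being trained
teacher = torch.randn(B, D)                       # features of the frozen old model
importance = torch.rand(D)                        # e.g. larger for dimensions deemed important
loss = weighted_feature_distillation(student, teacher, importance)
loss.backward()
```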
This list is automatically generated from the titles and abstracts of the papers on this site.