Energy-based Latent Aligner for Incremental Learning
- URL: http://arxiv.org/abs/2203.14952v1
- Date: Mon, 28 Mar 2022 17:57:25 GMT
- Title: Energy-based Latent Aligner for Incremental Learning
- Authors: K J Joseph, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer,
Vineeth N Balasubramanian
- Abstract summary: Deep learning models tend to forget their earlier knowledge while incrementally learning new tasks.
This behavior emerges because the parameter updates optimized for the new tasks may not align well with the updates suitable for older tasks.
We propose ELI: Energy-based Latent Aligner for Incremental Learning.
- Score: 83.0135278697976
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models tend to forget their earlier knowledge while
incrementally learning new tasks. This behavior emerges because the parameter
updates optimized for the new tasks may not align well with the updates
suitable for older tasks. The resulting latent representation mismatch causes
forgetting. In this work, we propose ELI: Energy-based Latent Aligner for
Incremental Learning, which first learns an energy manifold for the latent
representations such that previous task latents will have low energy and the
current task latents have high energy values. This learned manifold is used to
counter the representational shift that happens during incremental learning.
The implicit regularization that is offered by our proposed methodology can be
used as a plug-and-play module in existing incremental learning methodologies.
We validate this through extensive evaluation on CIFAR-100, ImageNet subset,
ImageNet 1k and Pascal VOC datasets. We observe consistent improvement when ELI
is added to three prominent methodologies in class-incremental learning, across
multiple incremental settings. Further, when added to the state-of-the-art
incremental object detector, ELI provides over 5% improvement in detection
accuracy, corroborating its effectiveness and complementary advantage to
existing art.
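The mechanism described in the abstract — learn an energy function that is low on previous-task latents and high on current-task latents, then descend that energy to counter representational shift — can be sketched in miniature. The toy below is not the paper's implementation: the latents are synthetic, the energy model is a simple linear score trained as a logistic discriminator, and all hyperparameters are illustrative assumptions (ELI learns a nonlinear energy manifold over a network's features).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latents: previous-task latents cluster around +1, current-task latents
# around -1 (illustrative stand-ins for a backbone's features).
z_old = rng.normal(loc=1.0, scale=0.3, size=(200, 8))
z_new = rng.normal(loc=-1.0, scale=0.3, size=(200, 8))

def energy(z, w, b):
    """Linear energy score: trained to be low on old-task latents, high on new ones."""
    return z @ w + b

# Fit the energy model as a logistic discriminator (old -> 0 = low, new -> 1 = high).
w, b = np.zeros(8), 0.0
Z = np.vstack([z_old, z_new])
y = np.concatenate([np.zeros(len(z_old)), np.ones(len(z_new))])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-energy(Z, w, b)))
    grad = p - y                        # d(BCE)/d(score) for a sigmoid output
    w -= 0.1 * (Z.T @ grad) / len(Z)
    b -= 0.1 * grad.mean()

def align(z, w, b, steps=50, lr=0.1):
    """Counter representational shift by descending the energy: z <- z - lr * dE/dz."""
    z = z.copy()
    for _ in range(steps):
        z -= lr * w                     # gradient of the linear energy w.r.t. z
    return z

z = z_new[0]
z_aligned = align(z, w, b)
e_before, e_after = energy(z, w, b), energy(z_aligned, w, b)
```

After alignment, the current-task latent sits in the low-energy region shaped by the old task, which is the "implicit regularization" role the abstract describes for the plug-and-play module.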
Related papers
- CODE-CL: COnceptor-Based Gradient Projection for DEep Continual Learning [7.573297026523597]
We introduce COnceptor-based gradient projection for DEep Continual Learning (CODE-CL)
CODE-CL encodes directional importance within the input space of past tasks, allowing new knowledge integration in directions modulated by $1-S$.
We analyze task overlap using conceptor-based representations to identify highly correlated tasks.
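The CODE-CL summary hinges on conceptor-based gradient projection. A minimal linear-algebra sketch, assuming Jaeger-style conceptors C = R(R + α⁻²I)⁻¹ over the past task's input correlation matrix R, and the common (I − C) projection for new-task updates — the "$1-S$" modulation above plausibly refers to scaling each direction by one minus the conceptor's singular value, but that reading and every name below are assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy past-task inputs confined to a 2-D subspace of R^6.
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 6))

# Conceptor of the past task (assumed Jaeger-style formulation):
# C = R (R + alpha^-2 I)^-1, with R the input correlation matrix.
R = X.T @ X / len(X)
alpha = 100.0
C = R @ np.linalg.inv(R + alpha**-2 * np.eye(6))

# Project a candidate weight update into directions the past task barely uses,
# i.e. modulate each principal direction by (1 - its conceptor singular value).
g = rng.normal(size=6)
g_proj = (np.eye(6) - C) @ g

# The projected update hardly disturbs past-task responses.
interference = np.linalg.norm(X @ g_proj) / np.linalg.norm(X @ g)
```

Directions the past task used heavily get conceptor singular values near 1 and are almost fully suppressed, while unused directions pass through intact — which is what lets new knowledge integrate without overwriting correlated past tasks.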
arXiv Detail & Related papers (2024-11-21T22:31:06Z)
- LW2G: Learning Whether to Grow for Prompt-based Continual Learning [15.766350352592331]
Recent Prompt-based Continual Learning (PCL) has achieved remarkable performance with Pre-Trained Models (PTMs)
We propose a plug-in module in the former stage to Learn Whether to Grow (LW2G) based on the disparities between tasks.
Inspired by Gradient Projection Continual Learning, our LW2G develops a metric called Hinder Forward Capability (HFC) to measure the hindrance imposed on learning new tasks.
arXiv Detail & Related papers (2024-09-27T15:55:13Z)
- Evolving Knowledge Mining for Class Incremental Segmentation [113.59611699693092]
Class Incremental Semantic Segmentation (CISS) has recently gained attention due to its great significance in real-world applications.
We propose a novel method, Evolving kNowleDge minING, employing a frozen backbone.
We evaluate our method on two widely used benchmarks and consistently demonstrate new state-of-the-art performance.
arXiv Detail & Related papers (2023-06-03T07:03:15Z)
- Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach [95.74102207187545]
We show that the objective of prior-free autonomous data augmentation can be derived from a representation learning principle.
We then propose a practical surrogate to the objective that can be efficiently optimized and integrated seamlessly into existing methods.
arXiv Detail & Related papers (2022-11-02T02:02:51Z)
- FOSTER: Feature Boosting and Compression for Class-Incremental Learning [52.603520403933985]
Deep neural networks suffer from catastrophic forgetting when learning new categories.
We propose a novel two-stage learning paradigm FOSTER, empowering the model to learn new categories adaptively.
arXiv Detail & Related papers (2022-04-10T11:38:33Z)
- DIODE: Dilatable Incremental Object Detection [15.59425584971872]
Conventional deep learning models lack the capability of preserving previously learned knowledge.
We propose a dilatable incremental object detector (DIODE) for multi-step incremental detection tasks.
Our method achieves up to 6.4% performance improvement by increasing the number of parameters by just 1.2% for each newly learned task.
arXiv Detail & Related papers (2021-08-12T09:45:57Z)
- iTAML: An Incremental Task-Agnostic Meta-learning Approach [123.10294801296926]
Humans can continuously learn new knowledge as their experience grows.
Previously acquired knowledge in deep neural networks can quickly fade when they are trained on a new task.
We introduce a novel meta-learning approach that seeks to maintain an equilibrium between all encountered tasks.
arXiv Detail & Related papers (2020-03-25T21:42:48Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new-classes and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.