TIDo: Source-free Task Incremental Learning in Non-stationary
Environments
- URL: http://arxiv.org/abs/2301.12055v1
- Date: Sat, 28 Jan 2023 02:19:45 GMT
- Title: TIDo: Source-free Task Incremental Learning in Non-stationary
Environments
- Authors: Abhinit Kumar Ambastha, Leong Tze Yun
- Abstract summary: Updating a model-based agent to learn new target tasks requires us to store past training data.
Few-shot task incremental learning methods overcome the limitation of labeled target datasets.
We propose a one-shot task incremental learning approach that can adapt to non-stationary source and target tasks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents an incremental learning approach for autonomous
agents to learn new tasks in a non-stationary environment. Updating a DNN
model-based agent to learn new target tasks typically requires storing past
training data and a large labeled dataset for the target task. Few-shot task
incremental learning methods remove the need for a large labeled target
dataset by adapting trained models to learn target-private classes from a few
labeled representatives and a large unlabeled target dataset. However, these
methods assume that the source and target tasks are stationary. We propose a
one-shot task incremental learning approach that can adapt to non-stationary
source and target tasks. Our approach minimizes the adversarial discrepancy
between the model's feature space and incoming incremental data to learn an
updated hypothesis. We also use a distillation loss to reduce catastrophic
forgetting of previously learned tasks. Finally, we use Gaussian prototypes to
generate exemplar instances, eliminating the need to store past training data.
Unlike current work in task incremental learning, our model can learn both
source and target task updates incrementally. We evaluate our method on
various problem settings for incremental object detection and disease
prediction model updates, measuring prediction performance on both shared
classes and target-private classes. Our results show that our approach
achieves improved performance over existing state-of-the-art task incremental
learning methods.
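The abstract describes three ingredients: an adversarial discrepancy term between the model's feature space and incoming incremental data, a distillation loss against the previously trained model, and Gaussian class prototypes that replace stored exemplars. The following is a minimal PyTorch-style sketch of how such pieces can be combined into a single update objective. It is not the authors' implementation: all module names (encoder, classifier, discriminator, GradReverse), dimensions, and loss weights (lambda_distill, lambda_adv) are illustrative assumptions, and the adversarial term is realized here with a generic gradient-reversal discriminator rather than the paper's exact formulation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass, negates gradients in the backward pass."""
        @staticmethod
        def forward(ctx, x):
            return x.view_as(x)
        @staticmethod
        def backward(ctx, grad_output):
            return -grad_output

    # Illustrative sizes and modules (assumptions, not values from the paper).
    feat_dim, num_classes = 256, 10
    encoder = nn.Linear(784, feat_dim)        # stand-in feature extractor
    classifier = nn.Linear(feat_dim, num_classes)
    old_encoder = nn.Linear(784, feat_dim)    # frozen copy from before the update
    old_encoder.load_state_dict(encoder.state_dict())
    for p in old_encoder.parameters():
        p.requires_grad_(False)
    discriminator = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    # Gaussian prototypes: per-class feature mean/std kept instead of raw data.
    proto_mean = torch.randn(num_classes, feat_dim)  # placeholders for stored stats
    proto_std = torch.ones(num_classes, feat_dim)

    def sample_exemplars(n_per_class):
        """Draw synthetic exemplar features from the class-wise Gaussians."""
        labels = torch.arange(num_classes).repeat_interleave(n_per_class)
        feats = proto_mean[labels] + proto_std[labels] * torch.randn(len(labels), feat_dim)
        return feats, labels

    def incremental_loss(x_new, y_new, lambda_distill=1.0, lambda_adv=0.1):
        z_new = encoder(x_new)
        z_old, y_old = sample_exemplars(n_per_class=4)

        # Distillation: keep current features close to the frozen old model's
        # features on the incoming batch, limiting forgetting.
        distill = F.mse_loss(z_new, old_encoder(x_new))

        # Adversarial discrepancy: the discriminator learns to separate replayed
        # prototype features from new-task features, while the reversed gradient
        # pushes the encoder to make them indistinguishable.
        z_all = GradReverse.apply(torch.cat([z_new, z_old]))
        targets = torch.cat([torch.ones(len(z_new), 1), torch.zeros(len(z_old), 1)])
        adv = F.binary_cross_entropy_with_logits(discriminator(z_all), targets)

        # Supervised loss on new data plus the replayed exemplar features.
        ce = F.cross_entropy(classifier(z_new), y_new) + \
             F.cross_entropy(classifier(z_old), y_old)
        return ce + lambda_distill * distill + lambda_adv * adv

With the gradient-reversal layer, a single optimizer over the encoder, classifier, and discriminator can minimize this loss directly; the Gaussian prototype statistics would be re-estimated from class-wise features after each completed task so that past tasks can be replayed without retaining their raw data.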
Related papers
- Adaptive Retention & Correction for Continual Learning
  A common problem in continual learning is the classification layer's bias towards the most recent task.
  We name our approach Adaptive Retention & Correction (ARC).
  ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets.
  arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- αVIL: Learning to Leverage Auxiliary Tasks for Multitask Learning
  Multitask learning aims to train a range of (usually related) tasks with the help of a shared model.
  It becomes important to estimate the positive or negative influence that auxiliary tasks will have on the target task.
  We propose a novel method called α-Variable Learning (αVIL) that can adjust task weights dynamically during model training.
  arXiv Detail & Related papers (2024-05-13T14:12:33Z)
- Task-Distributionally Robust Data-Free Meta-Learning
  Data-Free Meta-Learning (DFML) aims to efficiently learn new tasks by leveraging multiple pre-trained models without requiring their original training data.
  For the first time, we reveal two major challenges hindering practical deployment: Task-Distribution Shift (TDS) and Task-Distribution Corruption (TDC).
  arXiv Detail & Related papers (2023-11-23T15:46:54Z)
- Clustering-based Domain-Incremental Learning
  A key challenge in continual learning is the so-called "catastrophic forgetting" problem.
  We propose an online clustering-based approach on a dynamically updated finite pool of samples or gradients.
  We demonstrate the effectiveness of the proposed strategy and its promising performance compared to state-of-the-art methods.
  arXiv Detail & Related papers (2023-09-21T13:49:05Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization
  Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
  We introduce a novel prior-based method, Bayesian Adaptive Moment Regularization (BAdam), that better constrains parameter growth, reducing catastrophic forgetting.
  Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
  arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- An Exploration of Data Efficiency in Intra-Dataset Task Transfer for Dialog Understanding
  This study explores the effects of varying quantities of target-task training data on sequential transfer learning in the dialog domain.
  Counterintuitively, our data show that the size of the target-task training data often has minimal effect on how sequential transfer learning performs compared to the same model without transfer learning.
  arXiv Detail & Related papers (2022-10-21T04:36:46Z)
- Improving Meta-Learning Generalization with Activation-Based Early-Stopping
  Meta-learning algorithms for few-shot learning aim to train neural networks capable of generalizing to novel tasks using only a few examples.
  Early stopping is critical for performance, halting model training when it reaches optimal generalization to the new task distribution.
  This is problematic in few-shot transfer learning settings, where the meta-test set comes from a different target dataset.
  arXiv Detail & Related papers (2022-08-03T22:55:45Z)
- Rectification-based Knowledge Retention for Continual Learning
  Deep learning models suffer from catastrophic forgetting when trained in an incremental learning setting.
  We propose a novel approach to the task incremental learning problem, which involves training a model on new tasks that arrive incrementally.
  Our approach can be used in both the zero-shot and non-zero-shot task incremental learning settings.
  arXiv Detail & Related papers (2021-03-30T18:11:30Z)
- Meta-Regularization by Enforcing Mutual-Exclusiveness
  We propose a regularization technique for meta-learning models that gives the model designer more control over the information flow during meta-training.
  Our proposed regularization function shows an accuracy boost of approximately 36% on the Omniglot dataset.
  arXiv Detail & Related papers (2021-01-24T22:57:19Z)
- Goal-Aware Prediction: Learning to Model What Matters
  One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
  We propose to direct prediction towards task-relevant information, enabling the model to be aware of the current task and encouraging it to model only the relevant quantities of the state space.
  We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
  arXiv Detail & Related papers (2020-07-14T16:42:59Z)
- Few Is Enough: Task-Augmented Active Meta-Learning for Brain Cell Classification
  We propose a tAsk-auGmented actIve meta-LEarning (AGILE) method to efficiently adapt Deep Neural Networks to new tasks.
  AGILE combines a meta-learning algorithm with a novel task augmentation technique which we use to generate an initial adaptive model.
  We show that the proposed task-augmented meta-learning framework can learn to classify new cell types after a single gradient step.
  arXiv Detail & Related papers (2020-07-09T18:03:12Z)