t-DGR: A Trajectory-Based Deep Generative Replay Method for Continual Learning in Decision Making
- URL: http://arxiv.org/abs/2401.02576v2
- Date: Mon, 17 Jun 2024 16:04:27 GMT
- Title: t-DGR: A Trajectory-Based Deep Generative Replay Method for Continual Learning in Decision Making
- Authors: William Yue, Bo Liu, Peter Stone
- Abstract summary: We propose a simple, scalable, and non-autoregressive method for continual learning in decision-making tasks.
We evaluate our method on Continual World benchmarks and find that our approach achieves state-of-the-art performance.
- Score: 34.02510598090704
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep generative replay has emerged as a promising approach for continual learning in decision-making tasks. This approach addresses the problem of catastrophic forgetting by leveraging the generation of trajectories from previously encountered tasks to augment the current dataset. However, existing deep generative replay methods for continual learning rely on autoregressive models, which suffer from compounding errors in the generated trajectories. In this paper, we propose a simple, scalable, and non-autoregressive method for continual learning in decision-making tasks using a generative model that generates task samples conditioned on the trajectory timestep. We evaluate our method on Continual World benchmarks and find that our approach achieves state-of-the-art performance on the average success rate metric among continual learning methods. Code is available at https://github.com/WilliamYue37/t-DGR.
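The key departure from prior deep generative replay is that samples are conditioned directly on the trajectory timestep, so every step of a replayed trajectory is drawn independently rather than rolled out autoregressively. A minimal sketch of that idea follows; it is illustrative only, not the authors' implementation (the MLP generator, `state_dim=39`, and all hyperparameters are placeholders; see the linked repository for the real code):

```python
# Minimal sketch of timestep-conditioned generative replay (not the official
# t-DGR code; the MLP generator and all sizes below are illustrative only).
import torch
import torch.nn as nn

class TimestepConditionedGenerator(nn.Module):
    """Maps (noise, trajectory timestep) directly to a state observation."""
    def __init__(self, state_dim: int, noise_dim: int = 32, max_t: int = 200):
        super().__init__()
        self.max_t = max_t
        self.net = nn.Sequential(
            nn.Linear(noise_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, state_dim),
        )

    def forward(self, z: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Normalize the timestep so the conditioning signal is scale-free.
        t_feat = (t.float() / self.max_t).unsqueeze(-1)
        return self.net(torch.cat([z, t_feat], dim=-1))

gen = TimestepConditionedGenerator(state_dim=39)  # state_dim is a placeholder
# Non-autoregressive replay: every timestep is sampled independently,
# so errors cannot compound along the generated trajectory.
t = torch.arange(200)          # all timesteps of one trajectory
z = torch.randn(200, 32)
replayed_states = gen(z, t)    # (200, state_dim) in a single parallel pass
```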
Related papers
- Stable Continual Reinforcement Learning via Diffusion-based Trajectory Replay [28.033367285923465]
Reinforcement Learning (RL) aims to equip an agent to solve a series of sequentially presented decision-making tasks.
This paper introduces DISTR, a novel continual RL algorithm that employs a diffusion model to memorize the high-return trajectory distribution of each encountered task.
Since replaying all past data each time is impractical, a prioritization mechanism selects pivotal tasks for trajectory replay.
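The summary leaves the prioritization rule unspecified; as a hedged illustration, one plausible form is softmax sampling of past tasks by how pivotal they are, here proxied by the return of their stored trajectories (an assumption, not the paper's rule):

```python
import numpy as np

def sample_tasks_to_replay(task_returns, budget, temperature=1.0):
    """Pick which past tasks get trajectory replay this round.

    task_returns: dict mapping task id -> average return of its stored
    high-return trajectories (a stand-in for the paper's notion of 'pivotal').
    """
    ids = list(task_returns)
    scores = np.array([task_returns[i] for i in ids]) / temperature
    probs = np.exp(scores - scores.max())   # softmax over priority scores
    probs /= probs.sum()
    rng = np.random.default_rng(0)
    return rng.choice(ids, size=min(budget, len(ids)), replace=False, p=probs)

print(sample_tasks_to_replay({"push": 0.9, "reach": 0.4, "door": 0.7}, budget=2))
```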
arXiv Detail & Related papers (2024-11-16T14:03:23Z)
- Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization [87.21285093582446]
Diffusion Generative Flow Samplers (DGFS) is a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments.
Our method takes inspiration from the theory developed for generative flow networks (GFlowNets).
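As a toy illustration of partial trajectory segments (the segmentation scheme below is an assumption for illustration, not DGFS's actual training objective):

```python
def partial_segments(trajectory, segment_len):
    """Split one sampled trajectory into short partial segments, so each
    segment can supply its own intermediate learning signal instead of
    waiting for the full-trajectory terminal signal."""
    return [trajectory[i:i + segment_len]
            for i in range(0, len(trajectory), segment_len)]

traj = list(range(10))            # stand-in for a 10-step sampling path
print(partial_segments(traj, 4))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```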
arXiv Detail & Related papers (2023-10-04T09:39:05Z)
- Clustering-based Domain-Incremental Learning [4.835091081509403]
A key challenge in continual learning is the so-called "catastrophic forgetting" problem.
We propose an online clustering-based approach that operates on a dynamically updated finite pool of samples or gradients.
We demonstrate the effectiveness of the proposed strategy and its promising performance compared to state-of-the-art methods.
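A minimal sketch of an online, finite-pool clustering update (the centroid rule and capacity policy are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

class FinitePool:
    """Keep a fixed number of cluster centroids over a stream of samples
    (or gradients); each new item either joins its nearest centroid or,
    while capacity remains, starts a new one."""
    def __init__(self, capacity: int, lr: float = 0.1):
        self.capacity, self.lr = capacity, lr
        self.centroids = []

    def update(self, x: np.ndarray) -> None:
        if len(self.centroids) < self.capacity:
            self.centroids.append(x.copy())
            return
        dists = [np.linalg.norm(x - c) for c in self.centroids]
        i = int(np.argmin(dists))
        # Online k-means step: nudge the nearest centroid toward the sample.
        self.centroids[i] += self.lr * (x - self.centroids[i])

pool = FinitePool(capacity=3)
for x in np.random.default_rng(0).normal(size=(100, 8)):
    pool.update(x)
```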
arXiv Detail & Related papers (2023-09-21T13:49:05Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
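BAdam itself modifies the optimizer's moment estimates; as a sketch of the general prior-based family it belongs to, a quadratic penalty that constrains parameters from drifting away from previously learned values might look like this (illustrative, not BAdam's update):

```python
import torch

def prior_penalty(params, prior_means, precisions):
    """Generic prior-based regularizer: penalize movement away from the
    parameters learned on earlier tasks, weighted by a per-parameter
    precision (importance). Illustrates the family, not BAdam itself."""
    return sum(
        (prec * (p - mu) ** 2).sum()
        for p, mu, prec in zip(params, prior_means, precisions)
    )

w = torch.nn.Parameter(torch.randn(4))
loss = (w ** 2).sum() + 0.5 * prior_penalty([w], [torch.zeros(4)], [torch.ones(4)])
loss.backward()  # gradient balances the new-task loss against the prior
```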
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Continual Learning with Dirichlet Generative-based Rehearsal [22.314195832409755]
We present Dirichlet Continual Learning, a novel generative-based rehearsal strategy for task-oriented dialogue systems.
We also introduce Jensen-Shannon Knowledge Distillation (JSKD), a robust logit-based knowledge distillation method.
Our experiments confirm the efficacy of our approach in both intent detection and slot-filling tasks, outperforming state-of-the-art methods.
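The summary suggests JSKD replaces the usual KL term with a Jensen-Shannon divergence between teacher and student predictive distributions; a hedged sketch of such a loss (a plausible reading, not necessarily the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def js_distillation_loss(student_logits, teacher_logits):
    """Jensen-Shannon divergence between teacher and student distributions;
    symmetric and bounded, unlike plain KL distillation."""
    p = F.softmax(student_logits, dim=-1)
    q = F.softmax(teacher_logits, dim=-1)
    m = 0.5 * (p + q)
    # KL(a || b) with clamping to avoid log(0).
    kl = lambda a, b: (a * (a.clamp_min(1e-8).log() - b.clamp_min(1e-8).log())).sum(-1)
    return (0.5 * kl(p, m) + 0.5 * kl(q, m)).mean()

s, t = torch.randn(2, 5), torch.randn(2, 5)
print(js_distillation_loss(s, t).item())
```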
arXiv Detail & Related papers (2023-09-13T12:30:03Z)
- Mode-Aware Continual Learning for Conditional Generative Adversarial Networks [27.28511396131235]
We introduce a new continual learning approach for conditional generative adversarial networks.
First, the generator produces samples of existing modes for subsequent replay.
The discriminator is then used to compute the mode similarity measure.
A label for the target mode is then generated as a weighted average of the labels of the most similar existing modes.
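A minimal sketch of that label construction (the similarity weighting below is an illustrative assumption):

```python
import numpy as np

def target_mode_label(similarities, mode_labels):
    """Soft label for a new target mode: a similarity-weighted average of
    the labels of the closest existing modes (an illustrative reading of
    the summary, not the paper's exact formula)."""
    w = np.asarray(similarities, dtype=float)
    w /= w.sum()                                  # normalize the weights
    return w @ np.asarray(mode_labels, dtype=float)

labels = np.eye(3)  # one-hot labels of 3 existing modes
print(target_mode_label([0.7, 0.2, 0.1], labels))  # -> [0.7 0.2 0.1]
```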
arXiv Detail & Related papers (2023-05-19T03:00:31Z)
- DLCFT: Deep Linear Continual Fine-Tuning for General Incremental Learning [29.80680408934347]
We propose an alternative framework to incremental learning where we continually fine-tune the model from a pre-trained representation.
Our method takes advantage of a linearization technique for pre-trained neural networks, yielding simple and effective continual learning.
We show that our method applies to general continual learning settings; we evaluate it on data-incremental, task-incremental, and class-incremental learning problems.
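A hedged sketch of the linearization idea, i.e., fine-tuning only a first-order Taylor expansion of the network around its pre-trained weights (a toy linear layer stands in for the network; this is not the paper's code):

```python
import torch

# Train only a delta on top of a first-order Taylor expansion of a
# "pre-trained" model around its initial weights theta0.
x = torch.randn(3, 4)
w0, b0 = torch.randn(2, 4), torch.randn(2)     # frozen pre-trained weights
dw = torch.zeros_like(w0, requires_grad=True)  # learnable parameter offsets
db = torch.zeros_like(b0, requires_grad=True)

def f(w, b):                                   # the network forward pass
    return x @ w.t() + b

# f(theta0 + delta) ~= f(theta0) + J_theta f(theta0) @ delta.  Exact here,
# since f is already linear in its parameters, but the recipe is generic.
out0, jvp_out = torch.autograd.functional.jvp(
    f, (w0, b0), (dw, db), create_graph=True)
pred = out0 + jvp_out
pred.sum().backward()                          # gradients reach dw and db
```

Because the prediction is linear in the offsets, continual fine-tuning reduces to a convex problem per task, which is what makes the approach "simple and effective."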
arXiv Detail & Related papers (2022-08-17T06:58:14Z)
- Imitating, Fast and Slow: Robust learning from demonstrations via decision-time planning [96.72185761508668]
Imitation with Planning at Test-time (IMPLANT) is a new meta-algorithm for imitation learning.
We demonstrate that IMPLANT significantly outperforms benchmark imitation learning approaches on standard control environments.
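The summary gives no algorithmic detail; as a generic sketch of decision-time planning layered on an imitation policy (every component below is an illustrative assumption, not IMPLANT itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def decision_time_plan(state, policy, value, dynamics, horizon=5, n_candidates=16):
    """Roll out candidate action sequences from the imitation policy through
    a model, score each rollout with a learned value function, and execute
    the first action of the best one."""
    best_action, best_score = None, -np.inf
    for _ in range(n_candidates):
        s, first_action = state, None
        for step in range(horizon):
            a = policy(s)
            if step == 0:
                first_action = a
            s = dynamics(s, a)
        if value(s) > best_score:          # judge rollouts by terminal value
            best_score, best_action = value(s), first_action
    return best_action

# Toy 1-D example: a noisy policy drifts the state toward the origin.
policy = lambda s: -0.5 * s + 0.1 * rng.normal()
dynamics = lambda s, a: s + a
value = lambda s: -abs(s)
print(decision_time_plan(2.0, policy, value, dynamics))
```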
arXiv Detail & Related papers (2022-04-07T17:16:52Z)
- Text Generation with Efficient (Soft) Q-Learning [91.47743595382758]
Reinforcement learning (RL) offers a more flexible solution by allowing users to plug in arbitrary task metrics as rewards.
We introduce a new RL formulation for text generation from the soft Q-learning perspective.
We apply the approach to a wide range of tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation.
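A minimal sketch of the soft Q-learning view of generation, reading token logits as Q-values and using a log-sum-exp soft Bellman backup (illustrative; the paper's exact objective may differ):

```python
import torch

def soft_bellman_target(reward, next_q, gamma=1.0, beta=1.0):
    """Soft Q-learning backup for text generation: the model's token logits
    are read as Q-values, and the soft value of the next step is a
    log-sum-exp over next-token Q-values rather than a hard max."""
    next_v = torch.logsumexp(beta * next_q, dim=-1) / beta
    return reward + gamma * next_v

next_q = torch.randn(2, 100)       # Q-values over a 100-token vocabulary
reward = torch.tensor([0.0, 1.0])  # e.g. a task metric plugged in as reward
print(soft_bellman_target(reward, next_q))
```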
arXiv Detail & Related papers (2021-06-14T18:48:40Z)
- Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
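A toy sketch of what "exploiting the model's differentiability" means in practice: unroll the learned dynamics and backpropagate the return directly into the policy parameters (the linear dynamics below are made up for illustration):

```python
import torch

policy = torch.nn.Linear(2, 1)
A = torch.tensor([[1.0, 0.1], [0.0, 1.0]])   # toy "learned" dynamics
B = torch.tensor([[0.0], [0.1]])

def rollout_return(s, horizon=10):
    """Unroll the differentiable model and sum rewards, so the return is a
    differentiable function of the policy parameters; no black-box
    score-function estimator is needed."""
    total = 0.0
    for _ in range(horizon):
        a = torch.tanh(policy(s))
        s = s @ A.t() + a @ B.t()        # differentiable dynamics step
        total = total - (s ** 2).sum()   # reward: stay near the origin
    return total

s0 = torch.randn(1, 2)
ret = rollout_return(s0)
ret.backward()                           # gradient flows through the path
print(policy.weight.grad)
```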
arXiv Detail & Related papers (2020-05-16T19:18:10Z)
- A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning [48.87397222244402]
We propose an expansion-based approach for task-free continual learning.
Our model successfully performs task-free continual learning for both discriminative and generative tasks.
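A toy sketch of expansion-based, task-free learning: route each incoming sample to the best-fitting expert and spawn a new expert when nothing fits, with no task boundaries required (the distance test and running-mean experts are illustrative assumptions, not the paper's mixture model):

```python
import numpy as np

class ExpansionModel:
    """Route each sample to its best-fitting expert; spawn a new expert
    when no existing one explains the sample well enough."""
    def __init__(self, threshold: float = 3.0):
        self.experts = []          # each expert is just a running mean here
        self.threshold = threshold

    def observe(self, x: np.ndarray) -> int:
        if self.experts:
            dists = [np.linalg.norm(x - mu) for mu in self.experts]
            k = int(np.argmin(dists))
            if dists[k] < self.threshold:              # explained: adapt
                self.experts[k] += 0.1 * (x - self.experts[k])
                return k
        self.experts.append(x.copy())                  # otherwise: expand
        return len(self.experts) - 1

model = ExpansionModel()
rng = np.random.default_rng(0)
for x in np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))]):
    model.observe(x)
print(len(model.experts))  # new experts appear when the data distribution shifts
```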
arXiv Detail & Related papers (2020-01-03T02:07:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.