Knowledge Retention for Continual Model-Based Reinforcement Learning
- URL: http://arxiv.org/abs/2503.04256v4
- Date: Fri, 06 Jun 2025 02:59:52 GMT
- Title: Knowledge Retention for Continual Model-Based Reinforcement Learning
- Authors: Yixiang Sun, Haotian Fu, Michael Littman, George Konidaris
- Abstract summary: DRAGO is a novel approach for continual model-based reinforcement learning. DRAGO comprises two key components: Synthetic Experience Rehearsal and Regaining Memories Through Exploration. Empirical evaluations demonstrate that DRAGO is able to preserve knowledge across tasks, achieving superior performance in various continual learning scenarios.
- Score: 11.5581880507344
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose DRAGO, a novel approach for continual model-based reinforcement learning aimed at improving the incremental development of world models across a sequence of tasks that differ in their reward functions but not the state space or dynamics. DRAGO comprises two key components: Synthetic Experience Rehearsal, which leverages generative models to create synthetic experiences from past tasks, allowing the agent to reinforce previously learned dynamics without storing data, and Regaining Memories Through Exploration, which introduces an intrinsic reward mechanism to guide the agent toward revisiting relevant states from prior tasks. Together, these components enable the agent to maintain a comprehensive and continually developing world model, facilitating more effective learning and adaptation across diverse environments. Empirical evaluations demonstrate that DRAGO is able to preserve knowledge across tasks, achieving superior performance in various continual learning scenarios.
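The two components described in the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' code; all names, shapes, and the exact form of the intrinsic reward are assumptions): synthetic rehearsal mixes transitions sampled from a generative model of past tasks into the world-model training batch, and the intrinsic reward favors states that a retained old model predicts well but the current model does not.

```python
# Hypothetical sketch of DRAGO's two ideas (illustrative only):
# (1) Synthetic Experience Rehearsal: replay synthetic transitions from a
#     generative model of past tasks instead of storing old data.
# (2) Regaining Memories Through Exploration: an intrinsic reward that is
#     high in states the retained model knows but the new one does not.
import random

def training_batch(current_data, generator, n_real=32, n_synth=32):
    """Mix real transitions from the current task with synthetic (s, a, s')
    tuples replayed from a generative model of past tasks."""
    real = random.sample(current_data, min(n_real, len(current_data)))
    synth = [generator() for _ in range(n_synth)]
    return real + synth

def intrinsic_reward(state, old_model_error, new_model_error):
    """One plausible instantiation: reward revisiting states where the
    current world model's prediction error exceeds the retained model's."""
    return max(0.0, new_model_error(state) - old_model_error(state))
```

Here `generator`, `old_model_error`, and `new_model_error` stand in for learned models; in practice they would be a trained generative model and per-state prediction losses of the two world models.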
Related papers
- Multi-Agent Model-Based Reinforcement Learning with Joint State-Action Learned Embeddings [10.36125908359289]
We present a novel model-based multi-agent reinforcement learning framework. We design a world model trained with variational auto-encoders and augment the model using the state-action learned embedding. By coupling imagined trajectories with SALE-based action values, the agents acquire a richer understanding of how their choices influence collective outcomes.
arXiv Detail & Related papers (2026-02-13T01:57:21Z) - Building Self-Evolving Agents via Experience-Driven Lifelong Learning: A Framework and Benchmark [57.59000694149105]
We introduce Experience-driven Lifelong Learning (ELL), a framework for building self-evolving agents. ELL is built on four core principles: Experience Exploration, Long-term Memory, Skill Learning and Knowledge Internalization. We also introduce StuLife, a benchmark dataset for ELL that simulates a student's holistic college journey.
arXiv Detail & Related papers (2025-08-26T13:04:28Z) - Self-Controlled Dynamic Expansion Model for Continual Learning [10.447232167638816]
This paper introduces an innovative Self-Controlled Dynamic Expansion Model (SCDEM).
SCDEM orchestrates multiple trainable pre-trained ViT backbones to furnish diverse and semantically enriched representations.
An extensive series of experiments has been conducted to evaluate the efficacy of the proposed methodology.
arXiv Detail & Related papers (2025-04-14T15:22:51Z) - Spurious Forgetting in Continual Learning of Language Models [20.0936011355535]
Recent advancements in large language models (LLMs) reveal a perplexing phenomenon in continual learning. Despite extensive training, models experience significant performance declines. This study proposes that such performance drops often reflect a decline in task alignment rather than true knowledge loss.
arXiv Detail & Related papers (2025-01-23T08:09:54Z) - Incrementally Learning Multiple Diverse Data Domains via Multi-Source Dynamic Expansion Model [16.035374682124846]
Continual Learning seeks to develop a model capable of incrementally assimilating new information while retaining prior knowledge. This paper shifts focus to a more complex and realistic learning environment, characterized by data samples sourced from multiple distinct domains.
arXiv Detail & Related papers (2025-01-15T15:49:46Z) - Research on the Online Update Method for Retrieval-Augmented Generation (RAG) Model with Incremental Learning [13.076087281398813]
Experimental results show that the proposed method outperforms existing mainstream comparison models in knowledge retention and inference accuracy.
arXiv Detail & Related papers (2025-01-13T05:16:14Z) - On the Modeling Capabilities of Large Language Models for Sequential Decision Making [52.128546842746246]
Large pretrained models are showing increasingly better performance in reasoning and planning tasks.
We evaluate their ability to produce decision-making policies, either directly, by generating actions, or indirectly.
In environments with unfamiliar dynamics, we explore how fine-tuning LLMs with synthetic data can significantly improve their reward modeling capabilities.
arXiv Detail & Related papers (2024-10-08T03:12:57Z) - A Retention-Centric Framework for Continual Learning with Guaranteed Model Developmental Safety [75.8161094916476]
In real-world applications, learning-enabled systems often undergo iterative model development to address challenging or emerging tasks.
Acquiring new capabilities or improving existing ones may inadvertently degrade good capabilities of the old model, a problem known as catastrophic forgetting.
We propose a retention-centric framework with data-dependent constraints, and study how to continually develop a pretrained CLIP model for acquiring new or improving existing capabilities of image classification.
arXiv Detail & Related papers (2024-10-04T22:34:58Z) - RILe: Reinforced Imitation Learning [60.63173816209543]
RILe is a framework that combines the strengths of imitation learning and inverse reinforcement learning to learn a dense reward function efficiently. Our framework produces high-performing policies in high-dimensional tasks where direct imitation fails to replicate complex behaviors.
arXiv Detail & Related papers (2024-06-12T17:56:31Z) - Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning [63.58935783293342]
Causal Bisimulation Modeling (CBM) is a method that learns the causal relationships in the dynamics and reward functions for each task to derive a minimal, task-specific abstraction.
CBM's learned implicit dynamics models identify the underlying causal relationships and state abstractions more accurately than explicit ones.
arXiv Detail & Related papers (2024-01-23T05:43:15Z) - Predictive Experience Replay for Continual Visual Control and Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
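The "mixture world model" idea above can be sketched in a toy scalar form (an assumed simplification, not the paper's implementation, which uses a mixture of Gaussians over learned dynamics): each task contributes its own dynamics head, and predictions mix the heads by weight, so learning a new task adds a component instead of overwriting earlier dynamics.

```python
# Toy stand-in for a mixture world model: one dynamics head per task,
# mixed by weight at prediction time (illustrative names and shapes).
class MixtureWorldModel:
    def __init__(self):
        self.heads = []  # list of (mean_delta, weight), one per task

    def add_task_head(self, mean_delta, weight=1.0):
        """Register a new task-specific dynamics prior without touching
        the heads learned for earlier tasks."""
        self.heads.append((mean_delta, weight))

    def predict(self, state):
        # Weighted mixture mean over per-task next-state predictions.
        total = sum(w for _, w in self.heads)
        return sum(w * (state + mu) for mu, w in self.heads) / total
```

A real implementation would replace the scalar `mean_delta` heads with Gaussian components over a learned latent state, and the paper pairs this with a training strategy against catastrophic forgetting.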
arXiv Detail & Related papers (2023-03-12T05:08:03Z) - Improving Sequential Recommendation Consistency with Self-Supervised Imitation [31.156591972077162]
We propose a model, SSI, to improve sequential recommendation consistency via self-supervised imitation.
To take advantage of all three independent aspects of consistency-enhanced knowledge, we establish an integrated imitation learning framework.
Experiments on four real-world datasets show that SSI effectively outperforms the state-of-the-art sequential recommendation methods.
arXiv Detail & Related papers (2021-06-26T14:15:29Z) - Behavior Priors for Efficient Reinforcement Learning [97.81587970962232]
We consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors.
We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual information and curiosity based objectives.
We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains.
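One common way such behavior priors enter the objective, as in KL-regularized RL, is as a penalty on the divergence between the policy and the prior; the sketch below is a hedged, discrete-action illustration of that idea, not this paper's specific formulation (function name and `alpha` are assumptions).

```python
# Illustrative KL-regularized objective: task reward minus a penalty for
# deviating from a behavior prior over a discrete action set.
import math

def kl_regularized_objective(reward, policy_probs, prior_probs, alpha=0.1):
    """Return reward - alpha * KL(policy || prior)."""
    kl = sum(p * math.log(p / q)
             for p, q in zip(policy_probs, prior_probs) if p > 0)
    return reward - alpha * kl
```

When the policy matches the prior the KL term vanishes and the objective reduces to the plain reward; the temperature `alpha` trades off task reward against staying close to the prior.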
arXiv Detail & Related papers (2020-10-27T13:17:18Z) - Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
arXiv Detail & Related papers (2020-07-14T16:42:59Z)
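The goal-aware prediction idea in the last entry can be sketched as a relevance-weighted dynamics loss (an assumed toy form, not the paper's code): prediction error is only accumulated over state dimensions deemed relevant to the current goal.

```python
# Toy goal-aware loss: squared error over state dimensions, weighted by
# per-dimension task relevance (0 = ignore, 1 = fully model).
def goal_aware_loss(pred_next, true_next, relevance):
    """Sum of relevance-weighted squared errors across state dims."""
    return sum(r * (p - t) ** 2
               for p, t, r in zip(pred_next, true_next, relevance))
```

In the paper the relevance signal comes from conditioning the model on the task/goal rather than from a hand-supplied weight vector; the mask here just makes the "model only what matters" intuition concrete.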
This list is automatically generated from the titles and abstracts of the papers in this site.