Lucid Dreaming for Experience Replay: Refreshing Past States with the
Current Policy
- URL: http://arxiv.org/abs/2009.13736v3
- Date: Sat, 3 Apr 2021 23:43:26 GMT
- Title: Lucid Dreaming for Experience Replay: Refreshing Past States with the
Current Policy
- Authors: Yunshu Du, Garrett Warnell, Assefaw Gebremedhin, Peter Stone, Matthew
E. Taylor
- Abstract summary: We introduce Lucid Dreaming for Experience Replay (LiDER), a framework that allows replay experiences to be refreshed by leveraging the agent's current policy.
LiDER consistently improves performance over the baseline in six Atari 2600 games.
- Score: 48.8675653453076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Experience replay (ER) improves the data efficiency of off-policy
reinforcement learning (RL) algorithms by allowing an agent to store and reuse
its past experiences in a replay buffer. While many techniques have been
proposed to enhance ER by biasing how experiences are sampled from the buffer,
thus far they have not considered strategies for refreshing experiences inside
the buffer. In this work, we introduce Lucid Dreaming for Experience Replay
(LiDER), a conceptually new framework that allows replay experiences to be
refreshed by leveraging the agent's current policy. LiDER consists of three
steps: First, LiDER moves an agent back to a past state. Second, from that
state, LiDER lets the agent execute a sequence of actions by following its
current policy -- as if the agent were "dreaming" about the past and could try
out different behaviors to encounter new experiences in the dream. Third, LiDER
stores and reuses the new experience if it turns out to be better than what the
agent previously experienced, i.e., to refresh its memories. LiDER is designed
to be easily incorporated into off-policy, multi-worker RL algorithms that use
ER; we present in this work a case study of applying LiDER to an actor-critic
based algorithm. Results show LiDER consistently improves performance over the
baseline in six Atari 2600 games. Our open-source implementation of LiDER and
the data used to generate all plots in this work are available at
github.com/duyunshu/lucid-dreaming-for-exp-replay.
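The three-step procedure above can be illustrated against a resettable simulator. The Python sketch below is only a sketch under assumptions: the environment interface (restore_state, observe, step), the policy callable, and the buffer contents are hypothetical placeholders for exposition, not the authors' actor-critic, multi-worker implementation (see the linked repository for that).

```python
import random
from collections import deque


class ReplayBuffer:
    """Minimal FIFO buffer of (start_state, trajectory, episodic_return) tuples."""

    def __init__(self, capacity=10_000):
        self.storage = deque(maxlen=capacity)

    def add(self, item):
        self.storage.append(item)

    def sample(self):
        return random.choice(self.storage)


def lider_refresh(env, policy, buffer, max_steps=50):
    """One hypothetical LiDER-style refresh pass (illustrative only)."""
    # Step 1: move the agent back to a past state stored in the buffer
    # (assumes the simulator can save and restore states).
    start_state, _old_traj, old_return = buffer.sample()
    env.restore_state(start_state)

    # Step 2: "dream" forward from that state with the current policy.
    obs, new_traj, new_return = env.observe(), [], 0.0
    for _ in range(max_steps):
        action = policy(obs)
        next_obs, reward, done = env.step(action)
        new_traj.append((obs, action, reward, next_obs, done))
        new_return += reward
        obs = next_obs
        if done:
            break

    # Step 3: refresh the memory only if the new outcome is better than
    # what the agent previously experienced from this state.
    if new_return > old_return:
        buffer.add((start_state, new_traj, new_return))
```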
Related papers
- CoPS: Empowering LLM Agents with Provable Cross-Task Experience Sharing [70.25689961697523]
We propose a generalizable algorithm that enhances sequential reasoning by cross-task experience sharing and selection.
Our work bridges the gap between existing sequential reasoning paradigms and validates the effectiveness of leveraging cross-task experiences.
arXiv Detail & Related papers (2024-10-22T03:59:53Z)
- OER: Offline Experience Replay for Continual Offline Reinforcement Learning [25.985985377992034]
Continually learning new skills from a sequence of pre-collected offline datasets is desirable for an agent.
In this paper, we formulate a new setting, continual offline reinforcement learning (CORL), where an agent learns a sequence of offline reinforcement learning tasks.
We propose a new model-based experience selection scheme to build the replay buffer, where a transition model is learned to approximate the state distribution.
arXiv Detail & Related papers (2023-05-23T08:16:44Z)
- Eventual Discounting Temporal Logic Counterfactual Experience Replay [42.20459462725206]
The standard RL framework can be too myopic to find maximally satisfying policies.
We develop a new value-function-based proxy, using a technique we call eventual discounting.
We also develop a new experience replay method for generating off-policy data.
arXiv Detail & Related papers (2023-03-03T18:29:47Z)
- Look Back When Surprised: Stabilizing Reverse Experience Replay for Neural Approximation [7.6146285961466]
We consider the recently developed and theoretically rigorous reverse experience replay (RER).
We show via experiments that it performs better than techniques such as prioritized experience replay (PER) on various tasks (a minimal sketch of reverse-ordered replay appears after this list).
arXiv Detail & Related papers (2022-06-07T10:42:02Z)
- Retrieval-Augmented Reinforcement Learning [63.32076191982944]
We train a network to map a dataset of past experiences to optimal behavior.
The retrieval process is trained to retrieve information from the dataset that may be useful in the current context.
We show that retrieval-augmented R2D2 learns significantly faster than the baseline R2D2 agent and achieves higher scores.
arXiv Detail & Related papers (2022-02-17T02:44:05Z)
- Replay For Safety [51.11953997546418]
In experience replay, past transitions are stored in a memory buffer and re-used during learning.
We show that using an appropriately biased sampling scheme allows us to achieve a safe policy.
arXiv Detail & Related papers (2021-12-08T11:10:57Z)
- Revisiting Fundamentals of Experience Replay [91.24213515992595]
We present a systematic and extensive analysis of experience replay in Q-learning methods.
We focus on two fundamental properties: the replay capacity and the ratio of learning updates to experience collected.
arXiv Detail & Related papers (2020-07-13T21:22:17Z)
- Experience Replay with Likelihood-free Importance Weights [123.52005591531194]
We propose to reweight experiences based on their likelihood under the stationary distribution of the current policy.
We apply the proposed approach empirically to two competitive methods, Soft Actor-Critic (SAC) and Twin Delayed Deep Deterministic policy gradient (TD3); a sketch of ratio-weighted replay sampling appears after this list.
arXiv Detail & Related papers (2020-06-23T17:17:44Z)
- Bootstrapping a DQN Replay Memory with Synthetic Experiences [0.0]
We present an algorithm that creates synthetic experiences in a nondeterministic discrete environment to assist the learner.
Interpolated Experience Replay is evaluated on the FrozenLake environment, and we show that it helps the agent learn faster and even better than the classic version.
arXiv Detail & Related papers (2020-02-04T15:36:36Z)
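The "Look Back When Surprised" entry above concerns reverse experience replay (RER), which replays stored transitions in reverse temporal order so that reward information propagates backward along a trajectory. A minimal sketch, assuming a list-backed trajectory and a generic update callback (both hypothetical, not the paper's implementation):

```python
from typing import Callable, List, Tuple

# (state, action, reward, next_state, done)
Transition = Tuple[object, object, float, object, bool]


def reverse_replay(trajectory: List[Transition],
                   update: Callable[[Transition], None]) -> None:
    """Replay a stored trajectory from its last transition back to its first,
    so earlier states are updated after the later states they lead to."""
    for transition in reversed(trajectory):
        update(transition)
```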
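The likelihood-free importance weights entry reweights replay toward transitions that are more likely under the current policy's stationary distribution. A minimal sketch of ratio-weighted sampling, assuming the density-ratio estimates come from some external estimator (the density_ratio argument is a placeholder, not the paper's estimator):

```python
import numpy as np


def ratio_weighted_sample(buffer, density_ratio, batch_size=32, rng=None):
    """Sample transitions with probability proportional to an estimated
    ratio d_pi(s, a) / d_buffer(s, a) provided by `density_ratio`."""
    rng = rng if rng is not None else np.random.default_rng()
    weights = np.array([density_ratio(s, a) for (s, a, *_) in buffer],
                       dtype=np.float64)
    probs = weights / weights.sum()
    indices = rng.choice(len(buffer), size=batch_size, p=probs, replace=True)
    return [buffer[i] for i in indices]
```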