K-percent Evaluation for Lifelong RL
- URL: http://arxiv.org/abs/2404.02113v3
- Date: Sat, 25 May 2024 21:22:21 GMT
- Title: K-percent Evaluation for Lifelong RL
- Authors: Golnaz Mesbahi, Parham Mohammad Panahi, Olya Mastikhina, Martha White, Adam White
- Abstract summary: In continual or lifelong reinforcement learning, access to the environment should be limited.
We propose a new approach for evaluating lifelong RL agents where only k percent of the experiment data can be used for hyperparameter tuning.
We find agents generally perform poorly when restricted to k-percent tuning, whereas several algorithmic mitigations designed to maintain network plasticity perform surprisingly well.
- Score: 17.952216813295117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In continual or lifelong reinforcement learning, access to the environment should be limited. If we aspire to design algorithms that can run for long periods, continually adapting to new, unexpected situations, then we must be willing to deploy our agents without tuning their hyperparameters over the agent's entire lifetime. The standard practice in deep RL, and even continual RL, is to assume unfettered access to the deployment environment for the full lifetime of the agent. In this paper, we propose a new approach for evaluating lifelong RL agents where only k percent of the experiment data can be used for hyperparameter tuning. We then conduct an empirical study of DQN and SAC across a variety of continuing and non-stationary domains. We find agents generally perform poorly when restricted to k-percent tuning, whereas several algorithmic mitigations designed to maintain network plasticity perform surprisingly well.
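To make the protocol concrete, below is a minimal sketch of k-percent tuning: hyperparameters are selected using only the first k percent of the experiment's step budget, then frozen for the agent's full lifetime. The agent/environment interface and helper names are illustrative assumptions, not code from the paper.

```python
def k_percent_tuning(make_agent, make_env, lifetime_steps, k, hyper_grid):
    """Illustrative k-percent evaluation: hyperparameters may only be
    selected using the first k percent of the experiment's data; the
    chosen setting is then frozen for the agent's whole lifetime."""
    tuning_steps = int(lifetime_steps * k / 100)

    # Tuning phase: each candidate setting sees only k% of the budget.
    best_hypers = max(
        hyper_grid,
        key=lambda h: run_online(make_agent(h), make_env(), tuning_steps),
    )

    # Deployment phase: a fresh agent runs its full lifetime with the
    # frozen hyperparameters; only this run's performance is reported.
    return run_online(make_agent(best_hypers), make_env(), lifetime_steps)


def run_online(agent, env, num_steps):
    """Run the agent online for num_steps and return cumulative reward."""
    total, obs = 0.0, env.reset()
    for _ in range(num_steps):
        action = agent.act(obs)
        next_obs, reward, done, _ = env.step(action)
        agent.update(obs, action, reward, next_obs, done)
        total += reward
        obs = env.reset() if done else next_obs
    return total
```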
Related papers
- The Cell Must Go On: Agar.io for Continual Reinforcement Learning [9.034912115190034]
Continual reinforcement learning (RL) concerns agents that are expected to learn continually, rather than converge to a policy that is then fixed for evaluation.
We introduce AgarCL, a research platform for continual RL that allows for a progression of increasingly sophisticated behaviour.
arXiv Detail & Related papers (2025-05-23T20:09:27Z)
- Statistical Guarantees for Lifelong Reinforcement Learning using PAC-Bayesian Theory [37.02104729448692]
EPIC is a novel algorithm designed for lifelong reinforcement learning.
It learns a shared policy distribution, referred to as the world policy, which enables rapid adaptation to new tasks.
Experiments on a variety of environments demonstrate that EPIC significantly outperforms existing methods in lifelong RL.
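One way to read the shared-distribution idea is sketched below: task policies are drawn from a shared Gaussian over parameters (the world policy), which is consolidated after each task. The parameterization and update rule are our illustrative assumptions, not EPIC's actual algorithm.

```python
import numpy as np

class WorldPolicy:
    """Shared distribution over policy parameters, sketched as a
    diagonal Gaussian that is refined as tasks arrive."""

    def __init__(self, dim):
        self.mu = np.zeros(dim)       # shared mean over policy parameters
        self.log_std = np.zeros(dim)  # shared (log) spread

    def sample_task_policy(self):
        # Rapid adaptation starts from a draw out of the shared prior.
        return self.mu + np.exp(self.log_std) * np.random.randn(self.mu.size)

    def consolidate(self, adapted_params, lr=0.05):
        # Pull the shared mean toward parameters that worked on a new
        # task, so later tasks start closer to useful solutions.
        self.mu += lr * (adapted_params - self.mu)
```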
arXiv Detail & Related papers (2024-11-01T07:01:28Z)
- Continuous Control with Coarse-to-fine Reinforcement Learning [15.585706638252441]
We present a framework that trains RL agents to zoom into a continuous action space in a coarse-to-fine manner.
Within this framework, we introduce a concrete, value-based algorithm called Coarse-to-fine Q-Network (CQN).
CQN robustly learns to solve real-world manipulation tasks within a few minutes of online training.
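The coarse-to-fine idea lends itself to a compact sketch for a 1-D action space: at each level, discretize the current interval into bins, pick the argmax-Q bin, then recurse into that bin at finer resolution. The critic interface q_values_at_level is a hypothetical stand-in, not CQN's actual API.

```python
import numpy as np

def coarse_to_fine_action(q_values_at_level, low=-1.0, high=1.0,
                          levels=3, bins=5):
    """Zoom into a continuous 1-D action interval level by level,
    following the argmax of a per-level value function."""
    for level in range(levels):
        centers = np.linspace(low, high, bins)
        q = q_values_at_level(level, centers)  # critic evaluates the bins
        best = int(np.argmax(q))
        width = (high - low) / bins
        # Shrink the interval around the chosen bin for the next level.
        low = centers[best] - width / 2
        high = centers[best] + width / 2
    return (low + high) / 2
```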
arXiv Detail & Related papers (2024-07-10T16:04:08Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- Prompt-Tuning Decision Transformer with Preference Ranking [83.76329715043205]
We propose the Prompt-Tuning DT algorithm, which uses trajectory segments as prompts to guide RL agents in acquiring environmental information.
Our approach samples from a Gaussian distribution to fine-tune the elements of the prompt trajectory and uses a preference-ranking function to find the optimization direction.
Our work contributes to the advancement of prompt-tuning approaches in RL, providing a promising direction for optimizing large RL agents for specific preference tasks.
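Read as a black-box search, one update might look like the sketch below; preference_score stands in for the learned preference-ranking function, and the step size is an arbitrary illustrative choice rather than the paper's rule.

```python
import numpy as np

def prompt_tuning_step(prompt, preference_score, sigma=0.1, num_candidates=8):
    """One illustrative update: perturb the prompt trajectory with
    Gaussian noise, rank the candidates with a preference function,
    and move the prompt toward the best-ranked perturbation."""
    candidates = [prompt + sigma * np.random.randn(*prompt.shape)
                  for _ in range(num_candidates)]
    best = max(candidates, key=preference_score)
    return prompt + 0.5 * (best - prompt)  # step toward the top candidate
```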
arXiv Detail & Related papers (2023-05-16T17:49:04Z)
- Can Agents Run Relay Race with Strangers? Generalization of RL to Out-of-Distribution Trajectories [88.08381083207449]
We show the prevalence of generalization failure on controllable states from stranger agents.
We propose a novel method called Self-Trajectory Augmentation (STA), which during training resets the environment to the agent's old states, selected according to the Q function.
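A sketch of the reset mechanism, under our assumptions about the interface: the simulator can be restored to a stored state, and "according to the Q function" is illustrated here by restarting where current value estimates are lowest, which is one plausible reading rather than the paper's exact rule.

```python
import random

class SelfTrajectoryAugmentation:
    """Sketch of STA-style resets: remember visited simulator states
    and restart episodes from old states chosen via the Q function."""

    def __init__(self, capacity=10_000):
        self.states = []
        self.capacity = capacity

    def record(self, env_state):
        # Evict a random entry to bound memory.
        if len(self.states) >= self.capacity:
            self.states.pop(random.randrange(len(self.states)))
        self.states.append(env_state)

    def reset_to_old_state(self, env, q_max):
        # q_max(s) = max_a Q(s, a); restart where the agent looks weakest.
        state = min(self.states, key=q_max)
        env.restore_state(state)  # assumes the simulator exposes this
        return state
```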
arXiv Detail & Related papers (2023-04-26T10:12:12Z)
- Multi-agent Policy Reciprocity with Theoretical Guarantee [24.65151626601257]
We propose a novel multi-agent policy reciprocity (PR) framework, where each agent can fully exploit cross-agent policies even in mismatched states.
Experimental results on discrete and continuous environments demonstrate that PR outperforms various existing RL and transfer RL methods.
arXiv Detail & Related papers (2023-04-12T06:27:10Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interaction between the agent and the environment.
We propose a new method that uses unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resiliency to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Optimizing the Long-Term Average Reward for Continuing MDPs: A Technical Report [117.23323653198297]
We strike a balance between the information freshness experienced by users and the energy consumed by sensors.
We cast the corresponding status update procedure as a continuing Markov Decision Process (MDP).
To circumvent the curse of dimensionality, we have established a methodology for designing deep reinforcement learning (DRL) algorithms.
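For reference, the continuing-MDP objective being optimized here is the long-term average reward, which in standard notation is:

```latex
\rho(\pi) \;=\; \lim_{T \to \infty} \frac{1}{T}\,
\mathbb{E}_{\pi}\!\left[ \sum_{t=1}^{T} r_t \right]
```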
arXiv Detail & Related papers (2021-04-13T12:29:55Z)
- Continuous Coordination As a Realistic Scenario for Lifelong Learning [6.044372319762058]
We introduce a multi-agent lifelong learning testbed that supports both zero-shot and few-shot settings.
We evaluate several recent MARL methods and benchmark state-of-the-art LLL algorithms under limited memory and computation.
We empirically show that the agents trained in our setup are able to coordinate well with unseen agents, without the additional assumptions made by previous works.
arXiv Detail & Related papers (2021-03-04T18:44:03Z)
- Provably Good Batch Reinforcement Learning Without Great Exploration [51.51462608429621]
Batch reinforcement learning (RL) is important for applying RL algorithms to many high-stakes tasks.
Recent algorithms have shown promise but can still be overly optimistic in their expected outcomes.
We show that a small modification to the Bellman optimality and evaluation backups, yielding a more conservative update, can provide much stronger guarantees.
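One common way to make the optimality backup conservative in batch RL is to restrict the maximization to actions with sufficient support in the data; the sketch below is our illustration of that idea for a tabular Q function, with the support threshold b as an assumed knob rather than the paper's exact construction.

```python
import numpy as np

def conservative_backup(q, counts, r, s_next, gamma=0.99, b=10):
    """Conservative Bellman optimality target for one transition:
    maximize only over next actions seen at least b times in the batch;
    if no action is well supported, fall back to a pessimistic floor."""
    supported = [a for a in range(q.shape[1]) if counts[s_next, a] >= b]
    if supported:
        return r + gamma * max(q[s_next, a] for a in supported)
    return r + gamma * q.min()  # pessimistic value when support is missing
```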
arXiv Detail & Related papers (2020-07-16T09:25:54Z)