Lifelong Hyper-Policy Optimization with Multiple Importance Sampling
Regularization
- URL: http://arxiv.org/abs/2112.06625v1
- Date: Mon, 13 Dec 2021 13:09:49 GMT
- Title: Lifelong Hyper-Policy Optimization with Multiple Importance Sampling
Regularization
- Authors: Pierre Liotet, Francesco Vidaich, Alberto Maria Metelli, Marcello
Restelli
- Abstract summary: We propose an approach that learns a hyper-policy, whose input is time and whose output is the parameters of the policy to be queried at that time.
This hyper-policy is trained to maximize the estimated future performance, efficiently reusing past data by means of importance sampling.
We empirically validate our approach, in comparison with state-of-the-art algorithms, on realistic environments.
- Score: 40.17392342387002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning in a lifelong setting, where the dynamics continually evolve, is a
hard challenge for current reinforcement learning algorithms. Yet this would be
a much needed feature for practical applications. In this paper, we propose an
approach which learns a hyper-policy, whose input is time, that outputs the
parameters of the policy to be queried at that time. This hyper-policy is
trained to maximize the estimated future performance, efficiently reusing past
data by means of importance sampling, at the cost of introducing a controlled
bias. We combine the future performance estimate with the past performance to
mitigate catastrophic forgetting. To avoid overfitting the collected data, we
derive a differentiable variance bound that we embed as a penalization term.
Finally, we empirically validate our approach, in comparison with
state-of-the-art algorithms, on realistic environments, including water
resource management and trading.
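To make the approach concrete, here is a minimal, illustrative Python sketch (not the authors' implementation): a Gaussian hyper-policy over policy parameters conditioned on time, and a multiple-importance-sampling (balance-heuristic) estimate of its performance built from past (time, parameters, return) triples, penalized by a crude variance proxy. The linear hyper-policy, the function names, and the penalty term are assumptions made for illustration; the paper derives a differentiable variance bound and targets estimated future performance rather than this simplified objective.

```python
# Illustrative sketch only (assumptions, not the paper's code): a Gaussian
# hyper-policy over policy parameters and a multiple-importance-sampling
# estimate of its performance with a simple variance penalty.
import numpy as np

def hyper_policy_mean(rho, t):
    """Hyper-policy mean: maps time t to policy parameters (linear in t here)."""
    a, b = rho                      # slope and intercept, one per policy parameter
    return a * t + b

def gaussian_logpdf(x, mean, std):
    return -0.5 * np.sum(((x - mean) / std) ** 2 + np.log(2 * np.pi * std ** 2), axis=-1)

def mis_objective(rho, data, behavioral_rhos, std=0.1, lam=0.1):
    """
    Multiple-importance-sampling (balance heuristic) estimate of the candidate
    hyper-policy's performance, minus a crude variance proxy standing in for
    the paper's differentiable variance bound. `data` holds (time, sampled
    policy parameters, observed return) triples collected under the
    hyper-policies in `behavioral_rhos` (equal batch sizes assumed).
    """
    ts = np.array([d[0] for d in data], dtype=float)[:, None]
    thetas = np.array([d[1] for d in data], dtype=float)
    rets = np.array([d[2] for d in data], dtype=float)

    target_logp = gaussian_logpdf(thetas, hyper_policy_mean(rho, ts), std)
    mixture_logp = np.stack([
        gaussian_logpdf(thetas, hyper_policy_mean(r, ts), std)
        for r in behavioral_rhos
    ])
    denom = np.log(np.mean(np.exp(mixture_logp), axis=0))  # balance-heuristic mixture
    w = np.exp(target_logp - denom)                        # MIS weights

    j_hat = np.mean(w * rets)                              # estimated performance
    penalty = lam * np.std(w * rets) / np.sqrt(len(data))  # variance proxy (assumption)
    return j_hat - penalty

# Toy usage: one-dimensional policy parameter, two past data-collection phases.
rng = np.random.default_rng(0)
behavioral = [(np.array([0.00]), np.array([0.5])),
              (np.array([0.02]), np.array([0.4]))]
data = []
for r in behavioral:
    for t in range(10):
        theta = rng.normal(hyper_policy_mean(r, t), 0.1)
        data.append((t, theta, -(theta[0] - 1.0) ** 2))    # synthetic return
candidate = (np.array([0.01]), np.array([0.45]))
print(mis_objective(candidate, data, behavioral))
```

The balance-heuristic mixture in the denominator is a standard choice when reusing data gathered under several behavioral hyper-policies, since it keeps the importance weights bounded compared with weighting each batch by its own behavioral density alone.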
Related papers
- C$^{2}$INet: Realizing Incremental Trajectory Prediction with Prior-Aware Continual Causal Intervention [10.189508227447401]
Trajectory prediction for multi-agents in complex scenarios is crucial for applications like autonomous driving.
Existing methods often overlook environmental biases, which leads to poor generalization.
We propose the Continual Causal Intervention (C$^{2}$INet) method for generalizable multi-agent trajectory prediction.
arXiv Detail & Related papers (2024-11-19T08:01:20Z)
- Learning Optimal Deterministic Policies with Stochastic Policy Gradients [62.81324245896716]
Policy gradient (PG) methods are successful approaches to deal with continuous reinforcement learning (RL) problems.
In common practice, (hyper)policies are learned only to deploy their deterministic version at convergence.
We show how to tune the exploration level used for learning to optimize the trade-off between the sample complexity and the performance of the deployed deterministic policy.
arXiv Detail & Related papers (2024-05-03T16:45:15Z)
- Pessimistic Causal Reinforcement Learning with Mediators for Confounded Offline Data [17.991833729722288]
We propose a novel policy learning algorithm, PESsimistic CAusal Learning (PESCAL).
Our key observation is that, by incorporating auxiliary variables that mediate the effect of actions on system dynamics, it is sufficient to learn a lower bound of the mediator distribution function, instead of the Q-function.
We provide theoretical guarantees for the algorithms we propose, and demonstrate their efficacy through simulations, as well as real-world experiments utilizing offline datasets from a leading ride-hailing platform.
arXiv Detail & Related papers (2024-03-18T14:51:19Z)
- Let Offline RL Flow: Training Conservative Agents in the Latent Space of Normalizing Flows [58.762959061522736]
Offline reinforcement learning aims to train a policy on a pre-recorded and fixed dataset without any additional environment interactions.
We build upon recent works on learning policies in latent action spaces and use a special form of Normalizing Flows for constructing a generative model.
We evaluate our method on various locomotion and navigation tasks, demonstrating that our approach outperforms recently proposed algorithms.
arXiv Detail & Related papers (2022-11-20T21:57:10Z)
- Latent-Variable Advantage-Weighted Policy Optimization for Offline RL [70.01851346635637]
Offline reinforcement learning methods hold the promise of learning policies from pre-collected datasets without the need to query the environment for new transitions.
In practice, offline datasets are often heterogeneous, i.e., collected in a variety of scenarios.
We propose to leverage latent-variable policies that can represent a broader class of policy distributions.
Our method improves the average performance of the next best-performing offline reinforcement learning methods by 49% on heterogeneous datasets.
arXiv Detail & Related papers (2022-03-16T21:17:03Z)
- Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient [34.16700176918835]
Off-policy Reinforcement Learning holds the promise of better data efficiency.
Current off-policy policy gradient methods suffer from either high bias or high variance, often delivering unreliable estimates.
We propose a nonparametric Bellman equation, which can be solved in closed form.
arXiv Detail & Related papers (2020-10-27T13:40:06Z)
- Optimizing for the Future in Non-Stationary MDPs [52.373873622008944]
We present a policy gradient algorithm that maximizes a forecast of future performance.
We show that our algorithm, called Prognosticator, is more robust to non-stationarity than two online adaptation techniques; a minimal illustrative sketch of this forecasting idea appears after the related-papers list.
arXiv Detail & Related papers (2020-05-17T03:41:19Z)
- DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction [96.90215318875859]
We show that bootstrapping-based Q-learning algorithms do not necessarily benefit from corrective feedback.
We propose a new algorithm, DisCor, which computes an approximation to this optimal distribution and uses it to re-weight the transitions used for training.
arXiv Detail & Related papers (2020-03-16T16:18:52Z)
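As referenced in the "Optimizing for the Future in Non-Stationary MDPs" entry above, the forecasting idea can be illustrated with a small sketch: estimate a candidate policy's past per-episode performance via importance weighting, fit a least-squares trend over episode indices, and read off the extrapolated forecast at a future episode. This is a hedged illustration under simplifying assumptions (the function name, the polynomial basis, and the toy data are invented here); the actual Prognosticator algorithm and this paper's hyper-policy objective differ in details such as the forecasting basis and the bias/variance corrections.

```python
# Illustrative sketch (assumptions, not the Prognosticator implementation):
# forecast future performance by extrapolating importance-weighted
# per-episode return estimates with a least-squares polynomial fit.
import numpy as np

def forecast_future_performance(episode_returns, episode_weights, horizon=5, degree=1):
    """
    episode_returns[k]: return observed in past episode k (under the behavior policy).
    episode_weights[k]: importance weight of episode k under the candidate policy.
    Returns a forecast of the candidate policy's performance `horizon`
    episodes into the future.
    """
    k = np.arange(len(episode_returns), dtype=float)
    j_hat = np.asarray(episode_weights) * np.asarray(episode_returns)  # per-episode IS estimates
    coeffs = np.polyfit(k, j_hat, deg=degree)         # least-squares trend over time
    future_k = len(episode_returns) - 1 + horizon
    return np.polyval(coeffs, future_k)               # extrapolated performance

# Toy usage with synthetic numbers: performance drifts upward over episodes.
rng = np.random.default_rng(1)
returns = 1.0 + 0.05 * np.arange(20) + 0.1 * rng.standard_normal(20)
weights = np.ones(20)             # on-policy case: all importance weights equal to 1
print(forecast_future_performance(returns, weights, horizon=5))
```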