Performative Reinforcement Learning
- URL: http://arxiv.org/abs/2207.00046v1
- Date: Thu, 30 Jun 2022 18:26:03 GMT
- Title: Performative Reinforcement Learning
- Authors: Debmalya Mandal, Stelios Triantafyllou, and Goran Radanovic
- Abstract summary: We introduce the concept of a performatively stable policy.
We show that repeatedly optimizing this objective converges to a performatively stable policy.
- Score: 8.07595093287034
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce the framework of performative reinforcement learning where the
policy chosen by the learner affects the underlying reward and transition
dynamics of the environment. Following the recent literature on performative
prediction (Perdomo et al., 2020), we introduce the concept of a
performatively stable policy. We then consider a regularized version of the
reinforcement learning problem and show that repeatedly optimizing this
objective converges to a performatively stable policy under reasonable
assumptions on the transition dynamics. Our proof utilizes the dual perspective
of the reinforcement learning problem and may be of independent interest in
analyzing the convergence of other algorithms with decision-dependent
environments.
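For readers new to performativity, the stability notion introduced here can be written as a fixed point. A brief sketch follows; the symbol M(\pi) for the reward and transition dynamics induced by deploying \pi is notation assumed from the performative-prediction literature, not necessarily the paper's own.

```latex
% Sketch of the performative-stability fixed point, adapting Perdomo et al.
% (2020) to MDPs; M(\pi) denotes the induced (reward, transition) pair.
\[
  \pi_{\mathrm{PS}} \in \arg\max_{\pi} \; V\!\bigl(\pi;\, M(\pi_{\mathrm{PS}})\bigr)
\]
% A performatively stable policy is optimal in the environment that its own
% deployment induces, making it a fixed point of repeated retraining.
```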
We then extend our results to the setting where the learner only performs
gradient ascent steps instead of fully optimizing the objective, and to the
setting where the learner has access to a finite number of trajectories from
the changed environment. For both settings, we leverage the dual
formulation of performative reinforcement learning and establish convergence to
a stable solution. Finally, through extensive experiments on a grid-world
environment, we demonstrate the dependence of convergence on various
parameters, e.g., regularization, smoothness, and the number of samples.
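To make the retraining loop concrete, below is a minimal, self-contained sketch of repeated regularized optimization in a decision-dependent tabular MDP. The response model respond(), the entropy regularizer, and all constants are illustrative assumptions, not the paper's exact grid-world construction.

```python
# Minimal sketch of repeated regularized retraining in a decision-dependent
# tabular MDP. The response model, the entropy regularizer, and all constants
# are illustrative assumptions, not the paper's exact grid-world construction.
import numpy as np

n_states, n_actions, gamma, tau = 6, 2, 0.9, 1.0  # tau: regularization strength

rng = np.random.default_rng(0)
base_P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
base_R = rng.uniform(size=(n_states, n_actions))                       # R[s, a]

def respond(pi, eps=0.05):
    """Assumed environment response: rewards drift against the deployed policy,
    so frequently chosen actions become slightly less rewarding."""
    return base_P, base_R - eps * pi

def solve_regularized(P, R, iters=500):
    """Entropy-regularized (soft) value iteration; returns the softmax policy."""
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = R + gamma * np.einsum("sap,p->sa", P, V)
        V = tau * np.log(np.exp(Q / tau).sum(axis=1))
    return np.exp((Q - V[:, None]) / tau)  # pi[s, a]; each row sums to 1

pi = np.full((n_states, n_actions), 1.0 / n_actions)
for t in range(100):
    P, R = respond(pi)                # the environment reacts to the deployed policy
    new_pi = solve_regularized(P, R)  # fully re-optimize in the induced MDP
    gap = np.abs(new_pi - pi).max()
    pi = new_pi
    if gap < 1e-10:                   # fixed point: a performatively stable policy
        print(f"stable after {t + 1} retraining rounds")
        break
```

Replacing the full solve in solve_regularized with a few gradient steps per retraining round mirrors the paper's gradient-ascent setting, and estimating P and R from a finite number of trajectories of the induced environment mirrors its finite-sample setting.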
Related papers
- Independence Constrained Disentangled Representation Learning from Epistemological Perspective [13.51102815877287]
Disentangled Representation Learning aims to improve the explainability of deep learning methods by training a data encoder that identifies semantically meaningful latent variables in the data generation process.
There is no consensus regarding the objective of disentangled representation learning.
We propose a novel method for disentangled representation learning by integrating a mutual information constraint with an independence constraint.
arXiv Detail & Related papers (2024-09-04T13:00:59Z)
- A Unifying Framework for Action-Conditional Self-Predictive Reinforcement Learning [48.59516337905877]
Learning a good representation is a crucial challenge for Reinforcement Learning (RL) agents.
Recent work has developed theoretical insights into self-predictive representation learning algorithms.
We take a step towards bridging the gap between theory and practice by analyzing an action-conditional self-predictive objective.
arXiv Detail & Related papers (2024-06-04T07:22:12Z)
- Learning Optimal Deterministic Policies with Stochastic Policy Gradients [62.81324245896716]
Policy gradient (PG) methods are successful approaches to deal with continuous reinforcement learning (RL) problems.
In common practice, stochastic (hyper)policies are learned only to deploy their deterministic version.
We show how to tune the exploration level used for learning to optimize the trade-off between the sample complexity and the performance of the deployed deterministic policy.
arXiv Detail & Related papers (2024-05-03T16:45:15Z)
- Generative Intrinsic Optimization: Intrinsic Control with Model Learning [5.439020425819001]
The future sequence represents the outcome after executing an action in the environment.
Explicit outcomes may vary across state, return, or trajectory, serving different purposes such as credit assignment or imitation learning.
We propose a policy scheme that seamlessly incorporates the mutual information, ensuring convergence to the optimal policy.
arXiv Detail & Related papers (2023-10-12T07:50:37Z)
- Generalization Across Observation Shifts in Reinforcement Learning [13.136140831757189]
We extend the bisimulation framework to account for context-dependent observation shifts.
Specifically, we focus on the simulator based learning setting and use alternate observations to learn a representation space.
This allows us to deploy the agent to varying observation settings during test time and generalize to unseen scenarios.
arXiv Detail & Related papers (2023-06-07T16:49:03Z)
- A Regularized Implicit Policy for Offline Reinforcement Learning [54.7427227775581]
Offline reinforcement learning enables learning from a fixed dataset, without further interactions with the environment.
We propose a framework that supports learning a flexible yet well-regularized fully-implicit policy.
Experiments and an ablation study on the D4RL dataset validate our framework and the effectiveness of our algorithmic designs.
arXiv Detail & Related papers (2022-02-19T20:22:04Z)
- Dynamic Regret Analysis for Online Meta-Learning [0.0]
The online meta-learning framework has arisen as a powerful tool for the continual lifelong learning setting.
This formulation involves two levels: an outer level, which learns meta-learners, and an inner level, which learns task-specific models.
We establish performance guarantees in terms of dynamic regret, which handles changing environments from a global perspective.
We carry out our analyses in this setting and, in expectation, prove a logarithmic local dynamic regret that depends explicitly on the total number of iterations.
arXiv Detail & Related papers (2021-09-29T12:12:59Z)
- Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders [62.54431888432302]
We study an off-policy evaluation (OPE) problem in an infinite-horizon, ergodic Markov decision process with unobserved confounders.
We show how, given only a latent variable model for states and actions, policy value can be identified from off-policy data.
arXiv Detail & Related papers (2020-07-27T22:19:01Z)
- Inverse Reinforcement Learning from a Gradient-based Learner [41.8663538249537]
Inverse Reinforcement Learning addresses the problem of inferring an expert's reward function from demonstrations.
In this paper, we propose a new algorithm for this setting, in which the goal is to recover the reward function being optimized by an agent.
arXiv Detail & Related papers (2020-07-15T16:41:00Z)
- Deep Reinforcement Learning amidst Lifelong Non-Stationarity [67.24635298387624]
We show that an off-policy RL algorithm can reason about and tackle lifelong non-stationarity.
Our method leverages latent variable models to learn a representation of the environment from current and past experiences.
We also introduce several simulation environments that exhibit lifelong non-stationarity, and empirically find that our approach substantially outperforms approaches that do not reason about environment shift.
arXiv Detail & Related papers (2020-06-18T17:34:50Z)
- Non-Stationary Off-Policy Optimization [50.41335279896062]
We study the novel problem of off-policy optimization in piecewise-stationary contextual bandits.
In the offline learning phase, we partition logged data into categorical latent states and learn a near-optimal sub-policy for each state.
In the online deployment phase, we adaptively switch between the learned sub-policies based on their performance (a toy sketch follows this list).
arXiv Detail & Related papers (2020-06-15T09:16:09Z)
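As flagged above, here is a hypothetical two-phase sketch of that last approach: offline, partition logged contexts into latent states and attach a sub-policy to each; online, switch between sub-policies based on observed performance. The k-means clustering, the UCB switching rule, and the stand-in sub-policies and rewards are all assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch of the two-phase approach above; not the paper's method.
import numpy as np

rng = np.random.default_rng(1)
K, d, n_actions = 3, 4, 5  # latent states, context dimension, action count

# --- Offline phase: partition logged contexts into K latent states (k-means) ---
logs_x = rng.normal(size=(600, d))  # logged contexts (stand-in data)
centers = logs_x[rng.choice(len(logs_x), K, replace=False)]
for _ in range(20):
    z = np.argmin(((logs_x[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([logs_x[z == k].mean(0) if np.any(z == k) else centers[k]
                        for k in range(K)])

# One sub-policy per latent state: a fixed action preference standing in for a
# near-optimal sub-policy learned from that state's portion of the logs.
sub_policies = rng.integers(n_actions, size=K)

# --- Online phase: adaptively switch sub-policies by tracked performance ---
counts, means = np.ones(K), np.zeros(K)  # per-sub-policy reward statistics
for t in range(1000):
    k = int(np.argmax(means + np.sqrt(2.0 * np.log(t + 2) / counts)))  # UCB pick
    _action = sub_policies[k]               # would be executed in the environment
    reward = rng.normal(loc=float(k == 1))  # stand-in environment feedback
    counts[k] += 1
    means[k] += (reward - means[k]) / counts[k]
print("preferred sub-policy:", int(np.argmax(means)))
```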
This list is automatically generated from the titles and abstracts of the papers on this site.