Hallucinated Adversarial Control for Conservative Offline Policy Evaluation
- URL: http://arxiv.org/abs/2303.01076v2
- Date: Fri, 26 May 2023 07:52:30 GMT
- Title: Hallucinated Adversarial Control for Conservative Offline Policy Evaluation
- Authors: Jonas Rothfuss, Bhavya Sukhija, Tobias Birchler, Parnian Kassraie, Andreas Krause
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We study the problem of conservative off-policy evaluation (COPE) where given
an offline dataset of environment interactions, collected by other agents, we
seek to obtain a (tight) lower bound on a policy's performance. This is crucial
when deciding whether a given policy satisfies certain minimal
performance/safety criteria before it can be deployed in the real world. To
this end, we introduce HAMBO, which builds on an uncertainty-aware learned
model of the transition dynamics. To form a conservative estimate of the
policy's performance, HAMBO hallucinates worst-case trajectories that the
policy may take, within the margin of the model's epistemic confidence regions.
We prove that the resulting COPE estimates are valid lower bounds, and, under
regularity conditions, show their convergence to the true expected return.
Finally, we discuss scalable variants of our approach based on Bayesian Neural
Networks and empirically demonstrate that they yield reliable and tight lower
bounds in various continuous control environments.
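
To make the mechanism concrete, here is a minimal, illustrative Python sketch of a hallucinated adversarial rollout. It assumes a learned model that returns the mean and epistemic standard deviation of the next state, and uses a greedy one-step adversary that perturbs the prediction within the confidence box mu ± beta·sigma so as to minimize reward; the paper's adversary is optimized over whole trajectories, and all names (`policy`, `model`, `reward_fn`, `beta`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pessimistic_return(policy, model, reward_fn, s0, horizon=50,
                       beta=2.0, n_candidates=16, gamma=0.99, rng=None):
    """Hedged sketch of a hallucinated adversarial rollout: at each step
    the adversary picks a next state inside the model's epistemic
    confidence box [mu - beta*sigma, mu + beta*sigma] that (greedily)
    minimizes the immediate reward."""
    if rng is None:
        rng = np.random.default_rng(0)
    s = np.asarray(s0, dtype=float)
    ret, disc = 0.0, 1.0
    for _ in range(horizon):
        a = policy(s)
        mu, sigma = model(s, a)  # mean and epistemic std of the next state
        # Sample candidate next states inside the confidence box and let
        # the adversary keep the one with the lowest immediate reward.
        eta = rng.uniform(-1.0, 1.0, size=(n_candidates, len(mu)))
        candidates = mu + beta * sigma * eta
        rewards = np.array([reward_fn(s, a, c) for c in candidates])
        worst = int(np.argmin(rewards))
        ret += disc * rewards[worst]
        disc *= gamma
        s = candidates[worst]
    return ret  # heuristic conservative estimate of the episode return
```

Averaging `pessimistic_return` over start states drawn from the offline dataset gives a heuristic conservative value estimate in the spirit of the COPE lower bound described above.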
Related papers
- Importance-Weighted Offline Learning Done Right [16.4989952150404]
We study the problem of offline policy optimization in contextual bandit problems.
The goal is to learn a near-optimal policy based on a dataset of decision data collected by a suboptimal behavior policy.
We show that a simple alternative approach based on the "implicit exploration" estimator of Neu (2015) yields performance guarantees that are superior in nearly all possible terms to all previous results.
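For context, the implicit-exploration (IX) estimator smooths importance weights by inflating the behavior probability in the denominator. A minimal sketch follows; the function name and the default choice of gamma are illustrative, not taken from the paper:

```python
import numpy as np

def ix_value_estimate(rewards, behavior_probs, target_probs, gamma=0.1):
    """Implicit-exploration (IX) style off-policy value estimate:
    inflating the behavior probability in the denominator by gamma
    biases the estimate slightly downward but sharply reduces the
    variance and tail behavior of the importance weights."""
    w = np.asarray(target_probs, float) / (np.asarray(behavior_probs, float) + gamma)
    return float(np.mean(w * np.asarray(rewards, float)))
```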
arXiv Detail & Related papers (2023-09-27T16:42:10Z)
- Matrix Estimation for Offline Reinforcement Learning with Low-Rank Structure [10.968373699696455]
We consider offline Reinforcement Learning (RL), where the agent does not interact with the environment and must rely on offline data collected using a behavior policy.
Previous works provide policy evaluation guarantees when the target policy to be evaluated is covered by the behavior policy.
We propose an offline policy evaluation algorithm that leverages the low-rank structure to estimate the values of uncovered state-action pairs.
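As a generic illustration of the low-rank idea (not the paper's algorithm), one can hard-impute a partially observed state-by-action value matrix with an iterated SVD projection; `Q_obs`, `mask`, and `rank` are assumed names:

```python
import numpy as np

def low_rank_complete(Q_obs, mask, rank=3, iters=200):
    """Fill unobserved (state, action) values by alternating between a
    rank-`rank` SVD projection and re-imposing the observed entries
    (hard-impute style matrix completion)."""
    Q = np.where(mask, Q_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Q, full_matrices=False)
        Q_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        Q = np.where(mask, Q_obs, Q_low)  # keep observed, fill the rest
    return Q
```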
arXiv Detail & Related papers (2023-05-24T23:49:06Z)
- Offline Policy Evaluation and Optimization under Confounding [35.778917456294046]
We map out the landscape of offline policy evaluation for confounded MDPs.
We characterize settings where consistent value estimates are provably not achievable.
We present new algorithms for offline policy improvement and prove local convergence guarantees.
arXiv Detail & Related papers (2022-11-29T20:45:08Z)
- Conformal Off-Policy Prediction in Contextual Bandits [54.67508891852636]
Conformal off-policy prediction can output reliable predictive intervals for the outcome under a new target policy.
We provide theoretical finite-sample guarantees without making any additional assumptions beyond the standard contextual bandit setup.
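A rough sketch of the weighted split-conformal step underlying such intervals, assuming nonconformity scores on a calibration set and importance weights between target and behavior policies are available (all names are illustrative):

```python
import numpy as np

def weighted_conformal_threshold(cal_scores, cal_weights, test_weight, alpha=0.1):
    """Weighted split-conformal threshold: smallest calibration score q
    such that the weighted CDF of the scores reaches 1 - alpha, where the
    weights are importance ratios between target and behavior policies."""
    order = np.argsort(cal_scores)
    scores = np.asarray(cal_scores, float)[order]
    weights = np.asarray(cal_weights, float)[order]
    p = weights / (weights.sum() + test_weight)  # test point holds the remaining mass
    cdf = np.cumsum(p)
    idx = int(np.searchsorted(cdf, 1.0 - alpha))
    if idx >= len(scores):
        return np.inf  # too little calibration mass: the interval is vacuous
    return float(scores[idx])
```

With the nonconformity score |y - y_hat|, the predictive interval for a new context is y_hat ± q.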
arXiv Detail & Related papers (2022-06-09T10:39:33Z)
- Model-Free and Model-Based Policy Evaluation when Causality is Uncertain [7.858296711223292]
In off-policy evaluation, there may exist unobserved variables that both impact the dynamics and are used by the unknown behavior policy.
We develop worst-case bounds to assess sensitivity to these unobserved confounders in finite horizons.
We show that a model-based approach with robust MDPs gives sharper lower bounds by exploiting domain knowledge about the dynamics.
arXiv Detail & Related papers (2022-04-02T23:40:15Z)
- Offline Policy Selection under Uncertainty [113.57441913299868]
We consider offline policy selection as learning preferences over a set of policy prospects given a fixed experience dataset.
Access to the full belief distribution over a policy's value enables more flexible selection algorithms under a wider range of downstream evaluation metrics.
We show how BayesDICE may be used to rank policies with respect to any arbitrary downstream policy selection metric.
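The selection step itself can be sketched generically: given posterior samples of each policy's value, rank by any downstream metric evaluated on the belief distribution. This is an illustrative wrapper, not the BayesDICE estimator; the risk-averse 5%-quantile default is an assumption:

```python
import numpy as np

def rank_policies(value_samples, metric=lambda v: np.quantile(v, 0.05)):
    """value_samples: {policy_name: 1-D array of posterior samples of
    that policy's value}. Rank policies (best first) by a downstream
    selection metric computed on the full belief distribution."""
    scores = {name: float(metric(np.asarray(v, float)))
              for name, v in value_samples.items()}
    return sorted(scores, key=scores.get, reverse=True)
```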
arXiv Detail & Related papers (2020-12-12T23:09:21Z)
- Reliable Off-policy Evaluation for Reinforcement Learning [53.486680020852724]
In a sequential decision-making problem, off-policy evaluation estimates the expected cumulative reward of a target policy.
We propose a novel framework that provides robust and optimistic cumulative reward estimates using one or multiple logged datasets.
arXiv Detail & Related papers (2020-11-08T23:16:19Z)
- Provably Good Batch Reinforcement Learning Without Great Exploration [51.51462608429621]
Batch reinforcement learning (RL) is important for applying RL algorithms to many high-stakes tasks.
Recent algorithms have shown promise but can still be overly optimistic in their expected outcomes.
We show that a small modification to the Bellman optimality and evaluation back-ups, making the update more conservative, can yield much stronger guarantees.
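A pessimism pattern in this spirit (though not necessarily the paper's exact operator) subtracts an uncertainty penalty from the target before taking the max; `bonus` is an assumed per-state-action uncertainty estimate:

```python
import numpy as np

def pessimistic_backup(Q, bonus, r, s_next, gamma=0.99):
    """One conservative Bellman-optimality backup for a tabular Q:
    subtract an uncertainty penalty bonus[s', a'] from the action values
    at the next state before the max, so that actions with poor data
    coverage cannot inflate the target."""
    penalized = Q[s_next] - bonus[s_next]  # shape: (n_actions,)
    return r + gamma * float(np.max(penalized))
```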
arXiv Detail & Related papers (2020-07-16T09:25:54Z)
- Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning [70.01650994156797]
Off-policy evaluation of sequential decision policies from observational data is necessary in batch reinforcement learning applications such as education and healthcare.
We develop an approach that estimates bounds on the value of a given policy.
We prove convergence to the sharp bounds as we collect more confounded data.
arXiv Detail & Related papers (2020-02-11T16:18:14Z)