Doubly Robust Interval Estimation for Optimal Policy Evaluation in Online Learning
- URL: http://arxiv.org/abs/2110.15501v1
- Date: Fri, 29 Oct 2021 02:38:54 GMT
- Title: Doubly Robust Interval Estimation for Optimal Policy Evaluation in Online Learning
- Authors: Hengrui Cai, Ye Shen, Rui Song
- Abstract summary: This paper aims to overcome difficulties in policy evaluation for online learning.
We derive the probability of exploration, which quantifies how likely the non-optimal actions are to be explored under commonly used bandit algorithms.
We develop the doubly robust interval estimation (DREAM) method to infer the value under the estimated optimal policy.
- Score: 8.318053414415903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evaluating the performance of an ongoing policy plays a vital role in many
areas such as medicine and economics, providing crucial guidance on early
stopping of the online experiment and timely feedback from the environment.
Policy evaluation in online learning, which infers the mean outcome of the
optimal policy (i.e., the value) in real time, has thus attracted increasing
attention. Yet this problem is particularly challenging due to the dependent
data generated in the online environment, the unknown optimal policy, and the
complex exploration and exploitation trade-off in the adaptive experiment. In
this paper, we aim to overcome these difficulties in policy evaluation for
online learning. We explicitly derive the probability of exploration, which
quantifies how likely the non-optimal actions are to be explored under commonly
used bandit algorithms. We use this probability to conduct valid inference on
the online conditional mean estimator under each action and develop the doubly
robust interval estimation (DREAM) method to infer the value under the
estimated optimal policy in online learning. The proposed value estimator
provides double protection for consistency and is asymptotically normal, with
a Wald-type confidence interval provided. Extensive simulations and real-data
applications are conducted to demonstrate the empirical validity of the
proposed DREAM method.
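
To make the two quantities at the heart of the abstract concrete, the sketch below gives a minimal, illustrative Python implementation: the probability of exploring a non-optimal action under an epsilon-greedy bandit (one commonly used algorithm), and a generic doubly robust value estimate with a Wald-type confidence interval. This is not the authors' DREAM estimator, which is built for the dependent data generated online; the function names, the epsilon-greedy setting, and the i.i.d.-style plug-in variance are assumptions made purely for illustration.

```python
import numpy as np
from scipy import stats


def epsilon_greedy_exploration_prob(epsilon: float, n_arms: int) -> float:
    """Probability of pulling a non-optimal arm at a given round under
    epsilon-greedy: with probability epsilon a uniformly random arm is drawn,
    and (n_arms - 1) of the n_arms choices are non-optimal."""
    return epsilon * (n_arms - 1) / n_arms


def dr_value_with_wald_ci(actions, rewards, propensities, q_hat, pi_hat, alpha=0.05):
    """Generic doubly robust estimate of the value of an estimated policy
    pi_hat from logged bandit data, with a Wald-type confidence interval.

    actions      : (n,) observed actions (integers in {0, ..., K-1})
    rewards      : (n,) observed rewards
    propensities : (n,) probability the logging policy gave to the observed action
    q_hat        : (n, K) fitted outcome model, q_hat[i, a] ~ E[R | A = a, X = x_i]
    pi_hat       : (n, K) estimated (e.g. greedy) target policy, pi_hat[i, a] = pi(a | x_i)
    """
    n, _ = q_hat.shape
    idx = np.arange(n)
    dm = (pi_hat * q_hat).sum(axis=1)                      # direct-method part
    correction = pi_hat[idx, actions] / propensities * (rewards - q_hat[idx, actions])
    phi = dm + correction                                  # per-sample doubly robust terms
    value = phi.mean()
    se = phi.std(ddof=1) / np.sqrt(n)                      # plug-in standard error
    z = stats.norm.ppf(1.0 - alpha / 2.0)
    return value, (value - z * se, value + z * se)
```

In this toy setting, the exploration probability for epsilon-greedy with K arms is simply epsilon * (K - 1) / K per round; per the abstract, DREAM uses such exploration probabilities to obtain valid inference for the online conditional mean estimators, and hence for the value of the estimated optimal policy.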
Related papers
- Pessimistic Causal Reinforcement Learning with Mediators for Confounded Offline Data [17.991833729722288]
We propose a novel policy learning algorithm, PESsimistic CAusal Learning (PESCAL).
Our key observation is that, by incorporating auxiliary variables that mediate the effect of actions on system dynamics, it is sufficient to learn a lower bound of the mediator distribution function, instead of the Q-function.
We provide theoretical guarantees for the algorithms we propose, and demonstrate their efficacy through simulations, as well as real-world experiments utilizing offline datasets from a leading ride-hailing platform.
arXiv Detail & Related papers (2024-03-18T14:51:19Z)
- Positivity-free Policy Learning with Observational Data [8.293758599118618]
This study introduces a novel positivity-free (stochastic) policy learning framework.
We propose incremental propensity score policies to adjust propensity score values instead of assigning fixed values to treatments.
This paper provides a thorough exploration of the theoretical guarantees associated with policy learning and validates the proposed framework's finite-sample performance.
arXiv Detail & Related papers (2023-10-10T19:47:27Z)
- Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning [53.97273491846883]
We propose DPE: an RL algorithm that blends offline sequence modeling and offline reinforcement learning with Double Policy Estimation.
We validate our method in multiple tasks of OpenAI Gym with D4RL benchmarks.
arXiv Detail & Related papers (2023-08-28T20:46:07Z)
- Uncertainty-Aware Instance Reweighting for Off-Policy Learning [63.31923483172859]
We propose an Uncertainty-aware Inverse Propensity Score estimator (UIPS) for improved off-policy learning.
Experimental results on synthetic data and three real-world recommendation datasets demonstrate the advantageous sample efficiency of the proposed UIPS estimator.
arXiv Detail & Related papers (2023-03-11T11:42:26Z)
- Offline Policy Evaluation and Optimization under Confounding [35.778917456294046]
We map out the landscape of offline policy evaluation for confounded MDPs.
We characterize settings where consistent value estimates are provably not achievable.
We present new algorithms for offline policy improvement and prove local convergence guarantees.
arXiv Detail & Related papers (2022-11-29T20:45:08Z)
- Towards Robust Off-policy Learning for Runtime Uncertainty [28.425951919439783]
Off-policy learning plays a pivotal role in optimizing and evaluating policies prior to online deployment.
Runtime uncertainty cannot be learned from logged data because it is abnormal and rare by nature.
We bring runtime-uncertainty robustness to three major off-policy learning methods: the inverse propensity score method, the reward-model method, and the doubly robust method (a toy comparison of these three estimator families appears after this list).
arXiv Detail & Related papers (2022-02-27T10:51:02Z)
- Benchmarks for Deep Off-Policy Evaluation [152.28569758144022]
We present a collection of policies that can be used for benchmarking off-policy evaluation.
The goal of our benchmark is to provide a standardized measure of progress that is motivated from a set of principles.
We provide open-source access to our data and code to foster future research in this area.
arXiv Detail & Related papers (2021-03-30T18:09:33Z)
- Reliable Off-policy Evaluation for Reinforcement Learning [53.486680020852724]
In a sequential decision-making problem, off-policy evaluation estimates the expected cumulative reward of a target policy.
We propose a novel framework that provides robust and optimistic cumulative reward estimates using one or multiple logged datasets.
arXiv Detail & Related papers (2020-11-08T23:16:19Z)
- CoinDICE: Off-Policy Confidence Interval Estimation [107.86876722777535]
We study high-confidence behavior-agnostic off-policy evaluation in reinforcement learning.
We show in a variety of benchmarks that the confidence interval estimates are tighter and more accurate than existing methods.
arXiv Detail & Related papers (2020-10-22T12:39:11Z)
- Distributionally Robust Batch Contextual Bandits [20.667213458836734]
Policy learning using historical observational data is an important problem that has found widespread applications.
Existing literature rests on the crucial assumption that the future environment where the learned policy will be deployed is the same as the past environment.
In this paper, we lift this assumption and aim to learn a distributionally robust policy with incomplete observational data.
arXiv Detail & Related papers (2020-06-10T03:11:40Z)
- Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions [48.91284724066349]
Off-policy evaluation in reinforcement learning offers the chance of using observational data to improve future outcomes in domains such as healthcare and education.
Traditional measures such as confidence intervals may be insufficient due to noise, limited data and confounding.
We develop a method that could serve as a hybrid human-AI system, to enable human experts to analyze the validity of policy evaluation estimates.
arXiv Detail & Related papers (2020-02-10T00:26:43Z)
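
The runtime-uncertainty entry above names the three standard families of off-policy estimators (inverse propensity scoring, the reward-model or direct method, and their doubly robust combination), and the DREAM abstract claims the value estimator enjoys double protection for consistency. The toy simulation below, whose data-generating process and numbers are entirely invented for illustration, shows what that protection means on i.i.d. logged bandit data: the doubly robust estimate stays near the truth when either the outcome model or the propensities are misspecified, while the direct method and IPS alone do not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 50_000, 2

# Toy logged bandit data: binary context, behavior policy depends on the context.
x = rng.binomial(1, 0.5, size=n)
true_q = np.column_stack([0.3 + 0.2 * x, 0.5 - 0.1 * x])  # E[R | x, a] for a = 0, 1
p_a1 = np.where(x == 1, 0.8, 0.4)                          # behavior P(A = 1 | x)
a = rng.binomial(1, p_a1)
prop = np.where(a == 1, p_a1, 1.0 - p_a1)                  # P(A = a_i | x_i)
r = rng.binomial(1, true_q[np.arange(n), a]).astype(float)

pi = np.zeros((n, K))                                      # target policy: always play arm 1
pi[:, 1] = 1.0


def ips(pi, a, r, prop):
    """Inverse propensity score estimate of the target policy's value."""
    return np.mean(pi[np.arange(len(a)), a] / prop * r)


def direct(pi, q_hat):
    """Reward-model (direct method) estimate using a fitted outcome model q_hat."""
    return np.mean((pi * q_hat).sum(axis=1))


def doubly_robust(pi, a, r, prop, q_hat):
    """Direct method plus an importance-weighted residual correction."""
    idx = np.arange(len(a))
    return direct(pi, q_hat) + np.mean(pi[idx, a] / prop * (r - q_hat[idx, a]))


good_q, bad_q = true_q, np.full((n, K), 0.5)  # correct vs. misspecified outcome model
bad_prop = np.full(n, 0.5)                    # misspecified propensities

print("truth            ", true_q[:, 1].mean())
print("DR  (bad prop)   ", doubly_robust(pi, a, r, bad_prop, good_q))  # still consistent
print("DR  (bad q)      ", doubly_robust(pi, a, r, prop, bad_q))       # still consistent
print("DM  (bad q)      ", direct(pi, bad_q))                          # biased
print("IPS (bad prop)   ", ips(pi, a, r, bad_prop))                    # biased
```

When both models are correct, the three estimators agree asymptotically; per its abstract, the DREAM setting differs in that the logging policy is itself adaptive, so the data are dependent, which is what its exploration-probability analysis is designed to handle.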