CoinDICE: Off-Policy Confidence Interval Estimation
- URL: http://arxiv.org/abs/2010.11652v1
- Date: Thu, 22 Oct 2020 12:39:11 GMT
- Title: CoinDICE: Off-Policy Confidence Interval Estimation
- Authors: Bo Dai, Ofir Nachum, Yinlam Chow, Lihong Li, Csaba Szepesvári and Dale Schuurmans
- Abstract summary: We study high-confidence behavior-agnostic off-policy evaluation in reinforcement learning.
We show in a variety of benchmarks that the confidence interval estimates are tighter and more accurate than existing methods.
- Score: 107.86876722777535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study high-confidence behavior-agnostic off-policy evaluation in
reinforcement learning, where the goal is to estimate a confidence interval on
a target policy's value, given only access to a static experience dataset
collected by unknown behavior policies. Starting from a function space
embedding of the linear program formulation of the $Q$-function, we obtain an
optimization problem with generalized estimating equation constraints. By
applying the generalized empirical likelihood method to the resulting
Lagrangian, we propose CoinDICE, a novel and efficient algorithm for computing
confidence intervals. Theoretically, we prove the obtained confidence intervals
are valid, in both asymptotic and finite-sample regimes. Empirically, we show
in a variety of benchmarks that the confidence interval estimates are tighter
and more accurate than existing methods.
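As a rough sketch of the construction (the exact objective and calibration are simplified here): generalized empirical likelihood forms a confidence set by reweighting the $n$ observed transitions within an $f$-divergence ball around the empirical distribution and re-solving the value estimation problem at the extremes,
\[
C_n^{1-\alpha} \;=\; \Big[\min_{w \in \Delta_n:\, D_f(w\,\|\,\hat p_n) \le \xi_{1-\alpha}/n} \hat\rho_w(\pi),\ \ \max_{w \in \Delta_n:\, D_f(w\,\|\,\hat p_n) \le \xi_{1-\alpha}/n} \hat\rho_w(\pi)\Big],
\]
where $\hat p_n$ is the empirical distribution of the data, $\hat\rho_w(\pi)$ is the Lagrangian value estimate under the reweighting $w$, and $\xi_{1-\alpha}$ is a suitable $\chi^2$ quantile; the precise form of $\hat\rho_w$ and the calibration constant should be treated as assumptions of this sketch rather than the paper's exact statement.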
Related papers
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation [62.69448336714418]
Temporal Difference (TD) learning, arguably the most widely used algorithm for policy evaluation, serves as a natural framework for statistical inference about a policy's value.
In this paper, we study the consistency properties of TD learning with Polyak-Ruppert averaging and linear function approximation, and obtain three significant improvements over existing results.
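For concreteness, a minimal sketch of the estimator this line of work analyzes: TD(0) with linear function approximation and Polyak-Ruppert iterate averaging. The feature map, step size, and data layout below are illustrative assumptions, and the paper's inference procedure on top of the averaged iterate is not shown.

import numpy as np

def td0_polyak_ruppert(transitions, phi, gamma=0.99, lr=0.05):
    """TD(0) with linear value function V(s) ~ phi(s) @ theta, returning
    both the last iterate and its Polyak-Ruppert (running) average.

    transitions: list of (s, r, s_next) tuples from a fixed dataset.
    phi:         feature map s -> np.ndarray of shape (d,).
    """
    d = phi(transitions[0][0]).shape[0]
    theta = np.zeros(d)
    theta_bar = np.zeros(d)
    for t, (s, r, s_next) in enumerate(transitions, start=1):
        x, x_next = phi(s), phi(s_next)
        td_error = r + gamma * x_next @ theta - x @ theta
        theta = theta + lr * td_error * x        # TD(0) update
        theta_bar += (theta - theta_bar) / t     # Polyak-Ruppert running average
    return theta, theta_bar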
arXiv Detail & Related papers (2024-10-21T15:34:44Z)
- High Confidence Level Inference is Almost Free using Parallel Stochastic Optimization [16.38026811561888]
This paper introduces a novel inference method focused on constructing confidence intervals with efficient computation and fast convergence to the nominal level.
Our method requires minimal additional computation and memory beyond the standard updating of estimates, making the inference process almost cost-free.
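The listing does not spell out the construction, so the following is only a generic illustration of obtaining an interval nearly for free from parallel stochastic optimization, not necessarily the paper's estimator: run the estimator on K disjoint data splits and form a t-interval from the spread of the K point estimates.

import numpy as np
from scipy import stats

def parallel_t_interval(run_estimator, data_splits, alpha=0.05):
    """t-based confidence interval from K independent stochastic
    optimization runs (one per disjoint data split).

    run_estimator: callable, data split -> scalar point estimate.
    """
    estimates = np.array([run_estimator(split) for split in data_splits])
    k = len(estimates)
    center = estimates.mean()
    half = stats.t.ppf(1 - alpha / 2, df=k - 1) * estimates.std(ddof=1) / np.sqrt(k)
    return center - half, center + half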
arXiv Detail & Related papers (2024-01-17T17:11:45Z)
- Optimal Learning via Moderate Deviations Theory [4.6930976245638245]
We develop a systematic construction of highly accurate confidence intervals by using a moderate deviation principle-based approach.
It is shown that the proposed confidence intervals are statistically optimal in the sense that they satisfy criteria regarding exponential accuracy, minimality, consistency, mischaracterization probability, and eventual uniformly most accurate (UMA) property.
arXiv Detail & Related papers (2023-05-23T19:57:57Z)
- Bellman Residual Orthogonalization for Offline Reinforcement Learning [53.17258888552998]
We introduce a new reinforcement learning principle that approximates the Bellman equations by enforcing their validity only along a test function space.
We exploit this principle to derive confidence intervals for off-policy evaluation, as well as to optimize over policies within a prescribed policy class.
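In symbols, enforcing the Bellman equations only along a test function space amounts to requiring the Bellman residual to be orthogonal to every test function (the notation for the data distribution and test class is assumed here):
\[
\mathbb{E}_{(s,a,r,s')\sim \mathcal{D}}\Big[\, f(s,a)\big(Q(s,a) - r - \gamma\,\mathbb{E}_{a'\sim\pi(\cdot\mid s')}[Q(s',a')]\big) \Big] \;=\; 0 \quad \text{for all } f \in \mathcal{F}.
\]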
arXiv Detail & Related papers (2022-03-24T01:04:17Z)
- Variance-Aware Off-Policy Evaluation with Linear Function Approximation [85.75516599931632]
We study the off-policy evaluation problem in reinforcement learning with linear function approximation.
We propose an algorithm, VA-OPE, which uses the estimated variance of the value function to reweight the Bellman residual in Fitted Q-Iteration.
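A minimal sketch of the reweighting idea with linear features, not the paper's exact algorithm: each Fitted Q-Iteration regression is solved as weighted least squares, with weights inversely proportional to an estimated variance of the regression target (the variance estimates and feature inputs below are placeholders).

import numpy as np

def variance_weighted_fqe(phi_sa, rewards, phi_next_pi, var_hat,
                          gamma=0.99, n_iters=100, reg=1e-3):
    """Fitted Q-evaluation with linear Q(s,a) ~ phi(s,a) @ theta, where each
    regression step is weighted least squares with weights 1 / estimated variance.

    phi_sa:      (n, d) features of the logged (s, a) pairs.
    rewards:     (n,) logged rewards.
    phi_next_pi: (n, d) features of (s', a') with a' drawn from the target policy.
    var_hat:     (n,) estimated variance of each Bellman target.
    """
    n, d = phi_sa.shape
    theta = np.zeros(d)
    w = 1.0 / np.maximum(var_hat, 1e-6)                   # variance-aware weights
    for _ in range(n_iters):
        targets = rewards + gamma * phi_next_pi @ theta   # Bellman targets
        A = (phi_sa * w[:, None]).T @ phi_sa + reg * np.eye(d)
        b = (phi_sa * w[:, None]).T @ targets
        theta = np.linalg.solve(A, b)                     # weighted least-squares fit
    return theta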
arXiv Detail & Related papers (2021-06-22T17:58:46Z)
- Non-asymptotic Confidence Intervals of Off-policy Evaluation: Primal and Dual Bounds [21.520045697447372]
Off-policy evaluation (OPE) is the task of estimating the expected reward of a given policy based on offline data previously collected under different policies.
This work considers the problem of constructing non-asymptotic confidence intervals in infinite-horizon off-policy evaluation.
We develop a practical algorithm through a primal-dual optimization-based approach.
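For reference, the infinite-horizon quantity these intervals bracket is the value of the target policy (the $(1-\gamma)$ normalization is a common convention, assumed here):
\[
\rho(\pi) \;=\; (1-\gamma)\,\mathbb{E}\Big[\textstyle\sum_{t=0}^{\infty}\gamma^{t} r_t \;\Big|\; s_0 \sim \mu_0,\ a_t \sim \pi(\cdot\mid s_t)\Big],
\]
with the primal and dual optimization problems yielding lower and upper bounds on $\rho(\pi)$ that hold with probability at least $1-\alpha$ at any finite sample size.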
arXiv Detail & Related papers (2021-03-09T22:31:20Z)
- Reliable Off-policy Evaluation for Reinforcement Learning [53.486680020852724]
In a sequential decision-making problem, off-policy evaluation estimates the expected cumulative reward of a target policy.
We propose a novel framework that provides robust and optimistic cumulative reward estimates using one or multiple logged datasets.
arXiv Detail & Related papers (2020-11-08T23:16:19Z)
- Accountable Off-Policy Evaluation With Kernel Bellman Statistics [29.14119984573459]
We consider off-policy evaluation (OPE), which evaluates the performance of a new policy from observed data collected from previous experiments.
Due to the limited information from off-policy data, it is highly desirable to construct rigorous confidence intervals, not just point estimation.
We propose a new variational framework which reduces the problem of calculating tight confidence bounds in OPE to an optimization problem over a feasible set that contains the true state-action value function with high probability.
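Written out, that reduction takes the following form (notation assumed for this sketch):
\[
\big[\hat\rho^{-},\, \hat\rho^{+}\big] \;=\; \Big[\min_{Q \in \mathcal{Q}:\ \hat L_K(Q)\le \varepsilon_\alpha} \hat\rho(Q),\ \ \max_{Q \in \mathcal{Q}:\ \hat L_K(Q)\le \varepsilon_\alpha} \hat\rho(Q)\Big],
\]
where $\hat L_K$ is a kernel-based Bellman statistic, $\varepsilon_\alpha$ is chosen so the feasible set contains the true $Q^{\pi}$ with probability at least $1-\alpha$, and $\hat\rho(Q)$ denotes the policy value implied by $Q$.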
arXiv Detail & Related papers (2020-08-15T07:24:38Z)
- GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation of quantities defined by the stationary distribution remains possible when access to the environment is limited to a fixed, previously collected dataset.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
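The correction-ratio idea in one line, using standard DICE-style notation (assumed here rather than taken from the listing):
\[
w(s,a) \;=\; \frac{d^{\pi}(s,a)}{d^{\mathcal{D}}(s,a)}, \qquad \hat\rho(\pi) \;=\; \mathbb{E}_{(s,a,r)\sim\mathcal{D}}\big[w(s,a)\,r\big],
\]
where $d^{\pi}$ is the (normalized) stationary or discounted occupancy of the target policy and $d^{\mathcal{D}}$ is the distribution of the logged data, so estimating the stationary value reduces to fitting the ratio $w$ from the dataset alone.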
arXiv Detail & Related papers (2020-02-21T00:27:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.