On the Optimality of Batch Policy Optimization Algorithms
- URL: http://arxiv.org/abs/2104.02293v1
- Date: Tue, 6 Apr 2021 05:23:20 GMT
- Title: On the Optimality of Batch Policy Optimization Algorithms
- Authors: Chenjun Xiao, Yifan Wu, Tor Lattimore, Bo Dai, Jincheng Mei, Lihong
Li, Csaba Szepesvari, Dale Schuurmans
- Abstract summary: Batch policy optimization considers leveraging existing data for policy construction before interacting with an environment.
We show that any confidence-adjusted index algorithm is minimax optimal, whether it be optimistic, pessimistic or neutral.
We introduce a new weighted-minimax criterion that considers the inherent difficulty of optimal value prediction.
- Score: 106.89498352537682
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Batch policy optimization considers leveraging existing data for policy
construction before interacting with an environment. Although interest in this
problem has grown significantly in recent years, its theoretical foundations
remain under-developed. To advance the understanding of this problem, we
provide three results that characterize the limits and possibilities of batch
policy optimization in the finite-armed stochastic bandit setting. First, we
introduce a class of confidence-adjusted index algorithms that unifies
optimistic and pessimistic principles in a common framework, which enables a
general analysis. For this family, we show that any confidence-adjusted index
algorithm is minimax optimal, whether it be optimistic, pessimistic or neutral.
Our analysis reveals that instance-dependent optimality, commonly used to
establish optimality of on-line stochastic bandit algorithms, cannot be
achieved by any algorithm in the batch setting. In particular, for any
algorithm that performs optimally in some environment, there exists another
environment where the same algorithm suffers arbitrarily larger regret.
Therefore, to establish a framework for distinguishing algorithms, we introduce
a new weighted-minimax criterion that considers the inherent difficulty of
optimal value prediction. We demonstrate how this criterion can be used to
justify commonly used pessimistic principles for batch policy optimization.
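  The confidence-adjusted index family the abstract describes is easy to state concretely. The sketch below is a minimal illustration under stated assumptions, not the paper's exact construction: rewards are assumed bounded in [0, 1], the confidence width is a generic Hoeffding-style term, and beta in {-1, 0, 1} plays the pessimistic/neutral/optimistic role.

```python
import numpy as np

def confidence_adjusted_index(batch, beta, delta=0.05):
    """Select an arm from logged bandit data via a confidence-adjusted index.

    batch: list of 1-D arrays; batch[i] holds the observed rewards of arm i.
    beta:  confidence adjustment; beta > 0 is optimistic (UCB-style),
           beta < 0 pessimistic (LCB-style), beta = 0 neutral (greedy).
    Illustrative sketch only: assumes rewards in [0, 1] and a
    Hoeffding-style width with a union bound over the k arms.
    """
    k = len(batch)
    indices = np.empty(k)
    for i, rewards in enumerate(batch):
        n = len(rewards)
        width = np.sqrt(np.log(2.0 * k / delta) / (2.0 * n))
        indices[i] = np.mean(rewards) + beta * width
    return int(np.argmax(indices))

# The same logged data scored pessimistically, neutrally, and optimistically.
rng = np.random.default_rng(0)
batch = [rng.uniform(size=n) for n in (5, 50, 500)]  # uneven arm coverage
for beta in (-1.0, 0.0, 1.0):
    print(beta, confidence_adjusted_index(batch, beta))
```

  The paper's minimax result says that any fixed choice of the adjustment yields the same worst-case guarantee, which is why the weighted-minimax criterion is needed to separate, and ultimately favor, the pessimistic variant.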
Related papers
- e-COP : Episodic Constrained Optimization of Policies [12.854752753529151]
We present the first policy optimization algorithm for constrained Reinforcement Learning (RL) in episodic (finite horizon) settings.
We show that our algorithm has similar or better performance than SoTA (non-episodic) algorithms adapted for the episodic setting.
arXiv Detail & Related papers (2024-06-13T20:12:09Z)
- Bayesian Safe Policy Learning with Chance Constrained Optimization: Application to Military Security Assessment during the Vietnam War [0.0]
We investigate whether it would have been possible to improve a security assessment algorithm employed during the Vietnam War.
This empirical application raises several methodological challenges that frequently arise in high-stakes algorithmic decision-making.
arXiv Detail & Related papers (2023-07-17T20:59:50Z)
- Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees [56.848265937921354]
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy.
Many algorithms for IRL have an inherently nested structure.
We develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy.
arXiv Detail & Related papers (2022-10-04T17:13:45Z)
- Off-Policy Evaluation with Policy-Dependent Optimization Response [90.28758112893054]
We develop a new framework for off-policy evaluation with a policy-dependent linear optimization response.
We construct unbiased estimators for the policy-dependent estimand by a perturbation method.
We provide a general algorithm for optimizing causal interventions.
arXiv Detail & Related papers (2022-02-25T20:25:37Z)
- Understanding the Effect of Stochasticity in Policy Optimization [86.7574122154668]
First, we show that the preferability of optimization methods depends critically on whether exact or stochastic gradients are used.
Second, to explain these findings, we introduce the concept of the committal rate for policy optimization.
Third, we show that in the absence of external oracle information, there is an inherent trade-off between exploiting geometry to accelerate convergence and achieving optimality almost surely.
arXiv Detail & Related papers (2021-10-29T06:35:44Z)
- Soft-Robust Algorithms for Batch Reinforcement Learning [36.78967245470449]
In reinforcement learning, robust policies for decision-making problems with limited data are usually computed via the percentile criterion.
We show that the percentile criterion has shortcomings: it is difficult to optimize and ignores mean performance.
We propose and analyze two algorithms to approximately optimize the percentile criterion.
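As a minimal illustration of the percentile criterion this summary refers to (a hedged sketch; the toy policies, return distributions, and the P10 threshold are assumptions for illustration, not the paper's construction):

```python
import numpy as np

def percentile_value(returns, q=10):
    """Percentile (VaR-style) criterion: score a policy by a low
    percentile of its sampled returns rather than by their mean."""
    return np.percentile(returns, q)

# A policy with a higher mean but a heavy left tail loses to a steadier
# policy under the percentile criterion, illustrating how the criterion
# ignores mean performance.
rng = np.random.default_rng(1)
risky = np.concatenate([rng.normal(2.0, 0.1, 850), rng.normal(-3.0, 0.1, 150)])
steady = rng.normal(1.0, 0.1, 1000)
print(np.mean(risky) > np.mean(steady))                     # True: wins on the mean
print(percentile_value(risky) < percentile_value(steady))  # True: loses on P10
```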
arXiv Detail & Related papers (2020-11-30T01:36:16Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- The Importance of Pessimism in Fixed-Dataset Policy Optimization [32.22700716592194]
We study worst-case guarantees on the expected return of fixed-dataset policy optimization algorithms.
For naive approaches, the possibility of erroneous value overestimation leads to a difficult-to-satisfy requirement: the dataset must be informative about the value of every policy.
We show why pessimistic algorithms can achieve good performance even when the dataset is not informative of every policy.
arXiv Detail & Related papers (2020-09-15T00:18:34Z)
- Stochastic Optimization Forests [60.523606291705214]
We show how to train forest decision policies by growing trees that choose splits to directly optimize the downstream decision quality, rather than splitting to improve prediction accuracy as in the standard random forest algorithm.
We show that our approximate splitting criteria can reduce running time hundredfold, while achieving performance close to forest algorithms that exactly re-optimize for every candidate split.
arXiv Detail & Related papers (2020-08-17T16:56:06Z)
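To make the split-criterion distinction in this summary concrete, here is a hedged sketch, not the paper's algorithm: a toy newsvendor cost, a plug-in per-leaf decision, and the two ways of scoring one candidate split. The cost function, the quantile decision rule, and the data are all illustrative assumptions.

```python
import numpy as np

def newsvendor_cost(z, y, cu=4.0, co=1.0):
    """Asymmetric downstream cost of decision z on outcomes y:
    underage penalty cu, overage penalty co (averaged per sample)."""
    return np.mean(cu * np.maximum(y - z, 0) + co * np.maximum(z - y, 0))

def plug_in_decision(y, cu=4.0, co=1.0):
    """Cost-minimizing decision for samples y: the critical quantile."""
    return np.quantile(y, cu / (cu + co))

def prediction_split_score(y_left, y_right):
    """Standard regression-tree criterion: total squared error after the split."""
    return sum(np.sum((y - y.mean()) ** 2) for y in (y_left, y_right))

def decision_split_score(y_left, y_right):
    """Decision-aware criterion: total downstream cost when each child
    leaf acts with its own plug-in decision."""
    return sum(len(y) * newsvendor_cost(plug_in_decision(y), y)
               for y in (y_left, y_right))

# Score one candidate split (x < 0.5) where children differ in spread only.
rng = np.random.default_rng(2)
x = rng.uniform(size=400)
y = np.where(x < 0.5, rng.normal(1.0, 0.2, 400), rng.normal(1.0, 1.0, 400))
mask = x < 0.5
print(prediction_split_score(y[mask], y[~mask]))
print(decision_split_score(y[mask], y[~mask]))
```

Because the two children share essentially the same mean, the squared-error criterion sees almost no gain from this split, while the decision-aware criterion rewards it: the critical-quantile decisions of the children differ, so acting per child lowers the downstream cost.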