Reinforcement Learning with Heterogeneous Data: Estimation and Inference
- URL: http://arxiv.org/abs/2202.00088v1
- Date: Mon, 31 Jan 2022 20:58:47 GMT
- Title: Reinforcement Learning with Heterogeneous Data: Estimation and Inference
- Authors: Elynn Y. Chen, Rui Song, Michael I. Jordan
- Abstract summary: We introduce the K-Heterogeneous Markov Decision Process (K-Hetero MDP) to address sequential decision problems with population heterogeneity.
We propose the Auto-Clustered Policy Evaluation (ACPE) for estimating the value of a given policy, and the Auto-Clustered Policy Iteration (ACPI) for estimating the optimal policy in a given policy class.
We present simulations to support our theoretical findings, and we conduct an empirical study on the standard MIMIC-III dataset.
- Score: 84.72174994749305
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement Learning (RL) has the promise of providing data-driven support
for decision-making in a wide range of problems in healthcare, education,
business, and other domains. Classical RL methods focus on the mean of the
total return and, thus, may provide misleading results in the setting of the
heterogeneous populations that commonly underlie large-scale datasets. We
introduce the K-Heterogeneous Markov Decision Process (K-Hetero MDP) to address
sequential decision problems with population heterogeneity. We propose the
Auto-Clustered Policy Evaluation (ACPE) for estimating the value of a given
policy, and the Auto-Clustered Policy Iteration (ACPI) for estimating the
optimal policy in a given policy class. Our auto-clustered algorithms can
automatically detect and identify homogeneous sub-populations, while estimating
the Q function and the optimal policy for each sub-population. We establish
convergence rates and construct confidence intervals for the estimators
obtained by the ACPE and ACPI. We present simulations to support our
theoretical findings, and we conduct an empirical study on the standard
MIMIC-III dataset. The latter analysis shows evidence of value heterogeneity
and confirms the advantages of our new method.
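The ACPE and ACPI procedures themselves are specified in the paper; as a rough picture of the underlying cluster-then-evaluate idea, the sketch below groups trajectories with k-means on hypothetical trajectory-level features and then runs fitted-Q evaluation separately within each group. The data layout, the feature choice, and the ridge function class are all assumptions made for this illustration, not the authors' algorithm.

```python
# Rough illustration only: cluster trajectories, then run fitted-Q evaluation
# per cluster. This is NOT the paper's ACPE/ACPI; the trajectory features,
# k-means clustering, and ridge Q-function class are assumptions for the sketch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def cluster_then_evaluate(trajectories, policy, n_clusters=2, gamma=0.95, n_iters=50):
    """trajectories: list of dicts with arrays 'states' (T, p), 'actions' (T,),
    'rewards' (T,), 'next_states' (T, p); policy: maps a state vector to an action."""
    # Hypothetical trajectory-level features: mean state and mean reward.
    feats = np.array([np.r_[t["states"].mean(axis=0), t["rewards"].mean()]
                      for t in trajectories])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)

    values = {}
    for k in range(n_clusters):
        group = [t for t, lab in zip(trajectories, labels) if lab == k]
        S = np.vstack([t["states"] for t in group])
        A = np.concatenate([t["actions"] for t in group])
        R = np.concatenate([t["rewards"] for t in group])
        S2 = np.vstack([t["next_states"] for t in group])
        X = np.hstack([S, A[:, None]])               # simple (state, action) features
        q = Ridge(alpha=1.0).fit(X, R)               # initialize with a reward regression
        for _ in range(n_iters):                     # fitted-Q evaluation iterations
            A2 = np.array([policy(s) for s in S2])
            y = R + gamma * q.predict(np.hstack([S2, A2[:, None]]))
            q = Ridge(alpha=1.0).fit(X, y)
        A0 = np.array([policy(s) for s in S])
        values[k] = q.predict(np.hstack([S, A0[:, None]])).mean()  # cluster-specific value
    return labels, values
```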
Related papers
- Hierarchical and Density-based Causal Clustering [6.082022112101251]
We propose plug-in estimators that are simple and readily implementable using off-the-shelf algorithms.
We go on to study their rate of convergence, and show that the additional cost of causal clustering is essentially the estimation error of the outcome regression functions.
arXiv Detail & Related papers (2024-11-02T14:01:04Z)
- Sample Complexity of Preference-Based Nonparametric Off-Policy Evaluation with Deep Networks [58.469818546042696]
We study the sample efficiency of OPE with human preference and establish a statistical guarantee for it.
By appropriately selecting the size of a ReLU network, we show that one can leverage any low-dimensional manifold structure in the Markov decision process.
arXiv Detail & Related papers (2023-10-16T16:27:06Z)
- Offline Policy Evaluation and Optimization under Confounding [35.778917456294046]
We map out the landscape of offline policy evaluation for confounded MDPs.
We characterize settings where consistent value estimates are provably not achievable.
We present new algorithms for offline policy improvement and prove local convergence guarantees.
arXiv Detail & Related papers (2022-11-29T20:45:08Z)
- GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP, and Beyond [101.5329678997916]
We study sample-efficient reinforcement learning (RL) under the general framework of interactive decision making.
We propose a novel complexity measure, generalized eluder coefficient (GEC), which characterizes the fundamental tradeoff between exploration and exploitation.
We show that RL problems with low GEC form a remarkably rich class, which subsumes problems with low Bellman eluder dimension, the bilinear class, low witness rank problems, the PO-bilinear class, and generalized regular PSRs.
arXiv Detail & Related papers (2022-11-03T16:42:40Z)
- Offline Reinforcement Learning with Instrumental Variables in Confounded Markov Decision Processes [93.61202366677526]
We study offline reinforcement learning (RL) in the face of unmeasured confounders.
We propose various policy learning methods with finite-sample suboptimality guarantees for finding the optimal in-class policy.
arXiv Detail & Related papers (2022-09-18T22:03:55Z)
- Targeted Optimal Treatment Regime Learning Using Summary Statistics [12.767669486030352]
We consider an ITR estimation problem where the source and target populations may be heterogeneous.
We develop a weighting framework that tailors an ITR for a given target population by leveraging the available summary statistics.
Specifically, we propose a calibrated augmented inverse probability weighted estimator of the value function for the target population and estimate an optimal ITR.
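The calibration of weights to target-population summary statistics is the paper's contribution and is not reproduced here; the snippet below only sketches a generic (uncalibrated) augmented inverse probability weighted value estimate for a single-stage, binary-treatment regime, with `regime`, `prop_model`, and `outcome_model` standing in for user-supplied fitted models.

```python
# Generic (uncalibrated) AIPW value estimate for a single-stage binary-treatment
# regime; the paper's calibration to target-population summary statistics is not
# implemented. 'regime', 'prop_model', and 'outcome_model' are user-supplied fits.
import numpy as np

def aipw_value(X, A, Y, regime, prop_model, outcome_model):
    """X: covariates (n, p); A: observed binary treatments (n,); Y: outcomes (n,);
    regime(X) -> recommended treatments (n,); prop_model(X) -> P(A=1 | X);
    outcome_model(X, a) -> E[Y | X, A=a] for a vector of treatments a."""
    d = regime(X)                               # treatment the regime would assign
    p1 = prop_model(X)
    prop_d = np.where(d == 1, p1, 1.0 - p1)     # propensity of the recommended treatment
    mu_d = outcome_model(X, d)                  # outcome regression at the recommended treatment
    follow = (A == d).astype(float)             # did the observed treatment match the regime?
    return np.mean(mu_d + follow / prop_d * (Y - outcome_model(X, A)))
```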
arXiv Detail & Related papers (2022-01-17T06:11:31Z)
- Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes [65.91730154730905]
In applications of offline reinforcement learning to observational data, such as in healthcare or education, a general concern is that observed actions might be affected by unobserved factors.
Here we tackle this by considering off-policy evaluation in a partially observed Markov decision process (POMDP).
We extend the framework of proximal causal inference to our POMDP setting, providing a variety of settings where identification is made possible.
arXiv Detail & Related papers (2021-10-28T17:46:14Z)
- Variance-Aware Off-Policy Evaluation with Linear Function Approximation [85.75516599931632]
We study the off-policy evaluation problem in reinforcement learning with linear function approximation.
We propose an algorithm, VA-OPE, which uses the estimated variance of the value function to reweight the Bellman residual in Fitted Q-Iteration.
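The precise VA-OPE weighting is defined in the paper; the sketch below only illustrates the general idea of inverse-variance reweighting in fitted-Q evaluation with linear features, with `sigma2_hat` a placeholder for any estimate of the regression-target variance.

```python
# Illustration of the reweighting idea only (not the exact VA-OPE procedure):
# fitted-Q evaluation with linear features, where each Bellman regression is a
# weighted least squares with externally supplied inverse-variance weights.
import numpy as np

def variance_weighted_fqe(phi, phi_next_pi, r, sigma2_hat, gamma=0.99,
                          n_iters=100, reg=1e-3):
    """phi: features of (s, a), shape (n, d); phi_next_pi: features of (s', pi(s'));
    r: rewards (n,); sigma2_hat: estimated variances of the regression targets (n,)."""
    d = phi.shape[1]
    w = 1.0 / np.clip(sigma2_hat, 1e-6, None)              # inverse-variance weights
    theta = np.zeros(d)
    for _ in range(n_iters):
        y = r + gamma * phi_next_pi @ theta                # Bellman targets under current theta
        A = phi.T @ (w[:, None] * phi) + reg * np.eye(d)   # weighted ridge normal equations
        b = phi.T @ (w * y)
        theta = np.linalg.solve(A, b)
    return theta                                           # Q_pi(s, a) ~ phi(s, a) @ theta
```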
arXiv Detail & Related papers (2021-06-22T17:58:46Z)
- Robust Batch Policy Learning in Markov Decision Processes [0.0]
We study the offline data-driven sequential decision-making problem within the framework of a Markov decision process (MDP).
We propose to evaluate each policy by a set of average rewards with respect to distributions centered at the policy-induced stationary distribution.
arXiv Detail & Related papers (2020-11-09T04:41:21Z)