Structural Estimation of Markov Decision Processes in High-Dimensional
State Space with Finite-Time Guarantees
- URL: http://arxiv.org/abs/2210.01282v3
- Date: Fri, 1 Mar 2024 18:31:18 GMT
- Title: Structural Estimation of Markov Decision Processes in High-Dimensional
State Space with Finite-Time Guarantees
- Authors: Siliang Zeng, Mingyi Hong, Alfredo Garcia
- Abstract summary: We consider the task of estimating a structural model of dynamic decisions by a human agent based upon the observable history of implemented actions and visited states.
This problem has an inherent nested structure: in the inner problem, an optimal policy for a given reward function is identified while in the outer problem, a measure of fit is maximized.
We propose a single-loop estimation algorithm with finite-time guarantees that is equipped to deal with high-dimensional state spaces.
- Score: 39.287388288477096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the task of estimating a structural model of dynamic decisions by
a human agent based upon the observable history of implemented actions and
visited states. This problem has an inherent nested structure: in the inner
problem, an optimal policy for a given reward function is identified while in
the outer problem, a measure of fit is maximized. Several approaches have been
proposed to alleviate the computational burden of this nested-loop structure,
but these methods still suffer from high complexity when the state space is
either discrete with large cardinality or continuous in high dimensions. Other
approaches in the inverse reinforcement learning (IRL) literature emphasize
policy estimation at the expense of reduced reward estimation accuracy. In this
paper we propose a single-loop estimation algorithm with finite time guarantees
that is equipped to deal with high-dimensional state spaces without
compromising reward estimation accuracy. In the proposed algorithm, each policy
improvement step is followed by a stochastic gradient step for likelihood
maximization. We show that the proposed algorithm converges to a stationary
solution with a finite-time guarantee. Further, if the reward is parameterized
linearly, we show that the algorithm approximates the maximum likelihood
estimator sublinearly. Finally, by using robotics control problems in MuJoCo
and their transfer settings, we show that the proposed algorithm achieves
superior performance compared with other IRL and imitation learning benchmarks.
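To make the single-loop idea concrete, the following is a minimal, hypothetical sketch for a small tabular MDP with a linearly parameterized reward r_theta(s, a) = phi(s, a)^T theta: each iteration performs one soft policy-improvement (Bellman) step under the current reward estimate, followed by one stochastic gradient step on a surrogate of the expert log-likelihood. All names (phi, theta, expert_sa) and the specific gradient surrogate are illustrative assumptions, not the authors' implementation.
```python
import numpy as np

# Hypothetical sketch of a single-loop estimation scheme: interleave one soft
# policy-improvement step with one stochastic likelihood-gradient step.
# Tabular MDP and linear reward r_theta(s, a) = phi(s, a) @ theta are assumptions.

rng = np.random.default_rng(0)
n_states, n_actions, gamma, alpha, eta = 5, 3, 0.9, 0.5, 0.05

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transition kernel P[s, a, s']
phi = rng.normal(size=(n_states, n_actions, 4))                   # reward features phi(s, a)
theta = np.zeros(4)                                                # reward parameters
Q = np.zeros((n_states, n_actions))                                # soft Q estimate
# Placeholder "expert" demonstrations; in practice these are observed (state, action) pairs.
expert_sa = [(rng.integers(n_states), rng.integers(n_actions)) for _ in range(200)]

def soft_policy(Q):
    """Boltzmann policy induced by the current soft Q-values."""
    z = Q - Q.max(axis=1, keepdims=True)
    pi = np.exp(z)
    return pi / pi.sum(axis=1, keepdims=True)

def soft_value(Q):
    """Numerically stable soft value V(s) = log sum_a exp(Q(s, a))."""
    m = Q.max(axis=1)
    return m + np.log(np.exp(Q - m[:, None]).sum(axis=1))

for t in range(500):
    # (1) Policy-improvement step: one damped soft Bellman backup under the current reward.
    r = phi @ theta
    Q = (1 - alpha) * Q + alpha * (r + gamma * (P @ soft_value(Q)))

    # (2) Stochastic gradient step on a log-likelihood surrogate for one sampled expert pair:
    # expert feature minus the policy-averaged feature at that state (a common
    # approximation of the likelihood gradient in ML-IRL-style methods).
    s, a = expert_sa[rng.integers(len(expert_sa))]
    pi = soft_policy(Q)
    grad = phi[s, a] - pi[s] @ phi[s]
    theta += eta * grad

print("estimated reward parameters:", theta)
```
The point of the sketch is the interleaving: the policy is never solved to convergence for a fixed reward, which is what removes the nested-loop structure described in the abstract.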
Related papers
- Deterministic Trajectory Optimization through Probabilistic Optimal Control [3.2771631221674333]
We propose two new algorithms for discrete-time deterministic finite-horizon nonlinear optimal control problems.
Both algorithms are inspired by a novel theoretical paradigm known as probabilistic optimal control.
We show that the application of this paradigm results in a fixed-point iteration over probabilistic policies that converges to the deterministic optimal policy.
arXiv Detail & Related papers (2024-07-18T09:17:47Z) - Regularized Q-Learning with Linear Function Approximation [2.765106384328772]
We consider a bi-level optimization formulation of regularized Q-learning with linear function approximation.
We show that, under certain assumptions, the proposed algorithm converges to a stationary point in the presence of Markovian noise.
arXiv Detail & Related papers (2024-01-26T20:45:40Z) - A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems [12.29270365918848]
The proposed algorithm's treatment of the bound constraints distinguishes it from other interior-point methods.
It is shown that, with a careful balance between the projection, step-size, and neighborhood-parameter sequences, the proposed algorithm attains convergence guarantees in both deterministic and stochastic settings.
arXiv Detail & Related papers (2023-04-28T15:30:43Z) - Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time
Guarantees [56.848265937921354]
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy.
Many algorithms for IRL have an inherently nested structure.
We develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy.
arXiv Detail & Related papers (2022-10-04T17:13:45Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of the area under the precision-recall curve (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with an improved iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - An Efficient Algorithm for Deep Stochastic Contextual Bandits [10.298368632706817]
In contextual bandit problems, an agent selects an action based on certain observed context to maximize the reward over iterations.
Recently there have been a few studies using a deep neural network (DNN), trained by a gradient-based method, to predict the expected reward of an action.
arXiv Detail & Related papers (2021-04-12T16:34:43Z) - Average-Reward Off-Policy Policy Evaluation with Function Approximation [66.67075551933438]
We consider off-policy policy evaluation with function approximation in average-reward MDPs.
Bootstrapping is necessary and, together with off-policy learning and function approximation, results in the deadly triad.
We propose two novel algorithms, reproducing the celebrated success of Gradient TD algorithms in the average-reward setting.
arXiv Detail & Related papers (2021-01-08T00:43:04Z) - An Asymptotically Optimal Primal-Dual Incremental Algorithm for
Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z) - High-Dimensional Robust Mean Estimation via Gradient Descent [73.61354272612752]
We show that the problem of robust mean estimation in the presence of a constant adversarial fraction can be solved by gradient descent.
Our work establishes an intriguing connection between non-convex optimization and high-dimensional robust statistics.
arXiv Detail & Related papers (2020-05-04T10:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.