Learning Infinite-Horizon Average-Reward Markov Decision Processes with
Constraints
- URL: http://arxiv.org/abs/2202.00150v1
- Date: Mon, 31 Jan 2022 23:52:34 GMT
- Title: Learning Infinite-Horizon Average-Reward Markov Decision Processes with
Constraints
- Authors: Liyu Chen, Rahul Jain, Haipeng Luo
- Abstract summary: We study regret for infinite-horizon average-reward Markov Decision Processes (MDPs) under cost constraints.
Our algorithm ensures $\widetilde{O}(\sqrt{T})$ regret and constant constraint violation for ergodic MDPs.
These are the first provable algorithms for weakly communicating MDPs with cost constraints.
- Score: 39.715977181666766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study regret minimization for infinite-horizon average-reward Markov
Decision Processes (MDPs) under cost constraints. We start by designing a
policy optimization algorithm with carefully designed action-value estimator
and bonus term, and show that for ergodic MDPs, our algorithm ensures
$\widetilde{O}(\sqrt{T})$ regret and constant constraint violation, where $T$
is the total number of time steps. This strictly improves over the algorithm of
(Singh et al., 2020), whose regret and constraint violation are both
$\widetilde{O}(T^{2/3})$. Next, we consider the most general class of weakly
communicating MDPs. Through a finite-horizon approximation, we develop another
algorithm with $\widetilde{O}(T^{2/3})$ regret and constraint violation, which
can be further improved to $\widetilde{O}(\sqrt{T})$ via a simple modification,
albeit making the algorithm computationally inefficient. As far as we know,
these are the first set of provable algorithms for weakly communicating MDPs
with cost constraints.
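The abstract describes a policy optimization method that combines an action-value estimator, an exploration bonus, and explicit handling of the cost constraint. The sketch below is only a rough illustration of that general recipe, not the paper's algorithm: a primal-dual, tabular policy-optimization loop with empirical action-value estimates, a count-based optimism bonus, a multiplicative-weights policy update, and a Lagrange multiplier for the cost constraint. The environment interface, the bonus form, and all parameter names are assumptions made for this sketch; the paper's estimator and bonus are designed differently to obtain the $\widetilde{O}(\sqrt{T})$ regret and constant violation guarantees.

```python
# Illustrative sketch only (not the paper's algorithm): primal-dual policy
# optimization for a tabular constrained MDP with a count-based optimism
# bonus.  The env interface (reset/step returning reward and cost) and all
# hyperparameters are assumptions made for this sketch.
import numpy as np

def constrained_policy_optimization(env, S, A, T, cost_budget,
                                    eta=0.1, lr_lam=0.01, bonus_scale=1.0):
    policy = np.full((S, A), 1.0 / A)   # start from the uniform policy
    counts = np.ones((S, A))            # visit counts, used for bonuses
    q_reward = np.zeros((S, A))         # running reward action-value estimates
    q_cost = np.zeros((S, A))           # running cost action-value estimates
    lam = 0.0                           # Lagrange multiplier for the constraint
    s = env.reset()
    for _ in range(T):
        a = np.random.choice(A, p=policy[s])
        s_next, reward, cost = env.step(a)   # assumed env interface

        # crude running averages standing in for the paper's estimator
        step = 1.0 / counts[s, a]
        q_reward[s, a] += step * (reward - q_reward[s, a])
        q_cost[s, a] += step * (cost - q_cost[s, a])
        counts[s, a] += 1

        # optimism: bonus shrinks as state-action pairs are visited more often
        bonus = bonus_scale / np.sqrt(counts[s])

        # multiplicative-weights update on the Lagrangian advantage
        adv = q_reward[s] + bonus - lam * q_cost[s]
        policy[s] = policy[s] * np.exp(eta * adv)
        policy[s] /= policy[s].sum()

        # dual step: raise the multiplier whenever the cost budget is exceeded
        lam = max(0.0, lam + lr_lam * (cost - cost_budget))
        s = s_next
    return policy
```

In this style of method, the dual update is what keeps the long-run average cost near the budget, while the bonus drives exploration on rarely visited state-action pairs.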
Related papers
- Best-of-Both-Worlds Policy Optimization for CMDPs with Bandit Feedback [34.7178680288326]
Stradi et al. (2024) proposed the first best-of-both-worlds algorithm for constrained Markov decision processes.
In this paper, we provide the first best-of-both-worlds algorithm for CMDPs with bandit feedback.
Our algorithm is based on a policy optimization approach, which is much more efficient than occupancy-measure-based methods.
arXiv Detail & Related papers (2024-10-03T07:44:40Z) - Achieving Tractable Minimax Optimal Regret in Average Reward MDPs [19.663336027878408]
We present the first tractable algorithm with minimax optimal regret of $\widetilde{\mathrm{O}}(\sqrt{\mathrm{sp}(h^*) S A T})$.
Remarkably, our algorithm does not require prior information on $\mathrm{sp}(h^*)$.
arXiv Detail & Related papers (2024-06-03T11:53:44Z) - Reinforcement Learning for Infinite-Horizon Average-Reward Linear MDPs via Approximation by Discounted-Reward MDPs [16.49229317664822]
We study infinite-horizon average-reward reinforcement learning with linear MDPs.
In this paper, we propose a computationally efficient algorithm that achieves a regret bound of $\widetilde{O}(\sqrt{T})$.
arXiv Detail & Related papers (2024-05-23T20:58:33Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Refined Regret for Adversarial MDPs with Linear Function Approximation [50.00022394876222]
We consider learning in an adversarial Markov Decision Process (MDP) where the loss functions can change arbitrarily over $K$ episodes.
This paper provides two algorithms that improve the regret to $\tilde{\mathcal{O}}(K^{2/3})$ in the same setting.
arXiv Detail & Related papers (2023-01-30T14:37:21Z) - Efficient Algorithms for Planning with Participation Constraints [74.74967476995572]
We consider the problem of planning with participation constraints introduced in [Zhang et al., 2022].
In this problem, a principal chooses actions in a decision process, resulting in separate utilities for the principal and the agent.
We provide the first exact algorithm for this problem in finite-horizon settings, where previously only an additive $\varepsilon$-approximation was known.
arXiv Detail & Related papers (2022-05-16T15:47:41Z) - A Provably Efficient Algorithm for Linear Markov Decision Process with
Low Switching Cost [53.968049198926444]
We present the first algorithm for linear MDP with a low switching cost.
Our algorithm achieves an $\widetilde{O}\left(\sqrt{d^3 H^4 K}\right)$ regret bound with a near-optimal $O\left(d H \log K\right)$ global switching cost.
arXiv Detail & Related papers (2021-01-02T18:41:27Z) - Learning Infinite-horizon Average-reward MDPs with Linear Function
Approximation [44.374427255708135]
We develop new algorithms for learning Markov Decision Processes in an infinite-horizon average-reward setting with linear function approximation.
Using the optimism principle and assuming that the MDP has a linear structure, we first propose a computationally inefficient algorithm with optimal $\widetilde{O}(\sqrt{T})$ regret.
Next, taking inspiration from adversarial linear bandits, we develop yet another efficient algorithm with $\widetilde{O}(\sqrt{T})$ regret.
arXiv Detail & Related papers (2020-07-23T08:23:44Z) - Provably Efficient Model-Free Algorithm for MDPs with Peak Constraints [38.2783003051101]
This paper considers the Peak Constrained Markov Decision Process (PCMDP), where the agent chooses a policy to maximize the total reward over a finite horizon while satisfying constraints at each epoch with probability 1.
We propose a model-free algorithm that converts the PCMDP problem into an unconstrained one and then applies a Q-learning based approach (a rough sketch of one such reduction appears after this list).
arXiv Detail & Related papers (2020-03-11T23:23:29Z)
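The last related paper above reduces a peak-constrained MDP to an unconstrained problem and then runs Q-learning. The sketch below illustrates one plausible (assumed) version of such a reduction, not necessarily the one used in that paper: per-step cost violations are folded into the reward as a large penalty, and finite-horizon tabular Q-learning is run on the shaped reward. The environment interface, the penalty form, and all parameters are assumptions for illustration only.

```python
# Hedged sketch: fold a per-step (peak) cost constraint into the reward as a
# large penalty, then run finite-horizon tabular Q-learning on the shaped
# reward.  The env interface, penalty form, and parameters are assumptions.
import numpy as np

def q_learning_with_peak_penalty(env, S, A, H, episodes,
                                 cost_limit, penalty=100.0,
                                 alpha=0.1, eps=0.1):
    # one Q-table per step of the horizon; Q[H] stays zero (terminal values)
    Q = np.zeros((H + 1, S, A))
    for _ in range(episodes):
        s = env.reset()
        for h in range(H):
            # epsilon-greedy action selection
            if np.random.rand() < eps:
                a = np.random.randint(A)
            else:
                a = int(np.argmax(Q[h, s]))
            s_next, reward, cost = env.step(a)   # assumed env interface

            # penalize any step whose cost exceeds the per-step (peak) limit
            shaped = reward - penalty * float(cost > cost_limit)

            # standard finite-horizon Q-learning update on the shaped reward
            target = shaped + np.max(Q[h + 1, s_next])
            Q[h, s, a] += alpha * (target - Q[h, s, a])
            s = s_next
    return Q
```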
This list is automatically generated from the titles and abstracts of the papers on this site.