The best of both worlds: stochastic and adversarial episodic MDPs with unknown transition
- URL: http://arxiv.org/abs/2106.04117v1
- Date: Tue, 8 Jun 2021 05:46:35 GMT
- Title: The best of both worlds: stochastic and adversarial episodic MDPs with unknown transition
- Authors: Tiancheng Jin, Longbo Huang, Haipeng Luo
- Abstract summary: We consider the best-of-both-worlds problem for learning an episodic Markov Decision Process through $T$ episodes.
Recent work by [Jin and Luo, 2020] achieves this goal when the fixed transition is known.
In this work, we resolve this open problem by using the same Follow-the-Regularized-Leader ($\text{FTRL}$) framework together with a set of new techniques.
- Score: 49.78053380710322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the best-of-both-worlds problem for learning an episodic Markov
Decision Process through $T$ episodes, with the goal of achieving
$\widetilde{\mathcal{O}}(\sqrt{T})$ regret when the losses are adversarial and
simultaneously $\mathcal{O}(\text{polylog}(T))$ regret when the losses are
(almost) stochastic. Recent work by [Jin and Luo, 2020] achieves this goal when
the fixed transition is known, and leaves the case of unknown transition as a
major open question. In this work, we resolve this open problem by using the
same Follow-the-Regularized-Leader ($\text{FTRL}$) framework together with a
set of new techniques. Specifically, we first propose a loss-shifting trick in
the $\text{FTRL}$ analysis, which greatly simplifies the approach of [Jin and
Luo, 2020] and already improves their results for the known transition case.
Then, we extend this idea to the unknown transition case and develop a novel
analysis which upper bounds the transition estimation error by (a fraction of)
the regret itself in the stochastic setting, a key property to ensure
$\mathcal{O}(\text{polylog}(T))$ regret.
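As a rough, generic sketch (with placeholder notation $\Omega$, $\phi_t$, $g_s$, not the paper's exact construction), the $\text{FTRL}$ framework over occupancy measures selects
$$\widehat{q}_t \;=\; \operatorname*{argmin}_{q \in \Omega}\; \Big\langle q,\; \sum_{s<t} \widehat{\ell}_s \Big\rangle + \phi_t(q),$$
where $\Omega$ is a set of (estimated) occupancy measures, $\widehat{\ell}_s$ are loss estimators, and $\phi_t$ is a regularizer. A loss-shifting trick of the kind described above replaces each $\widehat{\ell}_s$ with $\widehat{\ell}_s - g_s$ for a shift $g_s$ such that $\langle q, g_s \rangle$ is (essentially) the same for every $q \in \Omega$: the argmin, and hence the algorithm, is unchanged, while the stability and penalty terms in the $\text{FTRL}$ analysis become easier to control.
The key property in the stochastic setting can then be read as a self-bounding argument: if the transition estimation error $E_T$ is bounded by a fraction of the regret, say $E_T \le \frac{1}{2}\mathrm{Reg}_T + \mathcal{O}(\text{polylog}(T))$, and the analysis yields $\mathrm{Reg}_T \le \mathcal{O}(\text{polylog}(T)) + E_T$, then combining the two inequalities gives $\mathrm{Reg}_T = \mathcal{O}(\text{polylog}(T))$.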
Related papers
- Near-Optimal Dynamic Regret for Adversarial Linear Mixture MDPs [63.47351876442425]
We study episodic linear mixture MDPs with the unknown transition and adversarial rewards under full-information feedback.
We propose a novel algorithm that combines the benefits of two popular methods: occupancy-measure-based and policy-based.
Our algorithm enjoys an $\widetilde{\mathcal{O}}(d\sqrt{H^3K} + \sqrt{HK(H + \bar{P}_K)})$ dynamic regret, where $d$ is the feature dimension.
arXiv Detail & Related papers (2024-11-05T13:55:52Z) - No-Regret Online Reinforcement Learning with Adversarial Losses and
Transitions [39.864174754882754]
Existing online learning algorithms for adversarial Markov Decision Processes achieve $O(\sqrt{T})$ regret after $T$ rounds of interactions, but only when the transition function is fixed: it has been shown that adversarial transition functions make no-regret learning impossible.
We propose an algorithm that enjoys $\widetilde{O}(\sqrt{T} + C^{\textsf{P}})$ regret, where $C^{\textsf{P}}$ measures how adversarial the transition functions are.
arXiv Detail & Related papers (2023-05-27T06:10:17Z) - Near-Optimal Adversarial Reinforcement Learning with Switching Costs [43.895798638743784]
We show how to develop a provably efficient algorithm for adversarial RL with switching costs.
Our lower bound indicates that, due to the fundamental challenge of switching costs in adversarial RL, the best regret previously achieved without switching costs is no longer achievable.
We propose two novel switching-reduced algorithms with regrets that match our lower bound when the transition function is known.
arXiv Detail & Related papers (2023-02-08T23:41:29Z) - Horizon-Free and Variance-Dependent Reinforcement Learning for Latent
Markov Decision Processes [62.90204655228324]
We study regret minimization for reinforcement learning (RL) in Latent Markov Decision Processes (LMDPs) with context in hindsight.
We design a novel model-based algorithmic framework which can be instantiated with both a model-optimistic and a value-optimistic solver.
arXiv Detail & Related papers (2022-10-20T21:32:01Z) - Near-Optimal Regret Bounds for Multi-batch Reinforcement Learning [54.806166861456035]
We study the episodic reinforcement learning (RL) problem modeled by finite-horizon Markov Decision Processes (MDPs) with a constraint on the number of batches.
We design a computationally efficient algorithm that achieves near-optimal regret of $\tilde{O}(\sqrt{SAH^3K\ln(1/\delta)})$ in $K$ episodes, where $\tilde{O}(\cdot)$ hides logarithmic terms of $(S,A,H,K)$.
Our technical contributions are two-fold: 1) a near-optimal design scheme to explore
arXiv Detail & Related papers (2022-10-15T09:22:22Z) - Double Thompson Sampling in Finite stochastic Games [10.559241770674724]
We consider the trade-off between exploration and exploitation in finite discounted Markov Decision Processes.
We propose a double Thompson sampling reinforcement learning algorithm (DTS) to solve this kind of problem.
arXiv Detail & Related papers (2022-02-21T06:11:51Z) - Provably Breaking the Quadratic Error Compounding Barrier in Imitation
Learning, Optimally [58.463668865380946]
We study the statistical limits of Imitation Learning in episodic Markov Decision Processes (MDPs) with a state space $\mathcal{S}$.
We establish an upper bound of $O(|\mathcal{S}|H^{3/2}/N)$ on the suboptimality using the Mimic-MD algorithm of Rajaraman et al. (2020).
We show the minimax suboptimality grows as $\Omega(H^{3/2}/N)$ when $|\mathcal{S}| \geq 3$, while the unknown-transition setting suffers from a larger sharp rate.
arXiv Detail & Related papers (2021-02-25T15:50:19Z) - Finding the Stochastic Shortest Path with Low Regret: The Adversarial
Cost and Unknown Transition Case [29.99619764839178]
We make significant progress toward solving the stochastic shortest path problem with adversarial costs and unknown transition.
Specifically, we develop algorithms that achieve $\widetilde{O}(\sqrt{S^3A^2DT_\star K})$ regret for the full-information setting.
We are also the first to consider the most challenging combination: bandit feedback with adversarial costs and unknown transition.
arXiv Detail & Related papers (2021-02-10T06:33:04Z) - Simultaneously Learning Stochastic and Adversarial Episodic MDPs with
Known Transition [38.28925339231888]
We develop the first algorithm with a "best-of-both-worlds" guarantee.
It achieves $\mathcal{O}(\log T)$ regret when the losses are stochastic, and simultaneously $\tilde{\mathcal{O}}(\sqrt{T})$ regret when they are adversarial.
More generally, it achieves $\tilde{\mathcal{O}}(\sqrt{C})$ regret in an intermediate setting where the losses are corrupted by a total amount of $C$.
arXiv Detail & Related papers (2020-06-10T01:59:34Z) - Upper Confidence Primal-Dual Reinforcement Learning for CMDP with
Adversarial Loss [145.54544979467872]
We consider online learning for episodically constrained Markov decision processes (CMDPs).
We propose a new upper confidence primal-dual algorithm, which only requires the trajectories sampled from the transition model.
Our analysis incorporates a new high-probability drift analysis of Lagrange multiplier processes into the celebrated regret analysis of upper confidence reinforcement learning.
arXiv Detail & Related papers (2020-03-02T05:02:23Z)