Improved Corruption Robust Algorithms for Episodic Reinforcement Learning
- URL: http://arxiv.org/abs/2102.06875v1
- Date: Sat, 13 Feb 2021 07:04:23 GMT
- Title: Improved Corruption Robust Algorithms for Episodic Reinforcement Learning
- Authors: Yifang Chen, Simon S. Du, Kevin Jamieson
- Abstract summary: We study episodic reinforcement learning under unknown adversarial corruptions in both the rewards and the transition probabilities of the underlying system.
We propose new algorithms which, compared to the existing results, achieve strictly better regret bounds in terms of total corruptions.
Our results follow from a general algorithmic framework that combines corruption-robust policy elimination meta-algorithms, and plug-in reward-free exploration sub-algorithms.
- Score: 43.279169081740726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study episodic reinforcement learning under unknown adversarial
corruptions in both the rewards and the transition probabilities of the
underlying system. We propose new algorithms which, compared to the existing
results in (Lykouris et al., 2020), achieve strictly better regret bounds in
terms of total corruptions for the tabular setting. To be specific, firstly,
our regret bounds depend on more precise numerical values of total rewards
corruptions and transition corruptions, instead of only on the total number of
corrupted episodes. Secondly, our regret bounds are the first of their kind in
the reinforcement learning setting to have the number of corruptions show up
additively with respect to $\sqrt{T}$ rather than multiplicatively. Our results
follow from a general algorithmic framework that combines corruption-robust
policy elimination meta-algorithms, and plug-in reward-free exploration
sub-algorithms. Replacing the meta-algorithm or sub-algorithm may extend the
framework to address other corrupted settings with potentially more structure.
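The abstract's framework pairs a policy-elimination meta-algorithm with a plug-in exploration sub-algorithm. As a rough illustration only (this is not the paper's algorithm), the elimination loop can be sketched as follows; here `evaluate` is a hypothetical stand-in for the reward-free exploration sub-routine, and the confidence width is purely illustrative:

```python
import random

def policy_elimination(policies, evaluate, num_epochs=5, episodes_per_epoch=100, seed=0):
    """Toy sketch of a policy-elimination meta-loop (illustrative only).

    `policies` is a list of candidate policies; `evaluate(pi, n, rng)` is a
    plug-in exploration sub-routine returning an estimated value of `pi`
    from n episodes. Each epoch, all surviving policies are re-estimated
    and those whose estimate falls below the best by more than twice a
    shrinking confidence width are eliminated.
    """
    rng = random.Random(seed)
    active = list(policies)
    for epoch in range(num_epochs):
        n = episodes_per_epoch * (2 ** epoch)       # doubling episode schedule
        estimates = {pi: evaluate(pi, n, rng) for pi in active}
        width = 1.0 / (n ** 0.5)                    # illustrative confidence width
        best = max(estimates.values())
        active = [pi for pi in active if estimates[pi] >= best - 2 * width]
    return active
```

In the corrupted setting, the point of such a meta-loop is that a bounded amount of corruption can shift the estimates by only a limited amount per epoch, so near-optimal policies survive; the actual robustness analysis and exploration strategy are the paper's contribution and are not reproduced here.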
Related papers
- Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification [17.288347876319126]
In linear bandits, how can a learner effectively learn when facing corrupted rewards?
We compare two types of corruptions commonly considered: strong corruption, where the corruption level depends on the action chosen by the learner, and weak corruption, where the corruption level does not depend on the action chosen by the learner.
For linear bandits, we fully characterize the gap between the minimax regret under strong and weak corruptions.
arXiv Detail & Related papers (2024-10-10T02:01:46Z)
- Corruption-Robust Algorithms with Uncertainty Weighting for Nonlinear Contextual Bandits and Markov Decision Processes [59.61248760134937]
We propose an efficient algorithm that achieves a regret of $\tilde{O}(\sqrt{T}+\zeta)$.
The proposed algorithm relies on the recently developed uncertainty-weighted least-squares regression from linear contextual bandits.
We generalize our algorithm to the episodic MDP setting and obtain the first additive dependence on the corruption level $\zeta$.
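The uncertainty-weighting idea can be illustrated with a small sketch (not the paper's algorithm): down-weight samples before solving a ridge regression, so that highly uncertain or potentially corrupted observations contribute less to the estimate. The function name, the two-feature restriction, and the choice of weights below are all illustrative assumptions:

```python
def weighted_ridge(xs, ys, weights, lam=1.0):
    """Solve (X^T W X + lam*I) theta = X^T W y for 2-dimensional features.

    xs: list of (x1, x2) feature pairs; ys: targets; weights: per-sample
    weights (e.g. smaller for more uncertain samples). Uses a closed-form
    2x2 solve to stay dependency-free.
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x1, x2), y, w in zip(xs, ys, weights):
        a11 += w * x1 * x1; a12 += w * x1 * x2; a22 += w * x2 * x2
        b1 += w * x1 * y;   b2 += w * x2 * y
    a11 += lam; a22 += lam                      # ridge regularization
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With all weights equal to 1 this reduces to ordinary ridge regression; the corruption-robust variants described in the paper choose the weights from uncertainty estimates, a rule not reproduced in this sketch.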
arXiv Detail & Related papers (2022-12-12T15:04:56Z)
- Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions [98.75618795470524]
We study the linear contextual bandit problem in the presence of adversarial corruption.
We propose a new algorithm based on the principle of optimism in the face of uncertainty.
arXiv Detail & Related papers (2022-05-13T17:58:58Z)
- A Robust Phased Elimination Algorithm for Corruption-Tolerant Gaussian Process Bandits [118.22458816174144]
We propose a novel robust elimination-type algorithm that runs in epochs, combines exploration with infrequent switching to select a small subset of actions, and plays each action for multiple time instants.
Our algorithm, GP Robust Phased Elimination (RGP-PE), successfully balances robustness to corruptions with exploration and exploitation.
We perform the first empirical study of robustness in the corrupted GP bandit setting, and show that our algorithm is robust against a variety of adversarial attacks.
arXiv Detail & Related papers (2022-02-03T21:19:36Z)
- Linear Contextual Bandits with Adversarial Corruptions [91.38793800392108]
We study the linear contextual bandit problem in the presence of adversarial corruption.
We present a variance-aware algorithm that is adaptive to the level of adversarial contamination $C$.
arXiv Detail & Related papers (2021-10-25T02:53:24Z)
- On Optimal Robustness to Adversarial Corruption in Online Decision Problems [27.68461396741871]
We show that optimal robustness can be expressed by a square-root dependency on the amount of corruption.
For the multi-armed bandit problem, we also provide a nearly tight lower bound up to a logarithmic factor.
arXiv Detail & Related papers (2021-09-22T18:26:45Z)
- Corralling Stochastic Bandit Algorithms [54.10645564702416]
We show that the regret of the corralling algorithms is no worse than that of the best algorithm containing the arm with the highest reward.
We show that the bound depends on the gap between the highest reward and the rewards of the other arms.
arXiv Detail & Related papers (2020-06-16T15:33:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.