Online learning in MDPs with linear function approximation and bandit
feedback
- URL: http://arxiv.org/abs/2007.01612v2
- Date: Sat, 12 Jun 2021 19:28:35 GMT
- Title: Online learning in MDPs with linear function approximation and bandit
feedback
- Authors: Gergely Neu and Julia Olkhovskaya
- Abstract summary: We consider an online learning problem where the learner interacts with a Markov decision process in a sequence of episodes.
We show that MDP-LinExp3 is the first provably efficient algorithm for this problem setting.
- Score: 13.32560004325655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider an online learning problem where the learner interacts with a
Markov decision process in a sequence of episodes, where the reward function is
allowed to change between episodes in an adversarial manner and the learner
only gets to observe the rewards associated with its actions. We allow the
state space to be arbitrarily large, but we assume that all action-value
functions can be represented as linear functions in terms of a known
low-dimensional feature map, and that the learner has access to a simulator of
the environment that allows generating trajectories from the true MDP dynamics.
Our main contribution is a computationally efficient algorithm, which we call
MDP-LinExp3, whose regret we prove is bounded by
$\widetilde{\mathcal{O}}\big(H^2 T^{2/3} (dK)^{1/3}\big)$, where $T$ is the
number of episodes, $H$ is the number of steps in each episode, $K$ is the
number of actions, and $d$ is the dimension of the feature map. We also show
that the regret can be improved to $\widetilde{\mathcal{O}}\big(H^2
\sqrt{TdK}\big)$ under much stronger assumptions on the MDP dynamics. To our
knowledge, MDP-LinExp3 is the first provably efficient algorithm for this
problem setting.
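As a rough illustration of the exponential-weights flavor of MDP-LinExp3 (not the paper's exact algorithm, which builds importance-weighted value estimates from simulator rollouts), the sketch below samples actions from a softmax over cumulative estimated action values that are linear in a known feature map. All names (phi, theta_hat, eta) and the placeholder estimator are illustrative assumptions, not taken from the paper.
```python
import numpy as np

# Minimal sketch (not the authors' exact MDP-LinExp3): an exponential-weights
# policy over K actions whose preferences are cumulative estimated action
# values, assumed linear in a known d-dimensional feature map phi(x, a).

def softmax_policy(phi, theta_sum, eta):
    """Action distribution pi(a|x) proportional to exp(eta * <phi(x,a), sum_t theta_hat_t>).

    phi:       (K, d) array, feature vectors of all actions in state x
    theta_sum: (d,) array, sum of estimated reward/value parameters so far
    eta:       learning rate
    """
    scores = eta * phi @ theta_sum          # (K,) cumulative estimated values
    scores -= scores.max()                  # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()

# Toy usage: d = 3 features, K = 4 actions, a few episodes of fake estimates.
rng = np.random.default_rng(0)
d, K, eta = 3, 4, 0.1
phi = rng.normal(size=(K, d))               # features of the current state
theta_sum = np.zeros(d)
for episode in range(5):
    pi = softmax_policy(phi, theta_sum, eta)
    action = rng.choice(K, p=pi)
    # In the actual algorithm, theta_hat would be an importance-weighted
    # estimate built from bandit feedback and simulator rollouts; here we
    # just draw a placeholder estimate.
    theta_hat = rng.normal(size=d)
    theta_sum += theta_hat
print("final policy:", softmax_policy(phi, theta_sum, eta))
```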
Related papers
- Provably Efficient Reinforcement Learning with Multinomial Logit Function Approximation [67.8414514524356]
We study a new class of MDPs that employs multinomial logit (MNL) function approximation to ensure valid probability distributions over the state space.
Introducing the non-linear function raises significant challenges for both computational and statistical efficiency.
We propose an algorithm that achieves the same regret with only $\mathcal{O}(1)$ cost.
arXiv Detail & Related papers (2024-05-27T11:31:54Z)
- Improved Algorithm for Adversarial Linear Mixture MDPs with Bandit Feedback and Unknown Transition [71.33787410075577]
We study reinforcement learning with linear function approximation, unknown transition, and adversarial losses.
We propose a new algorithm that attains an $\widetilde{O}(d\sqrt{HS^3K} + \sqrt{HSAK})$ regret with high probability.
arXiv Detail & Related papers (2024-03-07T15:03:50Z)
- Online RL in Linearly $q^\pi$-Realizable MDPs Is as Easy as in Linear MDPs If You Learn What to Ignore [0.0]
We consider online reinforcement learning in episodic Markov decision processes (MDPs)
We show that the difference between the two classes is the presence of states in linearly $q^\pi$-realizable MDPs in which, for any policy, all actions have approximately equal values.
We derive a novel (computationally inefficient) learning algorithm for linearly $q^\pi$-realizable MDPs.
arXiv Detail & Related papers (2023-10-11T18:50:25Z)
- Exploring and Learning in Sparse Linear MDPs without Computationally Intractable Oracles [39.10180309328293]
In this paper we revisit linear MDPs from the perspective of feature selection.
Our main result is the first polynomial-time algorithm for this problem.
We show that they do exist and can be computed efficiently via convex programming.
arXiv Detail & Related papers (2023-09-18T03:35:48Z)
- Nearly Optimal Regret for Learning Adversarial MDPs with Linear Function Approximation [92.3161051419884]
We study the reinforcement learning for finite-horizon episodic Markov decision processes with adversarial reward and full information feedback.
We show that it can achieve $\tilde{O}(dH\sqrt{T})$ regret, where $H$ is the length of the episode.
We also prove a matching lower bound of $\tilde{\Omega}(dH\sqrt{T})$ up to logarithmic factors.
arXiv Detail & Related papers (2021-02-17T18:54:08Z)
- Logarithmic Regret for Reinforcement Learning with Linear Function Approximation [99.59319332864129]
We show that logarithmic regret is attainable under two recently proposed linear MDP assumptions.
To the best of our knowledge, these are the first logarithmic regret bounds for RL with linear function approximation.
arXiv Detail & Related papers (2020-11-23T17:25:00Z)
- Provably Efficient Reinforcement Learning for Discounted MDPs with Feature Mapping [99.59319332864129]
In this paper, we study reinforcement learning for discounted Markov decision processes (MDPs).
We propose a novel algorithm that makes use of the feature mapping and obtains a $\tilde{O}(d\sqrt{T}/(1-\gamma)^2)$ regret.
Our upper and lower bound results together suggest that the proposed reinforcement learning algorithm is near-optimal up to a $(1-\gamma)^{-0.5}$ factor.
arXiv Detail & Related papers (2020-06-23T17:08:54Z)
- Learning Near Optimal Policies with Low Inherent Bellman Error [115.16037976819331]
We study the exploration problem with approximate linear action-value functions in episodic reinforcement learning.
We show that exploration is possible using only batch assumptions with an algorithm that achieves the optimal statistical rate for the setting we consider.
arXiv Detail & Related papers (2020-02-29T02:02:40Z)