Model Selection in Reinforcement Learning with General Function
Approximations
- URL: http://arxiv.org/abs/2207.02992v1
- Date: Wed, 6 Jul 2022 21:52:07 GMT
- Title: Model Selection in Reinforcement Learning with General Function
Approximations
- Authors: Avishek Ghosh and Sayak Ray Chowdhury
- Abstract summary: We consider model selection for Reinforcement Learning environments -- Multi Armed Bandits (MABs) and Markov Decision Processes (MDPs).
In the model selection framework, we do not know the function classes, denoted by $\mathcal{F}$ and $\mathcal{M}$, where the true models lie.
We show that the cumulative regret of our adaptive algorithms matches that of an oracle which knows the correct function classes.
- Score: 10.97775622611135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider model selection for classic Reinforcement Learning (RL)
environments -- Multi Armed Bandits (MABs) and Markov Decision Processes (MDPs)
-- under general function approximations. In the model selection framework, we
do not know the function classes, denoted by $\mathcal{F}$ and $\mathcal{M}$,
where the true models -- the reward generating function for MABs and the transition
kernel for MDPs -- lie, respectively. Instead, we are given $M$ nested function
(hypothesis) classes such that the true models are contained in at least one such
class. In this paper, we propose and analyze efficient model selection
algorithms for MABs and MDPs that \emph{adapt} to the smallest function class
(among the nested $M$ classes) containing the true underlying model. Under a
separability assumption on the nested hypothesis classes, we show that the
cumulative regret of our adaptive algorithms matches that of an oracle which
knows the correct function classes (i.e., $\mathcal{F}$ and $\mathcal{M}$) a priori.
Furthermore, for both settings, we show that the cost of model selection is
an additive term in the regret having weak (logarithmic) dependence on the
learning horizon $T$.
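To make the adaptive idea above concrete, here is a minimal, hypothetical Python sketch (not the authors' algorithm) of model selection over nested hypothesis classes: start from the smallest class and escalate to the next one whenever the fitted model's residual suggests misspecification, loosely mirroring the role the separability assumption plays. All names (NestedModelSelector, poly_class, the residual threshold) are illustrative placeholders.

import numpy as np


class NestedModelSelector:
    """Escalate through nested classes F_1, F_2, ..., F_M (hypothetical sketch)."""

    def __init__(self, model_classes, threshold=0.1):
        self.model_classes = model_classes  # each entry: fit(X, y) -> predictor callable
        self.threshold = threshold          # residual level treated as evidence of misspecification
        self.current = 0                    # index of the smallest class still trusted

    def select(self, X, y):
        """Fit the current class; move to the next (larger) class while the fit looks poor."""
        while self.current < len(self.model_classes) - 1:
            predictor = self.model_classes[self.current](X, y)
            residual = np.mean((predictor(X) - y) ** 2)
            if residual <= self.threshold:   # current class explains the data well enough
                return self.current, predictor
            self.current += 1                # escalate, adapting to a larger class
        return self.current, self.model_classes[self.current](X, y)


def poly_class(degree):
    """Toy nested classes: polynomial regressors of increasing degree."""
    def fit(X, y):
        coeffs = np.polyfit(X, y, degree)
        return lambda x: np.polyval(coeffs, x)
    return fit


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=200)
    y = 2.0 * X**2 - X + 0.05 * rng.normal(size=200)   # the "true model" is quadratic
    selector = NestedModelSelector([poly_class(d) for d in (1, 2, 3)])
    idx, _ = selector.select(X, y)
    print("selected class index:", idx)   # expected: 1, i.e. the quadratic class

In the paper's MAB and MDP settings the data arrive online and the escalation test is statistical, so the actual procedures are considerably more involved than this offline toy.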
Related papers
- A Complete Characterization of Learnability for Stochastic Noisy Bandits [19.35221816408955]
We study the noisy bandit problem with an unknown reward function $f^*$ in a known function class $\mathcal{F}$.
We give a complete characterization of learnability for model classes with arbitrary noise.
We also prove adaptivity is sometimes necessary to achieve the optimal query complexity.
arXiv Detail & Related papers (2024-10-12T17:23:34Z) - Agnostically Learning Multi-index Models with Queries [54.290489524576756]
We study the power of query access for the task of agnostic learning under the Gaussian distribution.
We show that query access gives significant runtime improvements over random examples for agnostically learning MIMs.
arXiv Detail & Related papers (2023-12-27T15:50:47Z) - A Nearly Optimal and Low-Switching Algorithm for Reinforcement Learning
with General Function Approximation [66.26739783789387]
We propose a new algorithm, Monotonic Q-Learning with Upper Confidence Bound (MQL-UCB) for reinforcement learning.
MQL-UCB achieves minimax optimal regret of $\tilde{O}(d\sqrt{HK})$ when $K$ is sufficiently large and near-optimal policy switching cost.
Our work sheds light on designing provably sample-efficient and deployment-efficient Q-learning with nonlinear function approximation.
arXiv Detail & Related papers (2023-11-26T08:31:57Z) - Value-Distributional Model-Based Reinforcement Learning [59.758009422067]
Quantifying uncertainty about a policy's long-term performance is important to solve sequential decision-making tasks.
We study the problem from a model-based Bayesian reinforcement learning perspective.
We propose Epistemic Quantile-Regression (EQR), a model-based algorithm that learns a value distribution function.
arXiv Detail & Related papers (2023-08-12T14:59:19Z) - Oracle Inequalities for Model Selection in Offline Reinforcement
Learning [105.74139523696284]
We study the problem of model selection in offline RL with value function approximation.
We propose the first model selection algorithm for offline RL that achieves minimax rate-optimal oracle inequalities up to logarithmic factors.
We conclude with several numerical simulations showing it is capable of reliably selecting a good model class.
arXiv Detail & Related papers (2022-11-03T17:32:34Z) - Universal and data-adaptive algorithms for model selection in linear
contextual bandits [52.47796554359261]
We consider the simplest non-trivial instance of model-selection: distinguishing a simple multi-armed bandit problem from a linear contextual bandit problem.
We introduce new algorithms that explore in a data-adaptive manner and provide guarantees of the form $\mathcal{O}(d^{\alpha} T^{1-\alpha})$.
Our approach extends to model selection among nested linear contextual bandits under some additional assumptions.
arXiv Detail & Related papers (2021-11-08T18:05:35Z) - Near-Optimal Reward-Free Exploration for Linear Mixture MDPs with
Plug-in Solver [32.212146650873194]
We provide approaches to learn an RL model efficiently without the guidance of a reward signal.
In particular, we take a plug-in solver approach, where we focus on learning a model in the exploration phase.
We show that, by establishing a novel exploration algorithm, the plug-in approach learns a model by taking $\tilde{O}(d^2 H^3/\epsilon^2)$ interactions with the environment.
arXiv Detail & Related papers (2021-10-07T07:59:50Z) - Model Selection with Near Optimal Rates for Reinforcement Learning with
General Model Classes [27.361399036211694]
We address the problem of model selection for the finite horizon episodic Reinforcement Learning (RL) problem.
In the model selection framework, instead of $\mathcal{P}^*$, we are given $M$ nested families of transition kernels.
We show that \texttt{ARL-GEN} obtains a regret of $\tilde{\mathcal{O}}(d_{\mathcal{E}}^* H^2 + \sqrt{d_{\mathcal{E}}^* \mathbb{M}^* H^2 T})$.
arXiv Detail & Related papers (2021-07-13T05:00:38Z) - Provably Efficient Reinforcement Learning for Discounted MDPs with
Feature Mapping [99.59319332864129]
In this paper, we study reinforcement learning for discounted Markov Decision Processes (MDPs).
We propose a novel algorithm that makes use of the feature mapping and obtains a $\tilde{O}(d\sqrt{T}/(1-\gamma)^2)$ regret.
Our upper and lower bound results together suggest that the proposed reinforcement learning algorithm is near-optimal up to a $(1-\gamma)^{-0.5}$ factor.
arXiv Detail & Related papers (2020-06-23T17:08:54Z) - A Model-free Learning Algorithm for Infinite-horizon Average-reward MDPs
with Near-optimal Regret [44.374427255708135]
We propose Exploration Enhanced Q-learning (EE-QL), a model-free algorithm for infinite-horizon average-reward Markov Decision Processes (MDPs).
EE-QL assumes that an online concentrating approximation of the optimal average reward is available.
This is the first model-free learning algorithm that achieves $O(\sqrt{T})$ regret without the ergodic assumption, and matches the lower bound in terms of $T$ except for logarithmic factors.
arXiv Detail & Related papers (2020-06-08T05:09:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.