Perturbational Complexity by Distribution Mismatch: A Systematic
Analysis of Reinforcement Learning in Reproducing Kernel Hilbert Space
- URL: http://arxiv.org/abs/2111.03469v1
- Date: Fri, 5 Nov 2021 12:46:04 GMT
- Title: Perturbational Complexity by Distribution Mismatch: A Systematic
Analysis of Reinforcement Learning in Reproducing Kernel Hilbert Space
- Authors: Jihao Long, Jiequn Han
- Abstract summary: We analyze reinforcement learning in a general reproducing kernel Hilbert space (RKHS).
We consider a family of Markov decision processes $\mathcal{M}$ whose reward functions lie in the unit ball of an RKHS.
We show that when the reward functions lie in a high dimensional RKHS, even if the transition probability is known and the action space is finite, it is still possible for RL problems to suffer from the curse of dimensionality.
- Score: 0.76146285961466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing theoretical analysis of reinforcement learning (RL) is limited
to the tabular setting or linear models due to the difficulty in dealing with
function approximation in high dimensional space with an uncertain environment.
This work offers a fresh perspective into this challenge by analyzing RL in a
general reproducing kernel Hilbert space (RKHS). We consider a family of Markov
decision processes $\mathcal{M}$ of which the reward functions lie in the unit
ball of an RKHS and transition probabilities lie in a given arbitrary set. We
define a quantity called perturbational complexity by distribution mismatch
$\Delta_{\mathcal{M}}(\epsilon)$ to characterize the complexity of the
admissible state-action distribution space in response to a perturbation in the
RKHS with scale $\epsilon$. We show that $\Delta_{\mathcal{M}}(\epsilon)$ gives
both the lower bound of the error of all possible algorithms and the upper
bound of two specific algorithms (fitted reward and fitted Q-iteration) for the
RL problem. Hence, the decay of $\Delta_\mathcal{M}(\epsilon)$ with respect to
$\epsilon$ measures the difficulty of the RL problem on $\mathcal{M}$. We
further provide some concrete examples and discuss whether
$\Delta_{\mathcal{M}}(\epsilon)$ decays fast or not in these examples. As a
byproduct, we show that when the reward functions lie in a high dimensional
RKHS, even if the transition probability is known and the action space is
finite, it is still possible for RL problems to suffer from the curse of
dimensionality.
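The abstract names fitted Q-iteration as one of the two algorithms whose error is controlled by $\Delta_{\mathcal{M}}(\epsilon)$. The sketch below is a minimal, generic batch fitted Q-iteration with kernel ridge regression over state-action pairs; the RBF kernel, the regularization level, the undiscounted finite-horizon backup, and the name fitted_q_iteration are illustrative assumptions, not the paper's exact construction or analysis setting.
```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def fitted_q_iteration(transitions, actions, horizon, alpha=1e-3):
    """Illustrative batch fitted Q-iteration with kernel ridge regression.

    transitions: list of (state, action, reward, next_state) tuples, where each
                 state is a 1-D numpy array and each action an integer index.
    actions:     the finite action set, as a list of integer indices.
    Returns one fitted Q-model per step, q_models[h] approximating Q_h.
    """
    s = np.array([t[0] for t in transitions])
    a = np.array([t[1] for t in transitions], dtype=float).reshape(-1, 1)
    r = np.array([t[2] for t in transitions], dtype=float)
    s_next = np.array([t[3] for t in transitions])

    x = np.hstack([s, a])                 # regression inputs: (state, action) pairs
    q_next = np.zeros(len(transitions))   # value after the final step is zero
    q_models = []

    for _ in range(horizon):              # backward induction over the horizon
        y = r + q_next                    # undiscounted Bellman backup target
        model = KernelRidge(kernel="rbf", alpha=alpha)  # RKHS regression (RBF kernel assumed)
        model.fit(x, y)
        q_models.append(model)

        # max_a Q(next_state, a) under the freshly fitted model, for the next backup
        q_next = np.max(
            np.stack([
                model.predict(np.hstack([s_next, np.full((len(s_next), 1), float(act))]))
                for act in actions
            ]),
            axis=0,
        )

    q_models.reverse()
    return q_models
```
Given transitions collected from a simulator or logged data, fitted_q_iteration(data, actions=[0, 1], horizon=H) returns one regressor per step, and a greedy policy at step h picks the action maximizing the step-h model at the current state.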
Related papers
- Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Horizon-free Reinforcement Learning in Adversarial Linear Mixture MDPs [72.40181882916089]
We show that our algorithm achieves an $\tilde{O}\big((d+\log(|\mathcal{S}|^2|\mathcal{A}|))\sqrt{K}\big)$ regret with full-information feedback, where $d$ is the dimension of a known feature mapping linearly parametrizing the unknown transition kernel of the MDP, $K$ is the number of episodes, and $|\mathcal{S}|$ and $|\mathcal{A}|$ are the cardinalities of the state and action spaces.
arXiv Detail & Related papers (2023-05-15T05:37:32Z)
- Fast Rates for Maximum Entropy Exploration [52.946307632704645]
We address the challenge of exploration in reinforcement learning (RL) when the agent operates in an unknown environment with sparse or no rewards.
We study the maximum entropy exploration problem of two different types.
For visitation entropy, we propose a game-theoretic algorithm that has $\widetilde{\mathcal{O}}(H^3S^2A/\varepsilon^2)$ sample complexity.
For the trajectory entropy, we propose a simple algorithm that has a sample complexity of order $\widetilde{\mathcal{O}}(\mathrm{poly}(S,\dots))$.
arXiv Detail & Related papers (2023-03-14T16:51:14Z)
- An $L^2$ Analysis of Reinforcement Learning in High Dimensions with Kernel and Neural Network Approximation [9.088303226909277]
This paper considers the situation where the function approximation is made using the kernel method or the two-layer neural network model.
We establish an $\tilde{O}(H^3|\mathcal{A}|^{\frac{1}{4}}n^{-\frac{1}{4}})$ bound for the optimal policy with $Hn$ samples.
Even though this result still requires a finite-sized action space, the error bound is independent of the dimensionality of the state space.
arXiv Detail & Related papers (2021-04-15T21:59:03Z)
- A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning [26.384010313580596]
Inverse reinforcement learning (IRL) is the task of finding a reward function that generates a desired optimal policy for a given Markov Decision Process (MDP).
This paper develops an information-theoretic lower bound for the sample complexity of the finite state, finite action IRL problem.
arXiv Detail & Related papers (2021-03-07T20:29:10Z)
- Tightening the Dependence on Horizon in the Sample Complexity of Q-Learning [59.71676469100807]
This work sharpens the sample complexity of synchronous Q-learning to an order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4\varepsilon^2}$ for any $0 < \varepsilon < 1$.
Our finding unveils the effectiveness of vanilla Q-learning, which matches that of speedy Q-learning without requiring extra computation and storage.
arXiv Detail & Related papers (2021-02-12T14:22:05Z)
- Value Function Approximations via Kernel Embeddings for No-Regret Reinforcement Learning [10.828727066443909]
We propose an online model-based RL algorithm, namely CME-RL, that learns representations of transition distributions as embeddings in a reproducing kernel Hilbert space (a minimal conditional-mean-embedding sketch follows after this list).
We demonstrate the efficiency of our algorithm by proving a frequentist (worst-case) regret bound of order $\tilde{O}\big(H\gamma_N\sqrt{N}\big)$, where $\tilde{O}(\cdot)$ hides only absolute constants and poly-logarithmic factors.
arXiv Detail & Related papers (2020-11-16T11:40:55Z)
- On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the learning problem.
Our regret bounds are independent of the number of states.
arXiv Detail & Related papers (2020-11-09T18:32:22Z)
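For the CME-RL entry above, the following is a minimal sketch of how a conditional mean embedding of a transition distribution can be estimated by kernel ridge regression from samples; the Gaussian kernel, the regularization form, and the names rbf_kernel and cme_weights are illustrative assumptions rather than that paper's algorithm.
```python
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * bandwidth**2))

def cme_weights(X, x_query, lam=1e-2, bandwidth=1.0):
    """Weights beta(x) of an estimated conditional mean embedding.

    Given samples (X[i], Y[i]) from a transition distribution, the embedding of
    P(Y | X = x_query) is approximated by sum_i beta_i * phi(Y[i]), with
    beta = (K + n*lam*I)^{-1} k_x, the standard kernel-ridge form.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, bandwidth)                    # Gram matrix on the inputs
    k_x = rbf_kernel(X, x_query[None, :], bandwidth)   # kernel vector at the query point
    beta = np.linalg.solve(K + n * lam * np.eye(n), k_x)
    return beta.ravel()

# Usage sketch: with sampled next states Y and any function f evaluated on them
# (e.g. a value function), E[f(s') | s, a = x_query] is approximated by
# np.dot(cme_weights(X, x_query), f_values_on_Y).
```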