A Provably Efficient Sample Collection Strategy for Reinforcement
Learning
- URL: http://arxiv.org/abs/2007.06437v2
- Date: Thu, 18 Nov 2021 15:36:55 GMT
- Title: A Provably Efficient Sample Collection Strategy for Reinforcement
Learning
- Authors: Jean Tarbouriech, Matteo Pirotta, Michal Valko, Alessandro Lazaric
- Abstract summary: One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off the exploration of the environment and the exploitation of the samples to optimize its behavior.
We propose to tackle the exploration-exploitation problem following a decoupled approach composed of: 1) An "objective-specific" algorithm that prescribes how many samples to collect at which states, as if it has access to a generative model (i.e., a simulator of the environment); 2) An "objective-agnostic" sample collection strategy responsible for generating the prescribed samples as fast as possible.
- Score: 123.69175280309226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the challenges in online reinforcement learning (RL) is that the agent
needs to trade off the exploration of the environment and the exploitation of
the samples to optimize its behavior. Whether we optimize for regret, sample
complexity, state-space coverage or model estimation, we need to strike a
different exploration-exploitation trade-off. In this paper, we propose to
tackle the exploration-exploitation problem following a decoupled approach
composed of: 1) An "objective-specific" algorithm that (adaptively) prescribes
how many samples to collect at which states, as if it has access to a
generative model (i.e., a simulator of the environment); 2) An
"objective-agnostic" sample collection exploration strategy responsible for
generating the prescribed samples as fast as possible. Building on recent
methods for exploration in the stochastic shortest path problem, we first
provide an algorithm that, given as input the number of samples $b(s,a)$ needed
in each state-action pair, requires $\tilde{O}(B D + D^{3/2} S^2 A)$ time steps
to collect the $B=\sum_{s,a} b(s,a)$ desired samples, in any unknown
communicating MDP with $S$ states, $A$ actions and diameter $D$. Then we show
how this general-purpose exploration algorithm can be paired with
"objective-specific" strategies that prescribe the sample requirements to
tackle a variety of settings -- e.g., model estimation, sparse reward
discovery, goal-free cost-free exploration in communicating MDPs -- for which
we obtain improved or novel sample complexity guarantees.
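To make the decoupled protocol concrete, the following is a minimal, hypothetical Python sketch of the second, "objective-agnostic" component: it takes the sample requirements $b(s,a)$ prescribed by an objective-specific routine and interacts with the environment until every requirement is met. The `env` interface (`reset`, `step`, `actions`) and the naive pick-an-under-sampled-action heuristic are illustrative assumptions only; the paper's actual strategy builds on stochastic-shortest-path exploration, which is what yields the $\tilde{O}(B D + D^{3/2} S^2 A)$ bound.

```python
import random
from collections import defaultdict

def collect_prescribed_samples(env, b, max_steps=100_000):
    """Hypothetical sketch of the 'objective-agnostic' phase: keep acting in the
    (unknown, communicating) MDP until every state-action pair (s, a) has been
    visited at least b[(s, a)] times. Here a naive stand-in policy picks an
    under-sampled action when one is available at the current state and
    explores uniformly otherwise; this is not the paper's SSP-based strategy.
    """
    n = defaultdict(int)      # empirical visit counts n(s, a)
    remaining = dict(b)       # outstanding requirements b(s, a)
    s = env.reset()           # assumed interface: reset() -> initial state
    for _ in range(max_steps):
        if not remaining:     # all prescribed samples collected
            break
        under = [a for a in env.actions(s) if remaining.get((s, a), 0) > 0]
        a = random.choice(under) if under else random.choice(env.actions(s))
        s_next = env.step(a)  # assumed interface: step(a) -> next state
        n[(s, a)] += 1
        if (s, a) in remaining:
            remaining[(s, a)] -= 1
            if remaining[(s, a)] <= 0:
                del remaining[(s, a)]
        s = s_next
    return n
```

An objective-specific front end would then simply output `b` (e.g., a uniform requirement $b(s,a) = N$ for model estimation) and hand it to this collector.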
Related papers
- Improved Active Learning via Dependent Leverage Score Sampling [8.400581768343804]
We show how to obtain improved active learning methods in the agnostic (adversarial noise) setting.
We propose an easily implemented method based on the pivotal sampling algorithm.
In comparison to independent sampling, our method reduces the number of samples needed to reach a given target accuracy by up to $50\%$.
arXiv Detail & Related papers (2023-10-08T01:51:30Z)
- Improved Active Multi-Task Representation Learning via Lasso [44.607652031235716]
In this paper, we show the dominance of the L1-regularized-relevance-based ($\nu^1$) strategy by giving a lower bound for the $\nu^2$-based strategy.
We also characterize the potential of our $\nu^1$-based strategy in sample-cost-sensitive settings.
arXiv Detail & Related papers (2023-06-05T03:08:29Z)
- Optimal Horizon-Free Reward-Free Exploration for Linear Mixture MDPs [60.40452803295326]
We propose a new reward-free algorithm for learning linear Markov decision processes (MDPs).
At the core of our algorithm is uncertainty-weighted value-targeted regression with exploration-driven pseudo-reward.
We show that our algorithm only needs to explore $\tilde{O}(d^2 \varepsilon^{-2})$ episodes to find an $\varepsilon$-optimal policy.
arXiv Detail & Related papers (2023-03-17T17:53:28Z)
- On-Demand Sampling: Learning Optimally from Multiple Distributions [63.20009081099896]
Social and real-world considerations have given rise to multi-distribution learning paradigms.
We establish the optimal sample complexity of these learning paradigms and give algorithms that meet this sample complexity.
Our algorithm design and analysis are enabled by our extensions of online learning techniques for solving zero-sum games.
arXiv Detail & Related papers (2022-10-22T19:07:26Z)
- On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game [140.19656665344917]
We study the reward-free RL problem, where an agent aims to thoroughly explore the environment without any pre-specified reward function.
We tackle this problem under the context of function approximation, leveraging powerful function approximators.
We establish the first provably efficient reward-free RL algorithm with kernel and neural function approximators.
arXiv Detail & Related papers (2021-10-19T07:26:33Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient (a minimal illustrative sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity [67.02490430380415]
We show that model-based MARL achieves a sample complexity of $\tilde{O}(|S||A||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error.
We also show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge.
arXiv Detail & Related papers (2020-07-15T03:25:24Z)
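For the "Local policy search with Bayesian optimization" entry above, here is a minimal sketch of the general idea of local policy search with a probabilistic surrogate: fit a Gaussian process to noisy returns around the current policy parameters and ascend an estimate of the surrogate's gradient. The return function `J`, the perturbation scale, the finite-difference gradient of the posterior mean, and the use of scikit-learn's `GaussianProcessRegressor` are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_local_policy_step(J, theta, n_samples=20, sigma=0.1, step_size=0.5, seed=None):
    """One hypothetical local-search step: fit a GP to noisy returns J(theta')
    at perturbed policy parameters, estimate the gradient of the GP posterior
    mean at theta by central finite differences, and take an ascent step."""
    rng = np.random.default_rng(seed)
    d = theta.shape[0]
    X = theta + sigma * rng.standard_normal((n_samples, d))  # local perturbations
    y = np.array([J(x) for x in X])                          # noisy return evaluations
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=sigma), normalize_y=True)
    gp.fit(X, y)
    eps = 1e-3
    grad = np.array([
        (gp.predict((theta + eps * e).reshape(1, -1))[0]
         - gp.predict((theta - eps * e).reshape(1, -1))[0]) / (2 * eps)
        for e in np.eye(d)
    ])                                                        # gradient of the posterior mean
    return theta + step_size * grad
```

A full local search would iterate this step, re-centering the perturbations at each new parameter vector.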