Reinforcement Learning with Probabilistically Complete Exploration
- URL: http://arxiv.org/abs/2001.06940v1
- Date: Mon, 20 Jan 2020 02:11:24 GMT
- Title: Reinforcement Learning with Probabilistically Complete Exploration
- Authors: Philippe Morere, Gilad Francis, Tom Blau, Fabio Ramos
- Abstract summary: We propose Rapidly Randomly-exploring Reinforcement Learning (R3L).
We formulate exploration as a search problem and leverage widely-used planning algorithms to find initial solutions.
We experimentally demonstrate that the method requires only a fraction of the exploration samples and achieves better asymptotic performance.
- Score: 27.785017885906313
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Balancing exploration and exploitation remains a key challenge in
reinforcement learning (RL). State-of-the-art RL algorithms suffer from high
sample complexity, particularly in the sparse reward case, where they can do no
better than to explore in all directions until the first positive rewards are
found. To mitigate this, we propose Rapidly Randomly-exploring Reinforcement
Learning (R3L). We formulate exploration as a search problem and leverage
widely-used planning algorithms such as Rapidly-exploring Random Tree (RRT) to
find initial solutions. These solutions are used as demonstrations to
initialize a policy, then refined by a generic RL algorithm, leading to faster
and more stable convergence. We provide theoretical guarantees of R3L
exploration finding successful solutions, as well as bounds for its sampling
complexity. We experimentally demonstrate the method outperforms classic and
intrinsic exploration techniques, requiring only a fraction of exploration
samples and achieving better asymptotic performance.
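To make the exploration scheme concrete, below is a minimal, hypothetical Python sketch of the recipe the abstract describes: grow an RRT-style tree of visited states until a transition earns positive reward, then return the recovered trajectory for use as a demonstration. The environment API (in particular `env.set_state`, a resettable-simulator assumption) and the normalized state space are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def r3l_explore(env, n_iters=10_000, state_dim=2):
    """RRT-style exploration sketch (after the R3L abstract): grow a tree
    of visited states by steering from the nearest tree node toward
    randomly sampled targets, until a transition earns positive reward
    (success in the sparse-reward setting)."""
    tree = [np.asarray(env.reset())]        # node index 0 is the start state
    parents = {0: None}                     # child index -> parent index
    actions = {}                            # child index -> action taken from the parent
    for _ in range(n_iters):
        # Sample a random target in a normalized state space (assumption).
        target = np.random.uniform(-1.0, 1.0, size=state_dim)
        i_near = min(range(len(tree)),
                     key=lambda i: np.linalg.norm(tree[i] - target))
        env.set_state(tree[i_near])         # resettable simulator (hypothetical API)
        action = env.action_space.sample()  # random local steering
        next_state, reward, done, _ = env.step(action)
        tree.append(np.asarray(next_state))
        idx = len(tree) - 1
        parents[idx], actions[idx] = i_near, action
        if reward > 0:                      # first success: recover the demonstration
            demo = []
            while parents[idx] is not None:
                demo.append((tree[parents[idx]], actions[idx]))
                idx = parents[idx]
            return list(reversed(demo))     # (state, action) pairs, start to goal
    return None                             # exploration budget exhausted
```

The recovered (state, action) pairs would then serve as a behavior-cloning dataset to initialize the policy, which a generic RL algorithm subsequently refines.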
Related papers
- Random Latent Exploration for Deep Reinforcement Learning [71.88709402926415]
This paper introduces a new exploration technique called Random Latent Exploration (RLE).
RLE combines the strengths of bonus-based and noise-based exploration, two popular approaches for effective exploration in deep RL.
We evaluate it on the challenging Atari and IsaacGym benchmarks and show that RLE achieves higher overall scores across all tasks than other approaches.
arXiv Detail & Related papers (2024-07-18T17:55:22Z)
- More Efficient Randomized Exploration for Reinforcement Learning via Approximate Sampling [41.21199687865359]
We propose an algorithmic framework that incorporates different approximate sampling methods with the recently proposed Feel-Good Thompson Sampling (FGTS) approach.
Our regret analysis yields the best known dependency of regret on dimensionality, surpassing existing randomized algorithms.
Our algorithms achieve performance that is either better than or on par with other strong baselines from the deep RL literature.
arXiv Detail & Related papers (2024-06-18T03:32:10Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Jump-Start Reinforcement Learning [68.82380421479675]
We present a meta-algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy.
In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks: a guide policy that rolls in at the start of each episode and an exploration policy that is trained to take over (a rough sketch of this scheme appears after this list).
We show via experiments that JSRL is able to significantly outperform existing imitation and reinforcement learning algorithms.
arXiv Detail & Related papers (2022-04-05T17:25:22Z)
- On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game [140.19656665344917]
We study the reward-free RL problem, where an agent aims to thoroughly explore the environment without any pre-specified reward function.
We tackle this problem under the context of function approximation, leveraging powerful function approximators.
We establish the first provably efficient reward-free RL algorithm with kernel and neural function approximators.
arXiv Detail & Related papers (2021-10-19T07:26:33Z)
- Learning from Demonstration without Demonstrations [5.027571997864707]
We propose Probabilistic Planning for Demonstration Discovery (P2D2), a technique for automatically discovering demonstrations without access to an expert.
We formulate discovering demonstrations as a search problem and leverage widely-used planning algorithms such as Rapidly-exploring Random Tree to find demonstration trajectories.
We experimentally demonstrate the method outperforms classic and intrinsic exploration RL techniques in a range of classic control and robotics tasks.
arXiv Detail & Related papers (2021-06-17T01:57:08Z)
- Reannealing of Decaying Exploration Based On Heuristic Measure in Deep Q-Network [82.20059754270302]
We propose an algorithm based on the idea of reannealing that aims to encourage exploration only when it is needed.
We perform an illustrative case study showing that it has potential to both accelerate training and obtain a better policy.
arXiv Detail & Related papers (2020-09-29T20:40:00Z)
- Active Finite Reward Automaton Inference and Reinforcement Learning Using Queries and Counterexamples [31.31937554018045]
Deep reinforcement learning (RL) methods require large amounts of data from exploring the environment to achieve satisfactory performance.
We propose a framework that enables an RL agent to reason over its exploration process and distill high-level knowledge for effectively guiding its future explorations.
Specifically, we propose a novel RL algorithm that learns high-level knowledge in the form of a finite reward automaton by using the L* learning algorithm.
arXiv Detail & Related papers (2020-06-28T21:13:08Z)
- Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning [96.78504087416654]
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems, we investigate when this paradigm is provably efficient.
We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm.
arXiv Detail & Related papers (2020-03-15T19:23:59Z)
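As a companion to the Jump-Start Reinforcement Learning (JSRL) entry above, here is a minimal, hypothetical sketch of its two-policy roll-in scheme: a fixed guide policy controls the first steps of each episode and a learned exploration policy takes over, with the guide horizon annealed toward zero over training. The environment interface, `update_fn`, and the fixed annealing schedule are illustrative assumptions.

```python
def jsrl_rollout(env, guide_policy, explore_policy, guide_horizon):
    """Collect one episode: the guide policy acts for the first
    `guide_horizon` steps, then the exploration policy takes over."""
    state, done, t = env.reset(), False, 0
    trajectory = []
    while not done:
        policy = guide_policy if t < guide_horizon else explore_policy
        action = policy(state)
        next_state, reward, done, _ = env.step(action)
        trajectory.append((state, action, reward, next_state, done))
        state, t = next_state, t + 1
    return trajectory

def jsrl_train(env, guide_policy, explore_policy, update_fn,
               max_horizon=100, n_stages=10, episodes_per_stage=50):
    """Curriculum sketch: shrink the guide's roll-in horizon stage by
    stage, so the exploration policy learns to solve the task from
    earlier and earlier states."""
    for stage in range(n_stages):
        horizon = max_horizon * (n_stages - 1 - stage) // n_stages
        for _ in range(episodes_per_stage):
            traj = jsrl_rollout(env, guide_policy, explore_policy, horizon)
            update_fn(explore_policy, traj)  # any off-policy RL update (assumption)
```

In the actual algorithm, the horizon is reduced only once the combined policy's performance clears a threshold; the fixed schedule above is a simplification.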