Locally Persistent Exploration in Continuous Control Tasks with Sparse
Rewards
- URL: http://arxiv.org/abs/2012.13658v1
- Date: Sat, 26 Dec 2020 01:30:26 GMT
- Title: Locally Persistent Exploration in Continuous Control Tasks with Sparse
Rewards
- Authors: Susan Amin (1 and 2), Maziar Gomrokchi (1 and 2), Hossein Aboutalebi
(3), Harsh Satija (1 and 2) and Doina Precup (1 and 2) ((1) McGill
University, (2) Mila - Quebec Artificial Intelligence Institute, (3)
University of Waterloo)
- Abstract summary: We propose a new exploration method, based on two intuitions.
The choice of the next exploratory action should depend not only on the (Markovian) state of the environment, but also on the agent's trajectory.
We discuss the theoretical properties of locally self-avoiding walks, and their ability to provide a kind of short-term memory.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A major challenge in reinforcement learning is the design of exploration
strategies, especially for environments with sparse reward structures and
continuous state and action spaces. Intuitively, if the reinforcement signal is
very scarce, the agent should rely on some form of short-term memory in order
to cover its environment efficiently. We propose a new exploration method,
based on two intuitions: (1) the choice of the next exploratory action should
depend not only on the (Markovian) state of the environment, but also on the
agent's trajectory so far, and (2) the agent should utilize a measure of spread
in the state space to avoid getting stuck in a small region. Our method
leverages concepts often used in statistical physics to provide explanations
for the behavior of simplified (polymer) chains, in order to generate
persistent (locally self-avoiding) trajectories in state space. We discuss the
theoretical properties of locally self-avoiding walks, and their ability to
provide a kind of short-term memory, through a decaying temporal correlation
within the trajectory. We provide empirical evaluations of our approach in a
simulated 2D navigation task, as well as higher-dimensional MuJoCo continuous
control locomotion tasks with sparse rewards.
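As a concrete illustration of the two intuitions above, here is a minimal numpy sketch in which the agent scores candidate actions by a decaying-memory repulsion from recently visited states, making the walk locally self-avoiding with a memory that fades over time. All names (exploratory_action, step_fn) and constants are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def exploratory_action(state, recent_states, candidate_actions, step_fn,
                       decay=0.9, eps=1e-6):
    """Pick the candidate action whose predicted next state is least repelled
    by recently visited states, with repulsion decaying over temporal
    distance (a short-term, locally self-avoiding memory)."""
    recent = np.asarray(recent_states)                      # (n, dim), oldest first
    weights = decay ** np.arange(len(recent) - 1, -1, -1)   # newest state -> weight 1

    def repulsion(action):
        nxt = step_fn(state, action)                        # one-step state prediction
        dists = np.linalg.norm(recent - nxt, axis=1)
        return np.sum(weights / (dists + eps))              # nearby recent states repel

    scores = [repulsion(a) for a in candidate_actions]
    return candidate_actions[int(np.argmin(scores))]

# Toy usage: a 2-D walker whose trajectory avoids doubling back on itself.
rng = np.random.default_rng(0)
step = lambda s, a: s + a                                   # stand-in dynamics model
state, memory = np.zeros(2), [np.zeros(2)]
for _ in range(50):
    candidates = [0.1 * rng.normal(size=2) for _ in range(8)]
    state = step(state, exploratory_action(state, memory[-20:], candidates, step))
    memory.append(state.copy())
```

With decay < 1, states visited long ago stop repelling the walker, which plays the role of the decaying temporal correlation described above; the paper's spread measure is richer than this single distance term.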
Related papers
- A Temporally Correlated Latent Exploration for Reinforcement Learning [4.1101087490516575]
Temporally Correlated Latent Exploration (TeCLE) is a novel intrinsic reward formulation that employs an action-conditioned latent space and temporal correlation.
We find that the injected temporal correlation determines the exploratory behaviors of agents.
We prove that the proposed TeCLE can be robust to the Noisy TV and stochasticity in benchmark environments.
arXiv Detail & Related papers (2024-12-06T04:38:43Z)
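The role of injected temporal correlation in exploration can be illustrated with an Ornstein-Uhlenbeck process, a standard form of temporally correlated action noise; this is a generic sketch of the idea, not TeCLE's action-conditioned latent formulation.

```python
import numpy as np

def ou_noise(n_steps, dim, theta=0.15, sigma=0.2, dt=1e-2, seed=0):
    """Ornstein-Uhlenbeck noise: each sample is pulled back toward zero but
    remembers its predecessor, so successive exploration steps are
    correlated (correlation decays roughly as exp(-theta * dt * lag))."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n_steps, dim))
    for t in range(1, n_steps):
        x[t] = x[t - 1] - theta * x[t - 1] * dt \
               + sigma * np.sqrt(dt) * rng.normal(size=dim)
    return x

noise = ou_noise(1000, dim=2)  # add to a policy's actions for smoother exploration
```

Larger theta shortens the memory of the noise; as theta approaches zero the process degenerates into an ordinary random walk.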
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
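A depth-limited exhaustive search over a learned latent model conveys the flavor of planning in a learned abstract space. The dynamics and value functions below are stand-in lambdas, and the brute-force search is only a placeholder for whatever search procedure PiZero actually uses; none of this comes from the paper.

```python
import numpy as np

def latent_lookahead(z, actions, dynamics, value, depth=2):
    """Depth-limited search in a learned latent space: expand every action
    sequence with the learned dynamics and return (best first action,
    best predicted value)."""
    if depth == 0:
        return None, value(z)
    best_a, best_v = None, -np.inf
    for a in actions:
        _, v = latent_lookahead(dynamics(z, a), actions, dynamics, value, depth - 1)
        if v > best_v:
            best_a, best_v = a, v
    return best_a, best_v

# Stand-in "learned" models for demonstration only.
dyn = lambda z, a: 0.9 * z + 0.1 * a
val = lambda z: -np.linalg.norm(z - 1.0)
action, _ = latent_lookahead(np.zeros(2), [np.full(2, -1.0), np.full(2, 1.0)], dyn, val)
```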
- Long-HOT: A Modular Hierarchical Approach for Long-Horizon Object Transport [83.06265788137443]
We address key challenges in long-horizon embodied exploration and navigation by proposing a new object transport task and a novel modular framework for temporally extended navigation.
Our first contribution is the design of a novel Long-HOT environment focused on deep exploration and long-horizon planning.
We propose a modular hierarchical transport policy (HTP) that builds a topological graph of the scene to perform exploration with the help of weighted frontiers.
arXiv Detail & Related papers (2022-10-28T05:30:49Z)
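Weighted frontier selection, which the HTP summary above mentions, is a standard exploration primitive: score each frontier by trading off travel cost against expected information gain. A minimal sketch follows; gain is a hypothetical callback and the weights are arbitrary, so this illustrates the primitive rather than the paper's policy.

```python
import numpy as np

def pick_frontier(frontiers, agent_pos, gain, w_dist=1.0, w_gain=2.0):
    """Choose the next frontier by weighing expected information gain
    against travel distance (higher score = more attractive frontier)."""
    best, best_score = None, -np.inf
    for f in frontiers:
        score = w_gain * gain(f) - w_dist * np.linalg.norm(f - agent_pos)
        if score > best_score:
            best, best_score = f, score
    return best

frontiers = [np.array([3.0, 1.0]), np.array([0.5, 4.0])]
target = pick_frontier(frontiers, np.zeros(2), gain=lambda f: 1.0)  # uniform gain -> nearest wins
```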
- Exploration Policies for On-the-Fly Controller Synthesis: A Reinforcement Learning Approach [0.0]
We propose a new method, based on Reinforcement Learning (RL), for obtaining exploration policies.
Our agents learn from scratch in a highly partially observable RL task and outperform existing approaches overall, in instances unseen during training.
arXiv Detail & Related papers (2022-10-07T20:28:25Z) - Long-Term Exploration in Persistent MDPs [68.8204255655161]
In this paper, we propose an exploration method called Rollback-Explore (RbExplore), which utilizes the concept of the persistent Markov decision process.
We test our algorithm in the hard-exploration Prince of Persia game, without rewards and domain knowledge.
arXiv Detail & Related papers (2021-09-21T13:47:04Z) - Exploring Dynamic Context for Multi-path Trajectory Prediction [33.66335553588001]
- Exploring Dynamic Context for Multi-path Trajectory Prediction [33.66335553588001]
We propose a novel framework, named Dynamic Context Network (DCENet)
In our framework, the spatial context between agents is explored by using self-attention architectures.
A set of future trajectories for each agent is predicted conditioned on the learned spatial-temporal context.
arXiv Detail & Related papers (2020-10-30T13:39:20Z)
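Aggregating spatial context between agents with self-attention can be shown in a few lines of numpy. This single-head version sets queries, keys, and values all equal to the input features, a common simplification; DCENet's actual architecture is not specified in the summary above.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over agent features: each agent's
    output is a softmax-weighted mix of all agents' features, so every row
    of the result encodes context from the whole scene."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                      # pairwise agent affinities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                 # row-wise softmax
    return w @ X

agents = np.random.default_rng(0).normal(size=(5, 16))  # 5 agents, 16-d features
context = self_attention(agents)
```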
- Autonomous UAV Exploration of Dynamic Environments via Incremental Sampling and Probabilistic Roadmap [0.3867363075280543]
We propose a novel dynamic exploration planner (DEP) for exploring unknown environments using incremental sampling and Probabilistic Roadmap (PRM)
Our method safely explores dynamic environments and outperforms the benchmark planners in terms of exploration time, path length, and computational time.
arXiv Detail & Related papers (2020-10-14T22:52:37Z)
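Incremental roadmap growth, the PRM backbone the DEP summary refers to, can be sketched as below; sample_free and collision_free are hypothetical callbacks standing in for the planner's sampling and collision-checking routines.

```python
import numpy as np

def grow_prm(nodes, edges, sample_free, collision_free, k=5, n_new=20):
    """Incrementally grow a probabilistic roadmap: sample collision-free
    configurations and connect each new node to its k nearest neighbors
    whenever the connecting segment is collision-free."""
    rng = np.random.default_rng(len(nodes))   # vary the seed as the map grows
    for _ in range(n_new):
        q = sample_free(rng)
        if nodes:
            dists = [np.linalg.norm(q - n) for n in nodes]
            for i in np.argsort(dists)[:k]:
                if collision_free(q, nodes[i]):
                    edges.append((len(nodes), int(i)))  # new node gets next index
        nodes.append(q)
    return nodes, edges

nodes, edges = grow_prm([], [],
                        sample_free=lambda rng: rng.uniform(0, 10, size=2),
                        collision_free=lambda a, b: True)  # toy obstacle-free world
```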
- Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps, our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
- A Spatial-Temporal Attentive Network with Spatial Continuity for Trajectory Prediction [74.00750936752418]
We propose a novel model named spatial-temporal attentive network with spatial continuity (STAN-SC)
First, a spatial-temporal attention mechanism is presented to explore the most useful and important information.
Second, we build a joint feature sequence from sequential and instantaneous state information so that the generated trajectories preserve spatial continuity.
arXiv Detail & Related papers (2020-03-13T04:35:50Z)
- Learning to Move with Affordance Maps [57.198806691838364]
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
arXiv Detail & Related papers (2020-01-08T04:05:11Z)
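One way to read "augmenting traditional approaches" is to fuse a geometric occupancy map with a learned affordance (traversability) map into a single planning cost. The sketch below is a naive illustration under that assumption, not the paper's method.

```python
import numpy as np

def fused_cost_map(occupancy, affordance, w_geo=1.0, w_aff=1.0):
    """Combine a geometric occupancy map (1 = obstacle) and a learned
    affordance map (1 = safely navigable) into one normalized cost map."""
    cost = w_geo * occupancy + w_aff * (1.0 - affordance)
    return cost / max(cost.max(), 1e-9)       # guard against an all-zero map

occ = np.zeros((4, 4)); occ[1, 1] = 1.0       # obstacle seen by geometry
aff = np.ones((4, 4)); aff[2, 3] = 0.2        # learned: geometry-free but risky cell
cost = fused_cost_map(occ, aff)               # both hazards now show up as cost
```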