Autonomous Exploration Under Uncertainty via Deep Reinforcement Learning on Graphs
- URL: http://arxiv.org/abs/2007.12640v1
- Date: Fri, 24 Jul 2020 16:50:41 GMT
- Title: Autonomous Exploration Under Uncertainty via Deep Reinforcement Learning on Graphs
- Authors: Fanfei Chen, John D. Martin, Yewei Huang, Jinkun Wang, Brendan Englot
- Abstract summary: We consider an autonomous exploration problem in which a range-sensing mobile robot is tasked with accurately mapping the landmarks in an a priori unknown environment efficiently in real-time.
We propose a novel approach that uses graph neural networks (GNNs) in conjunction with deep reinforcement learning (DRL), enabling decision-making over graphs containing exploration information to predict a robot's optimal sensing action in belief space.
- Score: 5.043563227694137
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider an autonomous exploration problem in which a range-sensing mobile
robot is tasked with accurately mapping the landmarks in an a priori unknown
environment efficiently in real-time; it must choose sensing actions that both
curb localization uncertainty and achieve information gain. For this problem,
belief space planning methods that forward-simulate robot sensing and
estimation may often fail in real-time implementation, scaling poorly with
increasing size of the state, belief and action spaces. We propose a novel
approach that uses graph neural networks (GNNs) in conjunction with deep
reinforcement learning (DRL), enabling decision-making over graphs containing
exploration information to predict a robot's optimal sensing action in belief
space. The policy, which is trained in different random environments without
human intervention, offers a real-time, scalable decision-making process whose
high-performance exploratory sensing actions yield accurate maps and high rates
of information gain.
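The abstract does not include an implementation, but the following minimal Python sketch illustrates the general idea of a GNN-based policy that scores candidate sensing actions on an exploration graph. The node features, layer sizes, two rounds of message passing, and greedy action selection are illustrative assumptions, not the authors' architecture or code.

```python
# Minimal sketch (not the authors' implementation): a graph network that assigns
# a score to each node of an exploration graph; the highest-scoring node is
# taken as the next sensing action. Node features are assumed for illustration.
import torch
import torch.nn as nn

class GraphPolicy(nn.Module):
    def __init__(self, in_dim=4, hidden_dim=32):
        super().__init__()
        self.encode = nn.Linear(in_dim, hidden_dim)
        self.message = nn.Linear(hidden_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)   # one score per candidate node

    def forward(self, x, adj):
        # x:   (N, in_dim) node features, e.g. [is_frontier, landmark_uncertainty,
        #      distance_to_robot, expected_info_gain]  -- assumed features
        # adj: (N, N) row-normalized adjacency of the exploration graph
        h = torch.relu(self.encode(x))
        for _ in range(2):                       # two rounds of message passing
            h = torch.relu(self.message(adj @ h) + h)
        return self.score(h).squeeze(-1)         # (N,) action scores

if __name__ == "__main__":
    N = 6
    x = torch.rand(N, 4)                               # toy node features
    adj = torch.eye(N) + (torch.rand(N, N) > 0.5).float()  # toy graph
    adj = adj / adj.sum(dim=1, keepdim=True)            # row-normalize
    scores = GraphPolicy()(x, adj)
    best_action = int(torch.argmax(scores))             # greedy action selection
```

In a DRL setup such as Q-learning, these per-node scores would play the role of action values and the network would be trained from exploration rollouts in randomly generated environments; those training details are assumptions for illustration only.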
Related papers
- Generalizability of Graph Neural Networks for Decentralized Unlabeled Motion Planning [72.86540018081531]
Unlabeled motion planning involves assigning a set of robots to target locations while ensuring collision avoidance.
This problem forms an essential building block for multi-robot systems in applications such as exploration, surveillance, and transportation.
We address this problem in a decentralized setting where each robot knows only the positions of its $k$-nearest robots and $k$-nearest targets (a minimal sketch of this local observation model follows the list below).
arXiv Detail & Related papers (2024-09-29T23:57:25Z)
- Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions involve using a data-driven approach with a simple architecture designed for real-time operation, a self-supervised data training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z)
- Conformalized Teleoperation: Confidently Mapping Human Inputs to High-Dimensional Robot Actions [4.855534476454559]
We learn a mapping from low-dimensional human inputs to high-dimensional robot actions.
Our key idea is to adapt the assistive map at training time to additionally estimate high-dimensional action quantiles.
We propose an uncertainty-interval-based mechanism for detecting high-uncertainty user inputs and robot states.
arXiv Detail & Related papers (2024-06-11T23:16:46Z)
- MEXGEN: An Effective and Efficient Information Gain Approximation for Information Gathering Path Planning [3.195234044113248]
Planning algorithms for autonomous robots need to solve sequential decision making problems under uncertainty.
We develop a computationally efficient and effective approximation for the difficult problem of predicting the likely sensor measurements from uncertain belief states.
We demonstrate improved performance gains in radio-source tracking and localization problems using extensive simulated and field experiments with a multirotor aerial robot.
arXiv Detail & Related papers (2024-05-04T08:09:16Z)
- Comparing Active Learning Performance Driven by Gaussian Processes or Bayesian Neural Networks for Constrained Trajectory Exploration [0.0]
Currently, humans drive robots to meet scientific objectives, but depending on the robot's location, the exchange of information and driving commands may cause undue delays in mission fulfillment.
An autonomous robot encoded with a scientific objective and an exploration strategy incurs no communication delays and can fulfill missions more quickly.
Active learning algorithms offer this capability of intelligent exploration, but the choice of underlying model structure affects how accurately the active learning algorithm forms an understanding of the environment.
arXiv Detail & Related papers (2023-09-28T02:45:14Z)
- Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z)
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
- Zero-Shot Reinforcement Learning on Graphs for Autonomous Exploration Under Uncertainty [6.42522897323111]
We present a framework for self-learning a high-performance exploration policy in a single simulation environment.
We propose a novel approach that uses graph neural networks in conjunction with deep reinforcement learning.
arXiv Detail & Related papers (2021-05-11T02:42:17Z)
- Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
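As referenced in the first related entry above ("Generalizability of Graph Neural Networks for Decentralized Unlabeled Motion Planning"), here is a minimal sketch of the decentralized observation model in which each robot sees only its $k$-nearest robots and $k$-nearest targets. The function name and the Euclidean-distance choice are illustrative assumptions, not code from that paper.

```python
# Minimal sketch, assuming 2D positions and Euclidean distance: build the local
# observation of robot i as the indices of its k nearest robots and k nearest targets.
import numpy as np

def local_observation(robots, targets, i, k=3):
    """Indices of the k nearest robots (excluding robot i) and the k nearest
    targets, as observed by robot i."""
    d_robots = np.linalg.norm(robots - robots[i], axis=1)
    d_robots[i] = np.inf                        # a robot is not its own neighbor
    d_targets = np.linalg.norm(targets - robots[i], axis=1)
    return np.argsort(d_robots)[:k], np.argsort(d_targets)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    robots = rng.random((5, 2))                 # 5 robot positions in the unit square
    targets = rng.random((5, 2))                # 5 target locations
    print(local_observation(robots, targets, i=0, k=3))
```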
This list is automatically generated from the titles and abstracts of the papers on this site.