Vision-Based Mobile Robotics Obstacle Avoidance With Deep Reinforcement
Learning
- URL: http://arxiv.org/abs/2103.04727v1
- Date: Mon, 8 Mar 2021 13:05:46 GMT
- Title: Vision-Based Mobile Robotics Obstacle Avoidance With Deep Reinforcement
Learning
- Authors: Patrick Wenzel, Torsten Schön, Laura Leal-Taixé, Daniel Cremers
- Abstract summary: Obstacle avoidance is a fundamental and challenging problem for autonomous navigation of mobile robots.
In this paper, we consider the problem of obstacle avoidance in simple 3D environments where the robot has to rely solely on a single monocular camera.
We tackle the obstacle avoidance problem with a data-driven end-to-end deep learning approach.
- Score: 49.04274612323564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Obstacle avoidance is a fundamental and challenging problem for autonomous
navigation of mobile robots. In this paper, we consider the problem of obstacle
avoidance in simple 3D environments where the robot has to rely solely on a
single monocular camera. In particular, we are interested in solving this
problem without relying on localization, mapping, or planning techniques. Most
of the existing work considers obstacle avoidance as two separate problems,
namely obstacle detection and control. Inspired by the recent advances of
deep reinforcement learning in Atari games and understanding highly complex
situations in Go, we tackle the obstacle avoidance problem with a data-driven
end-to-end deep learning approach. Our approach takes raw images as input and
generates control commands as output. We show that discrete action spaces
outperform continuous control commands in terms of expected average reward
in maze-like environments. Furthermore, we show how to accelerate the learning
and increase the robustness of the policy by incorporating depth maps
predicted by a generative adversarial network.
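As a rough illustration of the pipeline described above, here is a minimal sketch assuming a DQN-style setup in PyTorch: a small convolutional network maps a raw monocular RGB image, optionally concatenated with a GAN-predicted depth map, to Q-values over a few discrete control commands. The layer sizes, image resolution, and action set are illustrative assumptions, not the paper's exact configuration.

```python
from typing import Optional
import torch
import torch.nn as nn

# Hypothetical discrete action set; the abstract does not list the exact control commands.
ACTIONS = ["forward", "turn_left", "turn_right"]

class ObstacleAvoidancePolicy(nn.Module):
    """Minimal CNN Q-network: raw RGB image (plus an optional predicted-depth channel)
    mapped to Q-values over a small discrete action set. Sizes are illustrative."""

    def __init__(self, use_depth: bool = True, num_actions: int = len(ACTIONS)):
        super().__init__()
        in_channels = 3 + (1 if use_depth else 0)  # RGB plus one predicted-depth channel
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened feature size for an 84x84 input
            feat_dim = self.encoder(torch.zeros(1, in_channels, 84, 84)).shape[1]
        self.head = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(),
                                  nn.Linear(512, num_actions))

    def forward(self, rgb: torch.Tensor, depth: Optional[torch.Tensor] = None) -> torch.Tensor:
        x = rgb if depth is None else torch.cat([rgb, depth], dim=1)
        return self.head(self.encoder(x))  # one Q-value per discrete action

# Greedy action selection from a single 84x84 observation.
policy = ObstacleAvoidancePolicy(use_depth=True)
rgb = torch.rand(1, 3, 84, 84)
depth = torch.rand(1, 1, 84, 84)  # e.g. produced by a separate GAN depth predictor
action = ACTIONS[policy(rgb, depth).argmax(dim=1).item()]
```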
Related papers
- Deep Reinforcement Learning-based Obstacle Avoidance for Robot Movement in Warehouse Environments [6.061908707850057]
This paper proposes a deep reinforcement learning-based obstacle avoidance algorithm for mobile robots in warehouse environments.
To address the insufficient learning ability of the value function network in the deep reinforcement learning algorithm, interaction information between pedestrians is extracted through a pedestrian angle grid.
The reinforcement learning reward function is designed based on the spatial behaviour of pedestrians, and the robot is penalized for states in which its heading angle changes too sharply.
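The summary does not give the actual reward terms, but a minimal sketch of reward shaping in that spirit might look like the following; the terms, weights, and thresholds are assumptions for illustration only.

```python
import math

# Illustrative reward shaping: progress toward the goal, a penalty for intruding on
# pedestrian space, and a penalty for overly sharp heading changes. All values assumed.
def step_reward(progress_to_goal: float,
                dist_to_nearest_pedestrian: float,
                heading_change_rad: float,
                collided: bool,
                comfort_radius: float = 1.0,
                max_heading_change: float = math.radians(30)) -> float:
    if collided:
        return -20.0                                   # large terminal penalty on collision
    reward = 2.0 * progress_to_goal                    # encourage moving toward the goal
    if dist_to_nearest_pedestrian < comfort_radius:    # penalize entering pedestrian space
        reward -= 5.0 * (comfort_radius - dist_to_nearest_pedestrian)
    if abs(heading_change_rad) > max_heading_change:   # punish overly sharp turns
        reward -= 2.0 * (abs(heading_change_rad) - max_heading_change)
    return reward
```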
arXiv Detail & Related papers (2024-09-23T12:42:35Z)
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating a diverse range of environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Floor extraction and door detection for visually impaired guidance [78.94595951597344]
Finding obstacle-free paths in unknown environments is a major navigation challenge for visually impaired people and autonomous robots.
New devices based on computer vision systems can help visually impaired people overcome the difficulties of navigating unknown environments safely.
This work proposes a combination of sensors and algorithms that can lead to a navigation system for visually impaired people.
arXiv Detail & Related papers (2024-01-30T14:38:43Z)
- Collision Avoidance and Navigation for a Quadrotor Swarm Using End-to-end Deep Reinforcement Learning [8.864432196281268]
We propose an end-to-end DRL approach to control quadrotor swarms in environments with obstacles.
We provide our agents with a curriculum and a replay buffer of clipped collision episodes to improve performance in obstacle-rich environments.
Our work is the first to demonstrate that neighbor-avoiding and obstacle-avoiding control policies can be learned with end-to-end DRL.
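As a minimal sketch of what a replay buffer of clipped collision episodes could look like, the snippet below keeps only the last few transitions leading up to each collision and mixes them into training batches; the clipping window, capacity, and mixing ratio are assumptions, not the paper's values.

```python
import random
from collections import deque

# Hedged sketch: a separate buffer that stores only clipped near-collision experience.
class CollisionReplayBuffer:
    def __init__(self, capacity: int = 50_000, clip_window: int = 20):
        self.buffer = deque(maxlen=capacity)
        self.clip_window = clip_window

    def add_episode(self, transitions: list) -> None:
        """Keep only the clipped tail of an episode that ended in a collision."""
        self.buffer.extend(transitions[-self.clip_window:])

    def sample(self, batch_size: int) -> list:
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

def mixed_batch(normal_buffer, collision_buffer, batch_size: int,
                collision_fraction: float = 0.25) -> list:
    """Draw most of the batch from ordinary experience (any buffer with .sample(k))
    and a fixed fraction from near-collision data."""
    k = int(batch_size * collision_fraction)
    return collision_buffer.sample(k) + normal_buffer.sample(batch_size - k)
```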
arXiv Detail & Related papers (2023-09-23T06:56:28Z)
- Learning Vision-based Pursuit-Evasion Robot Policies [54.52536214251999]
We develop a fully-observable robot policy that generates supervision for a partially-observable one.
We deploy our policy on a physical quadruped robot with an RGB-D camera in pursuit-evasion interactions in the wild.
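The summary suggests a privileged teacher-student setup; a minimal sketch of one training step under that assumption is shown below. The networks, observation contents, and imitation loss are placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch: a fully-observable "teacher" policy (which sees the true state) labels
# actions that a partially-observable, camera-based "student" policy learns to imitate.
def student_update(teacher, student, optimizer, full_state, partial_obs):
    with torch.no_grad():
        target_action = teacher(full_state)        # supervision from the privileged policy
    pred_action = student(partial_obs)             # student only sees onboard observations
    loss = F.mse_loss(pred_action, target_action)  # imitate the teacher's action
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Tiny usage example with placeholder networks (state dim 8, observation dim 32, action dim 2).
teacher = nn.Linear(8, 2)
student = nn.Linear(32, 2)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
loss = student_update(teacher, student, opt, torch.randn(4, 8), torch.randn(4, 32))
```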
arXiv Detail & Related papers (2023-08-30T17:59:05Z)
- Nonprehensile Planar Manipulation through Reinforcement Learning with Multimodal Categorical Exploration [8.343657309038285]
Reinforcement Learning is a powerful framework for developing such robot controllers.
We propose a multimodal exploration approach through categorical distributions, which enables us to train planar pushing RL policies.
We show that the learned policies are robust to external disturbances and observation noise, and scale to tasks with multiple pushers.
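As a minimal sketch of categorical exploration over a continuous pushing action, one can discretize each action dimension into bins and sample from a per-dimension categorical distribution, which can represent multimodal action choices; the feature size, bin count, and action range below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch of a categorical policy head for a 2-D planar pushing action.
class CategoricalPushingHead(nn.Module):
    def __init__(self, feature_dim: int = 256, action_dims: int = 2, bins: int = 11):
        super().__init__()
        self.bins = bins
        self.action_dims = action_dims
        self.logits = nn.Linear(feature_dim, action_dims * bins)
        # Bin centers map categorical samples back to continuous actions in [-1, 1].
        self.register_buffer("bin_centers", torch.linspace(-1.0, 1.0, bins))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        logits = self.logits(features).view(-1, self.action_dims, self.bins)
        dist = torch.distributions.Categorical(logits=logits)
        idx = dist.sample()                  # one bin index per action dimension
        return self.bin_centers[idx]         # continuous action, shape (batch, action_dims)

# Example: sample a pushing action from random features.
head = CategoricalPushingHead()
action = head(torch.randn(1, 256))
```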
arXiv Detail & Related papers (2023-08-04T16:55:00Z)
- Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z)
- Safe reinforcement learning of dynamic high-dimensional robotic tasks: navigation, manipulation, interaction [31.553783147007177]
In reinforcement learning, safety is even more fundamental for exploring an environment without causing any damage.
This paper introduces a new formulation of safe exploration for reinforcement learning of various robotic tasks.
Our approach applies to a wide class of robotic platforms and enforces safety even under complex collision constraints learned from data.
arXiv Detail & Related papers (2022-09-27T11:23:49Z)
- Distilling Motion Planner Augmented Policies into Visual Control Policies for Robot Manipulation [26.47544415550067]
We propose to distill a state-based motion planner augmented policy to a visual control policy.
We evaluate our method on three manipulation tasks in obstructed environments.
Our framework is highly sample-efficient and outperforms the state-of-the-art algorithms.
arXiv Detail & Related papers (2021-11-11T18:52:00Z)
- Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
We need intelligent robots for mobile construction, the process of navigating in an environment and modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to achieve the design exactly without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.