Deep Reinforcement learning for real autonomous mobile robot navigation in indoor environments
- URL: http://arxiv.org/abs/2005.13857v1
- Date: Thu, 28 May 2020 09:15:14 GMT
- Title: Deep Reinforcement learning for real autonomous mobile robot navigation in indoor environments
- Authors: Hartmut Surmann, Christian Jestel, Robin Marchel, Franziska Musberg, Houssem Elhadj and Mahbube Ardani
- Abstract summary: We present our proof of concept for autonomous self-learning robot navigation in an unknown environment for a real robot without a map or planner.
The input for the robot is only the fused data from a 2D laser scanner and an RGB-D camera as well as the orientation to the goal.
The output actions of an Asynchronous Advantage Actor-Critic network (GA3C) are the linear and angular velocities for the robot.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep Reinforcement Learning has been successfully applied in various computer
games [8]. However, it is still rarely used in real-world applications,
especially for the navigation and continuous control of real mobile robots
[13]. Previous approaches lack safety and robustness and/or need a structured
environment. In this paper we present our proof of concept for autonomous
self-learning robot navigation in an unknown environment for a real robot
without a map or planner. The input for the robot is only the fused data from a
2D laser scanner and an RGB-D camera as well as the orientation to the goal. The
map of the environment is unknown. The output actions of an Asynchronous
Advantage Actor-Critic network (GA3C) are the linear and angular velocities for
the robot. The navigator/controller network is pretrained in a high-speed,
parallel, and self-implemented simulation environment to speed up the learning
process and then deployed to the real robot. To avoid overfitting, we train
relatively small networks, and we add random Gaussian noise to the input laser
data. The sensor data fusion with the RGB-D camera allows the robot to navigate
in real environments with real 3D obstacle avoidance and without the need to
fit the environment to the sensory capabilities of the robot. To further
increase the robustness, we train on environments of varying difficulties and
run 32 training instances simultaneously. Video: supplementary File / YouTube,
Code: GitHub
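As an illustration of the setup described in the abstract, below is a minimal, hypothetical sketch of a GA3C-style actor-critic navigator in PyTorch (the paper does not specify a framework): a deliberately small fully connected network that maps the fused laser/RGB-D range readings plus the goal orientation to a linear and an angular velocity and a state value, with random Gaussian noise added to the laser input during training. All layer sizes, the noise scale, and the velocity limits here are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of a GA3C-style actor-critic navigator in PyTorch.
# Layer sizes, the noise scale, and velocity limits are illustrative
# assumptions, not values taken from the paper.
import torch
import torch.nn as nn

class NavigatorActorCritic(nn.Module):
    def __init__(self, num_ranges: int = 360, goal_dim: int = 2):
        super().__init__()
        # A deliberately small trunk, in the spirit of the paper's
        # strategy of training small networks to avoid overfitting.
        self.trunk = nn.Sequential(
            nn.Linear(num_ranges + goal_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        # Actor head: linear and angular velocity, squashed to [-1, 1].
        self.actor = nn.Sequential(nn.Linear(128, 2), nn.Tanh())
        # Critic head: scalar state-value estimate for A3C-style updates.
        self.critic = nn.Linear(128, 1)

    def forward(self, ranges: torch.Tensor, goal: torch.Tensor):
        if self.training:
            # Random Gaussian noise on the laser ranges, as described in
            # the abstract; the 0.02 m sigma is an assumed value.
            ranges = ranges + 0.02 * torch.randn_like(ranges)
        h = self.trunk(torch.cat([ranges, goal], dim=-1))
        return self.actor(h), self.critic(h)

policy = NavigatorActorCritic()
ranges = torch.rand(1, 360) * 5.0     # fused laser/RGB-D ranges in meters
goal = torch.tensor([[0.5, -0.2]])    # e.g. sin/cos of the goal bearing
action, value = policy(ranges, goal)
v_lin = 0.5 * action[0, 0].item()     # scale [-1, 1] to the robot's limits
v_ang = 1.0 * action[0, 1].item()
```

In the actual GA3C scheme, many simulation workers (32 in this paper) generate experience in parallel for a single shared network; that queueing machinery is omitted from the sketch.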
Related papers
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation on robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- External Camera-based Mobile Robot Pose Estimation for Collaborative Perception with Smart Edge Sensors [22.5939915003931]
We present an approach for estimating a mobile robot's pose w.r.t. the allocentric coordinates of a network of static cameras using multi-view RGB images.
The images are processed online, locally on smart edge sensors by deep neural networks to detect the robot.
With the robot's pose precisely estimated, its observations can be fused into the allocentric scene model.
arXiv Detail & Related papers (2023-03-07T11:03:33Z)
- DayDreamer: World Models for Physical Robot Learning [142.11031132529524]
Deep reinforcement learning is a common approach to robot learning, but it requires a large amount of trial and error.
Many advances in robot learning rely on simulators.
In this paper, we apply Dreamer to 4 robots to learn online and directly in the real world, without simulators.
arXiv Detail & Related papers (2022-06-28T17:44:48Z)
- Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
arXiv Detail & Related papers (2022-06-16T10:45:17Z)
- Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph [3.8461692052415137]
We study the problem of safe and intention-aware robot navigation in dense and interactive crowds.
We propose a novel recurrent graph neural network with attention mechanisms to capture heterogeneous interactions among agents (a simplified attention sketch appears after this list).
We demonstrate that our method enables the robot to achieve good navigation performance and non-invasiveness in challenging crowd navigation scenarios.
arXiv Detail & Related papers (2022-03-03T16:26:36Z)
- Intelligent Motion Planning for a Cost-effective Object Follower Mobile Robotic System with Obstacle Avoidance [0.2062593640149623]
We propose a robotic system that uses robot vision and deep learning to obtain the required linear and angular velocities.
The proposed method accurately detects the position of the uniquely coloured object under any lighting conditions.
arXiv Detail & Related papers (2021-09-06T19:19:47Z)
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask R-CNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
- Autonomous Navigation in Dynamic Environments: Deep Learning-Based Approach [0.0]
This thesis studies different deep learning-based approaches, highlighting the advantages and disadvantages of each scheme.
One of the deep learning methods, based on a convolutional neural network (CNN), is realized in software.
We propose a low-cost approach for indoor applications such as restaurants and museums, based on using a monocular camera instead of a laser scanner.
arXiv Detail & Related papers (2021-02-03T23:20:20Z)
- Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
- Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning [2.7071541526963805]
We propose a novel approach that enables a direct deployment of the trained policy on real robots.
The policy is fine-tuned on images collected from real-world environments.
In 30 navigation experiments, the robot reached a 0.3-meter neighborhood of the goal in more than 86.7% of cases.
arXiv Detail & Related papers (2020-10-21T11:22:30Z)
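For the attention-based interaction-graph entry above, the following heavily simplified sketch (PyTorch; not the authors' architecture) shows the core idea: embed all agent states, let the robot node attend over every agent, and carry the result through a recurrent cell that drives the velocity policy. The state dimension, hidden size, and single attention layer are illustrative assumptions.

```python
# Heavily simplified sketch of attention over agent states feeding a
# recurrent policy (illustrative only; not the paper's architecture).
import torch
import torch.nn as nn

class InteractionAttention(nn.Module):
    def __init__(self, state_dim: int = 7, hidden: int = 64):
        super().__init__()
        self.embed = nn.Linear(state_dim, hidden)
        # The robot node attends over all agent nodes (robot + humans).
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        # A GRU cell carries the robot's belief across time steps.
        self.gru = nn.GRUCell(hidden, hidden)
        self.policy = nn.Linear(hidden, 2)  # linear and angular velocity

    def forward(self, agents: torch.Tensor, h: torch.Tensor):
        # agents: (batch, num_agents, state_dim); node 0 is the robot.
        x = self.embed(agents)
        ctx, _ = self.attn(x[:, :1], x, x)  # robot queries all agents
        h = self.gru(ctx.squeeze(1), h)
        return self.policy(h), h

model = InteractionAttention()
agents = torch.randn(1, 6, 7)        # the robot plus five humans
h = torch.zeros(1, 64)               # recurrent state, one per rollout
action, h = model(agents, h)         # call once per control step
```

One forward call processes one time step; re-using the returned state `h` at the next step is what lets the model track agent intentions over time.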