Autonomous Navigation in Dynamic Environments: Deep Learning-Based Approach
- URL: http://arxiv.org/abs/2102.08758v1
- Date: Wed, 3 Feb 2021 23:20:20 GMT
- Title: Autonomous Navigation in Dynamic Environments: Deep Learning-Based Approach
- Authors: Omar Mohamed, Zeyad Mohsen, Mohamed Wageeh, Mohamed Hegazy
- Abstract summary: This thesis studies different deep learning-based approaches, highlighting the advantages and disadvantages of each scheme.
One of the deep learning methods, based on a convolutional neural network (CNN), is realized in software.
We propose a low-cost approach for indoor applications such as restaurants and museums, based on using a monocular camera instead of a laser scanner.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mobile robotics is a research area that has witnessed remarkable advances
over the last decades. Robot navigation is an essential task for mobile robots, and
many methods have been proposed to allow robots to navigate within different
environments. This thesis studies different deep learning-based approaches,
highlighting the advantages and disadvantages of each scheme. These approaches
are promising in that some of them can navigate the robot in unknown and dynamic
environments. In this thesis, one of the deep learning methods, based on a
convolutional neural network (CNN), is realized in software. Several preparatory
studies were needed to complete this thesis, including introductions to Linux,
the Robot Operating System (ROS), C++, Python, and the Gazebo simulator. Within
this work, we modified the drone network (DroNet) approach for use in an indoor
environment with a ground robot in different cases. The DroNet approach suffers
from the absence of goal-oriented motion. Therefore, this thesis mainly focuses
on tackling this problem via mapping, using simultaneous localization and mapping
(SLAM), and path planning, using Dijkstra's algorithm. Combining the
ground-robot-based DroNet with mapping and path planning then yields goal-oriented
motion that follows the shortest path while avoiding dynamic obstacles. Finally,
we propose a low-cost approach for indoor applications, such as restaurants and
museums, based on using a monocular camera instead of a laser scanner.
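At its core, the mapping-plus-planning pipeline described above amounts to running Dijkstra's algorithm on the occupancy grid that SLAM produces. The following is a minimal sketch of grid-based Dijkstra under the assumption of a 4-connected occupancy grid with uniform edge costs; the function name and the toy map are illustrative, not taken from the thesis:

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid (0 = free, 1 = occupied)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale priority-queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1  # uniform cost between adjacent free cells
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    if goal not in dist:
        return None  # goal unreachable
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy 3x4 map: column 2 is blocked in the top two rows.
grid = [[0, 0, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
path = dijkstra_grid(grid, (0, 0), (0, 3))
```

In a real pipeline, the binary grid would come from thresholding the SLAM occupancy map, and the resulting waypoint list would be handed to the reactive (DroNet-style) controller for dynamic-obstacle avoidance along the way.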
Related papers
- Multi-Robot Informative Path Planning for Efficient Target Mapping using Deep Reinforcement Learning [11.134855513221359]
We propose a novel deep reinforcement learning approach for multi-robot informative path planning.
We train our reinforcement learning policy via the centralized training and decentralized execution paradigm.
Our approach outperforms other state-of-the-art multi-robot target mapping approaches by 33.75% in terms of the number of discovered targets-of-interest.
arXiv Detail & Related papers (2024-09-25T14:27:37Z)
- Reinforcement learning based local path planning for mobile robot [0.0]
In the offline scenario, an environment map is created once, and fixed path planning is made on this map to reach the target.
In the online scenario, the robot moves dynamically to a given target without using a map by using the perceived data coming from the sensors.
Deep neural network powered Q-Learning methods are used as an emerging solution to the aforementioned problems in mobile robot navigation.
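The tabular Q-learning rule that such deep Q-networks generalize (by replacing the table with a network over sensor inputs) can be sketched as follows; the 1-D corridor environment and all parameter values here are illustrative, not taken from the paper:

```python
import random

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One-step tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    old = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

# Toy 1-D corridor: states 0..4, goal at 4; actions move right (+1) or left (-1).
actions = (1, -1)
Q = {}
random.seed(0)
for episode in range(200):
    s = 0
    while s != 4:
        # Epsilon-greedy action selection.
        if random.random() < 0.2:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda a2: Q.get((s, a2), 0.0))
        s_next = min(max(s + a, 0), 4)
        r = 1.0 if s_next == 4 else 0.0  # reward only on reaching the goal
        q_update(Q, s, a, r, s_next, actions)
        s = s_next
```

After training, the greedy policy moves right toward the goal; a deep Q-network applies the same update, but with the Q-table replaced by network weights trained on the temporal-difference error.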
arXiv Detail & Related papers (2023-10-24T18:26:25Z)
- Minimizing Turns in Watchman Robot Navigation: Strategies and Solutions [1.6749379740049928]
This paper introduces an efficient linear-time algorithm for solving the Orthogonal Watchman Route Problem (OWRP).
The findings of this study contribute to the progress of robotic systems by enabling the design of more streamlined patrol robots.
arXiv Detail & Related papers (2023-08-19T18:53:53Z)
- Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph [3.8461692052415137]
We study the problem of safe and intention-aware robot navigation in dense and interactive crowds.
We propose a novel recurrent graph neural network with attention mechanisms to capture heterogeneous interactions among agents.
We demonstrate that our method enables the robot to achieve good navigation performance and non-invasiveness in challenging crowd navigation scenarios.
arXiv Detail & Related papers (2022-03-03T16:26:36Z)
- REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy Transfer [57.045140028275036]
We consider the problem of transferring a policy across two different robots with significantly different parameters such as kinematics and morphology.
Existing approaches that train a new policy by matching the action or state transition distribution, including imitation learning methods, fail because the optimal action and/or state distributions are mismatched across robots.
We propose a novel method, REvolveR, that uses continuous evolutionary models for robotic policy transfer, implemented in a physics simulator.
arXiv Detail & Related papers (2022-02-10T18:50:25Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
- Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms [60.59764170868101]
Reinforcement learning methods can achieve significant performance but require a large amount of training data collected on the same robotic platform.
We formulate it as a few-shot meta-learning problem where the goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
arXiv Detail & Related papers (2021-03-05T14:16:20Z)
- Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
- LaND: Learning to Navigate from Disengagements [158.6392333480079]
We present a reinforcement learning approach for learning to navigate from disengagements, or LaND.
LaND learns a neural network model that predicts which actions lead to disengagements given the current sensory observation, and then at test time plans and executes actions that avoid disengagements.
Our results demonstrate LaND can successfully learn to navigate in diverse, real world sidewalk environments, outperforming both imitation learning and reinforcement learning approaches.
arXiv Detail & Related papers (2020-10-09T17:21:42Z)
- Projection Mapping Implementation: Enabling Direct Externalization of Perception Results and Action Intent to Improve Robot Explainability [62.03014078810652]
Existing research on non-verbal cues, e.g., eye gaze or arm movement, may not accurately present a robot's internal states.
Projecting the states directly onto a robot's operating environment has the advantages of being direct, accurate, and more salient.
arXiv Detail & Related papers (2020-10-05T18:16:20Z)
- Deep Reinforcement learning for real autonomous mobile robot navigation in indoor environments [0.0]
We present our proof of concept for autonomous self-learning robot navigation in an unknown environment for a real robot without a map or planner.
The input for the robot is only the fused data from a 2D laser scanner and a RGB-D camera as well as the orientation to the goal.
The output actions of an Asynchronous Advantage Actor-Critic network (GA3C) are the linear and angular velocities for the robot.
arXiv Detail & Related papers (2020-05-28T09:15:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.