Double Deep Reinforcement Learning Techniques for Low Dimensional
Sensing Mapless Navigation of Terrestrial Mobile Robots
- URL: http://arxiv.org/abs/2301.11173v1
- Date: Thu, 26 Jan 2023 15:23:59 GMT
- Title: Double Deep Reinforcement Learning Techniques for Low Dimensional
Sensing Mapless Navigation of Terrestrial Mobile Robots
- Authors: Linda Dotto de Moraes and Victor Augusto Kich and Alisson Henrique
Kolling and Jair Augusto Bottega and Raul Steinmetz and Emerson Cassiano da
Silva and Ricardo Bedin Grando and Anselmo Rafael Cukla and Daniel Fernando
Tello Gamarra
- Abstract summary: We present two Deep Reinforcement Learning (Deep-RL) approaches to address the problem of mapless navigation for a terrestrial mobile robot.
Our methodology focuses on comparing a Deep-RL technique based on the Deep Q-Network (DQN) algorithm with a second one based on the Double Deep Q-Network (DDQN) algorithm.
By using a low-dimensional sensing structure for learning, we show that it is possible to train an agent to perform navigation-related tasks and obstacle avoidance without using complex sensing information.
- Score: 0.9175368456179858
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we present two Deep Reinforcement Learning (Deep-RL) approaches
to address the problem of mapless navigation for a terrestrial mobile robot.
Our methodology focuses on comparing a Deep-RL technique based on the Deep
Q-Network (DQN) algorithm with a second one based on the Double Deep Q-Network
(DDQN) algorithm. We use 24 laser measurement samples and the relative position
and angle of the agent to the target as the inputs to our agents, which
output actions as velocities for our robot. By using a low-dimensional
sensing structure for learning, we show that it is possible to train an agent to
perform navigation-related tasks and obstacle avoidance without using complex
sensing information. The proposed methodology was successfully applied in three
distinct simulated environments. Overall, the results show that Double Deep
structures further improve the navigation of mobile robots when compared to
approaches with simple Q structures.
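As a rough illustration of the difference between the two approaches compared above, the sketch below contrasts the DQN and DDQN target computations for the low-dimensional state described in the abstract (24 laser samples plus the distance and angle to the target, i.e. a 26-dimensional vector). The network size, action discretization, and discount factor are assumptions for the example, not the authors' exact setup.

```python
import torch
import torch.nn as nn

STATE_DIM = 26   # 24 laser samples + distance and angle to the target (per the abstract)
N_ACTIONS = 5    # assumed discretization of the velocity commands
GAMMA = 0.99     # assumed discount factor

class QNetwork(nn.Module):
    """Small MLP mapping the low-dimensional sensing vector to action values."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_ACTIONS),
        )

    def forward(self, state):
        return self.layers(state)

online_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(online_net.state_dict())

@torch.no_grad()
def dqn_target(reward, next_state, done):
    # Plain DQN: the target network both selects and evaluates the next action,
    # which is known to overestimate Q-values.
    next_q = target_net(next_state).max(dim=1).values
    return reward + GAMMA * next_q * (1.0 - done)

@torch.no_grad()
def ddqn_target(reward, next_state, done):
    # Double DQN: the online network selects the action and the target network
    # evaluates it, reducing the overestimation bias.
    best_action = online_net(next_state).argmax(dim=1, keepdim=True)
    next_q = target_net(next_state).gather(1, best_action).squeeze(1)
    return reward + GAMMA * next_q * (1.0 - done)
```

Both targets feed the same mean-squared-error loss on the online network; only the action-selection step differs.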
Related papers
- Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications [11.812602599752294]
We consider robots with unknown dynamics operating in environments with unknown structure.
Our goal is to synthesize a control policy that maximizes the probability of satisfying an automaton-encoded task.
We propose a novel DRL algorithm that learns control policies at a notably faster rate than similar methods.
arXiv Detail & Related papers (2023-11-28T18:59:58Z)
- Enhanced Low-Dimensional Sensing Mapless Navigation of Terrestrial Mobile Robots Using Double Deep Reinforcement Learning Techniques [1.191504645891765]
We present two distinct approaches aimed at enhancing mapless navigation for a ground-based mobile robot.
The research methodology primarily involves a comparative analysis between a Deep-RL strategy grounded in the foundational Deep Q-Network (DQN) algorithm, and an alternative approach based on the Double Deep Q-Network (DDQN) algorithm.
The proposed methodology is evaluated in three different real environments, revealing that Double Deep structures significantly enhance the navigation capabilities of mobile robots compared to simple Q structures.
arXiv Detail & Related papers (2023-10-20T20:47:07Z)
- Robot path planning using deep reinforcement learning [0.0]
Reinforcement learning methods offer an alternative approach for map-free navigation tasks.
Deep reinforcement learning agents are implemented for both the obstacle avoidance and the goal-oriented navigation task.
We analyse how modifications to the reward function change the behaviour and performance of the agents.
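To make the kind of reward-function modifications examined there concrete, here is a minimal sketch of a typical mapless-navigation reward combining a sparse goal bonus, a collision penalty, and a dense distance-progress term; the constants and function name are illustrative assumptions rather than the paper's actual reward.

```python
# Illustrative mapless-navigation reward; all thresholds and weights are assumed values.
GOAL_REWARD = 100.0          # sparse bonus when the target is reached
COLLISION_PENALTY = -100.0   # sparse penalty when an obstacle is hit
GOAL_THRESHOLD = 0.3         # distance to target (m) that counts as "reached"
COLLISION_THRESHOLD = 0.2    # minimum laser reading (m) that counts as a collision
PROGRESS_WEIGHT = 10.0       # weight on the dense distance-progress term

def compute_reward(prev_dist: float, curr_dist: float, min_laser: float) -> float:
    """Per-step reward from the distance to goal before/after the step and the
    smallest laser reading after the step."""
    if curr_dist < GOAL_THRESHOLD:
        return GOAL_REWARD
    if min_laser < COLLISION_THRESHOLD:
        return COLLISION_PENALTY
    # Dense shaping: reward progress towards the goal, penalise moving away.
    return PROGRESS_WEIGHT * (prev_dist - curr_dist)
```

Varying the shaping weight or the sparse terminal terms is the sort of modification whose effect on agent behaviour such an analysis would track.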
arXiv Detail & Related papers (2023-02-17T20:08:59Z)
- Deterministic and Stochastic Analysis of Deep Reinforcement Learning for Low Dimensional Sensing-based Navigation of Mobile Robots [0.41562334038629606]
This paper presents a comparative analysis of two Deep-RL techniques: Deep Deterministic Policy Gradients (DDPG) and Soft Actor-Critic (SAC).
We aim to show how the neural network architecture influences learning, presenting quantitative results based on the navigation time and distance of aerial mobile robots for each approach.
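As a rough sketch of the deterministic versus stochastic distinction compared in that work, the example below contrasts a DDPG-style deterministic actor with a SAC-style Gaussian actor over a low-dimensional sensing input; the layer sizes, action dimension, and state dimension are assumptions for illustration only.

```python
import torch
import torch.nn as nn

STATE_DIM = 26  # assumed low-dimensional sensing input

class DeterministicActor(nn.Module):
    """DDPG-style actor: maps a state to a single bounded action."""
    def __init__(self, action_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # bounded velocity commands
        )

    def forward(self, state):
        return self.net(state)

class GaussianActor(nn.Module):
    """SAC-style actor: outputs a Gaussian and samples a bounded action."""
    def __init__(self, action_dim: int = 2):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(STATE_DIM, 256), nn.ReLU())
        self.mean = nn.Linear(256, action_dim)
        self.log_std = nn.Linear(256, action_dim)

    def forward(self, state):
        h = self.body(state)
        std = self.log_std(h).clamp(-20, 2).exp()
        dist = torch.distributions.Normal(self.mean(h), std)
        return torch.tanh(dist.rsample())  # reparameterised, squashed sample
```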
arXiv Detail & Related papers (2022-09-13T22:28:26Z)
- DC-MRTA: Decentralized Multi-Robot Task Allocation and Navigation in Complex Environments [55.204450019073036]
We present a novel reinforcement learning based task allocation and decentralized navigation algorithm for mobile robots in warehouse environments.
We consider the problem of joint decentralized task allocation and navigation and present a two level approach to solve it.
We observe up to 14% improvement in task completion time and up to 40% improvement in computing collision-free trajectories for the robots.
arXiv Detail & Related papers (2022-09-07T00:35:27Z)
- XAI-N: Sensor-based Robot Navigation using Expert Policies and Decision Trees [55.9643422180256]
We present a novel sensor-based learning navigation algorithm to compute a collision-free trajectory for a robot in dense and dynamic environments.
Our approach uses a deep reinforcement learning-based expert policy that is trained using a sim2real paradigm.
We highlight the benefits of our algorithm in simulated environments and in navigating a Clearpath Jackal robot among moving pedestrians.
arXiv Detail & Related papers (2021-04-22T01:33:10Z)
- Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
We need intelligent robots for mobile construction, the process of navigating in an environment and modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to exactly achieve the design without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z)
- Improving Target-driven Visual Navigation with Attention on 3D Spatial Relationships [52.72020203771489]
We investigate target-driven visual navigation using deep reinforcement learning (DRL) in 3D indoor scenes.
Our proposed method combines visual features and 3D spatial representations to learn navigation policy.
Our experiments, performed in the AI2-THOR environment, show that our model outperforms the baselines in both success rate (SR) and success weighted by path length (SPL) metrics.
arXiv Detail & Related papers (2020-04-29T08:46:38Z)
- Reinforcement co-Learning of Deep and Spiking Neural Networks for Energy-Efficient Mapless Navigation with Neuromorphic Hardware [0.0]
We propose a neuromorphic approach that combines the energy efficiency of spiking neural networks with the optimality of deep reinforcement learning (DRL).
Our framework consists of a spiking actor network (SAN) and a deep critic network, where the two networks are trained jointly using gradient descent.
To evaluate our approach, we deployed the trained SAN on Intel's Loihi neuromorphic processor.
arXiv Detail & Related papers (2020-03-02T19:39:16Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming it with physical noise patterns at selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
- Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data for accurate and reliable pedestrian navigation is a key component of emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
arXiv Detail & Related papers (2020-01-13T04:41:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.