Decentralized Motion Planning for Multi-Robot Navigation using Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2011.05605v2
- Date: Fri, 20 Nov 2020 18:19:32 GMT
- Title: Decentralized Motion Planning for Multi-Robot Navigation using Deep
Reinforcement Learning
- Authors: Sivanathan Kandhasamy, Vinayagam Babu Kuppusamy, Tanmay Vilas Samak,
Chinmay Vilas Samak
- Abstract summary: This work presents a decentralized motion planning framework for addressing the task of multi-robot navigation using deep reinforcement learning.
The notion of decentralized motion planning with common and shared policy learning was adopted, which allowed robust training and testing of this approach.
- Score: 0.41998444721319217
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents a decentralized motion planning framework for addressing
the task of multi-robot navigation using deep reinforcement learning. A custom
simulator was developed in order to experimentally investigate the navigation
problem of 4 cooperative non-holonomic robots sharing limited state information
with each other in 3 different settings. The notion of decentralized motion
planning with common and shared policy learning was adopted, which allowed
robust training and testing of this approach in a stochastic environment since
the agents were mutually independent and exhibited asynchronous motion
behavior. The task was made more challenging by providing the agents with a
sparse observation space and requiring them to generate continuous action
commands so as to navigate efficiently, yet safely, to their respective goal locations,
while avoiding collisions with other dynamic peers and static obstacles at all
times. The experimental results are reported in terms of quantitative measures
and qualitative remarks for both training and deployment phases.
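The shared-policy scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name, network sizes, and observation/action dimensions are all hypothetical stand-ins. The point is that every agent queries the *same* set of policy weights with only its own limited observation, so execution stays decentralized while learning benefits from a common policy.

```python
import numpy as np

# Hypothetical sketch of decentralized execution with a common, shared
# policy. Names and dimensions are illustrative, not from the paper.

class SharedPolicy:
    """A single policy network whose weights are shared by all agents."""

    def __init__(self, obs_dim, act_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, act_dim))

    def act(self, obs):
        # Continuous action command (e.g. linear and angular velocity),
        # squashed to [-1, 1] with tanh.
        h = np.tanh(obs @ self.w1)
        return np.tanh(h @ self.w2)

# One policy instance, queried independently by 4 robots.
policy = SharedPolicy(obs_dim=6, act_dim=2)

# Each robot acts on its own sparse observation; there is no central
# controller coordinating the joint action.
observations = np.random.default_rng(1).normal(size=(4, 6))
actions = np.array([policy.act(o) for o in observations])
print(actions.shape)  # (4, 2)
```

Because each agent calls the policy independently, the agents remain mutually independent at execution time and can act asynchronously, which matches the decentralized setting described above.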
Related papers
- Learning Manipulation Tasks in Dynamic and Shared 3D Spaces [2.4892784882130132]
Automated pick-and-place operations can be learned efficiently by introducing collaborative autonomous systems.
In this paper, we propose a deep reinforcement learning strategy to learn the place task of multi-categorical items.
arXiv Detail & Related papers (2024-04-26T19:40:19Z)
- Multi-Agent Deep Reinforcement Learning for Cooperative and Competitive
Autonomous Vehicles using AutoDRIVE Ecosystem [1.1893676124374688]
We introduce AutoDRIVE Ecosystem as an enabler to develop physically accurate and graphically realistic digital twins of Nigel and F1TENTH.
We first investigate an intersection problem using a set of cooperative vehicles (Nigel) that share limited state information with each other in single as well as multi-agent learning settings.
We then investigate an adversarial head-to-head autonomous racing problem using a different set of vehicles (F1TENTH) in a multi-agent learning setting using an individual policy approach.
arXiv Detail & Related papers (2023-09-18T02:43:59Z)
- Latent Exploration for Reinforcement Learning [87.42776741119653]
In Reinforcement Learning, agents learn policies by exploring and interacting with the environment.
We propose LATent TIme-Correlated Exploration (Lattice), a method to inject temporally-correlated noise into the latent state of the policy network.
arXiv Detail & Related papers (2023-05-31T17:40:43Z)
- Safe Multi-agent Learning via Trapping Regions [89.24858306636816]
We apply the concept of trapping regions, known from qualitative theory of dynamical systems, to create safety sets in the joint strategy space for decentralized learning.
We propose a binary partitioning algorithm for verification that candidate sets form trapping regions in systems with known learning dynamics, and a sampling algorithm for scenarios where learning dynamics are not known.
arXiv Detail & Related papers (2023-02-27T14:47:52Z)
- CLAS: Coordinating Multi-Robot Manipulation with Central Latent Action
Spaces [9.578169216444813]
This paper proposes an approach to coordinating multi-robot manipulation through learned latent action spaces that are shared across different agents.
We validate our method in simulated multi-robot manipulation tasks and demonstrate improvement over previous baselines in terms of sample efficiency and learning performance.
arXiv Detail & Related papers (2022-11-28T23:20:47Z)
- Leveraging Sequentiality in Reinforcement Learning from a Single
Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve with unprecedented sample efficiency some challenging simulated tasks such as humanoid locomotion and stand-up.
arXiv Detail & Related papers (2022-11-09T10:28:40Z)
- Inferring Versatile Behavior from Demonstrations by Matching Geometric
Descriptors [72.62423312645953]
Humans intuitively solve tasks in versatile ways, varying their behavior both at the level of trajectory-based planning and in individual steps.
Current Imitation Learning algorithms often only consider unimodal expert demonstrations and act in a state-action-based setting.
Instead, we combine a mixture of movement primitives with a distribution matching objective to learn versatile behaviors that match the expert's behavior and versatility.
arXiv Detail & Related papers (2022-10-17T16:42:59Z)
- Multi-Task Conditional Imitation Learning for Autonomous Navigation at
Crowded Intersections [4.961474432432092]
We focus on autonomous navigation at crowded intersections that require interaction with pedestrians.
A multi-task conditional imitation learning framework is proposed to adapt both lateral and longitudinal control tasks.
A new benchmark called IntersectNav is developed and human demonstrations are provided.
arXiv Detail & Related papers (2022-02-21T11:13:59Z)
- Learning to Shift Attention for Motion Generation [55.61994201686024]
One challenge of motion generation using robot learning from demonstration techniques is that human demonstrations follow a distribution with multiple modes for one task query.
Previous approaches fail to capture all modes or tend to average modes of the demonstrations and thus generate invalid trajectories.
We propose a motion generation model with extrapolation ability to overcome this problem.
arXiv Detail & Related papers (2021-02-24T09:07:52Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for
Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.