Proficiency Constrained Multi-Agent Reinforcement Learning for
Environment-Adaptive Multi UAV-UGV Teaming
- URL: http://arxiv.org/abs/2002.03910v3
- Date: Tue, 29 Jun 2021 14:46:56 GMT
- Title: Proficiency Constrained Multi-Agent Reinforcement Learning for
Environment-Adaptive Multi UAV-UGV Teaming
- Authors: Qifei Yu, Zhexin Shen, Yijiang Pang and Rui Liu
- Abstract summary: Mixed aerial and ground robot teams are widely used for disaster rescue, social security, precision agriculture, and military missions.
This paper develops a novel teaming method, proficiency-aware multi-agent deep reinforcement learning (Mix-RL), to guide ground and aerial cooperation.
Mix-RL exploits robot capabilities while remaining aware of how those capabilities adapt to task requirements and environment conditions.
- Score: 2.745883395089022
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A mixed aerial and ground robot team, which includes both unmanned
ground vehicles (UGVs) and unmanned aerial vehicles (UAVs), is widely used for
disaster rescue, social security, precision agriculture, and military missions.
However, team capability and the corresponding configuration vary, since
robots differ in motion speed, sensing range, reachable area, and resilience
to dynamic environments. Because the robots within a team are heterogeneous
and differ in resilience, it is challenging to perform a task with an optimal
balance between reasonable task allocation and maximum utilization of robot
capabilities. To address this challenge for effective mixed ground and aerial
teaming, this paper develops a novel teaming method, proficiency-aware
multi-agent deep reinforcement learning (Mix-RL), to guide ground and aerial
cooperation by considering the best alignment between robot capabilities, task
requirements, and environment conditions. Mix-RL fully exploits robot
capabilities while remaining aware of how those capabilities adapt to task
requirements and environment conditions. Mix-RL's effectiveness in guiding
mixed teaming was validated on the task "social security for criminal vehicle
tracking".
Related papers
- Autonomous Decision Making for UAV Cooperative Pursuit-Evasion Game with Reinforcement Learning [50.33447711072726]
This paper proposes a deep reinforcement learning-based model for decision-making in multi-role UAV cooperative pursuit-evasion game.
The proposed method enables autonomous decision-making of the UAVs in pursuit-evasion game scenarios.
arXiv Detail & Related papers (2024-11-05T10:45:30Z)
- Multi-Task Interactive Robot Fleet Learning with Visual World Models [25.001148860168477]
Sirius-Fleet is a multi-task interactive robot fleet learning framework.
It monitors robot performance during deployment and involves humans to correct the robot's actions when necessary.
As the robot autonomy improves, anomaly predictors automatically adapt their prediction criteria.
arXiv Detail & Related papers (2024-10-30T04:49:39Z)
- Robotic warehousing operations: a learn-then-optimize approach to large-scale neighborhood search [84.39855372157616]
This paper supports robotic parts-to-picker operations in warehousing by optimizing order-workstation assignments, item-pod assignments and the schedule of order fulfillment at workstations.
We solve it via large-scale neighborhood search, with a novel learn-then-optimize approach to subproblem generation.
In collaboration with Amazon Robotics, we show that our model and algorithm generate much stronger solutions for practical problems than state-of-the-art approaches.
arXiv Detail & Related papers (2024-08-29T20:22:22Z)
- Robot Navigation with Entity-Based Collision Avoidance using Deep Reinforcement Learning [0.0]
We present a novel methodology that enhances the robot's interaction with different types of agents and obstacles.
This approach uses information about the entity types, improving collision avoidance and ensuring safer navigation.
We introduce a new reward function that penalizes the robot for collisions with different entities such as adults, bicyclists, children, and static obstacles.
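The entity-aware penalty described in this summary can be sketched as a per-step reward. The entity classes come from the summary itself; the penalty magnitudes, the step penalty, and the function shape are illustrative assumptions, not the paper's values.

```python
# Illustrative entity-dependent collision penalties (assumed values, not
# taken from the paper): collisions with vulnerable humans are penalized
# more heavily than collisions with static obstacles.
ENTITY_PENALTY = {
    "child": -10.0,
    "adult": -6.0,
    "bicyclist": -6.0,
    "static_obstacle": -2.0,
}

def collision_reward(collided_entity=None, base_step_reward=-0.01):
    """Per-step reward: a small time penalty, plus an entity-specific
    penalty whenever a collision occurred on this step."""
    if collided_entity is None:
        return base_step_reward
    return base_step_reward + ENTITY_PENALTY[collided_entity]
```

Grading collisions by entity type in this way lets the learned policy trade a longer path for a wider berth around the most vulnerable agents.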
arXiv Detail & Related papers (2024-08-26T11:16:03Z)
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating diverse environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- We Choose to Go to Space: Agent-driven Human and Multi-Robot Collaboration in Microgravity [28.64243893838686]
Future space exploration requires humans and robots to work together.
We present SpaceAgents-1, a system for learning human and multi-robot collaboration strategies under microgravity conditions.
arXiv Detail & Related papers (2024-02-22T05:32:27Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- MTAC: Hierarchical Reinforcement Learning-based Multi-gait Terrain-adaptive Quadruped Controller [12.300578189051963]
Control of quadruped robots in dynamic and rough terrain environments is a challenging problem due to the many degrees of freedom of these robots.
Current locomotion controllers for quadrupeds are limited in their ability to produce multiple adaptive gaits and to solve tasks in a time- and resource-efficient manner, and they require tedious training and manual tuning procedures.
We propose MTAC: a multi-gait terrain-adaptive controller that utilizes a hierarchical reinforcement learning (HRL) approach while being time- and memory-efficient.
arXiv Detail & Related papers (2023-11-01T18:17:47Z)
- AdverSAR: Adversarial Search and Rescue via Multi-Agent Reinforcement Learning [4.843554492319537]
We propose an algorithm that allows robots to efficiently coordinate their strategies in the presence of adversarial inter-agent communications.
It is assumed that the robots have no prior knowledge of the target locations, and they can interact with only a subset of neighboring robots at any time.
The effectiveness of our approach is demonstrated on a collection of prototype grid-world environments.
arXiv Detail & Related papers (2022-12-20T08:13:29Z)
- Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.