We Choose to Go to Space: Agent-driven Human and Multi-Robot
Collaboration in Microgravity
- URL: http://arxiv.org/abs/2402.14299v1
- Date: Thu, 22 Feb 2024 05:32:27 GMT
- Title: We Choose to Go to Space: Agent-driven Human and Multi-Robot
Collaboration in Microgravity
- Authors: Miao Xin, Zhongrui You, Zihan Zhang, Taoran Jiang, Tingjia Xu, Haotian
Liang, Guojing Ge, Yuchen Ji, Shentong Mo, Jian Cheng
- Abstract summary: Future space exploration requires humans and robots to work together.
We present SpaceAgents-1, a system for learning human and multi-robot collaboration strategies under microgravity conditions.
- Score: 28.64243893838686
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present SpaceAgents-1, a system for learning human and multi-robot
collaboration (HMRC) strategies under microgravity conditions. Future space
exploration requires humans to work together with robots. However, acquiring
proficient robot skills and adept collaboration under microgravity conditions
poses significant challenges within ground laboratories. To address this issue,
we develop a microgravity simulation environment and present three typical
configurations of intra-cabin robots. We propose a hierarchical heterogeneous
multi-agent collaboration architecture: guided by foundation models, a
Decision-Making Agent serves as a task planner for human-robot collaboration,
while individual Skill-Expert Agents manage the embodied control of robots.
This mechanism empowers the SpaceAgents-1 system to execute a range of
intricate long-horizon HMRC tasks.
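The hierarchical architecture described above — a foundation-model-guided Decision-Making Agent dispatching subtasks to per-robot Skill-Expert Agents — can be sketched roughly as follows. This is a minimal illustrative skeleton, not the SpaceAgents-1 implementation; all class names, the fixed task decomposition, and the robot/skill names are assumptions standing in for the LLM planner and embodied policies.

```python
from dataclasses import dataclass, field

@dataclass
class SkillExpertAgent:
    """Low-level controller for one intra-cabin robot (names illustrative)."""
    robot: str
    skills: set

    def execute(self, skill: str) -> str:
        # In the real system this would drive an embodied control policy;
        # here we only report what would run.
        if skill not in self.skills:
            return f"{self.robot}: cannot perform '{skill}'"
        return f"{self.robot}: executed '{skill}'"

@dataclass
class DecisionMakingAgent:
    """Planner that decomposes a task and dispatches subtasks to experts."""
    experts: dict = field(default_factory=dict)

    def plan(self, task: str) -> list:
        # Stand-in for a foundation-model planner: a fixed decomposition.
        plans = {
            "stow cargo": [("arm", "grasp"), ("arm", "place"), ("dog", "inspect")],
        }
        return plans.get(task, [])

    def run(self, task: str) -> list:
        # Route each (robot, skill) subtask to the matching Skill-Expert Agent.
        return [self.experts[r].execute(s) for r, s in self.plan(task)]

planner = DecisionMakingAgent({
    "arm": SkillExpertAgent("arm", {"grasp", "place"}),
    "dog": SkillExpertAgent("dog", {"inspect"}),
})
print(planner.run("stow cargo"))
```

The key design point mirrored here is the separation of concerns: the planner never touches actuation, and each expert only knows its own skill set, which is what allows long-horizon tasks to be composed from heterogeneous robots.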
Related papers
- $\textbf{EMOS}$: $\textbf{E}$mbodiment-aware Heterogeneous $\textbf{M}$ulti-robot $\textbf{O}$perating $\textbf{S}$ystem with LLM Agents [33.77674812074215]
We introduce a novel multi-agent framework designed to enable effective collaboration among heterogeneous robots.
We propose a self-prompted approach, where agents comprehend robot URDF files and call robot kinematics tools to generate descriptions of their physics capabilities.
The Habitat-MAS benchmark is designed to assess how a multi-agent framework handles tasks that require embodiment-aware reasoning.
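The self-prompting step above — agents reading a robot's URDF to describe its physical capabilities — can be sketched as a small parser. This is a hedged illustration, not the EMOS code: the URDF fragment is a toy example, and a real capability description would also cover links, limits, and attached sensors.

```python
import xml.etree.ElementTree as ET

# Minimal URDF fragment standing in for a real robot description file.
URDF = """
<robot name="demo_arm">
  <joint name="shoulder" type="revolute"/>
  <joint name="elbow" type="revolute"/>
  <joint name="gripper" type="prismatic"/>
</robot>
"""

def describe_capabilities(urdf_text: str) -> str:
    """Summarize actuated joints so an LLM agent can reason about embodiment."""
    root = ET.fromstring(urdf_text)
    # Fixed joints carry no degrees of freedom, so only movable joints count.
    movable = [j for j in root.findall("joint") if j.get("type") != "fixed"]
    kinds = sorted({j.get("type") for j in movable})
    return (f"{root.get('name')}: {len(movable)} actuated joints "
            f"({', '.join(kinds)})")

print(describe_capabilities(URDF))
# -> demo_arm: 3 actuated joints (prismatic, revolute)
```

A text summary like this can then be injected into an agent's prompt, which is the essence of the embodiment-aware reasoning the Habitat-MAS benchmark evaluates.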
arXiv Detail & Related papers (2024-10-30T03:20:01Z)
- HARMONIC: Cognitive and Control Collaboration in Human-Robotic Teams [0.0]

We demonstrate a cognitive strategy for robots in human-robot teams that incorporates metacognition, natural language communication, and explainability.
The system is embodied using the HARMONIC architecture that flexibly integrates cognitive and control capabilities.
arXiv Detail & Related papers (2024-09-26T16:48:21Z)
- COHERENT: Collaboration of Heterogeneous Multi-Robot System with Large Language Models [49.24666980374751]
COHERENT is a novel LLM-based task planning framework for collaboration of heterogeneous multi-robot systems.
A Proposal-Execution-Feedback-Adjustment mechanism is designed to decompose and assign actions for individual robots.
The experimental results show that our work surpasses the previous methods by a large margin in terms of success rate and execution efficiency.
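The Proposal-Execution-Feedback-Adjustment mechanism mentioned above can be sketched as a simple closed loop. This is an assumed, simplified rendering of the idea — the callback names and the toy payload-mass criterion are hypothetical, not COHERENT's actual interfaces.

```python
def pefa_step(propose, execute, adjust, task, max_rounds=3):
    """Hypothetical Proposal-Execution-Feedback-Adjustment loop for one task."""
    proposal = propose(task)
    for _ in range(max_rounds):
        ok, feedback = execute(proposal)  # attempt the action assignment
        if ok:
            return proposal               # accepted proposal
        proposal = adjust(proposal, feedback)  # revise using feedback
    return None                           # give up after max_rounds

# Toy callbacks: a proposal succeeds once the payload mass is within limits.
propose = lambda task: {"robot": "quadruped", "mass": 12}
execute = lambda p: (p["mass"] <= 10, "overweight")
adjust = lambda p, fb: {**p, "mass": p["mass"] - 2}

print(pefa_step(propose, execute, adjust, "carry box"))
```

The loop structure is what matters: execution feedback flows back into the next proposal rather than being discarded, which is how such frameworks decompose and reassign actions among heterogeneous robots.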
arXiv Detail & Related papers (2024-09-23T15:53:41Z)
- Redundancy-aware Action Spaces for Robot Learning [17.961314026588987]
Joint space and task space control are the two dominant action modes for controlling robot arms within the robot learning literature.
This work analyses the criteria for designing action spaces for robot manipulation and introduces ER (End-effector Redundancy), a novel action space formulation.
We present two implementations of ER, ERAngle (ERA) and ERJoint (ERJ), and we show that ERJ in particular demonstrates superior performance across multiple settings.
arXiv Detail & Related papers (2024-06-06T15:08:41Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Co-Evolution of Multi-Robot Controllers and Task Cues for Off-World Open Pit Mining [0.6091702876917281]
This paper presents a novel method for developing scalable controllers for use in multi-robot excavation and site-preparation scenarios.
The controller starts with a blank slate and does not require human-authored operations scripts nor detailed modeling of the kinematics and dynamics of the excavator.
In this paper, we explore the use of templates and task cues to improve group performance further and minimize antagonism.
arXiv Detail & Related papers (2020-09-19T03:13:28Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
- Proficiency Constrained Multi-Agent Reinforcement Learning for Environment-Adaptive Multi UAV-UGV Teaming [2.745883395089022]
Mixed aerial and ground robot teams are widely used for disaster rescue, social security, precision agriculture, and military missions.
This paper developed a novel teaming method, proficiency aware multi-agent deep reinforcement learning (Mix-RL), to guide ground and aerial cooperation.
Mix-RL exploits robot capabilities while being aware of the adaptions of robot capabilities to task requirements and environment conditions.
arXiv Detail & Related papers (2020-02-10T16:19:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.