We Choose to Go to Space: Agent-driven Human and Multi-Robot
Collaboration in Microgravity
- URL: http://arxiv.org/abs/2402.14299v1
- Date: Thu, 22 Feb 2024 05:32:27 GMT
- Title: We Choose to Go to Space: Agent-driven Human and Multi-Robot
Collaboration in Microgravity
- Authors: Miao Xin, Zhongrui You, Zihan Zhang, Taoran Jiang, Tingjia Xu, Haotian
Liang, Guojing Ge, Yuchen Ji, Shentong Mo, Jian Cheng
- Abstract summary: Future space exploration requires humans and robots to work together.
We present SpaceAgents-1, a system for learning human and multi-robot collaboration strategies under microgravity conditions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present SpaceAgents-1, a system for learning human and multi-robot
collaboration (HMRC) strategies under microgravity conditions. Future space
exploration requires humans to work together with robots. However, acquiring
proficient robot skills and adept collaboration under microgravity conditions
poses significant challenges within ground laboratories. To address this issue,
we develop a microgravity simulation environment and present three typical
configurations of intra-cabin robots. We propose a hierarchical heterogeneous
multi-agent collaboration architecture: guided by foundation models, a
Decision-Making Agent serves as a task planner for human-robot collaboration,
while individual Skill-Expert Agents manage the embodied control of robots.
This mechanism empowers the SpaceAgents-1 system to execute a range of
intricate long-horizon HMRC tasks.
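The hierarchical architecture described in the abstract can be sketched, in greatly simplified form, as a planner agent that decomposes a goal and dispatches subtasks to per-robot skill agents. All class names, the task format, and the planning rule below are hypothetical illustrations; in the paper, the Decision-Making Agent is guided by foundation models and the Skill-Expert Agents run learned embodied-control policies.

```python
# Hypothetical sketch of the hierarchical heterogeneous multi-agent loop:
# a Decision-Making Agent plans subtasks, Skill-Expert Agents execute them.

class SkillExpertAgent:
    """Handles embodied control for one robot configuration."""

    def __init__(self, name):
        self.name = name

    def execute(self, subtask):
        # A real agent would run a learned control policy here;
        # we just report the action for illustration.
        return f"{self.name}: completed '{subtask}'"


class DecisionMakingAgent:
    """Task planner: decomposes a goal and routes subtasks to experts."""

    def __init__(self, experts):
        self.experts = experts  # mapping: robot name -> SkillExpertAgent

    def plan(self, goal):
        # Stand-in for a foundation-model planner: here the "plan" is
        # simply the goal, already given as (robot, subtask) steps.
        return goal

    def run(self, goal):
        log = []
        for robot, subtask in self.plan(goal):
            log.append(self.experts[robot].execute(subtask))
        return log


experts = {name: SkillExpertAgent(name) for name in ("arm", "rover")}
planner = DecisionMakingAgent(experts)
results = planner.run([("arm", "grasp panel"), ("rover", "carry panel")])
```

The key design point this sketch mirrors is the separation of concerns: the planner reasons about long-horizon task structure, while each expert owns the low-level control for its robot type.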
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating diverse environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Redundancy-aware Action Spaces for Robot Learning [17.961314026588987]
Joint space and task space control are the two dominant action modes for controlling robot arms within the robot learning literature.
This work analyses the criteria for designing action spaces for robot manipulation and introduces ER (End-effector Redundancy), a novel action space formulation.
We present two implementations of ER, ERAngle (ERA) and ERJoint (ERJ), and we show that ERJ in particular demonstrates superior performance across multiple settings.
arXiv Detail & Related papers (2024-06-06T15:08:41Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z)
- Generalizable Human-Robot Collaborative Assembly Using Imitation Learning and Force Control [17.270360447188196]
We present a system for human-robot collaborative assembly using learning from demonstration and pose estimation.
The proposed system is demonstrated using a physical 6 DoF manipulator in a collaborative human-robot assembly scenario.
arXiv Detail & Related papers (2022-12-02T20:35:55Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Co-Evolution of Multi-Robot Controllers and Task Cues for Off-World Open Pit Mining [0.6091702876917281]
This paper presents a novel method for developing scalable controllers for use in multi-robot excavation and site-preparation scenarios.
The controller starts with a blank slate and does not require human-authored operations scripts nor detailed modeling of the kinematics and dynamics of the excavator.
In this paper, we explore the use of templates and task cues to improve group performance further and minimize antagonism.
arXiv Detail & Related papers (2020-09-19T03:13:28Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
- Proficiency Constrained Multi-Agent Reinforcement Learning for Environment-Adaptive Multi UAV-UGV Teaming [2.745883395089022]
Mixed aerial and ground robot teams are widely used for disaster rescue, social security, precision agriculture, and military missions.
This paper developed a novel teaming method, proficiency aware multi-agent deep reinforcement learning (Mix-RL), to guide ground and aerial cooperation.
Mix-RL exploits robot capabilities while being aware of the adaptation of robot capabilities to task requirements and environment conditions.
arXiv Detail & Related papers (2020-02-10T16:19:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.