Cloud-Based Hierarchical Imitation Learning for Scalable Transfer of
Construction Skills from Human Workers to Assisting Robots
- URL: http://arxiv.org/abs/2309.11619v1
- Date: Wed, 20 Sep 2023 20:04:42 GMT
- Authors: Hongrui Yu, Vineet R. Kamat, Carol C. Menassa
- Abstract summary: This paper proposes an immersive, cloud robotics-based virtual demonstration framework.
It digitalizes the demonstration process, eliminating the need for repetitive physical manipulation of heavy construction objects.
By delegating the physical strains of construction work to human-trained robots, this framework promotes the inclusion of workers with diverse physical capabilities.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Assigning repetitive and physically demanding construction tasks to robots
can alleviate human workers' exposure to occupational injuries. Transferring
necessary dexterous and adaptive artisanal construction craft skills from
workers to robots is crucial for the successful delegation of construction
tasks and achieving high-quality robot-constructed work. Predefined motion
planning scripts tend to generate rigid and collision-prone robotic behaviors
in unstructured construction site environments. In contrast, Imitation Learning
(IL) offers a more robust and flexible skill transfer scheme. However, the
majority of IL algorithms rely on human workers to repeatedly demonstrate task
performance at full scale, which can be counterproductive and infeasible in the
case of construction work. To address this concern, this paper proposes an
immersive, cloud robotics-based virtual demonstration framework that serves two
primary purposes. First, it digitalizes the demonstration process, eliminating
the need for repetitive physical manipulation of heavy construction objects.
Second, it employs a federated collection of reusable demonstrations that are
transferable for similar tasks in the future and can thus reduce the
requirement for repetitive illustration of tasks by human agents. Additionally,
to enhance the trustworthiness, explainability, and ethical soundness of the
robot training, this framework utilizes a Hierarchical Imitation Learning (HIL)
model to decompose human manipulation skills into sequential and reactive
sub-skills. These two layers of skills are represented by deep generative
models, enabling adaptive control of robot actions. By delegating the physical
strains of construction work to human-trained robots, this framework promotes
the inclusion of workers with diverse physical capabilities and educational
backgrounds within the construction industry.
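As a rough illustration of the two-layer skill decomposition described in the abstract, a high-level policy can sequence sub-skills while each sub-skill reacts to observations. The classes, skill names, and termination tests below are hypothetical stand-ins; the paper represents both layers with deep generative models learned from demonstrations, not hand-written rules.

```python
# Minimal sketch of a two-layer Hierarchical Imitation Learning (HIL)
# controller: a high-level layer sequences sub-skills, and each sub-skill
# is a low-level reactive policy mapping observations to robot actions.
from dataclasses import dataclass
from typing import Callable, Dict, List

Observation = Dict[str, float]
Action = Dict[str, float]

@dataclass
class SubSkill:
    """A reactive low-level policy with a termination predicate."""
    name: str
    policy: Callable[[Observation], Action]
    done: Callable[[Observation], bool]

class HILController:
    """Sequences sub-skills; each sub-skill reacts to observations."""
    def __init__(self, plan: List[SubSkill]):
        self.plan = plan
        self.idx = 0

    def step(self, obs: Observation) -> Action:
        skill = self.plan[self.idx]
        if skill.done(obs) and self.idx < len(self.plan) - 1:
            self.idx += 1          # advance to the next sub-skill
            skill = self.plan[self.idx]
        return skill.policy(obs)

# Toy example: move the end-effector toward a target, then close the gripper.
reach = SubSkill(
    "reach",
    policy=lambda o: {"dx": 0.5 * (o["target_x"] - o["x"]), "grip": 0.0},
    done=lambda o: abs(o["target_x"] - o["x"]) < 0.01,
)
grasp = SubSkill(
    "grasp",
    policy=lambda o: {"dx": 0.0, "grip": 1.0},
    done=lambda o: o.get("grip_closed", 0.0) > 0.5,
)

controller = HILController([reach, grasp])
a = controller.step({"x": 0.0, "target_x": 1.0})  # reaching phase
```

The design choice mirrored here is that the sequential layer only decides *which* sub-skill is active, while each sub-skill remains closed-loop on observations, which is what makes the behavior adaptive rather than a rigid scripted trajectory.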
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks in zero shot after pre-training, follow language instructions from people, and its ability to acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- Grounding Language Models in Autonomous Loco-manipulation Tasks [3.8363685417355557]
We propose a novel framework that learns, selects, and plans behaviors based on tasks in different scenarios.
We leverage the planning and reasoning features of the large language model (LLM), constructing a hierarchical task graph.
Experiments in simulation and real-world using the CENTAURO robot show that the language model based planner can efficiently adapt to new loco-manipulation tasks.
arXiv Detail & Related papers (2024-09-02T15:27:48Z)
- Towards Human-Centered Construction Robotics: A Reinforcement Learning-Driven Companion Robot for Contextually Assisting Carpentry Workers [11.843554918145983]
This paper introduces a human-centered approach with a "work companion rover" designed to assist construction workers within their existing practices.
We conduct an in-depth study on deploying a robotic system in carpentry formwork, showcasing a prototype that emphasizes mobility, safety, and comfortable worker-robot collaboration.
arXiv Detail & Related papers (2024-03-27T23:55:02Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- RoboCat: A Self-Improving Generalist Agent for Robotic Manipulation [33.10577695383743]
We propose a multi-embodiment, multi-task generalist agent for robotic manipulation called RoboCat.
This data spans a large repertoire of motor control skills from simulated and real robotic arms with varying sets of observations and actions.
With RoboCat, we demonstrate the ability to generalise to new tasks and robots, both zero-shot as well as through adaptation using only 100-1000 examples.
arXiv Detail & Related papers (2023-06-20T17:35:20Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a practical sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Co-Evolution of Multi-Robot Controllers and Task Cues for Off-World Open Pit Mining [0.6091702876917281]
This paper presents a novel method for developing scalable controllers for use in multi-robot excavation and site-preparation scenarios.
The controller starts with a blank slate and does not require human-authored operations scripts nor detailed modeling of the kinematics and dynamics of the excavator.
In this paper, we explore the use of templates and task cues to improve group performance further and minimize antagonism.
arXiv Detail & Related papers (2020-09-19T03:13:28Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
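One related paper above, Bottom-Up Skill Discovery from Unsegmented Demonstrations, learns a library of reusable skills by carving up long demonstrations. As a hypothetical, much-simplified illustration of the segmentation idea only (not that paper's learned method), a recorded trajectory can be split into candidate skill clips wherever the end-effector speed pauses:

```python
# Illustrative only: segment an unsegmented demonstration into candidate
# sub-skill clips by splitting wherever speed drops below a threshold
# (a pause). The function and threshold here are assumptions for the sketch.
from typing import List, Tuple

def segment_by_pauses(speeds: List[float], eps: float = 0.05) -> List[Tuple[int, int]]:
    """Return (start, end) index ranges of contiguous motion segments."""
    segments, start = [], None
    for i, s in enumerate(speeds):
        moving = s > eps
        if moving and start is None:
            start = i                      # a motion segment begins
        elif not moving and start is not None:
            segments.append((start, i))    # segment ends at the pause
            start = None
    if start is not None:
        segments.append((start, len(speeds)))
    return segments

# Two bursts of motion separated by a pause -> two candidate skill clips.
speeds = [0.0, 0.4, 0.5, 0.3, 0.0, 0.0, 0.6, 0.7, 0.0]
print(segment_by_pauses(speeds))  # [(1, 4), (6, 8)]
```

Each resulting index range could then be treated as a separate demonstration clip for training one sub-skill, which is the sense in which segmentation enables a reusable skill library.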
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.