Knowledge-Driven Robot Program Synthesis from Human VR Demonstrations
- URL: http://arxiv.org/abs/2306.02739v2
- Date: Mon, 3 Jul 2023 08:57:00 GMT
- Title: Knowledge-Driven Robot Program Synthesis from Human VR Demonstrations
- Authors: Benjamin Alt, Franklin Kenghagho Kenfack, Andrei Haidu, Darko Katic,
Rainer Jäkel, Michael Beetz
- Abstract summary: We present a system for automatically generating executable robot control programs from human task demonstrations in virtual reality (VR)
We leverage common-sense knowledge and game engine-based physics to semantically interpret human VR demonstrations.
We demonstrate our approach in the context of force-sensitive fetch-and-place for a robotic shopping assistant.
- Score: 16.321053835017942
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Aging societies, labor shortages and increasing wage costs call for
assistance robots capable of autonomously performing a wide array of real-world
tasks. Such open-ended robotic manipulation requires not only powerful
knowledge representations and reasoning (KR&R) algorithms, but also methods for
humans to instruct robots what tasks to perform and how to perform them. In
this paper, we present a system for automatically generating executable robot
control programs from human task demonstrations in virtual reality (VR). We
leverage common-sense knowledge and game engine-based physics to semantically
interpret human VR demonstrations, together with an expressive and general task
representation, automatic path planning, and code generation, all embedded into a
state-of-the-art cognitive architecture. We demonstrate our approach in the
context of force-sensitive fetch-and-place for a robotic shopping assistant.
The source code is available at
https://github.com/ease-crc/vr-program-synthesis.
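The abstract above outlines a pipeline from semantically interpreted VR demonstrations to an executable robot program. The snippet below is a minimal, hypothetical sketch of that pipeline only, not the authors' implementation (which lives in the linked repository); all event types, action schemas, and helper names (DemoEvent, PickPlaceAction, interpret, generate_program, move_to, grasp, place) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

# A VR demonstration is assumed to arrive as a time-stamped list of semantic
# events (e.g. produced by game-engine physics and contact detection).
@dataclass
class DemoEvent:
    t: float        # timestamp in seconds
    kind: str       # "grasp", "release", ...
    obj: str        # object the hand interacted with
    location: str   # symbolic location, e.g. "shelf_3" or "basket"

@dataclass
class PickPlaceAction:
    obj: str
    source: str
    target: str

def interpret(events: List[DemoEvent]) -> List[PickPlaceAction]:
    """Pair each grasp with the next release of the same object to obtain
    a symbolic pick-and-place task representation."""
    actions, open_grasps = [], {}
    for e in sorted(events, key=lambda e: e.t):
        if e.kind == "grasp":
            open_grasps[e.obj] = e.location
        elif e.kind == "release" and e.obj in open_grasps:
            actions.append(PickPlaceAction(e.obj, open_grasps.pop(e.obj), e.location))
    return actions

def generate_program(actions: List[PickPlaceAction]) -> str:
    """Emit a robot program skeleton; a real system would insert planned
    paths and force thresholds for force-sensitive placing here."""
    lines = []
    for a in actions:
        lines += [
            f"move_to('{a.source}')",
            f"grasp('{a.obj}')",
            f"move_to('{a.target}')",
            f"place('{a.obj}', force_limit_n=10.0)  # force-sensitive place",
        ]
    return "\n".join(lines)

if __name__ == "__main__":
    demo = [
        DemoEvent(1.2, "grasp", "muesli_box", "shelf_3"),
        DemoEvent(4.8, "release", "muesli_box", "basket"),
    ]
    print(generate_program(interpret(demo)))
```

In a full system, the symbolic actions would additionally be grounded with collision-free planned paths and force parameters before execution.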
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model on its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- Know your limits! Optimize the robot's behavior through self-awareness [11.021217430606042]
Recent human-robot imitation algorithms focus on following a reference human motion with high precision.
We introduce a deep-learning model that anticipates the robot's performance when imitating a given reference.
Our Self-AWare model (SAW) ranks potential robot behaviors based on various criteria, such as fall likelihood, adherence to the reference motion, and smoothness.
arXiv Detail & Related papers (2024-09-16T14:14:58Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices by learning to both do and undo the task, while simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective; a minimal sketch of this reward formulation appears after this list.
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Robot Sound Interpretation: Learning Visual-Audio Representations for Voice-Controlled Robots [0.0]
We learn a representation that associates images and sound commands with minimal supervision.
Using this representation, we generate an intrinsic reward function to learn robotic tasks with reinforcement learning.
We show empirically that our method outperforms previous work across various sound types and robotic tasks.
arXiv Detail & Related papers (2021-09-07T02:26:54Z)
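The "Learning Reward Functions for Robotic Manipulation by Observing Humans" entry above defines rewards as distances to a goal in a learned embedding space. The sketch below illustrates only that reward formulation under the assumption of some image encoder; the placeholder embed function and all names here are hypothetical and do not reproduce the paper's time-contrastive training.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for a learned encoder (e.g. one trained with a time-contrastive
    objective so that temporally close frames map to nearby embeddings).
    Here: a hypothetical placeholder that flattens and L2-normalizes."""
    v = image.reshape(-1).astype(np.float64)
    return v / (np.linalg.norm(v) + 1e-8)

def embedding_distance_reward(current: np.ndarray, goal: np.ndarray) -> float:
    """Reward = negative distance to the goal image in embedding space."""
    return -float(np.linalg.norm(embed(current) - embed(goal)))

if __name__ == "__main__":
    goal_img = np.random.rand(64, 64, 3)
    near_goal = goal_img + 0.01 * np.random.rand(64, 64, 3)
    far_from_goal = np.random.rand(64, 64, 3)
    # Frames closer to the goal should receive higher (less negative) reward.
    print(embedding_distance_reward(near_goal, goal_img))
    print(embedding_distance_reward(far_from_goal, goal_img))
```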