Automated acquisition of structured, semantic models of manipulation
activities from human VR demonstration
- URL: http://arxiv.org/abs/2011.13689v1
- Date: Fri, 27 Nov 2020 11:58:32 GMT
- Title: Automated acquisition of structured, semantic models of manipulation
activities from human VR demonstration
- Authors: Andrei Haidu and Michael Beetz
- Abstract summary: We present a system capable of collecting and annotating human-performed, robot-understandable everyday activities from virtual environments.
The human movements are mapped into the simulated world using off-the-shelf virtual reality devices with full-body and eye-tracking capabilities.
- Score: 21.285606436442656
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we present a system capable of collecting and annotating
human-performed, robot-understandable everyday activities from virtual
environments. The human movements are mapped into the simulated world using
off-the-shelf virtual reality devices with full-body and eye-tracking
capabilities. All the interactions in the virtual world are physically
simulated, so movements and their effects correspond closely to the real
world. During the activity execution, a subsymbolic data logger records the
environment and the human gaze on a per-frame basis, enabling offline scene
reproduction and replays. Coupled with the physics engine, online monitors
(symbolic data loggers) parse (using various grammars) and record events,
actions, and their effects in the simulated world.
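The abstract's split between subsymbolic (per-frame state for replay) and symbolic (event-level) logging can be illustrated with a short sketch. The following is a minimal, hypothetical Python rendering of the idea, not the authors' implementation: FrameRecord/FrameLogger stand in for the per-frame logger, and ContactEventMonitor stands in for an online symbolic monitor that turns low-level physics signals into timed event records; all names and data shapes are assumptions.

```python
import json
import time
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class FrameRecord:
    """Subsymbolic sample: raw world and gaze state for one frame."""
    timestamp: float
    object_poses: Dict[str, Tuple[tuple, tuple]]  # id -> (position, quaternion)
    gaze_target: Optional[str]                    # object currently fixated, if any

class FrameLogger:
    """Records every simulation frame so a scene can be replayed offline."""
    def __init__(self) -> None:
        self.frames: List[FrameRecord] = []

    def on_frame(self, poses: dict, gaze: Optional[str]) -> None:
        self.frames.append(FrameRecord(time.time(), poses, gaze))

class ContactEventMonitor:
    """Symbolic logger: pairs contact-begin/contact-end signals from the
    physics engine into timed Contact events, loosely analogous to the
    paper's online monitors that parse events, actions, and effects."""
    def __init__(self) -> None:
        self._open: Dict[Tuple[str, str], float] = {}
        self.events: List[dict] = []

    def on_contact_begin(self, a: str, b: str, t: float) -> None:
        self._open[(a, b)] = t

    def on_contact_end(self, a: str, b: str, t: float) -> None:
        start = self._open.pop((a, b), None)
        if start is not None:
            self.events.append({"type": "Contact", "participants": [a, b],
                                "start": start, "end": t})

    def dump(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(self.events, f, indent=2)
```

In the actual system these roles are filled by the VR physics engine's callbacks and the grammar-based event parsers; the sketch only shows how per-frame state capture and event-level annotation can run side by side.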
Related papers
- Towards Immersive Human-X Interaction: A Real-Time Framework for Physically Plausible Motion Synthesis [51.95817740348585]
Human-X is a novel framework designed to enable immersive and physically plausible human interactions across diverse entities.
Our method jointly predicts actions and reactions in real time using an auto-regressive reaction diffusion planner.
Our framework is validated in real-world applications, including a virtual reality interface for human-robot interaction.
arXiv Detail & Related papers (2025-08-04T06:35:48Z) - SimPRIVE: a Simulation framework for Physical Robot Interaction with Virtual Environments [4.966661313606916]
This paper presents SimPRIVE, a simulation framework for physical robot interaction with virtual environments.
Using SimPRIVE, any physical mobile robot running on ROS 2 can be configured to move its digital twin in a virtual world built with Unreal Engine 5 (a minimal sketch of such a robot-to-twin bridge follows this list).
The framework has been validated by testing a reinforcement learning agent trained for obstacle avoidance on an AgileX Scout Mini rover.
arXiv Detail & Related papers (2025-04-30T09:22:55Z) - SynPlay: Importing Real-world Diversity for a Synthetic Human Dataset [19.32308498024933]
We introduce Synthetic Playground (SynPlay), a new synthetic human dataset that aims to bring out the diversity of human appearance in the real world.
We focus on two factors to achieve a level of diversity that has not yet been seen in previous works: realistic human motions and poses.
We show that using SynPlay in model training leads to enhanced accuracy over existing synthetic datasets for human detection and segmentation.
arXiv Detail & Related papers (2024-08-21T17:58:49Z) - Flow as the Cross-Domain Manipulation Interface [73.15952395641136]
Im2Flow2Act enables robots to acquire real-world manipulation skills without the need for real-world robot training data.
Im2Flow2Act comprises two components: a flow generation network and a flow-conditioned policy (a schematic sketch of this interface follows this list).
We demonstrate Im2Flow2Act's capabilities in a variety of real-world tasks, including the manipulation of rigid, articulated, and deformable objects.
arXiv Detail & Related papers (2024-07-21T16:15:02Z) - Expressive Whole-Body Control for Humanoid Robots [20.132927075816742]
We learn a whole-body control policy on a human-sized robot to mimic human motions as realistically as possible.
With training in simulation and Sim2Real transfer, our policy can control a humanoid robot to walk in different styles, shake hands with humans, and even dance with a human in the real world.
arXiv Detail & Related papers (2024-02-26T18:09:24Z) - Learning Interactive Real-World Simulators [96.5991333400566]
We explore the possibility of learning a universal simulator of real-world interaction through generative modeling.
We use the simulator to train both high-level vision-language policies and low-level reinforcement learning policies.
Video captioning models can benefit from training with simulated experience, opening up even wider applications.
arXiv Detail & Related papers (2023-10-09T19:42:22Z) - CIRCLE: Capture In Rich Contextual Environments [69.97976304918149]
We propose a novel motion acquisition system in which the actor perceives and operates in a highly contextual virtual world.
We present CIRCLE, a dataset containing 10 hours of full-body reaching motion from 5 subjects across nine scenes.
We use this dataset to train a model that generates human motion conditioned on scene information.
arXiv Detail & Related papers (2023-03-31T09:18:12Z) - HSPACE: Synthetic Parametric Humans Animated in Complex Environments [67.8628917474705]
We build a large-scale photo-realistic dataset, Human-SPACE, of animated humans placed in complex indoor and outdoor environments.
We combine a hundred diverse individuals of varying ages, genders, proportions, and ethnicities with hundreds of motions and scenes in order to generate an initial dataset of over 1 million frames.
Assets are generated automatically, at scale, and are compatible with existing real time rendering and game engines.
arXiv Detail & Related papers (2021-12-23T22:27:55Z) - Stochastic Scene-Aware Motion Prediction [41.6104600038666]
We present a novel data-driven motion synthesis method that models different styles of performing a given action with a target object.
Our method, called SAMP, for Scene-Aware Motion Prediction, generalizes to target objects of various geometries while enabling the character to navigate in cluttered scenes.
arXiv Detail & Related papers (2021-08-18T17:56:17Z) - iGibson, a Simulation Environment for Interactive Tasks in Large
Realistic Scenes [54.04456391489063]
iGibson is a novel simulation environment to develop robotic solutions for interactive tasks in large-scale realistic scenes.
Our environment contains fifteen fully interactive home-sized scenes populated with rigid and articulated objects.
We show that iGibson's features enable the generalization of navigation agents, and that the human-iGibson interface and integrated motion planners facilitate efficient imitation learning of simple human-demonstrated behaviors.
arXiv Detail & Related papers (2020-12-05T02:14:17Z) - ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation [75.0278287071591]
ThreeDWorld (TDW) is a platform for interactive multi-modal physical simulation.
TDW enables simulation of high-fidelity sensory data and physical interactions between mobile agents and objects in rich 3D environments.
We present initial experiments enabled by TDW in emerging research directions in computer vision, machine learning, and cognitive science.
arXiv Detail & Related papers (2020-07-09T17:33:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.