DASH: Modularized Human Manipulation Simulation with Vision and Language
for Embodied AI
- URL: http://arxiv.org/abs/2108.12536v1
- Date: Sat, 28 Aug 2021 00:22:30 GMT
- Title: DASH: Modularized Human Manipulation Simulation with Vision and Language
for Embodied AI
- Authors: Yifeng Jiang, Michelle Guo, Jiangshan Li, Ioannis Exarchos, Jiajun Wu,
C. Karen Liu
- Abstract summary: We present Dynamic and Autonomous Simulated Human (DASH), an embodied virtual human that, given natural language commands, performs grasp-and-stack tasks in a physically-simulated cluttered environment.
By factoring the DASH system into a vision module, a language module, and manipulation modules of two skill categories, we can mix and match analytical and machine learning techniques for different modules so that DASH is able to not only perform randomly arranged tasks with a high success rate, but also do so under anthropomorphic constraints.
- Score: 25.144827619452105
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Creating virtual humans with embodied, human-like perceptual and actuation
constraints has the promise to provide an integrated simulation platform for
many scientific and engineering applications. We present Dynamic and Autonomous
Simulated Human (DASH), an embodied virtual human that, given natural language
commands, performs grasp-and-stack tasks in a physically-simulated cluttered
environment solely using its own visual perception, proprioception, and touch,
without requiring human motion data. By factoring the DASH system into a vision
module, a language module, and manipulation modules of two skill categories, we
can mix and match analytical and machine learning techniques for different
modules so that DASH is able to not only perform randomly arranged tasks with a
high success rate, but also do so under anthropomorphic constraints and with
fluid and diverse motions. The modular design also favors analysis and
extensibility to more complex manipulation skills.
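The factoring described above (a vision module, a language module, and two manipulation skill categories composed behind a common interface) can be illustrated with a minimal sketch. All class and method names here are hypothetical stand-ins, not the paper's actual API; each stub marks where an analytical or learned component could be swapped in independently.

```python
# Hypothetical sketch of DASH's modular factoring. Names are illustrative,
# not the paper's actual API: perception, language, and two manipulation
# skill categories are separate components, so any one can be replaced by
# either an analytical or a machine-learned implementation.
from dataclasses import dataclass


@dataclass
class SceneEstimate:
    """Object poses inferred from the agent's own visual perception."""
    object_positions: dict  # object name -> (x, y, z)


class VisionModule:
    """Stub perception; in DASH this would estimate poses from images."""
    def perceive(self, raw_scene):
        return SceneEstimate(object_positions=dict(raw_scene))


class LanguageModule:
    """Stub parser: maps a command to a sequence of (skill, args) steps."""
    def parse(self, command):
        # e.g. "stack red on blue" -> grasp "red", then stack it on "blue"
        words = command.lower().split()
        obj, dest = words[1], words[3]
        return [("grasp", obj), ("stack", obj, dest)]


class GraspSkill:
    """First skill category: reach for and grasp a target object."""
    def execute(self, scene, obj):
        return f"grasped {obj} at {scene.object_positions[obj]}"


class StackSkill:
    """Second skill category: place the held object on a destination."""
    def execute(self, scene, obj, dest):
        return f"stacked {obj} on {dest}"


class DASHAgent:
    """Composes the modules; the mix-and-match point is the constructor."""
    def __init__(self, vision, language, skills):
        self.vision, self.language, self.skills = vision, language, skills

    def run(self, command, raw_scene):
        scene = self.vision.perceive(raw_scene)
        log = []
        for skill_name, *args in self.language.parse(command):
            log.append(self.skills[skill_name].execute(scene, *args))
        return log


agent = DASHAgent(
    vision=VisionModule(),
    language=LanguageModule(),
    skills={"grasp": GraspSkill(), "stack": StackSkill()},
)
log = agent.run("stack red on blue",
                {"red": (0.1, 0.2, 0.0), "blue": (0.4, 0.1, 0.0)})
print(log)
```

The design choice the abstract highlights is exactly this separation of interfaces: because each module only exchanges plain data (scene estimates, parsed plans), a learned vision model or an analytical motion planner can be substituted without touching the other modules.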
Related papers
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative
Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- Object Motion Guided Human Motion Synthesis [22.08240141115053]
We study the problem of full-body human motion synthesis for the manipulation of large-sized objects.
We propose Object MOtion guided human MOtion synthesis (OMOMO), a conditional diffusion framework.
We develop a novel system that captures full-body human manipulation motions by simply attaching a smartphone to the object being manipulated.
arXiv Detail & Related papers (2023-09-28T08:22:00Z)
- DexDeform: Dexterous Deformable Object Manipulation with Human Demonstrations and Differentiable Physics [97.75188532559952]
We propose a principled framework that abstracts dexterous manipulation skills from human demonstration.
We then train a skill model using demonstrations for planning over action abstractions in imagination.
To evaluate the effectiveness of our approach, we introduce a suite of six challenging dexterous deformable object manipulation tasks.
arXiv Detail & Related papers (2023-03-27T17:59:49Z)
- Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps [100.72245315180433]
We present a reconfigurable data glove design to capture different modes of human hand-object interactions.
The glove operates in three modes, each with distinct features suited to different downstream tasks.
We evaluate the system's three modes by (i) recording hand gestures and associated forces, (ii) improving manipulation fluency in VR, and (iii) producing realistic simulation effects of various tool uses.
arXiv Detail & Related papers (2023-01-14T05:35:50Z)
- Accelerating Interactive Human-like Manipulation Learning with GPU-based Simulation and High-quality Demonstrations [25.393382192511716]
We present an immersive virtual reality teleoperation interface designed for interactive human-like manipulation on contact rich tasks.
We demonstrate the complementary strengths of massively parallel RL and imitation learning, yielding robust and natural behaviors.
arXiv Detail & Related papers (2022-12-05T09:37:27Z)
- DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation in diverse kinds of hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z)
- Adaptive Synthetic Characters for Military Training [0.9802137009065037]
Behaviors of synthetic characters in current military simulations are limited because they are generally produced by rule-based, reactive computational models.
This paper introduces a framework that aims to create autonomous synthetic characters that can perform coherent sequences of believable behavior.
arXiv Detail & Related papers (2021-01-06T18:45:48Z)
- ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation [75.0278287071591]
ThreeDWorld (TDW) is a platform for interactive multi-modal physical simulation.
TDW enables simulation of high-fidelity sensory data and physical interactions between mobile agents and objects in rich 3D environments.
We present initial experiments enabled by TDW in emerging research directions in computer vision, machine learning, and cognitive science.
arXiv Detail & Related papers (2020-07-09T17:33:47Z)
- Visual Navigation Among Humans with Optimal Control as a Supervisor [72.5188978268463]
We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans.
Our approach is enabled by our novel data-generation tool, HumANav.
We demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion.
arXiv Detail & Related papers (2020-03-20T16:13:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.