Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps
- URL: http://arxiv.org/abs/2301.05821v2
- Date: Wed, 18 Jan 2023 08:51:09 GMT
- Title: Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps
- Authors: Hangxin Liu, Zeyu Zhang, Ziyuan Jiao, Zhenliang Zhang, Minchen Li,
Chenfanfu Jiang, Yixin Zhu, Song-Chun Zhu
- Abstract summary: We present a reconfigurable data glove design to capture different modes of human hand-object interactions.
The glove operates in three modes for various downstream tasks with distinct features.
We evaluate the system's three modes by (i) recording hand gestures and associated forces, (ii) improving manipulation fluency in VR, and (iii) producing realistic simulation effects of various tool uses.
- Score: 100.72245315180433
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a reconfigurable data glove design to capture different modes of
human hand-object interactions, critical for training embodied AI agents for
fine manipulation tasks. Sharing a unified backbone design that reconstructs
hand gestures in real time, our reconfigurable data glove operates in three
modes for various downstream tasks with distinct features. In the
tactile-sensing mode, the glove system aggregates manipulation force via
customized force sensors made from a soft and thin piezoresistive material;
this design minimizes interference during complex hand movements. The
Virtual Reality (VR) mode enables real-time interaction in a physically
plausible fashion; a caging-based approach is devised to determine stable
grasps by detecting collision events. Leveraging a state-of-the-art Finite
Element Method (FEM) simulator, the simulation mode collects fine-grained 4D
manipulation events: hand and object motions in 3D space, together with how the
object's physical properties (e.g., stress, energy) change over time during
manipulation. Of note, this glove system is the first to look into,
through high-fidelity simulation, the unobservable physical and causal factors
behind manipulation actions. In a series of experiments, we characterize our
data glove in terms of individual sensors and the overall system. Specifically,
we evaluate the system's three modes by (i) recording hand gestures and
associated forces, (ii) improving manipulation fluency in VR, and (iii)
producing realistic simulation effects of various tool uses, respectively.
Together, our reconfigurable data glove collects and reconstructs fine-grained
human grasp data in both the physical and virtual environments, opening up new
avenues to learning manipulation skills for embodied AI agents.
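
To ground the tactile-sensing mode described in the abstract, here is a minimal Python sketch of reading a single piezoresistive force sensor; the voltage-divider circuit, the 10-bit ADC, and the power-law calibration constants are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of reading one piezoresistive force sensor. The voltage-
# divider circuit, the 10-bit ADC, and the power-law calibration constants
# are illustrative assumptions, not values from the paper.

V_SUPPLY = 3.3      # divider supply voltage in volts (assumed)
R_FIXED = 10_000.0  # fixed divider resistor in ohms (assumed)
ADC_MAX = 1023      # full-scale count of an assumed 10-bit ADC

# Hypothetical per-sensor calibration: force (N) = A * conductance**B,
# fit from known loads during a calibration pass.
CAL_A, CAL_B = 1.8e4, 1.1

def adc_to_force(adc_value: int) -> float:
    """Convert a raw ADC count from the divider into an estimated force in N."""
    v_out = V_SUPPLY * adc_value / ADC_MAX
    if v_out <= 0.0 or v_out >= V_SUPPLY:
        return 0.0  # open circuit or saturated reading: no usable contact
    # Invert the divider equation V_out = V_SUPPLY * R_FIXED / (R_FIXED + R_sensor).
    r_sensor = R_FIXED * (V_SUPPLY - v_out) / v_out
    return CAL_A * (1.0 / r_sensor) ** CAL_B
```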
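
The VR mode's caging-based grasp test can be illustrated with a toy heuristic. The abstract does not spell out the algorithm, so the opposing-contact-normals check below is only a stand-in for deciding, from per-frame collision events, whether a grasp is stable.

```python
# A toy stand-in for caging-based grasp detection: a grasp is declared stable
# when the collision engine reports contacts whose normals roughly oppose each
# other, i.e. the hand pushes on the object from opposing sides. This heuristic
# is illustrative only; it is not the paper's actual caging algorithm.
import numpy as np

def is_stable_grasp(contact_normals: list[np.ndarray],
                    min_contacts: int = 2,
                    opposition_thresh: float = -0.7) -> bool:
    """contact_normals: unit normals (object surface -> hand) from collision events."""
    if len(contact_normals) < min_contacts:
        return False
    for i in range(len(contact_normals)):
        for j in range(i + 1, len(contact_normals)):
            # A strongly negative dot product means two nearly antipodal contacts.
            if float(contact_normals[i] @ contact_normals[j]) < opposition_thresh:
                return True
    return False

# Per-frame usage with two opposing contacts reported by the physics engine:
normals = [np.array([1.0, 0.0, 0.0]), np.array([-0.95, 0.1, 0.0])]
print(is_stable_grasp(normals))  # True
```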
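
The simulation mode's "4D manipulation event" is, in essence, a time series of 3D states plus FEM-derived quantities. The sketch below shows one plausible record layout; every field name is an assumption for illustration, not the paper's schema.

```python
# One plausible layout for such a record: per-frame hand and object states plus
# FEM-derived quantities. All field names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ManipulationFrame:
    t: float                  # simulation time in seconds
    hand_pose: list[float]    # e.g. flattened joint angles of the hand model
    object_pose: list[float]  # object position + orientation in the world frame
    max_stress: float         # peak stress over the object mesh (Pa)
    strain_energy: float      # total elastic strain energy (J)

@dataclass
class ManipulationEvent:
    """A '4D' manipulation event: a time series of 3D states and physics."""
    frames: list[ManipulationFrame] = field(default_factory=list)

    def log(self, frame: ManipulationFrame) -> None:
        self.frames.append(frame)
```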
Related papers
- Wearable Sensor-Based Few-Shot Continual Learning on Hand Gestures for Motor-Impaired Individuals via Latent Embedding Exploitation [6.782362178252351]
We introduce the Latent Embedding Exploitation (LEE) mechanism in our replay-based Few-Shot Continual Learning framework.
Our method produces a diversified latent feature space by leveraging a preserved latent embedding known as gesture prior knowledge.
Our method helps motor-impaired persons leverage wearable devices, allowing their unique styles of movement to be learned and applied.
arXiv Detail & Related papers (2024-05-14T21:20:27Z)
- MACS: Mass Conditioned 3D Hand and Object Motion Synthesis [68.40728343078257]
The physical properties of an object, such as mass, significantly affect how we manipulate it with our hands.
This work proposes MACS, the first MAss Conditioned 3D hand and object motion Synthesis approach.
Our approach is based on cascaded diffusion models and generates interactions that plausibly adjust based on the object mass and interaction type.
arXiv Detail & Related papers (2023-12-22T18:59:54Z)
- SynthoGestures: A Novel Framework for Synthetic Dynamic Hand Gesture Generation for Driving Scenarios [17.94374027261511]
We propose a framework to synthesize realistic hand gestures using Unreal Engine.
Our framework offers customization options and reduces the risk of overfitting.
By saving time and effort in the creation of the data set, our tool accelerates the development of gesture recognition systems for automotive applications.
arXiv Detail & Related papers (2023-09-08T16:32:56Z)
- GRIP: Generating Interaction Poses Using Spatial Cues and Latent Consistency [57.9920824261925]
Hands are dexterous and highly versatile manipulators that are central to how humans interact with objects and their environment.
Modeling realistic hand-object interactions is critical for applications in computer graphics, computer vision, and mixed reality.
GRIP is a learning-based method that takes as input the 3D motion of the body and the object, and synthesizes realistic motion for both hands before, during, and after object interaction.
arXiv Detail & Related papers (2023-08-22T17:59:51Z)
- DASH: Modularized Human Manipulation Simulation with Vision and Language for Embodied AI [25.144827619452105]
We present Dynamic and Autonomous Simulated Human (DASH), an embodied virtual human that, given natural language commands, performs grasp-and-stack tasks in a physically simulated cluttered environment.
By factoring the DASH system into a vision module, a language module, and manipulation modules of two skill categories, we can mix and match analytical and machine learning techniques for different modules so that DASH is able to not only perform randomly arranged tasks with a high success rate, but also do so under anthropomorphic constraints.
arXiv Detail & Related papers (2021-08-28T00:22:30Z)
- gradSim: Differentiable simulation for system identification and visuomotor control [66.37288629125996]
We present gradSim, a framework that overcomes the dependence on 3D supervision by leveraging differentiable multiphysics simulation and differentiable rendering.
Our unified graph enables learning in challenging visuomotor control tasks, without relying on state-based (3D) supervision.
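
The core idea behind gradSim, fitting physical parameters by backpropagating through simulation, can be shown in a toy example; the point-mass rollout below is purely conceptual and is not gradSim's API.

```python
# Toy system identification through a differentiable rollout: every simulation
# step is differentiable, so an unknown mass can be recovered by gradient
# descent on a trajectory loss. Conceptual illustration only.
import torch

def rollout(mass: torch.Tensor, steps: int = 50, dt: float = 0.01) -> torch.Tensor:
    """Final position of a point mass pushed by a constant unit force."""
    pos = torch.tensor(0.0)
    vel = torch.tensor(0.0)
    for _ in range(steps):
        vel = vel + (1.0 / mass) * dt  # a = F / m with F = 1 N
        pos = pos + vel * dt
    return pos

target = rollout(torch.tensor(1.0)).detach()  # "observed" outcome, true mass 1.0
mass = torch.tensor(2.0, requires_grad=True)  # initial guess
opt = torch.optim.Adam([mass], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = (rollout(mass) - target) ** 2
    loss.backward()
    opt.step()
print(mass.item())  # converges toward the true mass of 1.0
```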
arXiv Detail & Related papers (2021-04-06T16:32:01Z)
- Synthesizing Skeletal Motion and Physiological Signals as a Function of a Virtual Human's Actions and Emotions [10.59409233835301]
We develop, for the first time, a system of computational models for synchronously synthesizing skeletal motion, electrocardiogram, blood pressure, respiration, and skin conductance signals.
The proposed framework is modular and allows the flexibility to experiment with different models.
In addition to facilitating ML research for round-the-clock monitoring at a reduced cost, the proposed framework will allow reusability of code and data.
arXiv Detail & Related papers (2021-02-08T21:56:15Z)
- Physics-Based Dexterous Manipulations with Estimated Hand Poses and Residual Reinforcement Learning [52.37106940303246]
We learn a model that maps noisy input hand poses to target virtual poses.
The agent is trained in a residual setting by using a model-free hybrid RL+IL approach.
We test our framework in two applications that use hand pose estimates for dexterous manipulations: hand-object interactions in VR and hand-object motion reconstruction in-the-wild.
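
The residual setting described above can be sketched as follows: the (noisy) estimated hand pose yields a base action and the learned policy contributes only a scaled correction. The function name, the proportional base term, and the residual scale are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of residual control: a base tracking action derived from
# the estimated hand pose, plus a small learned correction from the policy.
import numpy as np

def residual_action(pose_target: np.ndarray,
                    pose_current: np.ndarray,
                    policy_residual: np.ndarray,
                    residual_scale: float = 0.1) -> np.ndarray:
    """Base tracking term plus a small learned residual correction."""
    base = pose_target - pose_current      # naive proportional tracking
    return base + residual_scale * policy_residual
```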
arXiv Detail & Related papers (2020-08-07T17:34:28Z)
- ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation [75.0278287071591]
ThreeDWorld (TDW) is a platform for interactive multi-modal physical simulation.
TDW enables simulation of high-fidelity sensory data and physical interactions between mobile agents and objects in rich 3D environments.
We present initial experiments enabled by TDW in emerging research directions in computer vision, machine learning, and cognitive science.
arXiv Detail & Related papers (2020-07-09T17:33:27Z)
- Gaining a Sense of Touch. Physical Parameters Estimation using a Soft Gripper and Neural Networks [3.0892724364965005]
Little research has explored estimating physical parameters with deep learning from measurements gathered through direct interaction with objects via robotic grippers.
We propose a trainable system for regressing a stiffness coefficient and provide extensive experiments in a physics-simulator environment.
Our system can reliably estimate the stiffness of an object using the Yale OpenHand soft gripper based on readings from Inertial Measurement Units (IMUs) attached to its fingers.
arXiv Detail & Related papers (2020-03-02T11:56:14Z)
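
To make the stiffness-regression setup of the last entry concrete, here is a hedged sketch of a small network mapping a fixed window of IMU readings from the gripper fingers to a scalar stiffness coefficient; the architecture, channel layout (3 IMUs x accel+gyro), and window length are assumptions for illustration.

```python
# A hedged sketch of the regression setup: a small network maps a window of
# IMU readings to a scalar stiffness coefficient. Shapes are assumptions.
import torch
import torch.nn as nn

class StiffnessRegressor(nn.Module):
    def __init__(self, n_imus: int = 3, window: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                      # (B, n_imus*6, T) -> (B, n_imus*6*T)
            nn.Linear(n_imus * 6 * window, 128),
            nn.ReLU(),
            nn.Linear(128, 1),                 # scalar stiffness coefficient
        )

    def forward(self, imu_window: torch.Tensor) -> torch.Tensor:
        # imu_window: (batch, n_imus * 6 channels, time steps)
        return self.net(imu_window)

# Usage on a random batch of 8 windows:
model = StiffnessRegressor()
print(model(torch.randn(8, 18, 100)).shape)  # torch.Size([8, 1])
```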
This list is automatically generated from the titles and abstracts of the papers on this site.