Implicit Neural Representation for Physics-driven Actuated Soft Bodies
- URL: http://arxiv.org/abs/2401.14861v1
- Date: Fri, 26 Jan 2024 13:42:12 GMT
- Title: Implicit Neural Representation for Physics-driven Actuated Soft Bodies
- Authors: Lingchen Yang, Byungsoo Kim, Gaspard Zoss, Baran Gözcü, Markus Gross, Barbara Solenthaler
- Abstract summary: This paper utilizes a differentiable, quasi-static, and physics-based simulation layer to optimize for actuation signals parameterized by neural networks.
We define a function that enables a continuous mapping from a spatial point in the material space to the actuation value.
We extend our implicit model to mandible kinematics for the particular case of facial animation and show that we can reliably reproduce facial expressions captured with high-quality capture systems.
- Score: 15.261578025057593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active soft bodies can affect their shape through an internal actuation
mechanism that induces a deformation. Similar to recent work, this paper
utilizes a differentiable, quasi-static, and physics-based simulation layer to
optimize for actuation signals parameterized by neural networks. Our key
contribution is a general and implicit formulation to control active soft
bodies by defining a function that enables a continuous mapping from a spatial
point in the material space to the actuation value. This property allows us to
capture the signal's dominant frequencies, making the method discretization
agnostic and widely applicable. We extend our implicit model to mandible
kinematics for the particular case of facial animation and show that we can
reliably reproduce facial expressions captured with high-quality capture
systems. We apply the method to volumetric soft bodies, human poses, and facial
expressions, demonstrating artist-friendly properties, such as simple control
over the latent space and resolution invariance at test time.
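The core idea of the continuous actuation mapping can be sketched as a coordinate network: a small MLP that takes a material-space point (plus a latent code conditioning the expression or pose) and returns an actuation value, so the field can be queried at any discretization. The layer sizes, latent code, and class name below are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ImplicitActuationField(nn.Module):
    """Hypothetical sketch: maps a material-space point + latent code
    to a scalar actuation value, independent of any mesh resolution."""

    def __init__(self, latent_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # one actuation value per query point
        )

    def forward(self, points, latent):
        # points: (N, 3) material-space coordinates; latent: (latent_dim,)
        z = latent.expand(points.shape[0], -1)
        return self.net(torch.cat([points, z], dim=-1))

field = ImplicitActuationField()
pts = torch.rand(10, 3)            # any number of query points at test time
act = field(pts, torch.zeros(8))   # (10, 1) actuation values
```

Because the network is queried pointwise, the same trained field can drive a coarse or fine discretization of the soft body, which is the resolution-invariance property mentioned above.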
Related papers
- SurgeMOD: Translating image-space tissue motions into vision-based surgical forces [6.4474263352749075]
We present a new approach for vision-based force estimation in Minimally Invasive Robotic Surgery.
Using internal movements generated by natural processes like breathing or the cardiac cycle, we infer the image-space basis of the motion in the frequency domain.
We demonstrate that this method can estimate point contact forces reliably for silicone phantom and ex-vivo experiments.
arXiv Detail & Related papers (2024-06-25T16:46:21Z)
- Simplicits: Mesh-Free, Geometry-Agnostic, Elastic Simulation [18.45850302604534]
We present a data-, mesh-, and grid-free solution for elastic simulation for any object in any geometric representation.
For each object, we fit a small implicit neural network encoding varying weights that act as a reduced deformation basis.
Our experiments demonstrate the versatility, accuracy, and speed of this approach on data including signed distance functions, point clouds, neural primitives, tomography scans, radiance fields, Gaussian splats, surface meshes, and volume meshes.
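The "reduced deformation basis" in the entry above can be sketched, under assumed notation, as linear-blend skinning: a learned weight function blends a small set of affine handle transforms, so every point's deformation is a low-dimensional function of the handles. The random weights below are a stand-in for the fitted implicit network.

```python
import numpy as np

rng = np.random.default_rng(0)
num_handles, num_points = 4, 100

points = rng.random((num_points, 3))             # rest-pose positions
weights = rng.random((num_points, num_handles))  # stand-in for learned w(x)
weights /= weights.sum(axis=1, keepdims=True)    # normalize to a partition of unity

# One 3x4 affine transform (linear part + translation) per handle.
transforms = rng.random((num_handles, 3, 4))

# Homogeneous coordinates, then blend: x' = sum_i w_i(x) * T_i [x; 1]
homog = np.concatenate([points, np.ones((num_points, 1))], axis=1)  # (N, 4)
deformed = np.einsum('nh,hij,nj->ni', weights, transforms, homog)   # (N, 3)
```

Because the weights are a function of position rather than of any mesh, the same basis applies to point clouds, SDFs, or splats alike, which is what makes the approach representation-agnostic.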
arXiv Detail & Related papers (2024-06-09T18:57:23Z)
- Shape Conditioned Human Motion Generation with Diffusion Model [0.0]
We propose a Shape-conditioned Motion Diffusion model (SMD), which enables the generation of motion sequences directly in mesh format.
We also propose a Spectral-Temporal Autoencoder (STAE) to leverage cross-temporal dependencies within the spectral domain.
arXiv Detail & Related papers (2024-05-10T19:06:41Z)
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value Functions [65.84090965167535]
We present Neural Motion Fields, a novel object representation which encodes both object point clouds and the relative task trajectories as an implicit value function parameterized by a neural network.
This object-centric representation models a continuous distribution over the SE(3) space and allows us to perform grasping reactively by leveraging sampling-based MPC to optimize this value function.
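The reactive loop described above, sampling-based MPC over a learned value function, can be sketched as follows. The quadratic value function and simplified 6-D pose are illustrative placeholders for the learned implicit SE(3) value function.

```python
import numpy as np

rng = np.random.default_rng(1)

def value_fn(pose):
    # Stand-in for a learned implicit value function over poses;
    # here simply the negative distance of the translation to a goal.
    goal = np.array([0.5, 0.0, 0.2])
    return -np.linalg.norm(pose[:3] - goal)

def mpc_step(pose, num_samples=64, sigma=0.05):
    # Sampling-based MPC step: perturb the current pose and keep
    # the candidate the value function scores highest.
    candidates = pose + sigma * rng.standard_normal((num_samples, pose.size))
    scores = np.array([value_fn(c) for c in candidates])
    return candidates[np.argmax(scores)]

pose = np.zeros(6)  # simplified pose: translation + rotation vector
for _ in range(50):
    pose = mpc_step(pose)
```

Re-evaluating the value function at every step is what makes the behavior reactive: if the object moves, the sampled candidates are scored against the updated field.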
arXiv Detail & Related papers (2022-06-29T18:47:05Z)
- Neural Implicit Representations for Physical Parameter Inference from a Single Video [49.766574469284485]
We propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) for modelling physical phenomena.
Our proposed model combines several unique advantages: contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video.
The use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic images.
arXiv Detail & Related papers (2022-04-29T11:55:35Z)
- Imposing Temporal Consistency on Deep Monocular Body Shape and Pose Estimation [67.23327074124855]
This paper presents an elegant solution for the integration of temporal constraints in the fitting process.
We derive parameters of a sequence of body models, representing shape and motion of a person, including jaw poses, facial expressions, and finger poses.
Our approach enables the derivation of realistic 3D body models from image sequences, including facial expression and articulated hands.
arXiv Detail & Related papers (2022-02-07T11:11:55Z)
- gradSim: Differentiable simulation for system identification and visuomotor control [66.37288629125996]
We present gradSim, a framework that overcomes the dependence on 3D supervision by leveraging differentiable multiphysics simulation and differentiable rendering.
Our unified graph enables learning in challenging visuomotor control tasks, without relying on state-based (3D) supervision.
arXiv Detail & Related papers (2021-04-06T16:32:01Z)
- Characterization of surface motion patterns in highly deformable soft tissue organs from dynamic MRI: An application to assess 4D bladder motion [0.0]
The objective of this study is to move towards dense 3D velocity measurements that cover the entire organ surface.
We present a pipeline for characterization of bladder surface dynamics during deep respiratory movements.
arXiv Detail & Related papers (2020-10-05T08:38:08Z)
- Euclideanizing Flows: Diffeomorphic Reduction for Learning Stable Dynamical Systems [74.80320120264459]
We present an approach to learn complex motions from a limited number of human demonstrations.
The complex motions are encoded as rollouts of a stable dynamical system.
The efficacy of this approach is demonstrated through validation on an established benchmark as well as on demonstrations collected on a real-world robotic system.
arXiv Detail & Related papers (2020-05-27T03:51:57Z)
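As a toy illustration of the stable-dynamical-system idea in the last entry, a demonstrated motion can be encoded as the rollout of a system whose equilibrium is the target. The linear system below is an illustrative stand-in for the learned diffeomorphic flow.

```python
import numpy as np

def rollout(x0, x_goal, gain=2.0, dt=0.01, steps=500):
    """Toy stable system: rollouts of x_dot = -gain * (x - x_goal)
    converge to x_goal from any start point (explicit Euler steps)."""
    x, path = np.asarray(x0, float), []
    x_goal = np.asarray(x_goal, float)
    for _ in range(steps):
        x = x + dt * (-gain * (x - x_goal))  # one Euler integration step
        path.append(x.copy())
    return np.array(path)

path = rollout([1.0, -1.0], [0.0, 0.0])
```

Stability by construction is the point: however the rollout is perturbed, the trajectory is pulled back toward the goal, which is what the diffeomorphic reduction preserves for the learned nonlinear motions.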
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.