Title: φ-SfT: Shape-from-Template with a Physics-Based Deformation Model
Authors: Navami Kairanda and Edith Tretschk and Mohamed Elgharib and Christian
Theobalt and Vladislav Golyanik
Abstract: Shape-from-Template (SfT) methods estimate 3D surface deformations from a
single monocular RGB camera while assuming a 3D state known in advance (a
template). This is an important yet challenging problem due to the
under-constrained nature of the monocular setting. Existing SfT techniques
predominantly use geometric and simplified deformation models, which often
limits their reconstruction abilities. In contrast to previous works, this
paper proposes a new SfT approach explaining 2D observations through physical
simulations accounting for forces and material properties. Our differentiable
physics simulator regularises the surface evolution and optimises the material
elastic properties such as bending coefficients, stretching stiffness and
density. We use a differentiable renderer to minimise the dense reprojection
error between the estimated 3D states and the input images and recover the
deformation parameters using an adaptive gradient-based optimisation. For the
evaluation, we record with an RGB-D camera challenging real surfaces exposed to
physical forces with various material properties and textures. Our approach
significantly reduces the 3D reconstruction error compared to multiple
competing methods. For the source code and data, see
https://4dqv.mpi-inf.mpg.de/phi-SfT/.
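The optimisation described in the abstract — simulate the surface with candidate physical parameters, render it, measure the dense reprojection error against the observations, and update the parameters by gradient descent — can be illustrated with a toy one-dimensional sketch. The damped spring below is a hypothetical stand-in for the paper's cloth simulator, and the squared trajectory difference stands in for the differentiable renderer's image loss; all function names here are illustrative assumptions, not the φ-SfT implementation.

```python
import numpy as np

def simulate(k, steps=50, dt=0.05, m=1.0, damping=0.2):
    """Toy stand-in for the physics simulator: a damped spring with
    stiffness k, integrated with semi-implicit Euler steps."""
    x, v = 1.0, 0.0  # initial displacement and velocity
    traj = []
    for _ in range(steps):
        a = (-k * x - damping * v) / m
        v += dt * a
        x += dt * v
        traj.append(x)
    return np.array(traj)

def reprojection_loss(k, observations):
    """Stand-in for the dense reprojection error: a plain squared
    difference between simulated and observed states (the paper
    compares rendered images instead)."""
    return np.mean((simulate(k) - observations) ** 2)

def fit_stiffness(observations, k0=2.0, lr=0.5, iters=200, eps=1e-4):
    """Gradient descent on the material parameter, here via central
    finite differences (the paper differentiates through simulator
    and renderer with autodiff)."""
    k = k0
    for _ in range(iters):
        grad = (reprojection_loss(k + eps, observations)
                - reprojection_loss(k - eps, observations)) / (2 * eps)
        k -= lr * grad
    return k

# Synthetic "video": states produced with a ground-truth stiffness.
k_true = 5.0
obs = simulate(k_true)
k_est = fit_stiffness(obs)
print(f"estimated stiffness: {k_est:.3f} (true: {k_true})")
```

Starting from a wrong stiffness, the loop recovers a value close to the ground truth; the actual method jointly optimises bending coefficients, stretching stiffness, and density of a full surface mesh rather than a single scalar.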
Related papers
Enhancing Single Image to 3D Generation using Gaussian Splatting and Hybrid Diffusion Priors: 3D object generation from a single image involves estimating the full 3D geometry and texture of unseen views from an unposed RGB image captured in the wild.
Recent advancements in 3D object generation have introduced techniques that reconstruct an object's 3D shape and texture.
We propose bridging the gap between 2D and 3D diffusion models to address this limitation. (arXiv, 2024-10-12)
GASP: Gaussian Splatting for Physic-Based Simulations: Existing physics models use additional meshing mechanisms, including triangle or tetrahedron meshing, marching cubes, or cage meshes.
We modify the physics-grounded Newtonian dynamics to align with 3D Gaussian components.
The resulting solution can be integrated into any physics engine that can be treated as a black box. (arXiv, 2024-09-09)
PhyRecon: Physically Plausible Neural Scene Reconstruction: We introduce PHYRECON, the first approach to leverage both differentiable rendering and differentiable physics simulation to learn implicit surface representations.
Central to this design is an efficient transformation between SDF-based implicit representations and explicit surface points.
Our results also exhibit superior physical stability in physical simulators, with at least a 40% improvement across all datasets. (arXiv, 2024-04-25)
Physics-guided Shape-from-Template: Monocular Video Perception through Neural Surrogate Models: We propose a novel SfT reconstruction algorithm for cloth using a pre-trained neural surrogate model.
Differentiable rendering of the simulated mesh enables pixel-wise comparisons between the reconstruction and a target video sequence.
This retains a precise, stable, and smooth reconstructed geometry while reducing the runtime by a factor of 400-500 compared to φ-SfT. (arXiv, 2023-11-21)
Decaf: Monocular Deformation Capture for Face and Hand Interactions: This paper introduces the first method that allows tracking human hands interacting with human faces in 3D from single monocular RGB videos.
We model hands as articulated objects inducing non-rigid face deformations during an active interaction.
Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired with a markerless multi-view camera system. (arXiv, 2023-09-28)
FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models: We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets. (arXiv, 2023-08-10)
3D shape reconstruction of semi-transparent worms: 3D shape reconstruction typically requires identifying object features or textures in multiple images of a subject.
Here we overcome these challenges by rendering a candidate shape with adaptive blurring and transparency for comparison with the images.
We model the slender Caenorhabditis elegans as a 3D curve using an intrinsic parametrisation that naturally admits biologically-informed constraints and regularisation. (arXiv, 2023-04-28)
MoDA: Modeling Deformable 3D Objects from Casual Videos: We propose neural dual quaternion blend skinning (NeuDBS) to achieve 3D point deformation without skin-collapsing artifacts.
To register 2D pixels across different frames, we establish a correspondence between canonical feature embeddings that encode 3D points within the canonical space.
Our approach can reconstruct 3D models for humans and animals with better qualitative and quantitative performance than state-of-the-art methods. (arXiv, 2023-04-17)
Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian: We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images. (arXiv, 2022-03-29)
This list is automatically generated from the titles and abstracts of the papers on this site.