Fast-SNARF: A Fast Deformer for Articulated Neural Fields
- URL: http://arxiv.org/abs/2211.15601v2
- Date: Thu, 1 Dec 2022 18:20:52 GMT
- Title: Fast-SNARF: A Fast Deformer for Articulated Neural Fields
- Authors: Xu Chen, Tianjian Jiang, Jie Song, Max Rietmann, Andreas Geiger,
Michael J. Black, Otmar Hilliges
- Abstract summary: We propose a new articulation module for neural fields, Fast-SNARF, which finds accurate correspondences between canonical space and posed space.
Fast-SNARF is a drop-in replacement for our previous work, SNARF, while significantly improving its computational efficiency.
Because learning of deformation maps is a crucial component in many 3D human avatar methods, we believe that this work represents a significant step towards the practical creation of 3D virtual humans.
- Score: 92.68788512596254
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural fields have revolutionized the area of 3D reconstruction and novel
view synthesis of rigid scenes. A key challenge in making such methods
applicable to articulated objects, such as the human body, is to model the
deformation of 3D locations between the rest pose (a canonical space) and the
deformed space. We propose a new articulation module for neural fields,
Fast-SNARF, which finds accurate correspondences between canonical space and
posed space via iterative root finding. Fast-SNARF is a drop-in replacement in
functionality to our previous work, SNARF, while significantly improving its
computational efficiency. We contribute several algorithmic and implementation
improvements over SNARF, yielding a speed-up of $150\times$. These improvements
include voxel-based correspondence search, pre-computing the linear blend
skinning function, and an efficient software implementation with CUDA kernels.
Fast-SNARF enables efficient and simultaneous optimization of shape and
skinning weights given deformed observations without correspondences (e.g. 3D
meshes). Because learning of deformation maps is a crucial component in many 3D
human avatar methods and since Fast-SNARF provides a computationally efficient
solution, we believe that this work represents a significant step towards the
practical creation of 3D virtual humans.
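The abstract's core operation — finding the canonical point whose linear-blend-skinning (LBS) warp lands on a given posed point, via iterative root finding — can be sketched in simplified form. The snippet below is a minimal NumPy illustration under stated assumptions, not the paper's voxel-based CUDA implementation; `weights_fn` and `bone_transforms` are hypothetical names for the skinning-weight field and per-bone transformation matrices.

```python
import numpy as np

def lbs(x_c, weights_fn, bone_transforms):
    """Linear blend skinning: warp a canonical 3D point into posed space.

    x_c: (3,) canonical point
    weights_fn: maps a canonical point to (B,) skinning weights (sum to 1)
    bone_transforms: (B, 4, 4) per-bone homogeneous transforms
    """
    w = weights_fn(x_c)                       # (B,) blend weights
    T = np.tensordot(w, bone_transforms, 1)   # (4, 4) blended transform
    return (T @ np.append(x_c, 1.0))[:3]

def canonical_correspondence(x_d, weights_fn, bone_transforms,
                             x_init, iters=20, eps=1e-6):
    """Invert the LBS map by Newton-style root finding on
    f(x) = lbs(x) - x_d, with finite-difference Jacobians.
    (Fast-SNARF additionally precomputes the LBS field on a voxel
    grid and runs this search in CUDA; that is omitted here.)"""
    x = np.asarray(x_init, dtype=float)
    for _ in range(iters):
        f = lbs(x, weights_fn, bone_transforms) - x_d
        if np.linalg.norm(f) < eps:
            break
        # finite-difference Jacobian of the skinning map at x
        J = np.zeros((3, 3))
        h = 1e-4
        for j in range(3):
            dx = np.zeros(3)
            dx[j] = h
            J[:, j] = (lbs(x + dx, weights_fn, bone_transforms)
                       - lbs(x, weights_fn, bone_transforms)) / h
        x = x - np.linalg.solve(J, f)   # Newton step toward the root
    return x
```

For a single rigid bone the LBS map is affine and the search converges in one step; the actual method handles multiple bones and learned weight fields, where several canonical roots may exist.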
Related papers
- GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting [51.96353586773191]
We introduce GS-SLAM, which first utilizes a 3D Gaussian representation in a Simultaneous Localization and Mapping (SLAM) system.
Our method utilizes a real-time differentiable splatting rendering pipeline that offers significant speedups for map optimization and RGB-D rendering.
Our method achieves competitive performance compared with existing state-of-the-art real-time methods on the Replica and TUM-RGBD datasets.
arXiv Detail & Related papers (2023-11-20T12:08:23Z) - IKOL: Inverse kinematics optimization layer for 3D human pose and shape
estimation via Gauss-Newton differentiation [44.00115413716392]
This paper presents an inverse kinematics optimization layer (IKOL) for 3D human pose and shape estimation.
IKOL has a much lower overhead than most existing regression-based methods.
It provides more accurate 3D human pose estimation.
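The Gauss-Newton differentiation that IKOL builds on rests on the standard Gauss-Newton iteration for nonlinear least squares. A generic sketch of that iteration follows — it is not IKOL's actual layer, and `residual_fn`/`jac_fn` are placeholder names for the problem-specific residual and Jacobian.

```python
import numpy as np

def gauss_newton(residual_fn, jac_fn, x0, iters=50, tol=1e-10):
    """Minimize ||r(x)||^2 with the Gauss-Newton update
    x <- x - (J^T J)^{-1} J^T r, where J is the Jacobian of r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual_fn(x)   # (m,) residual vector
        J = jac_fn(x)        # (m, n) Jacobian of r at x
        step = np.linalg.solve(J.T @ J, J.T @ r)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x
```

For a linear residual r(x) = Ax - b, a single Gauss-Newton step solves the normal equations, recovering the ordinary least-squares solution; IKOL's contribution is differentiating through such a solve inside a network.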
arXiv Detail & Related papers (2023-02-02T12:43:29Z) - Neural Deformable Voxel Grid for Fast Optimization of Dynamic View
Synthesis [63.25919018001152]
We propose a fast deformable radiance field method to handle dynamic scenes.
Our method achieves comparable performance to D-NeRF using only 20 minutes for training.
arXiv Detail & Related papers (2022-06-15T17:49:08Z) - Medical Image Registration via Neural Fields [35.80302878742334]
We present a new neural-network-based image registration framework, called NIR (Neural Image Registration), which is based on optimization but utilizes deep neural networks to model deformations between image pairs.
Experiments on two 3D MR brain scan datasets demonstrate that NIR yields state-of-the-art performance in terms of both registration accuracy and regularity, while running significantly faster than traditional optimization-based methods.
arXiv Detail & Related papers (2022-06-07T08:43:31Z) - Learned Vertex Descent: A New Direction for 3D Human Model Fitting [64.04726230507258]
We propose a novel optimization-based paradigm for 3D human model fitting on images and scans.
Our approach is able to capture the underlying body of clothed people with very different body shapes, achieving a significant improvement over the state of the art.
LVD is also applicable to 3D model fitting of humans and hands, for which we show a significant improvement over the SOTA with a much simpler and faster method.
arXiv Detail & Related papers (2022-05-12T17:55:51Z) - Implicit Optimizer for Diffeomorphic Image Registration [3.1970342304563037]
We propose a rapid and accurate Implicit Optimizer for Diffeomorphic Image Registration (IDIR), which utilizes a deep implicit function as the neural velocity field.
We evaluate our proposed method on two large-scale 3D MR brain scan datasets; the results show that our method provides faster and better registration results than conventional image registration approaches.
arXiv Detail & Related papers (2022-02-25T05:04:29Z) - Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D
Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z) - Towards Fast, Accurate and Stable 3D Dense Face Alignment [73.01620081047336]
We propose a novel regression framework named 3DDFA-V2 which strikes a balance among speed, accuracy and stability.
We present a virtual synthesis method to transform one still image into a short video that incorporates in-plane and out-of-plane face movement.
arXiv Detail & Related papers (2020-09-21T15:37:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.