SeSDF: Self-evolved Signed Distance Field for Implicit 3D Clothed Human Reconstruction
- URL: http://arxiv.org/abs/2304.00359v1
- Date: Sat, 1 Apr 2023 16:58:19 GMT
- Title: SeSDF: Self-evolved Signed Distance Field for Implicit 3D Clothed Human Reconstruction
- Authors: Yukang Cao, Kai Han, Kwan-Yee K. Wong
- Abstract summary: We address the problem of clothed human reconstruction from a single image or uncalibrated multi-view images.
We propose a flexible framework which, by leveraging the parametric SMPL-X model, can take an arbitrary number of input images to reconstruct a clothed human model under an uncalibrated setting.
- Score: 23.89884587074109
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the problem of clothed human reconstruction from a single image or
uncalibrated multi-view images. Existing methods struggle with reconstructing
detailed geometry of a clothed human and often require a calibrated setting for
multi-view reconstruction. We propose a flexible framework which, by leveraging
the parametric SMPL-X model, can take an arbitrary number of input images to
reconstruct a clothed human model under an uncalibrated setting. At the core of
our framework is our novel self-evolved signed distance field (SeSDF) module
which allows the framework to learn to deform the signed distance field (SDF)
derived from the fitted SMPL-X model, such that detailed geometry reflecting
the actual clothed human can be encoded for better reconstruction. Besides, we
propose a simple method for self-calibration of multi-view images via the
fitted SMPL-X parameters. This lifts the requirement of tedious manual
calibration and largely increases the flexibility of our method. Further, we
introduce an effective occlusion-aware feature fusion strategy to account for
the most useful features to reconstruct the human model. We thoroughly evaluate
our framework on public benchmarks, demonstrating significant superiority over
the state of the art both qualitatively and quantitatively.
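The core idea in the abstract can be sketched in a few lines: an SDF derived from a fitted body model is refined by a learned per-point offset, conditioned on features fused across views in an occlusion-aware way. The sketch below is a hypothetical illustration, not the authors' implementation: a unit sphere stands in for the SMPL-X-derived SDF, a random-weight MLP stands in for the learned SeSDF module, and a softmax over per-view visibility scores stands in for the occlusion-aware feature fusion.

```python
import numpy as np

def base_sdf(points):
    """Signed distance to a unit sphere, a stand-in for the SDF
    derived from the fitted SMPL-X body mesh (hypothetical)."""
    return np.linalg.norm(points, axis=-1) - 1.0

def fuse_features(per_view_feats, visibility):
    """Occlusion-aware fusion (sketch): softmax-weight each view's
    pixel-aligned feature by its visibility score, so occluded
    views contribute less to the fused feature."""
    w = np.exp(visibility - visibility.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)          # (V, N) weights
    return (w[..., None] * per_view_feats).sum(axis=0)  # (N, F)

class ResidualMLP:
    """Tiny MLP predicting an SDF offset from [point, fused feature].
    Random weights here; in the paper this module would be learned."""
    def __init__(self, in_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))

    def __call__(self, x):
        h = np.tanh(x @ self.w1)
        return (h @ self.w2)[..., 0]

def sesdf(points, per_view_feats, visibility, mlp):
    """Self-evolved SDF: body-model SDF plus a learned residual."""
    fused = fuse_features(per_view_feats, visibility)        # (N, F)
    delta = mlp(np.concatenate([points, fused], axis=-1))    # (N,)
    return base_sdf(points) + delta

# Toy query: N points, V views, F-dimensional features.
N, V, F = 4, 3, 8
rng = np.random.default_rng(1)
pts = rng.normal(size=(N, 3))
feats = rng.normal(size=(V, N, F))
vis = rng.normal(size=(V, N))
mlp = ResidualMLP(3 + F)
sd = sesdf(pts, feats, vis, mlp)   # one signed distance per point
```

With the residual branch zeroed out, the field reduces to the body-model SDF; the learned offset is what lets the representation "evolve" away from the bare SMPL-X surface toward clothing geometry.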
Related papers
- Divide and Fuse: Body Part Mesh Recovery from Partially Visible Human Images [57.479339658504685]
"Divide and Fuse" strategy reconstructs human body parts independently before fusing them.
Human Part Parametric Models (HPPM) independently reconstruct the mesh from a few shape and global-location parameters.
A specially designed fusion module seamlessly integrates the reconstructed parts, even when only a few are visible.
arXiv Detail & Related papers (2024-07-12T21:29:11Z)
- GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement [51.97726804507328]
We propose a novel approach for 3D mesh reconstruction from multi-view images.
Our method takes inspiration from large reconstruction models that use a transformer-based triplane generator and a Neural Radiance Field (NeRF) model trained on multi-view images.
arXiv Detail & Related papers (2024-06-09T05:19:24Z)
- Template-Free Single-View 3D Human Digitalization with Diffusion-Guided LRM [29.13412037370585]
We present Human-LRM, a diffusion-guided feed-forward model that predicts the implicit field of a human from a single image.
Our method is able to capture humans without any template prior, e.g., SMPL, and effectively enhances occluded parts with rich and realistic details.
arXiv Detail & Related papers (2024-01-22T18:08:22Z)
- Spectral Graphormer: Spectral Graph-based Transformer for Egocentric Two-Hand Reconstruction using Multi-View Color Images [33.70056950818641]
We propose a novel transformer-based framework that reconstructs two high-fidelity hands from multi-view RGB images.
We show that our framework is able to produce realistic two-hand reconstructions and demonstrate the generalisation of synthetic-trained models to real data.
arXiv Detail & Related papers (2023-08-21T20:07:02Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework Generalizable Model-based Neural Radiance Fields to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- CrossHuman: Learning Cross-Guidance from Multi-Frame Images for Human Reconstruction [6.450579406495884]
CrossHuman is a novel method that learns cross-guidance from a parametric human model and multi-frame RGB images.
We design a reconstruction pipeline combined with tracking-based methods and tracking-free methods.
Compared with previous works, our CrossHuman enables high-fidelity geometry details and texture in both visible and invisible regions.
arXiv Detail & Related papers (2022-07-20T08:25:20Z)
- PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction [67.08350202974434]
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
We show that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
arXiv Detail & Related papers (2020-07-08T02:26:19Z)
- SparseFusion: Dynamic Human Avatar Modeling from Sparse RGBD Images [49.52782544649703]
We propose a novel approach to reconstruct 3D human body shapes based on a sparse set of RGBD frames.
The main challenge is how to robustly fuse these sparse frames into a canonical 3D model.
Our framework is flexible, with potential applications going beyond shape reconstruction.
arXiv Detail & Related papers (2020-06-05T18:53:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.