FastHuman: Reconstructing High-Quality Clothed Human in Minutes
- URL: http://arxiv.org/abs/2211.14485v2
- Date: Sat, 28 Oct 2023 07:42:44 GMT
- Title: FastHuman: Reconstructing High-Quality Clothed Human in Minutes
- Authors: Lixiang Lin, Songyou Peng, Qijun Gan, Jianke Zhu
- Abstract summary: We propose an approach for optimizing high-quality clothed human body shapes in minutes.
Our method uses a mesh-based patch warping technique to ensure multi-view photometric consistency.
Our approach has demonstrated promising results on both synthetic and real-world datasets.
- Score: 18.643091757385626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an approach for optimizing high-quality clothed human body shapes
in minutes, using multi-view posed images. While traditional neural rendering
methods struggle to disentangle geometry and appearance using only a rendering
loss, and are computationally intensive, our method uses a mesh-based patch
warping technique to ensure multi-view photometric consistency, and spherical
harmonics (SH) illumination to refine geometric details efficiently. We employ
an oriented point cloud shape representation together with SH shading, which
significantly reduces optimization and rendering times compared to implicit
methods. Our approach has demonstrated promising results on both synthetic and
real-world datasets, making it an effective solution for rapidly generating
high-quality human body shapes.
Project page: https://l1346792580123.github.io/nccsfs/
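The two ingredients named in the abstract, multi-view photometric consistency between warped patches and spherical-harmonics shading of oriented points, can be illustrated with a short sketch. The snippet below evaluates second-order (9-coefficient) SH lighting at per-point normals and scores a pair of corresponding patches with normalized cross-correlation (NCC); the function names, coefficient vector, and albedo values are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch (not the paper's code) of the two ingredients named above:
# second-order spherical-harmonics (SH) shading of oriented points, and a
# normalized cross-correlation (NCC) score for multi-view photometric
# consistency between warped patches. All names and values are illustrative.
import numpy as np

def sh_basis(normals: np.ndarray) -> np.ndarray:
    """Evaluate the 9 real SH basis functions (bands 0-2) at unit normals of shape (N, 3)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),        # Y_0^0
        0.488603 * y,                      # Y_1^-1
        0.488603 * z,                      # Y_1^0
        0.488603 * x,                      # Y_1^1
        1.092548 * x * y,                  # Y_2^-2
        1.092548 * y * z,                  # Y_2^-1
        0.315392 * (3.0 * z**2 - 1.0),     # Y_2^0
        1.092548 * x * z,                  # Y_2^1
        0.546274 * (x**2 - y**2),          # Y_2^2
    ], axis=-1)                            # (N, 9)

def sh_shade(normals: np.ndarray, albedo: np.ndarray, sh_coeffs: np.ndarray) -> np.ndarray:
    """Shade oriented points: color = albedo * (SH basis . lighting coefficients)."""
    irradiance = sh_basis(normals) @ sh_coeffs     # (N,)
    return albedo * irradiance[:, None]            # (N, 3)

def patch_ncc(patch_a: np.ndarray, patch_b: np.ndarray, eps: float = 1e-6) -> float:
    """NCC between two corresponding image patches; higher means more photometrically consistent."""
    a = patch_a.reshape(-1) - patch_a.mean()
    b = patch_b.reshape(-1) - patch_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = rng.normal(size=(1000, 3))
    n /= np.linalg.norm(n, axis=1, keepdims=True)           # unit normals
    albedo = rng.uniform(0.2, 0.8, size=(1000, 3))
    coeffs = np.array([1.0, 0.1, 0.3, 0.1, 0.0, 0.0, 0.05, 0.0, 0.0])  # mostly ambient light
    print(sh_shade(n, albedo, coeffs).shape)                 # (1000, 3)
    ref = rng.uniform(size=(11, 11))
    print(patch_ncc(ref, ref + 0.01 * rng.normal(size=ref.shape)))     # close to 1.0
```

In a typical mesh-based patch warping loss, patches from a reference view are warped into source views via the homography induced by each surface point's tangent plane, and 1 - NCC is minimized over view pairs; the exact loss used by FastHuman may differ.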
Related papers
- Few-Shot Multi-Human Neural Rendering Using Geometry Constraints [8.819403814092865]
We present a method for recovering the shape and radiance of a scene consisting of multiple people given only a few images.
Existing approaches using implicit neural representations have achieved impressive results that deliver accurate geometry and appearance.
We propose a neural implicit reconstruction method that addresses the inherent challenges of this task through the following contributions.
arXiv Detail & Related papers (2025-02-11T00:10:58Z) - Real-time Free-view Human Rendering from Sparse-view RGB Videos using Double Unprojected Textures [87.80984588545589]
Real-time free-view human rendering from sparse-view RGB inputs is a challenging task due to sensor scarcity and a tight time budget.
Recent methods leverage 2D CNNs operating in texture space to learn rendering primitives.
We present Double Unprojected Textures, which at its core disentangles coarse geometric deformation estimation from appearance synthesis.
arXiv Detail & Related papers (2024-12-17T18:57:38Z) - NeuManifold: Neural Watertight Manifold Reconstruction with Efficient and High-Quality Rendering Support [43.5015470997138]
We present a method for generating high-quality watertight manifold meshes from multi-view input images.
Our method combines the benefits of both worlds; we take the geometry obtained from neural fields, and further optimize the geometry as well as a compact neural texture representation.
arXiv Detail & Related papers (2023-05-26T17:59:21Z) - Efficient Meshy Neural Fields for Animatable Human Avatars [87.68529918184494]
Efficiently digitizing high-fidelity animatable human avatars from videos is a challenging and active research topic.
Recent rendering-based neural representations open a new way for human digitization with their friendly usability and photo-realistic reconstruction quality.
We present EMA, a method that Efficiently learns Meshy neural fields to reconstruct animatable human Avatars.
arXiv Detail & Related papers (2023-03-23T00:15:34Z) - Mesh Strikes Back: Fast and Efficient Human Reconstruction from RGB
videos [15.746993448290175]
Many methods employ deferred rendering, NeRFs and implicit methods to represent clothed humans.
We provide a counter viewpoint by optimizing a SMPL+D mesh and an efficient, multi-resolution texture representation.
We show competitive novel view synthesis and improvements in novel pose synthesis compared to NeRF-based methods.
arXiv Detail & Related papers (2023-03-15T17:57:13Z) - Differentiable Point-Based Radiance Fields for Efficient View Synthesis [57.56579501055479]
We propose a differentiable rendering algorithm for efficient novel view synthesis.
Our method is up to 300x faster than NeRF in both training and inference.
For dynamic scenes, our method trains two orders of magnitude faster than STNeRF and renders at near interactive rate.
arXiv Detail & Related papers (2022-05-28T04:36:13Z) - Efficient Textured Mesh Recovery from Multiple Views with Differentiable
Rendering [8.264851594332677]
We propose an efficient coarse-to-fine approach to recover the textured mesh from multi-view images.
We optimize the shape geometry by minimizing the difference between the depth rendered from the mesh and the depth predicted by a learning-based multi-view stereo algorithm.
In contrast to the implicit neural representation on shape and color, we introduce a physically based inverse rendering scheme to jointly estimate the lighting and reflectance of the objects.
arXiv Detail & Related papers (2022-05-25T03:33:55Z) - Geometry-Guided Progressive NeRF for Generalizable and Efficient Neural
Human Rendering [139.159534903657]
We develop a generalizable and efficient Neural Radiance Field (NeRF) pipeline for high-fidelity free-viewpoint synthesis of human body details.
To better tackle self-occlusion, we devise a geometry-guided multi-view feature integration approach.
For achieving higher rendering efficiency, we introduce a geometry-guided progressive rendering pipeline.
arXiv Detail & Related papers (2021-12-08T14:42:10Z) - Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z) - Efficient and Differentiable Shadow Computation for Inverse Problems [64.70468076488419]
Differentiable geometric computation has received increasing interest for image-based inverse problems.
We propose an efficient approach for differentiable visibility and soft shadow computation.
As our formulation is differentiable, it can be used to solve inverse problems such as texture, illumination, rigid pose, and deformation recovery from images.
arXiv Detail & Related papers (2021-04-01T09:29:05Z)