Registering Explicit to Implicit: Towards High-Fidelity Garment Mesh
Reconstruction from Single Images
- URL: http://arxiv.org/abs/2203.15007v1
- Date: Mon, 28 Mar 2022 18:13:01 GMT
- Title: Registering Explicit to Implicit: Towards High-Fidelity Garment Mesh
Reconstruction from Single Images
- Authors: Heming Zhu, Lingteng Qiu, Yuda Qiu, Xiaoguang Han
- Abstract summary: A common problem for implicit-based methods is that they cannot produce separated and topology-consistent meshes for each garment piece.
We propose a novel geometry inference framework, ReEF, that reconstructs topology-consistent layered garment meshes by registering an explicit garment template to the whole-body implicit fields predicted from single images.
- Score: 19.43767376835559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fueled by the power of deep learning techniques and implicit shape learning,
recent advances in single-image human digitization have reached unprecedented
accuracy and could recover fine-grained surface details such as garment
wrinkles. However, a common problem for implicit-based methods is that they
cannot produce separated and topology-consistent meshes for each garment piece,
which is crucial for the current 3D content creation pipeline. To address this
issue, we propose a novel geometry inference framework, ReEF, that reconstructs
topology-consistent layered garment meshes by registering an explicit garment
template to the whole-body implicit fields predicted from single images.
Experiments demonstrate that our method notably outperforms its counterparts on
single-image layered garment reconstruction and could bring high-quality
digital assets for further content creation.
Related papers
- Spatio-Temporal Garment Reconstruction Using Diffusion Mapping via Pattern Coordinates [38.93906389023275]
Reconstructing 3D clothed humans from monocular images and videos is a fundamental problem with applications in virtual try-on, avatar creation, and mixed reality. We propose a high-fidelity 3D garment reconstruction method that operates on both single images and image sequences. The reconstructed garments preserve fine geometric detail while exhibiting realistic dynamic motion, supporting downstream applications such as texture editing, garment sewing, and animation.
arXiv Detail & Related papers (2026-02-27T14:19:23Z) - GarVerseLOD: High-Fidelity 3D Garment Reconstruction from a Single In-the-Wild Image using a Dataset with Levels of Details [21.959372614365908]
GarVerseLOD aims to achieve unprecedented robustness in high-fidelity 3D garment reconstruction from a single unconstrained image.
GarVerseLOD collects 6,000 high-quality cloth models with fine-grained geometry details manually created by professional artists.
We propose a novel labeling paradigm based on conditional diffusion models to generate extensive paired images for each garment model with high photorealism.
arXiv Detail & Related papers (2024-11-05T12:30:07Z) - HR Human: Modeling Human Avatars with Triangular Mesh and High-Resolution Textures from Videos [52.23323966700072]
We present a framework for acquiring human avatars with high-resolution physically-based material textures and triangular meshes from monocular video.
Our method introduces a novel information fusion strategy to combine information from the monocular video and synthesize virtual multi-view images.
Experiments show that our approach outperforms previous representations in terms of fidelity, and the explicit triangular-mesh result supports deployment in common graphics pipelines.
arXiv Detail & Related papers (2024-05-18T11:49:09Z) - Reconstructing Topology-Consistent Face Mesh by Volume Rendering from Multi-View Images [71.20113392204183]
Industrial 3D face asset creation typically reconstructs topology-consistent face meshes from multi-view images for downstream production. NeRF has shown great advantages in 3D reconstruction by representing scenes as density and radiance fields. We introduce a novel method that combines explicit meshes with neural volume rendering to optimize the geometry of an artist-made template face mesh from multi-view images.
arXiv Detail & Related papers (2024-04-08T15:25:50Z) - Semantic Image Translation for Repairing the Texture Defects of Building
Models [16.764719266178655]
We introduce a novel approach for synthesizing facade texture images that authentically reflect a building's architectural style from a structured label map.
Our proposed method is also capable of synthesizing texture images with specific styles for facades that lack pre-existing textures.
arXiv Detail & Related papers (2023-03-30T14:38:53Z) - Delicate Textured Mesh Recovery from NeRF via Adaptive Surface
Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z) - High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z) - Neural Template: Topology-aware Reconstruction and Disentangled
Generation of 3D Meshes [52.038346313823524]
This paper introduces a novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology.
Our method is able to produce high-quality meshes, particularly with diverse topologies, as compared with the state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T08:32:57Z) - Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z) - Deep Rectangling for Image Stitching: A Learning Baseline [57.76737888499145]
We build the first image stitching rectangling dataset with a large diversity in irregular boundaries and scenes.
Experiments demonstrate our superiority over traditional methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-03-08T03:34:10Z) - SelfRecon: Self Reconstruction Your Digital Avatar from Monocular Video [48.23424267130425]
SelfRecon recovers space-time coherent geometries from a monocular self-rotating human video.
Explicit methods require a predefined template mesh for a given sequence, while the template is hard to acquire for a specific subject.
Implicit methods support arbitrary topology and have high quality due to continuous geometric representation.
arXiv Detail & Related papers (2022-01-30T11:49:29Z) - Self-supervised High-fidelity and Re-renderable 3D Facial Reconstruction
from a Single Image [19.0074836183624]
We propose a novel self-supervised learning framework for reconstructing high-quality 3D faces from single-view images in-the-wild.
Our framework substantially outperforms state-of-the-art approaches in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2021-11-16T08:10:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.