PARTE: Part-Guided Texturing for 3D Human Reconstruction from a Single Image
- URL: http://arxiv.org/abs/2507.17332v4
- Date: Wed, 30 Jul 2025 08:43:58 GMT
- Title: PARTE: Part-Guided Texturing for 3D Human Reconstruction from a Single Image
- Authors: Hyeongjin Nam, Donghwan Kim, Gyeongsik Moon, Kyoung Mu Lee
- Abstract summary: The structural coherence of human parts serves as a crucial cue to infer human textures in the invisible regions of a single image. We propose a framework that incorporates 3D human part information as a key guide to reconstruct 3D human textures.
- Score: 64.16266736300962
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The misaligned human texture across different human parts is one of the main limitations of existing 3D human reconstruction methods. Each human part, such as a jacket or pants, should maintain a distinct texture without blending into others. The structural coherence of human parts serves as a crucial cue to infer human textures in the invisible regions of a single image. However, most existing 3D human reconstruction methods do not explicitly exploit such part segmentation priors, leading to misaligned textures in their reconstructions. In this regard, we present PARTE, which utilizes 3D human part information as a key guide to reconstruct 3D human textures. Our framework comprises two core components. First, to infer 3D human part information from a single image, we propose a 3D part segmentation module (PartSegmenter) that initially reconstructs a textureless human surface and predicts human part labels based on the textureless surface. Second, to incorporate part information into texture reconstruction, we introduce a part-guided texturing module (PartTexturer), which acquires prior knowledge from a pre-trained image generation network on texture alignment of human parts. Extensive experiments demonstrate that our framework achieves state-of-the-art quality in 3D human reconstruction. The project page is available at https://hygenie1228.github.io/PARTE/.
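The abstract describes a two-stage pipeline: PartSegmenter first reconstructs a textureless surface and predicts per-part labels, and PartTexturer then uses those labels to keep each part's texture distinct. The control flow can be sketched as below; this is a minimal illustration assuming placeholder geometry and stubbed networks — the function names, shapes, and label scheme are hypothetical and not the authors' actual API.

```python
import numpy as np

def part_segmenter(image: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Stage 1 (PartSegmenter, stubbed): reconstruct a textureless human
    surface from the input image and predict a part label per vertex."""
    num_vertices = 1024
    vertices = np.zeros((num_vertices, 3))            # textureless surface geometry
    part_labels = np.zeros(num_vertices, dtype=int)   # e.g. 0 = body, 1 = jacket, 2 = pants
    return vertices, part_labels

def part_texturer(image: np.ndarray, vertices: np.ndarray,
                  part_labels: np.ndarray) -> np.ndarray:
    """Stage 2 (PartTexturer, stubbed): infer per-vertex colors, guided by
    part labels so each part keeps a coherent texture without blending."""
    colors = np.zeros((vertices.shape[0], 3))
    for part in np.unique(part_labels):
        mask = part_labels == part
        # One coherent color per part stands in for the diffusion-based prior.
        colors[mask] = np.random.default_rng(int(part)).random(3)
    return colors

image = np.zeros((512, 512, 3))                 # single input image
verts, labels = part_segmenter(image)
colors = part_texturer(image, verts, labels)
```

In the paper, the second stage draws on a pre-trained image generation network rather than the flat per-part coloring stubbed here; the sketch only shows how part labels gate the texture inference.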
Related papers
- DeClotH: Decomposable 3D Cloth and Human Body Reconstruction from a Single Image [49.69224401751216]
Most existing methods of 3D clothed human reconstruction from a single image treat the clothed human as a single object without distinguishing between cloth and human body. We present DeClotH, which separately reconstructs 3D cloth and human body from a single image.
arXiv Detail & Related papers (2025-03-25T06:00:15Z) - PartGen: Part-level 3D Generation and Reconstruction with Multi-View Diffusion Models [63.1432721793683]
We introduce PartGen, a novel approach that generates 3D objects composed of meaningful parts starting from text, an image, or an unstructured 3D object. We evaluate our method on generated and real 3D assets and show that it outperforms segmentation and part-extraction baselines by a large margin.
arXiv Detail & Related papers (2024-12-24T18:59:43Z) - Part123: Part-aware 3D Reconstruction from a Single-view Image [54.589723979757515]
Part123 is a novel framework for part-aware 3D reconstruction from a single-view image.
We introduce contrastive learning into a neural rendering framework to learn a part-aware feature space.
A clustering-based algorithm is also developed to automatically derive 3D part segmentation results from the reconstructed models.
arXiv Detail & Related papers (2024-05-27T07:10:21Z) - Ultraman: Single Image 3D Human Reconstruction with Ultra Speed and Detail [11.604919466757003]
We propose a new method called Ultraman for fast reconstruction of textured 3D human models from a single image.
Ultraman greatly improves the reconstruction speed and accuracy while preserving high-quality texture details.
arXiv Detail & Related papers (2024-03-18T17:57:30Z) - SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion [35.73448283467723]
SiTH is a novel pipeline that integrates an image-conditioned diffusion model into a 3D mesh reconstruction workflow.
We employ a powerful generative diffusion model to hallucinate unseen back-view appearance based on the input images.
For the latter, we leverage skinned body meshes as guidance to recover full-body texture meshes from the input and back-view images.
arXiv Detail & Related papers (2023-11-27T14:22:07Z) - TeCH: Text-guided Reconstruction of Lifelike Clothed Humans [35.68114652041377]
Existing methods often generate overly smooth back-side surfaces with a blurry texture.
Motivated by the power of foundation models, TeCH reconstructs the 3D human by leveraging descriptive text prompts.
We propose a hybrid 3D representation based on DMTet, which consists of an explicit body shape grid and an implicit distance field.
arXiv Detail & Related papers (2023-08-16T17:59:13Z) - ReFu: Refine and Fuse the Unobserved View for Detail-Preserving Single-Image 3D Human Reconstruction [31.782985891629448]
Single-image 3D human reconstruction aims to reconstruct the 3D textured surface of the human body given a single image.
We propose ReFu, a coarse-to-fine approach that refines the projected backside view image and fuses the refined image to predict the final human body.
arXiv Detail & Related papers (2022-11-09T09:14:11Z) - TSCom-Net: Coarse-to-Fine 3D Textured Shape Completion Network [14.389603490486364]
Reconstructing 3D human body shapes from 3D partial textured scans is a fundamental task for many computer vision and graphics applications.
We propose a new neural network architecture for 3D body shape and high-resolution texture completion.
arXiv Detail & Related papers (2022-08-18T11:06:10Z) - 3DBooSTeR: 3D Body Shape and Texture Recovery [76.91542440942189]
3DBooSTeR is a novel method to recover a textured 3D body mesh from a partial 3D scan.
The proposed approach decouples the shape and texture completion into two sequential tasks.
arXiv Detail & Related papers (2020-10-23T21:07:59Z) - AutoSweep: Recovering 3D Editable Objects from a Single Photograph [54.701098964773756]
We aim to recover 3D objects with semantic parts that can be directly edited.
Our work makes an attempt towards recovering two types of primitive-shaped objects, namely, generalized cuboids and generalized cylinders.
Our algorithm recovers high-quality 3D models and outperforms existing methods in both instance segmentation and 3D reconstruction.
arXiv Detail & Related papers (2020-05-27T12:16:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.