ReMu: Reconstructing Multi-layer 3D Clothed Human from Image Layers
- URL: http://arxiv.org/abs/2508.01381v1
- Date: Sat, 02 Aug 2025 14:24:47 GMT
- Title: ReMu: Reconstructing Multi-layer 3D Clothed Human from Image Layers
- Authors: Onat Vuran, Hsuan-I Ho
- Abstract summary: We introduce ReMu for reconstructing multi-layer clothed humans in a new setup, Image Layers. We first reconstruct and align each garment layer in a shared coordinate system defined by the canonical body pose. It is worth noting that our method is template-free and category-agnostic, which enables the reconstruction of 3D garments in diverse clothing styles.
- Score: 3.046315755726937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The reconstruction of multi-layer 3D garments typically requires expensive multi-view capture setups and specialized 3D editing efforts. To support the creation of life-like clothed human avatars, we introduce ReMu for reconstructing multi-layer clothed humans in a new setup, Image Layers, which captures a subject wearing different layers of clothing with a single RGB camera. To reconstruct physically plausible multi-layer 3D garments, a unified 3D representation is necessary to model these garments in a layered manner. Thus, we first reconstruct and align each garment layer in a shared coordinate system defined by the canonical body pose. Afterwards, we introduce a collision-aware optimization process to address interpenetration and further refine the garment boundaries leveraging implicit neural fields. It is worth noting that our method is template-free and category-agnostic, which enables the reconstruction of 3D garments in diverse clothing styles. Through our experiments, we show that our method reconstructs nearly penetration-free 3D clothed humans and achieves competitive performance compared to category-specific methods. Project page: https://eth-ait.github.io/ReMu/
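To make the collision-aware optimization step concrete, below is a minimal PyTorch sketch of one plausible formulation: outer-garment vertices are pushed out of the implicit field of the layer beneath via a hinge penalty on its signed distance, while a regularizer keeps the mesh near its initial per-layer reconstruction. This is an illustrative assumption based on the abstract, not the authors' implementation; the names inner_sdf, refine_outer_layer, and the margin value are hypothetical.

import torch

def collision_loss(inner_sdf, outer_verts, margin=2e-3):
    """Penalize outer-layer vertices that penetrate the inner layer.

    inner_sdf: callable mapping an (N, 3) tensor of points to (N,) signed
               distances (negative inside the inner surface).
    outer_verts: (N, 3) tensor of outer-garment vertices.
    margin: minimum clearance between layers, in mesh units.
    """
    d = inner_sdf(outer_verts)           # (N,) signed distances
    # Hinge: only vertices with d < margin (penetrating or too close) count.
    return torch.relu(margin - d).mean()

def refine_outer_layer(inner_sdf, init_verts, steps=200, lr=1e-3, w_reg=1.0):
    """Gradient-based refinement of one garment layer against the layer below."""
    verts = init_verts.clone().requires_grad_(True)
    opt = torch.optim.Adam([verts], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = collision_loss(inner_sdf, verts)
        # Stay close to the initial single-layer reconstruction.
        loss = loss + w_reg * (verts - init_verts).pow(2).mean()
        loss.backward()
        opt.step()
    return verts.detach()

In this reading, resolving interpenetration reduces to alternating such refinements over the layers from innermost to outermost; the actual method may differ in loss terms and schedule.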
Related papers
- Single View Garment Reconstruction Using Diffusion Mapping Via Pattern Coordinates [45.48311596587306]
Reconstructing 3D clothed humans from images is fundamental to applications like virtual try-on, avatar creation, and mixed reality.
We present a novel method for high-fidelity 3D garment reconstruction from single images that bridges 2D and 3D representations.
arXiv Detail & Related papers (2025-04-11T08:39:18Z) - DeClotH: Decomposable 3D Cloth and Human Body Reconstruction from a Single Image [49.69224401751216]
Most existing methods of 3D clothed human reconstruction from a single image treat the clothed human as a single object without distinguishing between cloth and human body.
We present DeClotH, which separately reconstructs 3D cloth and human body from a single image.
arXiv Detail & Related papers (2025-03-25T06:00:15Z) - GarmentCrafter: Progressive Novel View Synthesis for Single-View 3D Garment Reconstruction and Editing [85.67881477813592]
GarmentCrafter is a new approach that enables non-professional users to create and modify 3D garments from a single-view image.
Our method achieves superior visual fidelity and inter-view coherence compared to state-of-the-art single-view 3D garment reconstruction methods.
arXiv Detail & Related papers (2025-03-11T17:56:03Z) - HumanCoser: Layered 3D Human Generation via Semantic-Aware Diffusion Model [43.66218796152962]
This paper aims to generate physically-layered 3D humans from text prompts.
We propose a novel layer-wise dressed human representation based on a physically-decoupled diffusion model.
To match the clothing with different body shapes, we propose an SMPL-driven implicit field network.
arXiv Detail & Related papers (2024-08-21T06:00:11Z) - LAGA: Layered 3D Avatar Generation and Customization via Gaussian Splatting [18.613001290226773]
LAyered Gaussian Avatar (LAGA) is a framework enabling the creation of high-fidelity decomposable avatars with diverse garments.
By decoupling garments from the avatar, our framework empowers users to conveniently edit avatars at the garment level.
Our approach surpasses existing methods in the generation of 3D clothed humans.
arXiv Detail & Related papers (2024-05-21T10:24:06Z) - Capturing and Animation of Body and Clothing from Monocular Video [105.87228128022804]
We present SCARF, a hybrid model combining a mesh-based body with a neural radiance field.
Integrating the mesh into the rendering enables us to optimize SCARF directly from monocular videos.
We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.
arXiv Detail & Related papers (2022-10-04T19:34:05Z) - MVP-Human Dataset for 3D Human Avatar Reconstruction from Unconstrained Frames [59.37430649840777]
We present 3D Avatar Reconstruction in the wild (ARwild), which first reconstructs the implicit skinning fields in a multi-level manner.
We contribute a large-scale dataset, MVP-Human, which contains 400 subjects, each of which has 15 scans in different poses.
Overall, benefiting from the specific network architecture and the diverse data, the trained model enables 3D avatar reconstruction from unconstrained frames.
arXiv Detail & Related papers (2022-04-24T03:57:59Z) - gDNA: Towards Generative Detailed Neural Avatars [94.9804106939663]
We show that our model is able to generate natural human avatars wearing diverse and detailed clothing.
Our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
arXiv Detail & Related papers (2022-01-11T18:46:38Z) - Garment4D: Garment Reconstruction from Point Cloud Sequences [12.86951061306046]
Learning to reconstruct 3D garments is important for dressing 3D human bodies of different shapes in different poses.
Previous works typically rely on 2D images as input, which, however, suffer from scale and pose ambiguities.
We propose a principled framework, Garment4D, that uses 3D point cloud sequences of dressed humans for garment reconstruction.
arXiv Detail & Related papers (2021-12-08T08:15:20Z) - Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose, and corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)