NGD: Neural Gradient Based Deformation for Monocular Garment Reconstruction
- URL: http://arxiv.org/abs/2508.17712v1
- Date: Mon, 25 Aug 2025 06:40:57 GMT
- Title: NGD: Neural Gradient Based Deformation for Monocular Garment Reconstruction
- Authors: Soham Dasgupta, Shanthika Naik, Preet Savalia, Sujay Kumar Ingle, Avinash Sharma
- Abstract summary: Dynamic garment reconstruction from monocular video is an important yet challenging task due to the complex dynamics and unconstrained nature of the garments. Recent advancements in neural rendering have enabled high-quality geometric reconstruction with image/video supervision. We propose NGD, a Neural Gradient-based Deformation method to reconstruct dynamically evolving textured garments from monocular videos.
- Score: 2.8801537805576776
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic garment reconstruction from monocular video is an important yet challenging task due to the complex dynamics and unconstrained nature of the garments. Recent advancements in neural rendering have enabled high-quality geometric reconstruction with image/video supervision. However, implicit representation methods that use volume rendering often provide smooth geometry and fail to model high-frequency details. While template reconstruction methods model explicit geometry, they use vertex displacement for deformation, which results in artifacts. Addressing these limitations, we propose NGD, a Neural Gradient-based Deformation method to reconstruct dynamically evolving textured garments from monocular videos. Additionally, we propose a novel adaptive remeshing strategy for modelling dynamically evolving surfaces like wrinkles and pleats of the skirt, leading to high-quality reconstruction. Finally, we learn dynamic texture maps to capture per-frame lighting and shadow effects. We provide extensive qualitative and quantitative evaluations to demonstrate significant improvements over existing SOTA methods and provide high-quality garment reconstructions.
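The key contrast in the abstract, deforming a mesh through its gradients rather than through direct vertex displacement, can be illustrated with a toy example: prescribe target gradients (here, edge differences of a 1D chain) and recover vertex positions with a least-squares, Poisson-style solve. This sketch is purely illustrative of the general gradient-based deformation idea and is not the paper's actual formulation; all names are made up.

```python
import numpy as np

def solve_from_gradients(target_edges, anchor=0.0):
    """Recover n vertex positions from n-1 target edge differences
    by least squares, pinning the first vertex to `anchor`.
    A 1D stand-in for a Poisson solve from deformation gradients."""
    n = len(target_edges) + 1
    # Discrete gradient operator D: (n-1) x n, so D @ x gives edge differences
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    # Append an anchor row pinning vertex 0, making the solution unique
    A = np.vstack([D, np.eye(1, n)])
    b = np.concatenate([target_edges, [anchor]])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Uniformly stretch every edge of a 5-vertex chain by 2x:
rest = np.linspace(0.0, 4.0, 5)        # vertices at 0, 1, 2, 3, 4
target = np.diff(rest) * 2.0           # doubled edge differences
deformed = solve_from_gradients(target)
print(deformed)
```

Editing gradients instead of positions is what lets this family of methods produce smooth, artifact-free deformations: the least-squares solve distributes any inconsistency in the prescribed gradients over the whole mesh rather than leaving per-vertex discontinuities.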
Related papers
- MoAngelo: Motion-Aware Neural Surface Reconstruction for Dynamic Scenes [9.504709780252979]
We present a novel framework for highly detailed dynamic reconstruction that extends the static 3D reconstruction method NeuralAngelo. We show superior reconstruction accuracy in comparison to previous state-of-the-art methods on the ActorsHQ dataset.
arXiv Detail & Related papers (2025-09-19T11:43:01Z)
- AniGaussian: Animatable Gaussian Avatar with Pose-guided Deformation [51.61117351997808]
We introduce an innovative pose-guided deformation strategy that constrains the dynamic Gaussian avatar with SMPL pose guidance. We incorporate rigid-based priors from previous works to enhance the dynamic transform capabilities of the Gaussian model. Through extensive comparisons with existing methods, AniGaussian demonstrates superior performance in both qualitative results and quantitative metrics.
arXiv Detail & Related papers (2025-02-24T06:53:37Z)
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction. Our method substantially improves the quality of SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- Reconstructing Topology-Consistent Face Mesh by Volume Rendering from Multi-View Images [71.20113392204183]
Industrial 3D face asset creation typically reconstructs topology-consistent face meshes from multi-view images for downstream production. NeRF has shown great advantages in 3D reconstruction by representing scenes as density and radiance fields. We introduce a novel method which combines an explicit mesh with neural volume rendering to optimize the geometry of an artist-made template face mesh from multi-view images.
arXiv Detail & Related papers (2024-04-08T15:25:50Z)
- GaussianBody: Clothed Human Reconstruction via 3d Gaussian Splatting [14.937297984020821]
We propose a novel clothed human reconstruction method called GaussianBody, based on 3D Gaussian Splatting.
Applying the static 3D Gaussian Splatting model to the dynamic human reconstruction problem is non-trivial due to complicated non-rigid deformations and rich cloth details.
We show that our method can achieve state-of-the-art photorealistic novel-view rendering results with high-quality details for dynamic clothed human bodies.
arXiv Detail & Related papers (2024-01-18T04:48:13Z)
- Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera [26.410460029742456]
We propose a template-free method to recover high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D camera.
Experiments on public datasets and our collected dataset demonstrate that NDR outperforms existing monocular dynamic reconstruction methods.
arXiv Detail & Related papers (2022-06-30T13:09:39Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their performance degrades for larger scenes and sparser viewpoints.
This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction [88.02850205432763]
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision.
arXiv Detail & Related papers (2021-06-20T12:59:42Z)
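The unbiased weighting that NeuS describes can be sketched numerically: convert SDF samples along a ray into discrete opacities via a sigmoid CDF, then alpha-composite them into rendering weights, which should peak at the surface's zero crossing. A minimal NumPy sketch; the function names and the fixed sharpness `s` are illustrative assumptions, not the paper's trained, learnable parameterization.

```python
import numpy as np

def sigmoid(x, s):
    # Logistic CDF with sharpness s, used by NeuS to map SDF to opacity
    return 1.0 / (1.0 + np.exp(-s * x))

def neus_alphas(sdf_samples, s=64.0):
    """Discrete opacity between consecutive SDF samples along a ray:
    alpha_i = max((Phi_s(sdf_i) - Phi_s(sdf_{i+1})) / Phi_s(sdf_i), 0)."""
    phi = sigmoid(sdf_samples, s)
    return np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-8), 0.0, None)

def render_weights(alphas):
    # Standard alpha compositing: w_i = T_i * alpha_i,
    # with transmittance T_i = prod_{j<i} (1 - alpha_j)
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    return T * alphas

# A ray crossing a surface halfway along: SDF goes from +0.5 to -0.5
sdf = np.linspace(0.5, -0.5, 11)
w = render_weights(neus_alphas(sdf))
print(w)
```

Because the opacity is derived from differences of the sigmoid of the SDF itself (rather than a generic density), the resulting weights concentrate at the zero-level set, which is the first-order unbiasedness the summary refers to.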
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.