Generalizable Human Gaussians from Single-View Image
- URL: http://arxiv.org/abs/2406.06050v1
- Date: Mon, 10 Jun 2024 06:38:11 GMT
- Title: Generalizable Human Gaussians from Single-View Image
- Authors: Jinnan Chen, Chen Li, Jianfeng Zhang, Hanlin Chen, Buzhen Huang, Gim Hee Lee
- Abstract summary: We propose a single-view generalizable Human Gaussian model (HGM), a diffusion-guided framework for 3D human modeling from a single image.
Although effective at hallucinating unobserved views, this approach may generate unrealistic human poses and shapes due to the lack of supervision.
We validate our approach on publicly available datasets and demonstrate that it significantly surpasses state-of-the-art methods in terms of PSNR and SSIM.
- Score: 54.712838657788566
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we tackle the task of learning generalizable 3D human Gaussians from a single image. The main challenge for this task is to recover detailed geometry and appearance, especially for the unobserved regions. To this end, we propose a single-view generalizable Human Gaussian model (HGM), a diffusion-guided framework for 3D human modeling from a single image. We design a diffusion-based coarse-to-fine pipeline, where the diffusion model is adapted to refine novel-view images rendered from a coarse human Gaussian model. The refined images are then used together with the input image to learn a refined human Gaussian model. Although effective at hallucinating the unobserved views, this approach may generate unrealistic human poses and shapes due to the lack of supervision. We circumvent this problem by further encoding the geometric priors from the SMPL model. Specifically, we propagate geometric features from the SMPL volume to the predicted Gaussians via sparse convolution and an attention mechanism. We validate our approach on publicly available datasets and demonstrate that it significantly surpasses state-of-the-art methods in terms of PSNR and SSIM. Additionally, our method exhibits strong generalization for in-the-wild images.
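The coarse-to-fine pipeline described in the abstract can be sketched as follows. This is an illustrative stub, not the authors' code: every function name (`fit_coarse_gaussians`, `render_novel_views`, `diffusion_refine`, `fit_refined_gaussians`) is a hypothetical placeholder for the corresponding stage, and images are represented by simple tokens so the control flow runs end to end.

```python
# Hypothetical sketch of HGM's coarse-to-fine loop (all names are
# illustrative placeholders, not the authors' actual API): a coarse
# Gaussian model renders novel views, a diffusion model refines them,
# and the refined views plus the input image supervise a refined model.

def fit_coarse_gaussians(input_image):
    # Stub for the feed-forward coarse human Gaussian predictor.
    return {"gaussians": [input_image]}

def render_novel_views(model, n_views=4):
    # Stub: rasterize the coarse Gaussians from n_views camera poses.
    return [model["gaussians"][0] for _ in range(n_views)]

def diffusion_refine(view):
    # Stub for the adapted diffusion model that refines a rendered view.
    return view + "_refined"

def fit_refined_gaussians(input_image, refined_views):
    # Stub: re-fit the Gaussians using the input image + refined views.
    return {"gaussians": [input_image] + refined_views}

def hgm_pipeline(input_image):
    coarse = fit_coarse_gaussians(input_image)
    views = render_novel_views(coarse)
    refined_views = [diffusion_refine(v) for v in views]
    return fit_refined_gaussians(input_image, refined_views)

model = hgm_pipeline("front_view.png")
```

The key design point visible even in this skeleton is that the diffusion model acts only on rendered 2D views, so the final 3D representation is still fit by the Gaussian model rather than generated directly by diffusion.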
Related papers
- Learning Efficient and Generalizable Human Representation with Human Gaussian Model [25.864364910265127]
We propose a Human Gaussian Graph to model the connection between predicted Gaussians and the human SMPL mesh. We show that we can leverage information from all frames to recover an animatable human representation. Experimental results on novel view synthesis and novel pose animation demonstrate the efficiency and generalization of our method.
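The kind of Gaussian-to-mesh connection this abstract describes can be illustrated with a minimal sketch: link each predicted Gaussian center to its nearest SMPL vertex, giving an edge list that a graph model could operate on. The function names and the brute-force nearest-neighbor search are illustrative assumptions, not the paper's actual construction.

```python
# Illustrative sketch (not the authors' code): connect each predicted
# Gaussian to its nearest SMPL mesh vertex, forming a bipartite edge list.

def nearest_vertex(point, vertices):
    # Brute-force nearest SMPL vertex for one Gaussian center.
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(vertices)), key=lambda i: sqdist(point, vertices[i]))

def build_gaussian_graph(gaussian_centers, smpl_vertices):
    # Edge list of (gaussian index, nearest SMPL vertex index) pairs.
    return [(g, nearest_vertex(c, smpl_vertices))
            for g, c in enumerate(gaussian_centers)]

centers = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
vertices = [(0.1, 0.0, 0.0), (0.9, 1.0, 1.1), (5.0, 5.0, 5.0)]
edges = build_gaussian_graph(centers, vertices)
```

A real implementation would use a spatial index (e.g. a k-d tree) instead of the O(N·M) scan, but the resulting graph structure is the same.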
arXiv Detail & Related papers (2025-07-24T19:18:59Z)
- RoGSplat: Learning Robust Generalizable Human Gaussian Splatting from Sparse Multi-View Images [39.03889696169877]
RoGSplat is a novel approach for synthesizing high-fidelity novel views of unseen humans from sparse multi-view images.
Our method outperforms state-of-the-art methods in novel view synthesis and cross-dataset generalization.
arXiv Detail & Related papers (2025-03-18T12:18:34Z)
- HuGDiffusion: Generalizable Single-Image Human Rendering via 3D Gaussian Diffusion [50.02316409061741]
HuGDiffusion is a learning pipeline to achieve novel view synthesis (NVS) of human characters from single-view input images.
We aim to generate the set of 3DGS attributes via a diffusion-based framework conditioned on human priors extracted from a single image.
Our HuGDiffusion shows significant performance improvements over the state-of-the-art methods.
arXiv Detail & Related papers (2025-01-25T01:00:33Z)
- NovelGS: Consistent Novel-view Denoising via Large Gaussian Reconstruction Model [57.92709692193132]
NovelGS is a diffusion model for Gaussian Splatting given sparse-view images.
We perform novel-view denoising through a transformer-based network to generate 3D Gaussians.
arXiv Detail & Related papers (2024-11-25T07:57:17Z)
- DiHuR: Diffusion-Guided Generalizable Human Reconstruction [51.31232435994026]
We introduce DiHuR, a Diffusion-guided model for generalizable Human 3D Reconstruction and view synthesis from sparse, minimally overlapping images.
Our method integrates two key priors in a coherent manner: the prior from generalizable feed-forward models and the 2D diffusion prior, and it requires only multi-view image training, without 3D supervision.
arXiv Detail & Related papers (2024-11-16T03:52:23Z)
- HFGaussian: Learning Generalizable Gaussian Human with Integrated Human Features [23.321087432786605]
We present a novel approach called HFGaussian that can estimate novel views and human features, such as the 3D skeleton, 3D key points, and dense pose, from sparse input images in real time at 25 FPS.
We thoroughly evaluate our HFGaussian method against the latest state-of-the-art techniques in human Gaussian splatting and pose estimation, demonstrating its real-time, state-of-the-art performance.
arXiv Detail & Related papers (2024-11-05T13:31:04Z)
- UniGS: Modeling Unitary 3D Gaussians for Novel View Synthesis from Sparse-view Images [20.089890859122168]
We introduce UniGS, a novel 3D Gaussian reconstruction and novel view synthesis model.
UniGS predicts a high-fidelity representation of 3D Gaussians from an arbitrary number of posed sparse-view images.
arXiv Detail & Related papers (2024-10-17T03:48:02Z)
- EVA-Gaussian: 3D Gaussian-based Real-time Human Novel View Synthesis under Diverse Camera Settings [11.248908608011941]
EVA-Gaussian is a real-time pipeline for 3D human novel view synthesis across diverse camera settings.
We introduce an Efficient cross-View Attention (EVA) module to accurately estimate the position of each 3D Gaussian from the source images.
We incorporate a powerful anchor loss function for both 3D Gaussian attributes and human face landmarks.
arXiv Detail & Related papers (2024-10-02T11:23:08Z)
- Generalizable Human Gaussians for Sparse View Synthesis [48.47812125126829]
This paper introduces a new method to learn generalizable human Gaussians that allows photorealistic and accurate view-rendering of a new human subject from a limited set of sparse views.
A pivotal innovation of our approach involves reformulating the learning of 3D Gaussian parameters into a regression process defined on the 2D UV space of a human template.
Our method outperforms recent methods on both within-dataset generalization as well as cross-dataset generalization settings.
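The UV-space reformulation mentioned above can be made concrete with a small sketch: instead of predicting free-floating 3D Gaussian parameters, each Gaussian's attributes are written to a texel of the human template's 2D UV map, so a standard 2D network can regress them. The tiny resolution, function names, and attribute dictionary here are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of regressing Gaussian parameters on a 2D UV map:
# each Gaussian is stored at the texel given by its template UV
# coordinates, turning 3D parameter prediction into a 2D-grid problem.

UV_RES = 4  # tiny UV map for illustration; real methods use e.g. 256x256

def uv_to_texel(u, v, res=UV_RES):
    # Map continuous UV coordinates in [0, 1) to integer texel indices.
    return min(int(u * res), res - 1), min(int(v * res), res - 1)

def write_gaussian_to_uv(uv_map, u, v, params):
    # Store one Gaussian's parameters (e.g. offset, scale, opacity)
    # at the texel corresponding to its template UV coordinates.
    i, j = uv_to_texel(u, v)
    uv_map[i][j] = params
    return i, j

uv_map = [[None] * UV_RES for _ in range(UV_RES)]
write_gaussian_to_uv(uv_map, 0.1, 0.9, {"opacity": 0.8})
```

The benefit of this parameterization is that every subject shares the same UV layout, which is what makes the learned regression transferable across people.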
arXiv Detail & Related papers (2024-07-17T17:56:30Z)
- UV Gaussians: Joint Learning of Mesh Deformation and Gaussian Textures for Human Avatar Modeling [71.87807614875497]
We propose UV Gaussians, which models the 3D human body by jointly learning mesh deformations and 2D UV-space Gaussian textures.
We collect and process a new dataset of human motion, which includes multi-view images, scanned models, parametric model registration, and corresponding texture maps. Experimental results demonstrate that our method achieves state-of-the-art synthesis of novel views and novel poses.
arXiv Detail & Related papers (2024-03-18T09:03:56Z)
- Template-Free Single-View 3D Human Digitalization with Diffusion-Guided LRM [29.13412037370585]
We present Human-LRM, a diffusion-guided feed-forward model that predicts the implicit field of a human from a single image.
Our method is able to capture humans without any template prior, e.g., SMPL, and effectively enhances occluded parts with rich and realistic details.
arXiv Detail & Related papers (2024-01-22T18:08:22Z)
- Deformable 3D Gaussian Splatting for Animatable Human Avatars [50.61374254699761]
We propose a fully explicit approach to construct a digital avatar from as little as a single monocular sequence.
ParDy-Human constitutes an explicit model for realistic dynamic human avatars which requires significantly fewer training views and images.
Our avatar learning is free of additional annotations such as Splat masks and can be trained with variable backgrounds while inferring full-resolution images efficiently, even on consumer hardware.
arXiv Detail & Related papers (2023-12-22T20:56:46Z)
- Animatable 3D Gaussians for High-fidelity Synthesis of Human Motions [37.50707388577952]
We present a novel animatable 3D Gaussian model for rendering high-fidelity free-view human motions in real time.
Compared to existing NeRF-based methods, the model better captures high-frequency details without jittering across video frames.
arXiv Detail & Related papers (2023-11-22T14:00:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.