GST: Precise 3D Human Body from a Single Image with Gaussian Splatting Transformers
- URL: http://arxiv.org/abs/2409.04196v1
- Date: Fri, 6 Sep 2024 11:34:24 GMT
- Title: GST: Precise 3D Human Body from a Single Image with Gaussian Splatting Transformers
- Authors: Lorenza Prospero, Abdullah Hamdi, Joao F. Henriques, Christian Rupprecht
- Abstract summary: We base our work on 3D Gaussian Splatting (3DGS), a scene representation composed of a mixture of Gaussians.
We show that this combination can achieve fast inference of 3D human models from a single image without test-time optimization.
We also show that it can improve 3D pose estimation by better fitting human models that account for clothes and other variations.
- Score: 23.96688843662126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstructing realistic 3D human models from monocular images has significant applications in creative industries, human-computer interfaces, and healthcare. We base our work on 3D Gaussian Splatting (3DGS), a scene representation composed of a mixture of Gaussians. Predicting such mixtures for a human from a single input image is challenging, as it is a non-uniform density (with a many-to-one relationship with input pixels) with strict physical constraints. At the same time, it needs to be flexible to accommodate a variety of clothes and poses. Our key observation is that the vertices of standardized human meshes (such as SMPL) can provide an adequate density and approximate initial position for Gaussians. We can then train a transformer model to jointly predict comparatively small adjustments to these positions, as well as the other Gaussians' attributes and the SMPL parameters. We show empirically that this combination (using only multi-view supervision) can achieve fast inference of 3D human models from a single image without test-time optimization, expensive diffusion models, or 3D points supervision. We also show that it can improve 3D pose estimation by better fitting human models that account for clothes and other variations. The code is available on the project website https://abdullahamdi.com/gst/ .
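The abstract's core idea is to anchor one 3D Gaussian at each SMPL vertex and let a transformer predict small position offsets plus the remaining Gaussian attributes. A minimal sketch of that data flow is below; all names, shapes, and values (including the placeholder `predict_gaussians` head) are illustrative assumptions, not the authors' implementation:

```python
import random

# Hypothetical sketch of the GST setup: one 3D Gaussian is anchored at each
# SMPL vertex, and a predictor emits a small position offset plus the
# remaining Gaussian attributes. Values here are placeholders standing in
# for the transformer's outputs.

N_VERTS = 6890  # vertex count of the SMPL template mesh
random.seed(0)
smpl_vertices = [tuple(random.uniform(-1, 1) for _ in range(3))
                 for _ in range(N_VERTS)]  # stand-in for posed SMPL vertices

def predict_gaussians(vertices):
    """Stand-in for the transformer head: per-vertex Gaussian parameters."""
    gaussians = []
    for v in vertices:
        # Comparatively small adjustment, so Gaussians stay near the mesh.
        offset = tuple(random.uniform(-0.01, 0.01) for _ in range(3))
        gaussians.append({
            "mean": tuple(v[i] + offset[i] for i in range(3)),
            "scale": (0.02, 0.02, 0.02),       # anisotropic in general
            "rotation": (1.0, 0.0, 0.0, 0.0),  # unit quaternion
            "opacity": 0.9,
            "color": (0.5, 0.5, 0.5),          # could be SH coefficients
        })
    return gaussians

gaussians = predict_gaussians(smpl_vertices)
```

The mesh vertices give the non-uniform density the abstract mentions for free: body regions with more vertices automatically receive more Gaussians, while the bounded offsets keep the prediction physically plausible.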
Related papers
- iHuman: Instant Animatable Digital Humans From Monocular Videos [16.98924995658091]
We present a fast, simple, yet effective method for creating animatable 3D digital humans from monocular videos.
This work achieves, and illustrates the need for, accurate 3D mesh-type modelling of the human body.
Our method is faster by an order of magnitude (in terms of training time) than its closest competitor.
arXiv Detail & Related papers (2024-07-15T18:51:51Z)
- Generalizable Human Gaussians from Single-View Image [52.100234836129786]
We introduce a single-view generalizable Human Gaussian Model (HGM).
Our approach uses a ControlNet to refine rendered back-view images from coarse predicted human Gaussians.
To mitigate the potential generation of unrealistic human poses and shapes, we incorporate human priors from the SMPL-X model as a dual branch.
arXiv Detail & Related papers (2024-06-10T06:38:11Z)
- 3D Human Reconstruction in the Wild with Synthetic Data Using Generative Models [52.96248836582542]
We propose an effective approach based on recent diffusion models, termed HumanWild, which can effortlessly generate human images and corresponding 3D mesh annotations.
By exclusively employing generative models, we generate large-scale in-the-wild human images and high-quality annotations, eliminating the need for real-world data collection.
arXiv Detail & Related papers (2024-03-17T06:31:16Z)
- Deformable 3D Gaussian Splatting for Animatable Human Avatars [50.61374254699761]
We propose a fully explicit approach to construct a digital avatar from as little as a single monocular sequence.
ParDy-Human constitutes an explicit model for realistic dynamic human avatars which requires significantly fewer training views and images.
Our avatars learning is free of additional annotations such as Splat masks and can be trained with variable backgrounds while inferring full-resolution images efficiently even on consumer hardware.
arXiv Detail & Related papers (2023-12-22T20:56:46Z)
- GauHuman: Articulated Gaussian Splatting from Monocular Human Videos [58.553979884950834]
GauHuman is a 3D human model with Gaussian Splatting for both fast training (1-2 minutes) and real-time rendering (up to 189 FPS).
GauHuman encodes Gaussian Splatting in the canonical space and transforms 3D Gaussians from canonical space to posed space with linear blend skinning (LBS).
Experiments on ZJU_Mocap and MonoCap datasets demonstrate that GauHuman achieves state-of-the-art performance quantitatively and qualitatively with fast training and real-time rendering speed.
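The canonical-to-posed transform that GauHuman's summary mentions is standard linear blend skinning: each point's pose-space position is a skinning-weight blend of per-bone rigid transforms applied to its canonical position. A minimal pure-Python sketch with toy bone transforms and weights (all values illustrative):

```python
# Minimal linear blend skinning (LBS) sketch: blend per-bone 4x4 transforms
# with per-point skinning weights, then apply to the canonical position.
# Here the "points" play the role of canonical-space Gaussian centers.

def lbs(points, weights, bone_transforms):
    """points: list of (x, y, z); weights: per-point list over B bones;
    bone_transforms: B row-major 4x4 matrices. Returns posed points."""
    posed = []
    for p, w in zip(points, weights):
        homo = (p[0], p[1], p[2], 1.0)  # homogeneous coordinates
        out = []
        for i in range(3):
            # Weighted blend of row i across all bones, applied to homo.
            row = [sum(w[b] * bone_transforms[b][i][j] for b in range(len(w)))
                   for j in range(4)]
            out.append(sum(row[j] * homo[j] for j in range(4)))
        posed.append(tuple(out))
    return posed

# Toy rig: two bones, identity and a +1 translation along x.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
shifted  = [[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
pts = [(0.0, 0.0, 0.0)] * 3
w = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
posed = lbs(pts, w, [identity, shifted])
# x-coordinates of the posed points: 0.0, 0.5, 1.0
```

In a full system the Gaussians' rotations and covariances are transformed alongside their centers; this sketch shows only the positional blend.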
arXiv Detail & Related papers (2023-12-05T18:59:14Z)
- HumanGaussian: Text-Driven 3D Human Generation with Gaussian Splatting [113.37908093915837]
Existing methods optimize 3D representations like mesh or neural fields via score distillation sampling (SDS), which suffers from inadequate fine details or excessive training time.
In this paper, we propose an efficient yet effective framework, HumanGaussian, that generates high-quality 3D humans with fine-grained geometry and realistic appearance.
arXiv Detail & Related papers (2023-11-28T18:59:58Z)
- Animatable 3D Gaussians for High-fidelity Synthesis of Human Motions [37.50707388577952]
We present a novel animatable 3D Gaussian model for rendering high-fidelity free-view human motions in real time.
Compared to existing NeRF-based methods, the model owns better capability in high-frequency details without the jittering problem across video frames.
arXiv Detail & Related papers (2023-11-22T14:00:23Z)
- SplatArmor: Articulated Gaussian splatting for animatable humans from monocular RGB videos [15.74530749823217]
We propose SplatArmor, a novel approach for recovering detailed and animatable human models by 'armoring' a parameterized body model with 3D Gaussians.
Our approach represents the human as a set of 3D Gaussians within a canonical space, whose articulation is defined by extending the skinning of the underlying SMPL geometry.
We show compelling results on the ZJU MoCap and People Snapshot datasets, which underscore the effectiveness of our method for controllable human synthesis.
arXiv Detail & Related papers (2023-11-17T18:47:07Z)
- Drivable 3D Gaussian Avatars [26.346626608626057]
Current drivable avatars require either accurate 3D registrations during training, dense input images during testing, or both.
This work uses the recently presented 3D Gaussian Splatting (3DGS) technique to render realistic humans at real-time framerates.
Given their smaller size, we drive these deformations with joint angles and keypoints, which are more suitable for communication applications.
arXiv Detail & Related papers (2023-11-14T22:54:29Z)
- AvatarGen: A 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen enables unsupervised generation of 3D-aware clothed humans with various appearances and controllable geometries.
Our method can generate animatable 3D human avatars with high-quality appearance and geometry modeling.
It is suitable for many applications, e.g., single-view reconstruction, re-animation, and text-guided synthesis/editing.
arXiv Detail & Related papers (2022-11-26T15:15:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.