Drivable 3D Gaussian Avatars
- URL: http://arxiv.org/abs/2311.08581v1
- Date: Tue, 14 Nov 2023 22:54:29 GMT
- Title: Drivable 3D Gaussian Avatars
- Authors: Wojciech Zielonka, Timur Bagautdinov, Shunsuke Saito, Michael Zollhöfer, Justus Thies, Javier Romero
- Abstract summary: Current drivable avatars require either accurate 3D registrations during training, dense input images during testing, or both.
This work uses the recently presented 3D Gaussian Splatting (3DGS) technique to render realistic humans at real-time framerates.
The Gaussian primitives are deformed with compact cage deformations, which are driven by joint angles and keypoints and are therefore more suitable for communication applications.
- Score: 26.346626608626057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable
model for human bodies rendered with Gaussian splats. Current photorealistic
drivable avatars require either accurate 3D registrations during training,
dense input images during testing, or both. The ones based on neural radiance
fields also tend to be prohibitively slow for telepresence applications. This
work uses the recently presented 3D Gaussian Splatting (3DGS) technique to
render realistic humans at real-time framerates, using dense calibrated
multi-view videos as input. To deform those primitives, we depart from the
commonly used point deformation method of linear blend skinning (LBS) and use a
classic volumetric deformation method: cage deformations. Given their smaller
size, we drive these deformations with joint angles and keypoints, which are
more suitable for communication applications. Our experiments on nine subjects
with varied body shapes, clothes, and motions obtain higher-quality results
than state-of-the-art methods when using the same training and test data.
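To make the contrast with linear blend skinning concrete, the following is a minimal numpy sketch of cage-based deformation under simplifying assumptions (a tetrahedral cage, with each point's enclosing cell precomputed in `tet_ids`); it illustrates the general technique, not the D3GA implementation:

```python
import numpy as np

def barycentric_weights(p, tet):
    """Barycentric coordinates of point p w.r.t. a tetrahedron (4x3 array)."""
    # Solve [v1-v0, v2-v0, v3-v0]^T w = p - v0 for the last three weights.
    T = (tet[1:] - tet[0]).T                         # 3x3 edge matrix
    w123 = np.linalg.solve(T, p - tet[0])
    return np.concatenate([[1.0 - w123.sum()], w123])

def deform_with_cage(points, rest_cage, posed_cage, tet_ids):
    """Move each point with its enclosing tetrahedral cage cell.

    Weights are computed once in the rest pose and reused for every new
    cage configuration, so the deformation is linear in the cage vertices.
    """
    out = np.empty_like(points)
    for i, (p, tid) in enumerate(zip(points, tet_ids)):
        w = barycentric_weights(p, rest_cage[tid])   # fixed rest-pose weights
        out[i] = w @ posed_cage[tid]                 # blend posed cage vertices
    return out
```

Because the rest-pose weights are fixed, each posed point is a linear function of the cage vertices, which is what makes a small set of cage nodes convenient to drive from joint angles or keypoints.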
Related papers
- SIGMAN: Scaling 3D Human Gaussian Generation with Millions of Assets [72.26350984924129]
We propose a latent space generation paradigm for 3D human digitization.
We transform the ill-posed low-to-high-dimensional mapping problem into a learnable distribution shift.
We employ a multi-view optimization approach combined with synthetic data to construct the HGS-1M dataset.
arXiv Detail & Related papers (2025-04-09T15:38:18Z)
- DirectTriGS: Triplane-based Gaussian Splatting Field Representation for 3D Generation [37.09199962653554]
We present DirectTriGS, a novel framework designed for 3D object generation with Gaussian Splatting (GS).
The proposed generation framework can produce high-quality 3D object geometry and rendering results in the text-to-3D task.
arXiv Detail & Related papers (2025-03-10T04:05:38Z)
- Neural Signed Distance Function Inference through Splatting 3D Gaussians Pulled on Zero-Level Set [49.780302894956776]
It is vital to infer a signed distance function (SDF) in multi-view surface reconstruction.
We propose a method that seamlessly merges 3DGS with the learning of neural SDFs.
Our numerical and visual comparisons show our superiority over state-of-the-art results on widely used benchmarks.
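The "pulling" operation named in the title is, in this line of work, a Newton-style projection of a point onto the SDF's zero-level set. A minimal PyTorch sketch, where `sdf` is an assumed callable returning (N, 1) signed distances:

```python
import torch

def pull_to_zero_level_set(x, sdf):
    """Project points x toward the zero-level set of a neural SDF:
    x' = x - f(x) * grad f(x) / ||grad f(x)||   (one Newton-like step)."""
    x = x.detach().requires_grad_(True)
    f = sdf(x)                                   # (N, 1) signed distances
    (grad,) = torch.autograd.grad(f.sum(), x, create_graph=True)
    n = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)  # unit gradient direction
    return x - f * n                             # step back to the surface
```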
arXiv Detail & Related papers (2024-10-18T05:48:06Z)
- L3DG: Latent 3D Gaussian Diffusion [74.36431175937285]
L3DG is the first approach for generative 3D modeling of 3D Gaussians through a latent 3D Gaussian diffusion formulation.
We employ a sparse convolutional architecture to efficiently operate on room-scale scenes.
By leveraging the 3D Gaussian representation, the generated scenes can be rendered from arbitrary viewpoints in real-time.
arXiv Detail & Related papers (2024-10-17T13:19:32Z)
- Generalizable and Animatable Gaussian Head Avatar [50.34788590904843]
We propose Generalizable and Animatable Gaussian head Avatar (GAGAvatar) for one-shot animatable head avatar reconstruction.
We generate the parameters of 3D Gaussians from a single image in a single forward pass.
Our method exhibits superior performance compared to previous methods in terms of reconstruction quality and expression accuracy.
arXiv Detail & Related papers (2024-10-10T14:29:00Z)
- DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation [10.250715657201363]
We introduce DreamMesh4D, a novel framework that combines a mesh representation with a geometric skinning technique to generate high-quality 4D objects from a monocular video.
Our method is compatible with modern graphic pipelines, showcasing its potential in the 3D gaming and film industry.
arXiv Detail & Related papers (2024-10-09T10:41:08Z)
- iHuman: Instant Animatable Digital Humans From Monocular Videos [16.98924995658091]
We present a fast, simple, yet effective method for creating animatable 3D digital humans from monocular videos.
This work achieves accurate 3D mesh-based modelling of the human body and illustrates the need for it.
Our method is faster by an order of magnitude (in terms of training time) than its closest competitor.
arXiv Detail & Related papers (2024-07-15T18:51:51Z)
- UV Gaussians: Joint Learning of Mesh Deformation and Gaussian Textures for Human Avatar Modeling [71.87807614875497]
We propose UV Gaussians, which models the 3D human body by jointly learning mesh deformations and 2D UV-space Gaussian textures.
We collect and process a new human-motion dataset that includes multi-view images, scanned models, parametric model registrations, and corresponding texture maps. Experimental results demonstrate that our method achieves state-of-the-art synthesis of novel views and novel poses.
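Storing Gaussian attributes as 2D UV-space textures means each Gaussian is parameterized by a lookup into image-like maps. A small numpy sketch of the idea (nearest-neighbor lookup; the map names are illustrative assumptions, and a real system would likely interpolate bilinearly):

```python
import numpy as np

def gaussians_from_uv(uv_maps, uv_coords):
    """Look up per-Gaussian attributes stored as 2D UV-space textures.

    uv_maps: dict of (H, W, C) arrays, e.g. {'offset': ..., 'scale': ..., 'color': ...}
    uv_coords: (N, 2) coordinates in [0, 1] assigned to each Gaussian.
    """
    H, W, _ = next(iter(uv_maps.values())).shape
    rows = np.clip((uv_coords[:, 1] * (H - 1)).round().astype(int), 0, H - 1)
    cols = np.clip((uv_coords[:, 0] * (W - 1)).round().astype(int), 0, W - 1)
    return {name: tex[rows, cols] for name, tex in uv_maps.items()}
```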
arXiv Detail & Related papers (2024-03-18T09:03:56Z)
- Hybrid Explicit Representation for Ultra-Realistic Head Avatars [55.829497543262214]
We introduce a novel approach to creating ultra-realistic head avatars and rendering them in real-time.
A UV-mapped 3D mesh is utilized to capture sharp and rich textures on smooth surfaces, while 3D Gaussian Splatting is employed to represent complex geometric structures.
Experiments show that our modeled results exceed those of state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- Rig3DGS: Creating Controllable Portraits from Casual Monocular Videos [33.779636707618785]
We introduce Rig3DGS to create controllable 3D human portraits from casual smartphone videos.
The key innovation is a carefully designed deformation method, which is guided by a learnable prior derived from a 3D morphable model.
We demonstrate the effectiveness of our learned deformation through extensive quantitative and qualitative experiments.
arXiv Detail & Related papers (2024-02-06T05:40:53Z)
- Deformable 3D Gaussian Splatting for Animatable Human Avatars [50.61374254699761]
We propose a fully explicit approach to construct a digital avatar from as little as a single monocular sequence.
ParDy-Human constitutes an explicit model for realistic dynamic human avatars which requires significantly fewer training views and images.
Our avatar learning requires no additional annotations such as splat masks and can be trained with variable backgrounds, while inferring full-resolution images efficiently even on consumer hardware.
arXiv Detail & Related papers (2023-12-22T20:56:46Z)
- 3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting [32.63571465495127]
We introduce an approach that creates animatable human avatars from monocular videos using 3D Gaussian Splatting (3DGS).
We learn a non-rigid network to reconstruct animatable clothed human avatars that can be trained within 30 minutes and rendered at real-time frame rates (50+ FPS).
Experimental results show that our method achieves comparable and even better performance compared to state-of-the-art approaches on animatable avatar creation from a monocular input.
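A non-rigid deformation network of this kind is typically a small MLP mapping a canonical position and a pose conditioning code to a positional offset. A hedged PyTorch sketch (the layer sizes and the `pose_code` conditioning are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class NonRigidDeform(nn.Module):
    """Tiny MLP predicting per-Gaussian offsets from canonical position
    and a pose code (an assumed conditioning vector)."""
    def __init__(self, pose_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x_canonical, pose_code):
        # Broadcast the pose code to all N points, then predict offsets.
        pose = pose_code.expand(x_canonical.shape[0], -1)
        return x_canonical + self.net(torch.cat([x_canonical, pose], dim=-1))
```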
arXiv Detail & Related papers (2023-12-14T18:54:32Z)
- ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering [62.81677824868519]
We propose an animatable Gaussian splatting approach for photorealistic rendering of dynamic humans in real-time.
We parameterize the clothed human as animatable 3D Gaussians, which can be efficiently splatted into image space to generate the final rendering.
We benchmark ASH with competing methods on pose-controllable avatars, demonstrating that our method outperforms existing real-time methods by a large margin and shows comparable or even better results than offline methods.
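Splatting 3D Gaussians into image space relies on the standard first-order projection from the 3DGS/EWA literature, Sigma' = J W Sigma W^T J^T. A minimal numpy sketch for a single Gaussian (the pinhole parameterization via `fx`, `fy` is a simplifying assumption):

```python
import numpy as np

def project_gaussian(mu, cov, W, K):
    """Project one 3D Gaussian (mean mu, covariance cov) to image space.

    W: 3x4 world-to-camera matrix; K = (fx, fy) focal lengths.
    Uses the first-order (EWA) approximation Sigma' = J V cov V^T J^T.
    """
    t = W[:, :3] @ mu + W[:, 3]                      # camera-space center
    fx, fy = K
    J = np.array([[fx / t[2], 0.0, -fx * t[0] / t[2] ** 2],
                  [0.0, fy / t[2], -fy * t[1] / t[2] ** 2]])
    cov2d = J @ W[:, :3] @ cov @ W[:, :3].T @ J.T    # 2x2 image-space covariance
    center2d = np.array([fx * t[0] / t[2], fy * t[1] / t[2]])
    return center2d, cov2d
```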
arXiv Detail & Related papers (2023-12-10T17:07:37Z)
- Gaussian3Diff: 3D Gaussian Diffusion for 3D Full Head Synthesis and Editing [53.05069432989608]
We present a novel framework for generating 3D human heads with remarkable flexibility.
Our method facilitates the creation of diverse and realistic 3D human heads with fine-grained editing over facial features and expressions.
arXiv Detail & Related papers (2023-12-05T19:05:58Z)
- SplatArmor: Articulated Gaussian splatting for animatable humans from monocular RGB videos [15.74530749823217]
We propose SplatArmor, a novel approach for recovering detailed and animatable human models by 'armoring' a parameterized body model with 3D Gaussians.
Our approach represents the human as a set of 3D Gaussians within a canonical space, whose articulation is defined by extending the skinning of the underlying SMPL geometry.
We show compelling results on the ZJU MoCap and People Snapshot datasets, which underscore the effectiveness of our method for controllable human synthesis.
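SplatArmor extends the skinning of the underlying SMPL geometry to the Gaussians; as a reference point, here is a minimal numpy sketch of the base operation, classic linear blend skinning x' = sum_k w_k (R_k x + t_k):

```python
import numpy as np

def linear_blend_skinning(x, weights, joint_transforms):
    """Classic LBS: x' = sum_k w_k * (R_k x + t_k).

    x: (N, 3) canonical points; weights: (N, K) skinning weights;
    joint_transforms: (K, 4, 4) per-joint rigid transforms.
    """
    x_h = np.concatenate([x, np.ones((x.shape[0], 1))], axis=1)   # homogeneous (N, 4)
    posed = np.einsum('nk,kij,nj->ni', weights, joint_transforms, x_h)
    return posed[:, :3]
```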
arXiv Detail & Related papers (2023-11-17T18:47:07Z)
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.