Synthetic Prior for Few-Shot Drivable Head Avatar Inversion
- URL: http://arxiv.org/abs/2501.06903v1
- Date: Sun, 12 Jan 2025 19:01:05 GMT
- Title: Synthetic Prior for Few-Shot Drivable Head Avatar Inversion
- Authors: Wojciech Zielonka, Stephan J. Garbin, Alexandros Lattas, George Kopanas, Paulo Gotardo, Thabo Beeler, Justus Thies, Timo Bolkart
- Abstract summary: We present SynShot, a novel method for the few-shot inversion of a drivable head avatar based on a synthetic prior.
Inspired by machine learning models trained solely on synthetic data, we propose a method that learns a prior model from a large dataset of synthetic heads.
We model the head avatar using 3D Gaussian splatting and a convolutional encoder-decoder that outputs Gaussian parameters in UV texture space.
- Score: 61.51887011274453
- Abstract: We present SynShot, a novel method for the few-shot inversion of a drivable head avatar based on a synthetic prior. We tackle two major challenges. First, training a controllable 3D generative network requires a large number of diverse sequences, for which pairs of images and high-quality tracked meshes are not always available. Second, state-of-the-art monocular avatar models struggle to generalize to new views and expressions, lacking a strong prior and often overfitting to a specific viewpoint distribution. Inspired by machine learning models trained solely on synthetic data, we propose a method that learns a prior model from a large dataset of synthetic heads with diverse identities, expressions, and viewpoints. With few input images, SynShot fine-tunes the pretrained synthetic prior to bridge the domain gap, modeling a photorealistic head avatar that generalizes to novel expressions and viewpoints. We model the head avatar using 3D Gaussian splatting and a convolutional encoder-decoder that outputs Gaussian parameters in UV texture space. To account for the different modeling complexities over parts of the head (e.g., skin vs hair), we embed the prior with explicit control for upsampling the number of per-part primitives. Compared to state-of-the-art monocular methods that require thousands of real training images, SynShot significantly improves novel view and expression synthesis.
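The abstract's modeling recipe (a convolutional decoder that emits 3D Gaussian splatting parameters as UV texture maps, with explicit per-part control over primitive counts) can be illustrated with a short sketch. The code below is a hypothetical PyTorch reconstruction under stated assumptions: the channel layout, resolutions, and the `UVGaussianDecoder` / `upsample_part` names are ours for illustration, not SynShot's released implementation.

```python
# Minimal sketch (PyTorch, illustrative only): a convolutional decoder that
# predicts Gaussian splatting parameters as UV-space maps, in the spirit of
# SynShot's encoder-decoder. Sizes and names are assumptions, not the paper's code.
import torch
import torch.nn as nn


class UVGaussianDecoder(nn.Module):
    """Decodes a latent code into per-texel Gaussian parameters in UV space.

    Each UV texel carries one Gaussian primitive:
      3 (position offset) + 3 (log-scale) + 4 (rotation quaternion)
      + 3 (color) + 1 (opacity) = 14 channels.
    """

    def __init__(self, latent_dim: int = 256, uv_res: int = 128):
        super().__init__()
        # Project the latent to a coarse 8x8 feature grid, then upsample to uv_res.
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        blocks, ch, res = [], 256, 8
        while res < uv_res:
            blocks += [
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(ch, max(ch // 2, 32), 3, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ]
            ch, res = max(ch // 2, 32), res * 2
        blocks.append(nn.Conv2d(ch, 14, 1))  # 14 Gaussian channels per texel
        self.net = nn.Sequential(*blocks)

    def forward(self, z: torch.Tensor) -> dict:
        x = self.fc(z).view(-1, 256, 8, 8)
        maps = self.net(x)  # (B, 14, uv_res, uv_res)
        pos, log_scale, quat, color, opacity = maps.split([3, 3, 4, 3, 1], dim=1)
        return {
            "position": pos,                        # offsets from the tracked mesh surface
            "scale": log_scale.exp(),               # positive per-axis scales
            "rotation": nn.functional.normalize(quat, dim=1),  # unit quaternions
            "color": color.sigmoid(),
            "opacity": opacity.sigmoid(),
        }


def upsample_part(param_map: torch.Tensor, part_mask: torch.Tensor, factor: int):
    """Per-part primitive upsampling: hair needs more Gaussians than skin, so a
    per-part factor densifies the corresponding UV region before splatting.
    Expects (B, C, H, W) maps and a (B, 1, H, W) boolean part mask; the
    primitive count inside the mask grows by factor**2."""
    up = nn.functional.interpolate(param_map, scale_factor=factor,
                                   mode="bilinear", align_corners=False)
    mask = nn.functional.interpolate(part_mask.float(), scale_factor=factor,
                                     mode="nearest")
    return up, mask.bool()


# Usage: decode one avatar latent into 128x128 Gaussian parameter maps.
dec = UVGaussianDecoder()
params = dec(torch.randn(1, 256))
```

Predicting parameters as UV maps ties each primitive to a fixed texel on the tracked mesh, which is likely what makes the avatar drivable: expression changes move the mesh, and the attached Gaussians follow.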
Related papers
- GAS: Generative Avatar Synthesis from a Single Image [54.95198111659466]
We introduce a generalizable and unified framework to synthesize view-consistent and temporally coherent avatars from a single image.
Our approach combines the reconstruction power of regression-based 3D human reconstruction with the generative capabilities of a diffusion model.
arXiv Detail & Related papers (2025-02-10T19:00:39Z)
- Cafca: High-quality Novel View Synthesis of Expressive Faces from Casual Few-shot Captures [33.463245327698]
We present a novel volumetric prior on human faces that allows for high-fidelity expressive face modeling.
We leverage a 3D Morphable Face Model to synthesize a large training set, rendering each identity with different expressions.
We then train a conditional Neural Radiance Field prior on this synthetic dataset and, at inference time, fine-tune the model on a very sparse set of real images of a single subject (a minimal sketch of this pretrain-then-fine-tune recipe appears after the list below).
arXiv Detail & Related papers (2024-10-01T12:24:50Z)
- GPHM: Gaussian Parametric Head Model for Monocular Head Avatar Reconstruction [47.113910048252805]
High-fidelity 3D human head avatars are crucial for applications in VR/AR, digital human, and film production.
Recent advances have leveraged morphable face models to generate animated head avatars, representing varying identities and expressions.
We introduce 3D Gaussian Parametric Head Model, which employs 3D Gaussians to accurately represent the complexities of the human head.
arXiv Detail & Related papers (2024-07-21T06:03:11Z)
- HR Human: Modeling Human Avatars with Triangular Mesh and High-Resolution Textures from Videos [52.23323966700072]
We present a framework for acquiring human avatars that are attached with high-resolution physically-based material textures and mesh from monocular video.
Our method introduces a novel information fusion strategy to combine the information from the monocular video and synthesize virtual multi-view images.
Experiments show that our approach outperforms previous representations in terms of fidelity, and the explicit mesh-and-texture output supports deployment in common triangle-mesh rendering pipelines.
arXiv Detail & Related papers (2024-05-18T11:49:09Z)
- Deformable 3D Gaussian Splatting for Animatable Human Avatars [50.61374254699761]
We propose a fully explicit approach to construct a digital avatar from as little as a single monocular sequence.
ParDy-Human constitutes an explicit model for realistic dynamic human avatars which requires significantly fewer training views and images.
Learning our avatars requires no additional annotations such as Splat masks; they can be trained with variable backgrounds and infer full-resolution images efficiently, even on consumer hardware.
arXiv Detail & Related papers (2023-12-22T20:56:46Z)
- BakedAvatar: Baking Neural Fields for Real-Time Head Avatar Synthesis [7.485318043174123]
We introduce BakedAvatar, a novel representation for real-time neural head avatar synthesis.
Our approach extracts layered meshes from learned isosurfaces of the head and computes expression-, pose-, and view-dependent appearances.
Experimental results demonstrate that our representation generates photorealistic results of comparable quality to other state-of-the-art methods.
arXiv Detail & Related papers (2023-11-09T17:05:53Z)
- 3DMM-RF: Convolutional Radiance Fields for 3D Face Modeling [111.98096975078158]
We introduce a style-based generative network that synthesizes in one pass all and only the required rendering samples of a neural radiance field.
We show that this model can accurately be fit to "in-the-wild" facial images of arbitrary pose and illumination, extract the facial characteristics, and be used to re-render the face in controllable conditions.
arXiv Detail & Related papers (2022-09-15T15:28:45Z)
- Free-HeadGAN: Neural Talking Head Synthesis with Explicit Gaze Control [54.079327030892244]
Free-HeadGAN is a person-generic neural talking head synthesis system.
We show that modeling faces with sparse 3D facial landmarks is sufficient for achieving state-of-the-art generative performance.
arXiv Detail & Related papers (2022-08-03T16:46:08Z)
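Several entries above (SynShot itself and Cafca) share the same two-stage recipe: pretrain a prior on a large synthetic dataset, then fine-tune on a few real images of one subject to bridge the domain gap. The sketch below is a minimal, hypothetical PyTorch version of that loop; the function names, loss, and data interfaces are placeholders, not either paper's implementation.

```python
# Minimal sketch (PyTorch, illustrative): synthetic-prior pretraining followed
# by few-shot fine-tuning on real images. Models, losses, and loaders are
# placeholders for the papers' actual components.
import torch


def pretrain_prior(model, synthetic_loader, epochs=100, lr=1e-4):
    """Stage 1: learn identity/expression/viewpoint priors from synthetic renders."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, conditions in synthetic_loader:
            loss = (model(conditions) - images).abs().mean()  # L1 photometric loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model


def few_shot_finetune(model, real_images, real_conditions, steps=2000, lr=1e-5):
    """Stage 2: adapt the pretrained prior to a handful of real photos.

    A small learning rate keeps the solution near the prior, which is what
    lets the avatar generalize to novel expressions and viewpoints instead of
    overfitting the few observed views.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        loss = (model(real_conditions) - real_images).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```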