OPHAvatars: One-shot Photo-realistic Head Avatars
- URL: http://arxiv.org/abs/2307.09153v2
- Date: Wed, 19 Jul 2023 01:27:17 GMT
- Title: OPHAvatars: One-shot Photo-realistic Head Avatars
- Authors: Shaoxu Li
- Abstract summary: Given a portrait, our method synthesizes a coarse talking-head video using driving keypoint features.
With rendered images of the coarse avatar, our method updates the low-quality images with a blind face restoration model.
After several iterations, our method can synthesize a photo-realistic animatable 3D neural head avatar.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method for synthesizing photo-realistic digital avatars from
only one portrait as the reference. Given a portrait, our method synthesizes a
coarse talking-head video using driving keypoint features. From the coarse
video, our method synthesizes a coarse talking-head avatar with a deforming
neural radiance field. With rendered images of the coarse avatar, our method
updates the low-quality images with a blind face restoration model. With the
updated images, we retrain the avatar for higher quality. After several
iterations, our method can synthesize a photo-realistic, animatable 3D neural
head avatar. The motivation of our method is that a deformable neural radiance
field can eliminate the unnatural distortion caused by the image-to-video
method. Our method outperforms state-of-the-art methods in quantitative and
qualitative studies on various subjects.
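As a rough illustration of the pipeline above, the sketch below wires the three stages into an iterative refinement loop. The callables synth_video, train_nerf, and restore are hypothetical placeholders standing in for the paper's image-to-video model, deformable NeRF training, and blind face restoration model; this is a minimal sketch, not the authors' code.

```python
from typing import Callable, List, Sequence

def build_avatar(
    portrait,                  # single reference portrait image
    keypoints: Sequence,       # driving keypoint sequence
    synth_video: Callable,     # one-shot image-to-video synthesis model
    train_nerf: Callable,      # fits a deformable NeRF to video frames
    restore: Callable,         # blind face restoration model
    num_iters: int = 3,
):
    """Iterative coarse-to-fine avatar refinement (hypothetical sketch)."""
    # 1) Synthesize a coarse talking-head video from the portrait.
    frames: List = synth_video(portrait, keypoints)
    avatar = None
    for _ in range(num_iters):
        # 2) Fit a deformable neural radiance field to the current frames.
        avatar = train_nerf(frames)
        # 3) Render the coarse avatar, then sharpen the low-quality
        #    renders with the blind face restoration model; the restored
        #    frames become the training set for the next iteration.
        frames = [restore(avatar.render(kp)) for kp in keypoints]
    return avatar
```

The loop mirrors the abstract: each pass replaces the training frames with restored renders, so the radiance field gradually sheds the distortions introduced by the initial image-to-video step.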
Related papers
- Generalizable and Animatable Gaussian Head Avatar [50.34788590904843]
We propose Generalizable and Animatable Gaussian head Avatar (GAGAvatar) for one-shot animatable head avatar reconstruction.
We generate the parameters of 3D Gaussians from a single image in a single forward pass.
Our method exhibits superior performance compared to previous methods in terms of reconstruction quality and expression accuracy.
arXiv Detail & Related papers (2024-10-10T14:29:00Z)
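The phrase "parameters of 3D Gaussians from a single image in a single forward pass" can be made concrete with a toy regression head like the one below. The layer sizes and the 14-dimensional parameter split are illustrative assumptions, not the GAGAvatar architecture.

```python
import torch
import torch.nn as nn

class GaussianParamHead(nn.Module):
    """Toy head regressing per-Gaussian parameters from a pooled image
    feature. The 14-dim split (3 position + 3 scale + 4 quaternion +
    3 color + 1 opacity) is an assumption for illustration."""

    def __init__(self, feat_dim: int = 256, num_gaussians: int = 4096):
        super().__init__()
        self.num_gaussians = num_gaussians
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, num_gaussians * 14),
        )

    def forward(self, feat: torch.Tensor) -> dict:
        # feat: (B, feat_dim) image feature from any encoder backbone.
        p = self.mlp(feat).view(-1, self.num_gaussians, 14)
        return {
            "xyz":      p[..., 0:3],                      # positions
            "scale":    p[..., 3:6].exp(),                # positive scales
            "rotation": nn.functional.normalize(p[..., 6:10], dim=-1),
            "color":    p[..., 10:13].sigmoid(),          # RGB in [0, 1]
            "opacity":  p[..., 13:14].sigmoid(),          # alpha in [0, 1]
        }
```

A differentiable Gaussian splatting rasterizer would then render these parameters, keeping the whole reconstruction feed-forward.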
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real time.
At the core of our method is a hierarchical representation of head models that allows us to capture the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z)
- One2Avatar: Generative Implicit Head Avatar For Few-shot User Adaptation [31.310769289315648]
This paper introduces a novel approach to creating high-quality head avatars using only a single image or a few images per user.
We learn a generative model for 3D animatable photo-realistic head avatars from a multi-view dataset of expressions from 2407 subjects.
Our method demonstrates compelling results and outperforms existing state-of-the-art methods for few-shot avatar adaptation.
arXiv Detail & Related papers (2024-02-19T07:48:29Z)
- 360° Volumetric Portrait Avatar [20.94425848146312]
We propose a novel method for reconstructing 360° photo-realistic portrait avatars of human subjects based solely on monocular video inputs.
We evaluate our approach on captured real-world data and compare against state-of-the-art monocular reconstruction methods.
arXiv Detail & Related papers (2023-12-08T19:00:03Z)
- HeadSculpt: Crafting 3D Head Avatars with Text [143.14548696613886]
We introduce a versatile pipeline dubbed HeadSculpt for crafting 3D head avatars from textual prompts.
We first equip the diffusion model with 3D awareness by leveraging landmark-based control and a learned textual embedding.
We propose a novel identity-aware editing score distillation strategy to optimize a textured mesh with a high-resolution differentiable rendering technique.
arXiv Detail & Related papers (2023-06-05T16:53:58Z)
- Instruct-Video2Avatar: Video-to-Avatar Generation with Instructions [0.0]
Given a short monocular RGB video and text instructions, our method uses an image-conditioned diffusion model to edit one head image.
Our method then synthesizes edited, photo-realistic, animatable 3D neural head avatars with a deformable neural radiance field head-synthesis method.
arXiv Detail & Related papers (2023-06-05T14:10:28Z)
- HQ3DAvatar: High Quality Controllable 3D Head Avatar [65.70885416855782]
This paper presents a novel approach to building highly photorealistic digital head avatars.
Our method learns a canonical space via an implicit function parameterized by a neural network.
At test time, our method is driven by a monocular RGB video.
arXiv Detail & Related papers (2023-03-25T13:56:33Z)
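For intuition on "learns a canonical space via an implicit function parameterized by a neural network", here is a minimal sketch assuming a deformation MLP that warps observed points into canonical space, where a second MLP predicts density and color. The layer widths and the expression conditioning are guesses, not the HQ3DAvatar model.

```python
import torch
import torch.nn as nn

class CanonicalField(nn.Module):
    """Sketch: observed points are warped back to a shared canonical
    space, where a static field predicts (density, r, g, b)."""

    def __init__(self, expr_dim: int = 32, hidden: int = 128):
        super().__init__()
        self.deform = nn.Sequential(          # expression-conditioned warp
            nn.Linear(3 + expr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),             # offset into canonical space
        )
        self.field = nn.Sequential(           # canonical radiance field
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),             # (density, r, g, b)
        )

    def forward(self, x: torch.Tensor, expr: torch.Tensor) -> torch.Tensor:
        # x: (N, 3) sample points; expr: (N, expr_dim) per-frame code
        # estimated from the driving monocular RGB video.
        x_canonical = x + self.deform(torch.cat([x, expr], dim=-1))
        return self.field(x_canonical)
```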
- READ Avatars: Realistic Emotion-controllable Audio Driven Avatars [11.98034899127065]
We present READ Avatars, a 3D-based approach for generating 2D avatars driven by audio input with direct and granular control over the emotion.
Previous methods are unable to achieve realistic animation due to the many-to-many nature of audio-to-expression mappings.
This removes the smoothing effect of regression-based models and helps to improve the realism and expressiveness of the generated avatars.
arXiv Detail & Related papers (2023-03-01T18:56:43Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- I M Avatar: Implicit Morphable Head Avatars from Videos [68.13409777995392]
We propose IMavatar, a novel method for learning implicit head avatars from monocular videos.
Inspired by the fine-grained control mechanisms afforded by conventional 3DMMs, we represent the expression- and pose-related deformations via learned blendshapes and skinning fields.
We show quantitatively and qualitatively that our method improves geometry and covers a more complete expression space compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-14T15:30:32Z)
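The blendshape-and-skinning formulation in the I M Avatar summary admits a compact sketch: a canonical point is first offset by expression blendshapes, then posed by linear blend skinning. The shapes and conventions below are assumptions for illustration, not IMavatar's exact formulation.

```python
import torch

def deform_point(x_c, expr_coeffs, blendshapes, skin_weights, bone_transforms):
    """Toy blendshape + linear-blend-skinning deformation.
      x_c:             (3,)      canonical point
      expr_coeffs:     (E,)      expression coefficients
      blendshapes:     (E, 3)    learned per-point expression offsets
      skin_weights:    (B,)      learned skinning weights (sum to 1)
      bone_transforms: (B, 4, 4) per-bone rigid transforms
    """
    # Expression: add a weighted sum of learned blendshape offsets.
    x = x_c + expr_coeffs @ blendshapes                         # (3,)
    # Pose: blend the bone transforms with the learned skinning weights.
    T = (skin_weights[:, None, None] * bone_transforms).sum(0)  # (4, 4)
    x_h = torch.cat([x, x.new_ones(1)])                         # homogeneous
    return (T @ x_h)[:3]
```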
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.