DiffusionRig: Learning Personalized Priors for Facial Appearance Editing
- URL: http://arxiv.org/abs/2304.06711v1
- Date: Thu, 13 Apr 2023 17:58:00 GMT
- Title: DiffusionRig: Learning Personalized Priors for Facial Appearance Editing
- Authors: Zheng Ding, Xuaner Zhang, Zhihao Xia, Lars Jebe, Zhuowen Tu, Xiuming
Zhang
- Abstract summary: DiffusionRig is a diffusion model conditioned on, or "rigged by," crude 3D face models.
It learns to map simplistic renderings of 3D face models to realistic photos of a given person.
By learning the CGI-to-photo mapping with such personalized priors, DiffusionRig can "rig" the lighting, facial expression, head pose, etc. of a portrait photo.
- Score: 29.967273146028177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the problem of learning person-specific facial priors from a small
number (e.g., 20) of portrait photos of the same person. This enables us to
edit this specific person's facial appearance, such as expression and lighting,
while preserving their identity and high-frequency facial details. Key to our
approach, which we dub DiffusionRig, is a diffusion model conditioned on, or
"rigged by," crude 3D face models estimated from single in-the-wild images by
an off-the-shelf estimator. On a high level, DiffusionRig learns to map
simplistic renderings of 3D face models to realistic photos of a given person.
Specifically, DiffusionRig is trained in two stages: It first learns generic
facial priors from a large-scale face dataset and then person-specific priors
from a small portrait photo collection of the person of interest. By learning
the CGI-to-photo mapping with such personalized priors, DiffusionRig can "rig"
the lighting, facial expression, head pose, etc. of a portrait photo,
conditioned only on coarse 3D models while preserving this person's identity
and other high-frequency characteristics. Qualitative and quantitative
experiments show that DiffusionRig outperforms existing approaches in both
identity preservation and photorealism. Please see the project website:
https://diffusionrig.github.io for the supplemental material, video, code, and
data.
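The abstract outlines the pipeline concretely enough to sketch in code: a denoising diffusion model that, at each step, sees both the noisy photo and per-pixel buffers rasterized from a coarse 3D face (estimated offline by an off-the-shelf tool such as DECA), trained first on a large face dataset and then fine-tuned on roughly 20 photos of one person. The sketch below is a minimal, hypothetical illustration of that two-stage recipe, not the authors' released code; the names `diffusion_loss` and `train_stage`, the `denoiser`/`encoder` modules, and the toy noise schedule are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def diffusion_loss(denoiser, photo, buffers, code, T=1000):
    """One DDPM-style training step: add noise to the photo at a random
    timestep and ask the network to predict that noise, conditioned on the
    rasterized 3D-face buffers (channel-concatenated) and a global code."""
    b = photo.shape[0]
    t = torch.randint(0, T, (b,), device=photo.device)
    noise = torch.randn_like(photo)
    # Toy cosine schedule for alpha_bar(t); any standard schedule would do.
    alpha_bar = torch.cos(0.5 * torch.pi * t.float() / T).pow(2).view(b, 1, 1, 1)
    noisy = alpha_bar.sqrt() * photo + (1.0 - alpha_bar).sqrt() * noise
    pred = denoiser(torch.cat([noisy, buffers], dim=1), t, code)
    return F.mse_loss(pred, noise)

def train_stage(denoiser, encoder, loader, epochs, lr, device="cuda"):
    """Shared loop for both stages. Stage 1: a large generic face dataset.
    Stage 2: the same loop, starting from stage-1 weights, on ~20 photos of
    the person of interest (the personalized prior)."""
    opt = torch.optim.Adam(
        list(denoiser.parameters()) + list(encoder.parameters()), lr=lr)
    denoiser.to(device).train()
    encoder.to(device).train()
    for _ in range(epochs):
        # `buffers` are per-pixel maps (e.g. normals, albedo, shading)
        # rasterized offline from a coarse 3D face fit to each photo.
        for photo, buffers in loader:
            photo, buffers = photo.to(device), buffers.to(device)
            code = encoder(photo)  # global appearance code
            loss = diffusion_loss(denoiser, photo, buffers, code)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

At editing time the same conditioning path is reused: the 3D face is re-estimated from the input photo, its lighting, expression, or pose parameters are altered, the buffers are re-rasterized, and a photo is sampled from the personalized model, so coarse geometry comes from the edited 3D model while identity and high-frequency detail come from the learned prior.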
Related papers
- Single Image, Any Face: Generalisable 3D Face Generation [59.9369171926757]
We propose a novel model, Gen3D-Face, which generates 3D human faces from a single unconstrained input image.
To the best of our knowledge, this is the first attempt and benchmark for creating photorealistic 3D human face avatars from single images.
arXiv Detail & Related papers (2024-09-25T14:56:37Z)
- FitDiff: Robust monocular 3D facial shape and reflectance estimation using Diffusion Models [79.65289816077629]
We present FitDiff, a diffusion-based 3D facial avatar generative model.
Our model accurately generates relightable facial avatars, utilizing an identity embedding extracted from an "in-the-wild" 2D facial image.
Being the first 3D LDM conditioned on face recognition embeddings, FitDiff reconstructs relightable human avatars that can be used as-is in common rendering engines.
arXiv Detail & Related papers (2023-12-07T17:35:49Z)
- GAN-Avatar: Controllable Personalized GAN-based Human Head Avatar [48.21353924040671]
We propose to learn person-specific animatable avatars from images without assuming access to precise facial expression tracking.
We learn a mapping from 3DMM facial expression parameters to the latent space of the generative model.
With this scheme, we decouple 3D appearance reconstruction and animation control to achieve high fidelity in image synthesis.
arXiv Detail & Related papers (2023-11-22T19:13:00Z)
- A Generative Framework for Self-Supervised Facial Representation Learning [18.094262972295702]
Self-supervised representation learning has gained increasing attention for its strong generalization ability without relying on paired datasets.
Self-supervised facial representation learning remains unsolved due to the coupling of facial identities, expressions, and external factors such as pose and lighting.
We propose LatentFace, a novel generative framework for self-supervised facial representations.
arXiv Detail & Related papers (2023-09-15T09:34:05Z)
- FaceChain: A Playground for Human-centric Artificial Intelligence Generated Content [36.48960592782015]
FaceChain is a personalized portrait generation framework that combines a series of customized image-generation models with a rich set of face-related perceptual understanding models.
We inject several SOTA face models into the generation procedure, achieving more efficient label tagging, data processing, and model post-processing than previous solutions.
Based on FaceChain, we further develop several applications to build a broader playground that better shows its value, including virtual try-on and 2D talking heads.
arXiv Detail & Related papers (2023-08-28T02:20:44Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D faces from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images [50.09971525995828]
We present the first approach to jointly learn a model with animatable detail and a detailed 3D face regressor from in-the-wild images.
Our DECA model is trained to robustly produce a UV displacement map from a low-dimensional latent representation.
We introduce a novel detail-consistency loss to disentangle person-specific details and expression-dependent wrinkles.
arXiv Detail & Related papers (2020-12-07T19:30:45Z)
- Face Super-Resolution Guided by 3D Facial Priors [92.23902886737832]
We propose a novel face super-resolution method that explicitly incorporates 3D facial priors which capture sharp facial structures.
Our work is the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes.
The proposed 3D priors achieve superior face super-resolution results over the state of the art.
arXiv Detail & Related papers (2020-07-18T15:26:07Z)
- FaR-GAN for One-Shot Face Reenactment [20.894596219099164]
We present a one-shot face reenactment model, FaR-GAN, that takes only one face image of any given source identity and a target expression as input.
The proposed method makes no assumptions about the source identity, facial expression, head pose, or even image background.
arXiv Detail & Related papers (2020-05-13T16:15:37Z)