Learning Formation of Physically-Based Face Attributes
- URL: http://arxiv.org/abs/2004.03458v2
- Date: Fri, 24 Apr 2020 03:29:48 GMT
- Title: Learning Formation of Physically-Based Face Attributes
- Authors: Ruilong Li, Karl Bladin, Yajie Zhao, Chinmay Chinara, Owen Ingraham,
Pengda Xiang, Xinglei Ren, Pratusha Prasad, Bipin Kishore, Jun Xing, Hao Li
- Abstract summary: Based on a combined data set of 4000 high resolution facial scans, we introduce a non-linear morphable face model.
Our deep learning based generative model learns to correlate albedo and geometry, which ensures the anatomical correctness of the generated assets.
- Score: 16.55993873730069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Based on a combined data set of 4000 high resolution facial scans, we
introduce a non-linear morphable face model, capable of producing multifarious
face geometry of pore-level resolution, coupled with material attributes for
use in physically-based rendering. We aim to maximize the variety of face
identities, while increasing the robustness of correspondence between unique
components, including middle-frequency geometry, albedo maps, specular
intensity maps and high-frequency displacement details. Our deep learning based
generative model learns to correlate albedo and geometry, which ensures the
anatomical correctness of the generated assets. We demonstrate potential use of
our generative model for novel identity generation, model fitting,
interpolation, animation, high fidelity data visualization, and low-to-high
resolution data domain transferring. We hope the release of this generative
model will encourage further cooperation between all graphics, vision, and data
focused professionals while demonstrating the cumulative value of every
individual's complete biometric profile.
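The abstract's central idea is a generative model whose single latent code drives both geometry and albedo, so the two attributes stay anatomically consistent. As a minimal illustration, the sketch below decodes one shared latent into geometry and albedo with a classic linear morphable-model parameterization (mean plus basis); the paper's actual model is non-linear and deep-learning based, and all bases and dimensions here are toy placeholders, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a real model uses thousands of vertices and texels.
N_VERTS, N_COMPONENTS = 50, 8

# Hypothetical bases; in practice these would be learned from
# registered facial scans (e.g. via PCA or a trained decoder).
mean_shape = rng.normal(size=(N_VERTS, 3))
shape_basis = rng.normal(size=(N_COMPONENTS, N_VERTS, 3))
mean_albedo = rng.uniform(size=(N_VERTS, 3))
albedo_basis = rng.normal(scale=0.05, size=(N_COMPONENTS, N_VERTS, 3))

def decode(latent):
    """Decode one shared latent code into geometry and albedo.

    Using a single latent for both attributes is what couples them:
    any change to an identity coefficient moves geometry and albedo
    together, which is the property the paper's non-linear model
    enforces at much higher fidelity.
    """
    geometry = mean_shape + np.tensordot(latent, shape_basis, axes=1)
    albedo = np.clip(
        mean_albedo + np.tensordot(latent, albedo_basis, axes=1), 0.0, 1.0
    )
    return geometry, albedo

# Sample a novel identity from a standard normal prior.
z = rng.normal(size=N_COMPONENTS)
geo, alb = decode(z)
print(geo.shape, alb.shape)
```

Interpolating between two latent codes with this decoder yields the kind of identity interpolation the abstract mentions, since both attributes move through the shared space together.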
Related papers
- Geometry Distributions [51.4061133324376]
We propose a novel geometric data representation that models geometry as distributions.
Our approach uses diffusion models with a novel network architecture to learn surface point distributions.
We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity.
arXiv Detail & Related papers (2024-11-25T04:06:48Z)
- HyperFace: Generating Synthetic Face Recognition Datasets by Exploring Face Embedding Hypersphere [22.8742248559748]
Face recognition datasets are often collected by crawling the Internet without individuals' consent, raising ethical and privacy concerns.
Generating synthetic datasets for training face recognition models has emerged as a promising alternative.
We propose a new synthetic dataset generation approach, called HyperFace.
arXiv Detail & Related papers (2024-11-13T09:42:12Z)
- ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling [96.87575334960258]
ID-to-3D is a method to generate identity- and text-guided 3D human heads with disentangled expressions.
Results achieve an unprecedented level of identity-consistent and high-quality texture and geometry generation.
arXiv Detail & Related papers (2024-05-26T13:36:45Z)
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to 3D domains, and seek a stronger ability of 3D shape generation by improving auto-regressive models at capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z)
- ImFace++: A Sophisticated Nonlinear 3D Morphable Face Model with Implicit Neural Representations [25.016000421755162]
This paper presents a novel 3D morphable face model, named ImFace++, to learn a sophisticated and continuous space with implicit neural representations.
ImFace++ first constructs two explicitly disentangled deformation fields to model complex shapes associated with identities and expressions.
A refinement displacement field within the template space is further incorporated, enabling fine-grained learning of individual-specific facial details.
arXiv Detail & Related papers (2023-12-07T03:53:53Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework, Generalizable Model-based Neural Radiance Fields, to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis [52.720314035084215]
This work targets using a general deep learning framework to synthesize free-viewpoint images of arbitrary human performers.
We present a simple yet powerful framework, named Generalizable Neural Performer (GNR), that learns a generalizable and robust neural body representation.
Experiments on GeneBody-1.0 and ZJU-Mocap show better robustness of our methods than recent state-of-the-art generalizable methods.
arXiv Detail & Related papers (2022-04-25T17:14:22Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Methodology for Building Synthetic Datasets with Virtual Humans [1.5556923898855324]
Large datasets can be used for improved, targeted training of deep neural networks.
In particular, we make use of a 3D morphable face model for the rendering of multiple 2D images across a dataset of 100 synthetic identities.
arXiv Detail & Related papers (2020-06-21T10:29:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.