MeInGame: Create a Game Character Face from a Single Portrait
- URL: http://arxiv.org/abs/2102.02371v2
- Date: Sun, 7 Feb 2021 03:27:07 GMT
- Title: MeInGame: Create a Game Character Face from a Single Portrait
- Authors: Jiangke Lin, Yi Yuan, Zhengxia Zou
- Abstract summary: We propose an automatic character face creation method that predicts both facial shape and texture from a single portrait.
Experiments show that our method outperforms state-of-the-art methods used in games.
- Score: 15.432712351907012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many deep learning based 3D face reconstruction methods have been
proposed recently; however, few of them have applications in games. Current game
character customization systems either require players to manually adjust
considerable face attributes to obtain the desired face, or have limited
freedom of facial shape and texture. In this paper, we propose an automatic
character face creation method that predicts both facial shape and texture from
a single portrait, and it can be integrated into most existing 3D games.
Although 3D Morphable Face Model (3DMM) based methods can restore accurate 3D
faces from single images, the topology of the 3DMM mesh differs from the meshes
used in most games. To acquire high-fidelity textures, existing methods require
a large amount of face texture data for training, but building such datasets is
time-consuming and laborious. Moreover, a dataset collected under laboratory
conditions may not generalize well to in-the-wild situations.
To tackle these problems, we propose 1) a low-cost facial texture acquisition
method, 2) a shape transfer algorithm that can transform the shape of a 3DMM
mesh to the meshes used in games, and 3) a new pipeline for training 3D game face reconstruction
networks. The proposed method can not only produce detailed and vivid game
characters resembling the input portrait, but also eliminate the influence
of lighting and occlusions. Experiments show that our method outperforms
state-of-the-art methods used in games.
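The shape transfer step is the crux of making a 3DMM reconstruction usable in a game whose head mesh has a different topology. As a rough illustration only (the paper's actual algorithm is more involved), the sketch below assumes a standard linear 3DMM (mean shape plus identity and expression bases) and carries the reconstructed per-vertex displacement field over to a game mesh by Gaussian-kernel smoothing; all names and dimensions are hypothetical.

```python
import numpy as np

def reconstruct_3dmm(mean_shape, id_basis, exp_basis, alpha, beta):
    """Standard linear 3DMM: S = S_mean + A_id @ alpha + A_exp @ beta.

    mean_shape: (N, 3) vertices; id_basis: (3N, k_id); exp_basis: (3N, k_exp).
    """
    offsets = id_basis @ alpha + exp_basis @ beta      # (3N,) flat offsets
    return mean_shape + offsets.reshape(-1, 3)         # fitted face, (N, 3)

def transfer_shape(dmm_neutral, dmm_fitted, game_verts, sigma=1.0):
    """Carry the 3DMM displacement field onto a game mesh with a different
    topology: each game vertex takes a Gaussian-weighted average of the
    offsets of nearby 3DMM vertices (a stand-in for the paper's transfer)."""
    disp = dmm_fitted - dmm_neutral                    # (N, 3) per-vertex offsets
    # Squared distances from every game vertex to every 3DMM vertex: (M, N).
    d2 = ((game_verts[:, None, :] - dmm_neutral[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))               # Gaussian weights
    w /= w.sum(axis=1, keepdims=True) + 1e-12          # normalize each row
    return game_verts + w @ disp                       # deformed game mesh

# Toy usage with random data; real bases would come from a fitted 3DMM.
rng = np.random.default_rng(0)
N, M, k_id, k_exp = 500, 300, 80, 29
mean = rng.standard_normal((N, 3))
A_id = rng.standard_normal((3 * N, k_id)) * 0.01
A_exp = rng.standard_normal((3 * N, k_exp)) * 0.01
fitted = reconstruct_3dmm(mean, A_id, A_exp,
                          rng.standard_normal(k_id), rng.standard_normal(k_exp))
game = transfer_shape(mean, fitted, rng.standard_normal((M, 3)))
```

In practice the game mesh would first be registered to the 3DMM's coordinate frame, and the smoothing would be restricted to the face region so that ears, hair, and game-specific geometry stay untouched.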
Related papers
- 3D-GANTex: 3D Face Reconstruction with StyleGAN3-based Multi-View Images and 3DDFA based Mesh Generation [0.8479659578608233]
This paper introduces a novel method for texture estimation from a single image by combining StyleGAN and 3D Morphable Models.
Results show that the generated mesh is of high quality, with near-accurate texture representation.
arXiv Detail & Related papers (2024-10-21T13:42:06Z)
- FaceGPT: Self-supervised Learning to Chat about 3D Human Faces [69.4651241319356]
We introduce FaceGPT, a self-supervised learning framework for Large Vision-Language Models (VLMs) to reason about 3D human faces from images and text.
FaceGPT achieves this by embedding the parameters of a 3D morphable face model (3DMM) into the token space of a VLM.
We show that FaceGPT achieves high-quality 3D face reconstructions and retains the ability for general-purpose visual instruction following.
arXiv Detail & Related papers (2024-06-11T11:13:29Z)
- Articulated 3D Head Avatar Generation using Text-to-Image Diffusion Models [107.84324544272481]
The ability to generate diverse 3D articulated head avatars is vital to a plethora of applications, including augmented reality, cinematography, and education.
Recent work on text-guided 3D object generation has shown great promise in addressing these needs.
We show that our diffusion-based articulated head avatars outperform state-of-the-art approaches for this task.
arXiv Detail & Related papers (2023-07-10T19:15:32Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- BareSkinNet: De-makeup and De-lighting via 3D Face Reconstruction [2.741266294612776]
We propose BareSkinNet, a novel method that simultaneously removes makeup and lighting influences from the face image.
By incorporating 3D face reconstruction into the pipeline, we can easily obtain 3D geometry and coarse 3D textures.
In experiments, we show that BareSkinNet outperforms state-of-the-art makeup removal methods.
arXiv Detail & Related papers (2022-09-19T14:02:03Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- FaceScape: 3D Facial Dataset and Benchmark for Single-View 3D Face Reconstruction [29.920622006999732]
We present a large-scale detailed 3D face dataset, FaceScape, and the corresponding benchmark to evaluate single-view facial 3D reconstruction.
A novel algorithm trained on FaceScape data is proposed to predict elaborate, riggable 3D face models from a single image input.
We also use FaceScape data to generate the in-the-wild and in-the-lab benchmark to evaluate recent methods of single-view face reconstruction.
arXiv Detail & Related papers (2021-11-01T16:48:34Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- StyleRig: Rigging StyleGAN for 3D Control over Portrait Images [81.43265493604302]
StyleGAN generates portrait images of faces with eyes, teeth, hair, and context (neck, shoulders, background).
However, StyleGAN lacks rig-like control over semantic face parameters that are interpretable in 3D, such as face pose, expressions, and scene illumination.
We present the first method to provide face rig-like control over a pretrained and fixed StyleGAN via a 3DMM (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-03-31T21:20:34Z)
- FaceScape: a Large-scale High Quality 3D Face Dataset and Detailed Riggable 3D Face Prediction [39.95272819738226]
We present a novel algorithm that is able to predict elaborate riggable 3D face models from a single image input.
The FaceScape dataset provides 18,760 textured 3D faces, captured from 938 subjects, each with 20 specific expressions.
arXiv Detail & Related papers (2020-03-31T07:11:08Z)
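To make the StyleRig entry above concrete: the paper trains a rig network (with a differentiable renderer and self-supervised consistency losses) that injects 3DMM parameter changes into StyleGAN's latent space. The toy sketch below replaces that learned network with a random linear map purely to show the data flow; every name and dimension here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM, PARAM_DIM = 512, 64   # assumed StyleGAN latent / 3DMM param sizes

# Stand-in for StyleRig's learned rig network: a linear map from a change in
# 3DMM parameters (pose, expression, illumination) to a latent-space offset.
rig_matrix = rng.standard_normal((LATENT_DIM, PARAM_DIM)) * 0.01

def rig_edit(w, theta_src, theta_tgt):
    """Shift a portrait's latent code in the direction associated with the
    desired 3DMM parameter change; the generator itself stays fixed."""
    return w + rig_matrix @ (theta_tgt - theta_src)

w = rng.standard_normal(LATENT_DIM)       # latent code of some portrait
theta = rng.standard_normal(PARAM_DIM)    # its fitted 3DMM parameters
theta_new = theta.copy()
theta_new[:3] += 0.2                      # e.g. nudge the head-pose entries
w_edited = rig_edit(w, theta, theta_new)  # would be fed to the fixed StyleGAN
```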
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.