Fast and Robust Face-to-Parameter Translation for Game Character
Auto-Creation
- URL: http://arxiv.org/abs/2008.07132v1
- Date: Mon, 17 Aug 2020 07:45:31 GMT
- Authors: Tianyang Shi (1), Zhengxia Zou (2), Yi Yuan (1), Changjie Fan (1) ((1)
NetEase Fuxi AI Lab, (2) University of Michigan)
- Abstract summary: This paper proposes a game character auto-creation framework that generates in-game characters according to a player's input face photo.
Our method shows better robustness than previous methods, especially for photos with head-pose variance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of Role-Playing Games (RPGs), players are now
allowed to edit the facial appearance of their in-game characters with their
preferences rather than using default templates. This paper proposes a game
character auto-creation framework that generates in-game characters according
to a player's input face photo. Unlike previous methods designed around neural
style transfer or monocular 3D face reconstruction, we re-formulate the
character auto-creation process from a different point of view: predicting a
large set of physically meaningful facial parameters under a self-supervised
learning paradigm. Instead of iteratively updating facial parameters at the
input end of the renderer, as previous methods suggest, which is
time-consuming, we introduce a facial parameter translator so that creation
can be done efficiently through a single forward propagation from the face
embeddings to the parameters, yielding a considerable 1000x computational
speedup. Despite its high efficiency, our method preserves interactivity:
users may optionally fine-tune the facial parameters of the created character
to suit their needs. Our approach also shows better robustness than previous
methods, especially for photos with head-pose variance. Comparison results and
ablation analysis on seven public face verification datasets demonstrate the
effectiveness of our method.
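The core idea above — replacing iterative parameter optimization with a single forward pass through a learned translator — can be sketched as a small MLP that maps a face embedding to a vector of renderer parameters. The dimensions, architecture, and random weights below are illustrative assumptions, not the paper's exact network:

```python
import numpy as np

# Sketch of a facial parameter translator: one forward pass maps a face
# embedding to physically meaningful facial parameters, instead of
# iteratively optimizing the parameters through the renderer.
# All dimensions are assumptions for illustration.

EMBED_DIM = 512    # size of the face-recognition embedding (assumed)
PARAM_DIM = 264    # number of facial parameters (assumed)
HIDDEN_DIM = 512

rng = np.random.default_rng(0)

# Randomly initialized weights stand in for a trained translator.
W1 = rng.standard_normal((EMBED_DIM, HIDDEN_DIM)) * 0.02
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.standard_normal((HIDDEN_DIM, PARAM_DIM)) * 0.02
b2 = np.zeros(PARAM_DIM)

def translate(embedding: np.ndarray) -> np.ndarray:
    """Map a face embedding to facial parameters in a single forward pass."""
    h = np.maximum(embedding @ W1 + b1, 0.0)        # ReLU hidden layer
    logits = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logits))            # sigmoid keeps params in [0, 1]

embedding = rng.standard_normal(EMBED_DIM)          # stand-in face embedding
params = translate(embedding)
print(params.shape)                                  # (264,)
```

Because the whole creation step is a few matrix multiplications rather than hundreds of renderer-in-the-loop optimization iterations, a speedup of the order the abstract reports is plausible; the predicted parameters also remain directly editable by the player, which is how interactivity is preserved.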
Related papers
- FlashFace: Human Image Personalization with High-fidelity Identity Preservation [59.76645602354481]
FlashFace allows users to easily personalize their own photos by providing one or a few reference face images and a text prompt.
Our approach is distinguishable from existing human photo customization methods by higher-fidelity identity preservation and better instruction following.
arXiv Detail & Related papers (2024-03-25T17:59:57Z)
- GSmoothFace: Generalized Smooth Talking Face Generation via Fine Grained 3D Face Guidance [83.43852715997596]
GSmoothFace is a novel two-stage generalized talking face generation model guided by a fine-grained 3d face model.
It can synthesize smooth lip dynamics while preserving the speaker's identity.
Both quantitative and qualitative experiments confirm the superiority of our method in terms of realism, lip synchronization, and visual quality.
arXiv Detail & Related papers (2023-12-12T16:00:55Z)
- MyPortrait: Morphable Prior-Guided Personalized Portrait Generation [19.911068375240905]
MyPortrait is a simple, general, and flexible framework for neural portrait generation.
Our proposed framework supports both video-driven and audio-driven face animation.
Our method provides a real-time online version and a high-quality offline version.
arXiv Detail & Related papers (2023-12-05T12:05:01Z)
- GAN-Avatar: Controllable Personalized GAN-based Human Head Avatar [48.21353924040671]
We propose to learn person-specific animatable avatars from images without assuming to have access to precise facial expression tracking.
We learn a mapping from 3DMM facial expression parameters to the latent space of the generative model.
With this scheme, we decouple 3D appearance reconstruction and animation control to achieve high fidelity in image synthesis.
arXiv Detail & Related papers (2023-11-22T19:13:00Z)
- DreamIdentity: Improved Editability for Efficient Face-identity Preserved Image Generation [69.16517915592063]
We propose a novel face-identity encoder to learn an accurate representation of human faces.
We also propose self-augmented editability learning to enhance the editability of models.
Our methods can generate identity-preserved images under different scenes at a much faster speed.
arXiv Detail & Related papers (2023-07-01T11:01:17Z)
- Face0: Instantaneously Conditioning a Text-to-Image Model on a Face [3.5150821092068383]
We present Face0, a novel way to instantaneously condition a text-to-image generation model on a face.
We augment a dataset of annotated images with embeddings of the included faces and train an image generation model, on the augmented dataset.
Our method achieves pleasing results, is remarkably simple, extremely fast, and equips the underlying model with new capabilities.
arXiv Detail & Related papers (2023-06-11T09:52:03Z)
- Zero-Shot Text-to-Parameter Translation for Game Character Auto-Creation [48.62643177644139]
This paper proposes a novel text-to-parameter translation method (T2P) to achieve zero-shot text-driven game character auto-creation.
With our method, users can create a vivid in-game character with arbitrary text description without using any reference photo or editing hundreds of parameters manually.
arXiv Detail & Related papers (2023-03-02T14:37:17Z)
- Unsupervised Learning Facial Parameter Regressor for Action Unit Intensity Estimation via Differentiable Renderer [51.926868759681014]
We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
arXiv Detail & Related papers (2020-08-20T09:49:13Z)
- Neutral Face Game Character Auto-Creation via PokerFace-GAN [0.0]
This paper studies the problem of automatically creating in-game characters with a single photo.
We first build a differentiable character renderer, which is more flexible than previous methods in multi-view rendering cases.
We then take advantage of the adversarial training to effectively disentangle the expression parameters from the identity parameters.
arXiv Detail & Related papers (2020-08-17T08:43:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.