Learning Continuous Face Representation with Explicit Functions
- URL: http://arxiv.org/abs/2110.15268v1
- Date: Mon, 25 Oct 2021 03:49:20 GMT
- Title: Learning Continuous Face Representation with Explicit Functions
- Authors: Liping Zhang, Weijun Li, Linjun Sun, Lina Yu, Xin Ning, Xiaoli Dong,
Jian Xu, Hong Qin
- Abstract summary: We propose an explicit model (EmFace) for human face representation in the form of a finite sum of mathematical terms.
EmFace achieves reasonable performance on several face image processing tasks, including face image restoration, denoising, and transformation.
- Score: 20.5159277443333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How to represent a face pattern? While it is presented in a continuous way in
our visual system, computers often store and process the face image in a
discrete manner with 2D arrays of pixels. In this study, we attempt to learn a
continuous representation for face images with explicit functions. First, we
propose an explicit model (EmFace) for human face representation in the form of
a finite sum of mathematical terms, where each term is an analytic function
element. Further, to estimate the unknown parameters of EmFace, a novel neural
network, EmNet, is designed with an encoder-decoder structure and trained using
the backpropagation algorithm, where the encoder is defined by a deep
convolutional neural network and the decoder is an explicit mathematical
expression of EmFace. Experimental results show that EmFace achieves higher
representation performance than competing methods on faces with various
expressions, postures, and other factors. Furthermore, EmFace achieves
reasonable performance on several face image processing tasks, including face
image restoration, denoising, and transformation.
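The abstract describes EmFace as a finite sum of analytic function elements evaluated at continuous image coordinates. A minimal sketch of such a model, assuming Gaussian-like elements with weights, centers, and scales as the learnable parameters (the exact element form, and their estimation via EmNet, are defined in the paper):

```python
import numpy as np

def emface_like(coords, weights, centers, scales):
    """Evaluate a continuous image model f(x, y) = sum_k w_k * g_k(x, y),
    where each g_k is an analytic 2D Gaussian-like function element.

    coords:  (N, 2) query coordinates in [0, 1]^2
    weights: (K,)   per-element weights w_k
    centers: (K, 2) element centers
    scales:  (K, 2) per-axis element widths
    """
    diffs = coords[:, None, :] - centers[None, :, :]         # (N, K, 2)
    sq = np.sum((diffs / scales[None, :, :]) ** 2, axis=-1)  # (N, K)
    elements = np.exp(-0.5 * sq)                             # element responses
    return elements @ weights                                # (N,) intensities

# Query the explicit model at arbitrary continuous coordinates.
rng = np.random.default_rng(0)
K = 8                                      # number of function elements
centers = rng.uniform(0.0, 1.0, size=(K, 2))
scales = np.full((K, 2), 0.2)
weights = rng.uniform(0.0, 1.0, size=K)

xy = np.array([[0.5, 0.5], [0.25, 0.75]])
values = emface_like(xy, weights, centers, scales)
```

Because the representation is an explicit function of continuous coordinates, it can be queried at arbitrary (x, y) rather than only on a fixed pixel grid, which is what makes operations such as restoration and transformation natural in this setting.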
Related papers
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real-time.
At the core of our method is a hierarchical representation of head models that captures the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z)
- 3D Facial Expressions through Analysis-by-Neural-Synthesis [30.2749903946587]
SMIRK (Spatial Modeling for Image-based Reconstruction of Kinesics) faithfully reconstructs expressive 3D faces from images.
We identify two key limitations in existing methods: shortcomings in their self-supervised training formulation, and a lack of expression diversity in the training images.
Our qualitative, quantitative, and particularly our perceptual evaluations demonstrate that SMIRK achieves new state-of-the-art performance on accurate expression reconstruction.
arXiv Detail & Related papers (2024-04-05T14:00:07Z)
- GaFET: Learning Geometry-aware Facial Expression Translation from In-The-Wild Images [55.431697263581626]
We introduce a novel Geometry-aware Facial Expression Translation framework, which is based on parametric 3D facial representations and can stably decouple expressions.
We achieve higher-quality and more accurate facial expression transfer results than state-of-the-art methods, and demonstrate applicability to various poses and complex textures.
arXiv Detail & Related papers (2023-08-07T09:03:35Z)
- Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers [57.1091606948826]
We propose a novel FER model, named Poker Face Vision Transformer or PF-ViT, to address these challenges.
PF-ViT aims to separate and recognize the disturbance-agnostic emotion from a static facial image via generating its corresponding poker face.
PF-ViT utilizes vanilla Vision Transformers, and its components are pre-trained as Masked Autoencoders on a large facial expression dataset.
arXiv Detail & Related papers (2022-07-22T13:39:06Z)
- Human Face Recognition from Part of a Facial Image based on Image Stitching [0.0]
Most current face recognition techniques require a full view of the face of the person to be recognized.
In this work, we complete the missing part of the face by flipping the visible part and stitching the two halves together.
The selected face recognition algorithms that are applied here are Eigenfaces and geometrical methods.
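The flip-and-stitch completion described in this entry can be sketched with NumPy. This is a hypothetical illustration, assuming a roughly symmetric face centered in the frame with its left half visible; it is not the authors' implementation:

```python
import numpy as np

def complete_face_by_mirroring(face: np.ndarray) -> np.ndarray:
    """Reconstruct a full face from its visible left half by
    horizontally flipping that half onto the missing right side."""
    h, w = face.shape[:2]
    left_half = face[:, : w // 2]          # visible part of the face
    mirrored = np.flip(left_half, axis=1)  # flipped copy fills the gap
    return np.hstack([left_half, mirrored])

# Toy 4x4 "image": only the left two columns carry information.
partial = np.array([[1, 2, 0, 0],
                    [3, 4, 0, 0],
                    [5, 6, 0, 0],
                    [7, 8, 0, 0]])
stitched = complete_face_by_mirroring(partial)  # now left-right symmetric
```

The stitched image can then be passed to a standard recognizer such as Eigenfaces, since the result has the full-face layout those methods expect.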
arXiv Detail & Related papers (2022-03-10T19:31:57Z)
- FaceTuneGAN: Face Autoencoder for Convolutional Expression Transfer Using Neural Generative Adversarial Networks [0.7043489166804575]
We present FaceTuneGAN, a new 3D face model representation decomposing and encoding separately facial identity and facial expression.
We propose a first adaptation of image-to-image translation networks, that have successfully been used in the 2D domain, to 3D face geometry.
arXiv Detail & Related papers (2021-12-01T14:42:03Z)
- Pro-UIGAN: Progressive Face Hallucination from Occluded Thumbnails [53.080403912727604]
We propose a multi-stage Progressive Upsampling and Inpainting Generative Adversarial Network, dubbed Pro-UIGAN.
It exploits facial geometry priors to replenish and upsample (8×) occluded and tiny faces.
Pro-UIGAN produces visually pleasing high-resolution (HR) faces, reaching superior performance in downstream tasks.
arXiv Detail & Related papers (2021-08-02T02:29:24Z)
- Image-to-Video Generation via 3D Facial Dynamics [78.01476554323179]
We present a versatile model, FaceAnime, for various video generation tasks from still images.
Our model is versatile for various AR/VR and entertainment applications, such as face video generation and face video prediction.
arXiv Detail & Related papers (2021-05-31T02:30:11Z)
- Real-Time Facial Expression Emoji Masking with Convolutional Neural Networks and Homography [0.0]
In image processing, Convolutional Neural Networks (CNN) can be trained to categorize facial expressions of images of human faces.
In this work, we create a system that masks a student's face with an emoji of the respective emotion.
Our results show that this pipeline is deployable in real time and is usable in educational settings.
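The homography mentioned in this entry's title warps the emoji onto the detected face region. Mapping points through a 3×3 homography can be sketched as follows; the translation-only matrix below is a hypothetical example, not taken from the paper:

```python
import numpy as np

def apply_homography(H: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Map (N, 2) Cartesian points through a 3x3 homography H
    using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous
    mapped = pts_h @ H.T                               # apply H
    return mapped[:, :2] / mapped[:, 2:3]              # back to Cartesian

# A pure-translation homography shifting points by (10, 5).
H_shift = np.array([[1.0, 0.0, 10.0],
                    [0.0, 1.0,  5.0],
                    [0.0, 0.0,  1.0]])

# Corners of a 64x64 emoji sprite, mapped into image coordinates.
corners = np.array([[0.0, 0.0], [64.0, 0.0], [64.0, 64.0], [0.0, 64.0]])
mapped_corners = apply_homography(H_shift, corners)
```

In a full pipeline, H would instead be estimated from correspondences between the sprite corners and detected face landmarks (for example with OpenCV's `cv2.findHomography`), and the sprite warped with `cv2.warpPerspective`.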
arXiv Detail & Related papers (2020-12-24T21:25:48Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.