Unsupervised Learning Facial Parameter Regressor for Action Unit
Intensity Estimation via Differentiable Renderer
- URL: http://arxiv.org/abs/2008.08862v1
- Date: Thu, 20 Aug 2020 09:49:13 GMT
- Title: Unsupervised Learning Facial Parameter Regressor for Action Unit
Intensity Estimation via Differentiable Renderer
- Authors: Xinhui Song, Tianyang Shi, Zunlei Feng, Mingli Song, Jackie Lin,
Chuanjie Lin, Changjie Fan, Yi Yuan
- Abstract summary: We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
- Score: 51.926868759681014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial action unit (AU) intensity is an index for describing visually
discernible facial movements. Most existing methods learn an intensity estimator
from limited AU data and therefore generalize poorly beyond the training
dataset. In this paper, we present a framework to predict the facial parameters
(including identity parameters and AU parameters) based on a bone-driven face
model (BDFM) under different views. The proposed framework consists of a
feature extractor, a generator, and a facial parameter regressor. The regressor
fits the physically meaningful parameters of the BDFM from a single face image
with the help of the generator, which maps the facial parameters to
game-face images as a differentiable renderer. In addition, an identity loss, a
loopback loss, and an adversarial loss further improve the regression results.
Quantitative evaluations on two public databases, BP4D and DISFA,
demonstrate that the proposed method achieves comparable or better
performance than state-of-the-art methods. Moreover, qualitative
results also demonstrate the validity of our method on in-the-wild images.
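To make the training signal concrete, below is a minimal PyTorch sketch of the setup the abstract describes: a regressor predicts BDFM parameters, a generator stands in for the differentiable renderer, and identity, loopback, and adversarial losses supervise the regression without AU labels. All module definitions, dimensions, and names here (`Regressor`, `Generator`, `Discriminator`, `PARAM_DIM`) are hypothetical stand-ins for illustration, not the authors' released code.

```python
# Minimal sketch of the unsupervised training signal described in the abstract.
# Every module below is an illustrative stand-in, NOT the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

PARAM_DIM = 64  # assumed size of the identity + AU parameter vector

class Regressor(nn.Module):
    """Stand-in regressor: face image -> BDFM facial parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, PARAM_DIM), nn.Tanh(),
        )
    def forward(self, img):
        return self.net(img)

class Generator(nn.Module):
    """Stand-in for the differentiable renderer: parameters -> game-face image."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(PARAM_DIM, 32 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, params):
        return self.up(self.fc(params).view(-1, 32, 8, 8))

class Discriminator(nn.Module):
    """Stand-in discriminator for the adversarial loss."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )
    def forward(self, img):
        return self.net(img)

# Fixed feature extractor for the identity loss (not optimized here).
feat = nn.Sequential(nn.Conv2d(3, 8, 4, stride=2, padding=1), nn.ReLU(), nn.Flatten())
reg, gen, disc = Regressor(), Generator(), Discriminator()
opt = torch.optim.Adam(list(reg.parameters()) + list(gen.parameters()), lr=1e-4)

img = torch.rand(4, 3, 32, 32)   # a batch of unlabeled face images
params = reg(img)                # predicted identity + AU parameters
render = gen(params)             # rendered game-face image

loss_id = F.l1_loss(feat(render), feat(img))         # identity loss: match image features
loss_loop = F.l1_loss(reg(render), params.detach())  # loopback loss: re-encode the render
loss_adv = F.binary_cross_entropy_with_logits(       # adversarial loss: fool the critic
    disc(render), torch.ones(4, 1))

opt.zero_grad()
(loss_id + loss_loop + loss_adv).backward()
opt.step()  # (the discriminator's own update step is omitted for brevity)
```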
Related papers
- Physically Disentangled Representations [13.234029150635658]
Inverse rendering can be used to learn physically disentangled representations of scenes without supervision.
We show the utility of inverse rendering in learning representations that yield improved accuracy on downstream clustering, linear classification, and segmentation tasks.
arXiv Detail & Related papers (2022-04-11T17:36:40Z)
- Fair SA: Sensitivity Analysis for Fairness in Face Recognition [1.7149364927872013]
We propose a generic framework for fairness evaluation based on robustness.
We analyze the performance of common face recognition models and empirically show that certain subgroups are at a disadvantage when images are perturbed.
arXiv Detail & Related papers (2022-02-08T01:16:09Z)
- The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a lightweight neural network with far fewer parameters than common deep neural networks.
We demonstrate that our model achieves performance comparable to, if not better than, the current state of the art in FER.
arXiv Detail & Related papers (2020-09-15T09:25:37Z)
- MIPGAN -- Generating Strong and High Quality Morphing Attacks Using Identity Prior Driven GAN [22.220940043294334]
We present a new approach for generating strong attacks using an Identity Prior Driven Generative Adversarial Network.
The proposed MIPGAN is derived from the StyleGAN with a newly formulated loss function exploiting perceptual quality and identity factor.
We demonstrate the proposed approach's ability to generate strong morphing attacks by evaluating the vulnerability of both commercial and deep-learning-based face recognition systems against them.
arXiv Detail & Related papers (2020-09-03T15:08:38Z)
- InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection.
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
- Multi-Scale Thermal to Visible Face Verification via Attribute Guided Synthesis [55.29770222566124]
We use attributes extracted from visible images to synthesize attribute-preserved visible images from thermal imagery for cross-modal matching.
A novel multi-scale generator is proposed to synthesize the visible image from the thermal image guided by the extracted attributes.
A pre-trained VGG-Face network is leveraged to extract features from the synthesized image and the input visible image for verification.
arXiv Detail & Related papers (2020-04-20T01:45:05Z)
- Unsupervised Facial Action Unit Intensity Estimation via Differentiable Optimization [45.07851622835555]
We propose GE-Net, an unsupervised framework for facial AU intensity estimation from a single image.
Our framework performs differentiable optimization, iteratively updating the facial parameters to match the input image (a minimal sketch of this idea follows at the end of this list).
Experimental results demonstrate that our method achieves state-of-the-art results compared with existing methods.
arXiv Detail & Related papers (2020-04-13T12:56:28Z)
- Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel method that jointly learns facial expression synthesis and recognition for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
arXiv Detail & Related papers (2020-02-06T10:56:00Z)
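As promised in the GE-Net entry above, here is a minimal sketch of fitting facial parameters by differentiable optimization: gradient descent on the parameters themselves, through a differentiable renderer, until the rendered image matches the target. Everything here (`toy_render`, the linear projection, the dimensions) is a made-up stand-in used only to show the optimization pattern, not the paper's actual renderer or procedure.

```python
# Toy illustration of fitting parameters by differentiable optimization
# (GE-Net-style). `toy_render` is a hypothetical differentiable stand-in.
import torch

torch.manual_seed(0)
P, H, W = 16, 8, 8
proj = torch.randn(H * W, P)  # fixed weights of the toy "renderer"

def toy_render(params):
    # Hypothetical differentiable renderer: parameter vector -> H x W image.
    return torch.sigmoid(proj @ params).view(H, W)

# Target image produced by unknown ground-truth parameters.
true_params = torch.randn(P)
target = toy_render(true_params).detach()

# Iteratively update the parameters so the render matches the target image.
params = torch.zeros(P, requires_grad=True)
opt = torch.optim.Adam([params], lr=0.1)
for step in range(200):
    loss = torch.nn.functional.mse_loss(toy_render(params), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final image error: {loss.item():.5f}")
```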