Improving Makeup Face Verification by Exploring Part-Based
Representations
- URL: http://arxiv.org/abs/2101.07338v1
- Date: Mon, 18 Jan 2021 21:51:38 GMT
- Title: Improving Makeup Face Verification by Exploring Part-Based
Representations
- Authors: Marcus de Assis Angeloni and Helio Pedrini
- Abstract summary: We propose and evaluate the adoption of facial parts to fuse with current holistic representations.
Experimental results show that fusing deep features extracted from facial parts with the holistic representation increases the accuracy of face verification systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, we have seen an increase in the global facial recognition market
size. Despite significant advances in face recognition technology with the
adoption of convolutional neural networks, there are still open challenges, such
as when makeup is present on the face. To address this challenge, we propose and
evaluate the adoption of facial parts to fuse with current holistic
representations. We propose two strategies of facial parts: one with four
regions (left periocular, right periocular, nose and mouth) and another with
three facial thirds (upper, middle and lower). Experimental results obtained in
four public makeup face datasets and in a challenging cross-dataset protocol
show that the fusion of deep features extracted from facial parts with the holistic
representation increases the accuracy of face verification systems and
decreases the error rates, even without any retraining of the CNN models. Our
proposed pipeline achieved state-of-the-art performance for the YMU dataset and
competitive results for the other three datasets (EMFD, FAM and M501).
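As a rough illustration of the pipeline described above, the sketch below crops hypothetical part regions from an aligned face, embeds each part and the whole face with a stand-in feature extractor, and fuses the vectors before a cosine-similarity verification decision. The `embed` function, projection matrix, part boxes and threshold are placeholders for a pre-trained face CNN and landmark-driven crops; this is not the authors' code.

```python
import numpy as np

# Hypothetical stand-in for a pre-trained face CNN embedder; in practice this
# would be an off-the-shelf face recognition network producing deep features.
rng = np.random.default_rng(0)
PROJ = rng.standard_normal((112 * 112, 128))

def embed(crop: np.ndarray) -> np.ndarray:
    """Map a crop to a 128-d unit-norm feature (naive tiling stands in for resizing)."""
    flat = np.resize(crop, (112, 112)).astype(np.float32).ravel()
    feat = flat @ PROJ
    return feat / (np.linalg.norm(feat) + 1e-8)

# Hypothetical part boxes (y0, y1, x0, x1) on a 112x112 aligned face; a real
# system would derive these from detected facial landmarks. The alternative
# "facial thirds" strategy would instead slice the rows into three bands.
PARTS = {
    "left_periocular":  (20, 60, 0, 56),
    "right_periocular": (20, 60, 56, 112),
    "nose":             (40, 80, 28, 84),
    "mouth":            (70, 110, 28, 84),
}

def fused_representation(face: np.ndarray) -> np.ndarray:
    """Concatenate the holistic embedding with one embedding per facial part."""
    feats = [embed(face)]
    for (y0, y1, x0, x1) in PARTS.values():
        feats.append(embed(face[y0:y1, x0:x1]))
    fused = np.concatenate(feats)
    return fused / np.linalg.norm(fused)

def verify(face_a: np.ndarray, face_b: np.ndarray, threshold: float = 0.5) -> bool:
    """Accept the pair as the same identity if cosine similarity exceeds the threshold."""
    score = float(fused_representation(face_a) @ fused_representation(face_b))
    return score >= threshold

# Toy usage with random arrays standing in for aligned face crops.
print("match:", verify(rng.random((112, 112)), rng.random((112, 112))))
```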
Related papers
- DiffusionFace: Towards a Comprehensive Dataset for Diffusion-Based Face Forgery Analysis [71.40724659748787]
DiffusionFace is the first diffusion-based face forgery dataset.
It covers various forgery categories, including unconditional and text-guided facial image generation, Img2Img, Inpaint, and diffusion-based facial exchange algorithms.
It provides essential metadata and a real-world internet-sourced forgery facial image dataset for evaluation.
arXiv Detail & Related papers (2024-03-27T11:32:44Z)
- TANet: A new Paradigm for Global Face Super-resolution via Transformer-CNN Aggregation Network [72.41798177302175]
We propose a novel paradigm based on the self-attention mechanism (i.e., the core of Transformer) to fully explore the representation capacity of the facial structure feature.
Specifically, we design a Transformer-CNN aggregation network (TANet) consisting of two paths, in which one path uses a CNN responsible for restoring fine-grained facial details while the other relies on self-attention to model the global facial structure.
By aggregating the features from the above two paths, the consistency of global facial structure and fidelity of local facial detail restoration are strengthened simultaneously.
arXiv Detail & Related papers (2021-09-16T18:15:07Z)
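The TANet summary above only outlines the two-path idea. A minimal, heavily simplified sketch of such a design (placeholder layer sizes, no upsampling stage, not the published architecture) might look like:

```python
import torch
import torch.nn as nn

class TwoPathAggregation(nn.Module):
    """Two-path sketch: a CNN path for fine-grained local detail and a
    self-attention path for global facial structure, aggregated at the end."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Local path: plain convolutions preserve spatial detail.
        self.cnn_path = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Global path: self-attention over flattened spatial positions.
        self.proj_in = nn.Conv2d(3, channels, 1)
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=4, batch_first=True)
        # Aggregation: fuse the two feature maps into an output image
        # (upsampling to the super-resolved size is omitted for brevity).
        self.fuse = nn.Conv2d(2 * channels, 3, 3, padding=1)

    def forward(self, lr_face: torch.Tensor) -> torch.Tensor:
        local = self.cnn_path(lr_face)                               # (B, C, H, W)
        b, c, h, w = local.shape
        tokens = self.proj_in(lr_face).flatten(2).transpose(1, 2)    # (B, H*W, C)
        global_feat, _ = self.attn(tokens, tokens, tokens)           # (B, H*W, C)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, global_feat], dim=1))

# Toy forward pass on a 32x32 low-resolution face.
print(TwoPathAggregation()(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 3, 32, 32])
```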
- On Recognizing Occluded Faces in the Wild [10.420394952839242]
We present the Real World Occluded Faces dataset.
This dataset contains faces with both the upper face occluded, due to sunglasses, and the lower face occluded, due to masks.
It is observed that the performance drop is far less when the models are tested on synthetically generated occluded faces.
arXiv Detail & Related papers (2021-09-08T14:20:10Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover corrupted features in the deep convolutional feature maps and clean them with dynamically learned masks.
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
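The FROM summary above boils down to multiplying a convolutional feature map by a dynamically predicted soft mask before the embedding head. A toy sketch of that idea, with a stand-in backbone and made-up layer sizes rather than the published network, is:

```python
import torch
import torch.nn as nn

class MaskedFeatureCleaner(nn.Module):
    """Predict a soft mask from the feature map and multiply it in, so features
    that look corrupted by occlusion are suppressed before the embedding head."""

    def __init__(self, in_channels: int = 64, embed_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for a face CNN trunk
            nn.Conv2d(3, in_channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(in_channels, in_channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.mask_head = nn.Sequential(         # dynamically predicted mask in [0, 1]
            nn.Conv2d(in_channels, in_channels, 1), nn.Sigmoid(),
        )
        self.embed_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_channels, embed_dim),
        )

    def forward(self, face: torch.Tensor) -> torch.Tensor:
        feat = self.backbone(face)
        mask = self.mask_head(feat)             # low values ~ corrupted positions
        return self.embed_head(feat * mask)     # cleaned features -> embedding

print(MaskedFeatureCleaner()(torch.randn(2, 3, 112, 112)).shape)  # torch.Size([2, 128])
```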
- Landmark-Aware and Part-based Ensemble Transfer Learning Network for Facial Expression Recognition from Static images [0.5156484100374059]
The Part-based Ensemble Transfer Learning network models how humans recognize facial expressions.
It consists of 5 sub-networks, in which each sub-network performs transfer learning from one of the five subsets of facial landmarks.
It requires only $3.28 \times 10^6$ FLOPs, which ensures computational efficiency for real-time deployment.
arXiv Detail & Related papers (2021-04-22T18:38:33Z)
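A minimal sketch of the ensemble idea summarized above, with five illustrative landmark regions and placeholder sub-networks (not the paper's architecture), could look like:

```python
import torch
import torch.nn as nn

class PartEnsemble(nn.Module):
    """One small sub-network per facial-landmark region, class scores averaged.
    The region names and sub-network sizes are illustrative placeholders."""

    REGIONS = ["eyebrows", "eyes", "nose", "mouth", "jaw"]

    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.subnets = nn.ModuleDict({
            name: nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, num_classes),
            )
            for name in self.REGIONS
        })

    def forward(self, crops: dict) -> torch.Tensor:
        # `crops` maps each region name to a (B, 1, H, W) crop around its landmarks.
        logits = [self.subnets[name](crops[name]) for name in self.REGIONS]
        return torch.stack(logits).mean(dim=0)   # simple score-level fusion

# Toy usage: random crops standing in for landmark-centred regions.
batch = {name: torch.randn(4, 1, 48, 48) for name in PartEnsemble.REGIONS}
print(PartEnsemble()(batch).shape)  # torch.Size([4, 7]) -> 7 expression classes
```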
- Towards NIR-VIS Masked Face Recognition [47.00916333095693]
Near-infrared to visible (NIR-VIS) face recognition is the most common case in heterogeneous face recognition.
We propose a novel training method to maximize the mutual information shared by the face representations of the two domains.
In addition, a 3D face reconstruction based approach is employed to synthesize masked faces from existing NIR images.
arXiv Detail & Related papers (2021-04-14T10:40:09Z)
- Unmasking Face Embeddings by Self-restrained Triplet Loss for Accurate Masked Face Recognition [6.865656740940772]
We present a solution to improve the masked face recognition performance.
Specifically, we propose the Embedding Unmasking Model (EUM) operated on top of existing face recognition models.
We also propose a novel loss function, the Self-restrained Triplet (SRT), which enables the EUM to produce embeddings similar to those of unmasked faces of the same identities.
arXiv Detail & Related papers (2021-03-02T13:43:11Z)
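The summary above describes a model that sits on top of frozen face-recognition embeddings. The sketch below illustrates that setup with a small MLP and a plain triplet margin loss standing in for the Self-restrained Triplet loss, whose exact formulation is not given here; everything in it is an assumption for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingUnmasker(nn.Module):
    """Small MLP that maps the embedding of a masked face toward the embedding
    of the same identity without a mask (the existing recognizer stays frozen)."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, masked_emb: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(masked_emb), dim=-1)

model = EmbeddingUnmasker()
triplet = nn.TripletMarginLoss(margin=0.2)   # stand-in for the SRT loss

# Toy batch: embeddings from a frozen face recognizer (random stand-ins here).
masked = torch.randn(8, 512)                                 # anchor: masked-face embeddings
unmasked_same = F.normalize(torch.randn(8, 512), dim=-1)     # positive: same identity, no mask
unmasked_other = F.normalize(torch.randn(8, 512), dim=-1)    # negative: different identity

loss = triplet(model(masked), unmasked_same, unmasked_other)
loss.backward()                              # only the unmasking MLP would be trained
print(float(loss))
```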
- Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)