PoseFace: Pose-Invariant Features and Pose-Adaptive Loss for Face
Recognition
- URL: http://arxiv.org/abs/2107.11721v1
- Date: Sun, 25 Jul 2021 03:50:47 GMT
- Title: PoseFace: Pose-Invariant Features and Pose-Adaptive Loss for Face
Recognition
- Authors: Qiang Meng, Xiaqing Xu, Xiaobo Wang, Yang Qian, Yunxiao Qin, Zezheng
Wang, Chenxu Zhao, Feng Zhou, Zhen Lei
- Abstract summary: We propose an efficient PoseFace framework which utilizes the facial landmarks to disentangle the pose-invariant features and exploits a pose-adaptive loss to handle the imbalance issue adaptively.
- Score: 42.62320574369969
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the great success achieved by deep learning methods in face
recognition, severe performance drops are observed for large pose variations in
unconstrained environments (e.g., in cases of surveillance and photo-tagging).
To address it, current methods either deploy pose-specific models or frontalize
faces by additional modules. Still, they ignore the fact that identity
information should be consistent across poses and fail to account for the data
imbalance between frontal and profile face images during training. In this
paper, we propose an efficient PoseFace framework which utilizes the facial
landmarks to disentangle the pose-invariant features and exploits a
pose-adaptive loss to handle the imbalance issue adaptively. Extensive
experimental results on the benchmarks of Multi-PIE, CFP, CPLFW and IJB have
demonstrated the superiority of our method over the state-of-the-arts.
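The abstract describes the pose-adaptive loss only at a high level. One illustrative way to handle the frontal/profile imbalance it mentions is to up-weight the loss contribution of large-pose samples; the weighting scheme and all names below are hypothetical sketches, not the paper's actual formulation:

```python
import math

def pose_adaptive_weight(yaw_deg, alpha=1.0):
    """Hypothetical per-sample weight: profile faces (large |yaw|) are
    rarer in training data, so their loss is up-weighted. A frontal face
    (yaw=0) keeps weight 1.0; a full profile (|yaw|=90) gets 1 + alpha."""
    return 1.0 + alpha * abs(yaw_deg) / 90.0

def weighted_ce(logprob_true, yaw_deg):
    """Cross-entropy term for one sample, scaled by its pose weight."""
    return -pose_adaptive_weight(yaw_deg) * logprob_true
```

In this sketch a profile sample contributes proportionally more gradient than a frontal one, which is one simple way to counteract the imbalance the abstract points out.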
Related papers
- Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models [69.50286698375386]
We propose a novel approach that better harnesses diffusion models for face-swapping.
We introduce a mask shuffling technique during inpainting training, which allows us to create a so-called universal model for swapping.
Ours is a relatively unified approach and so it is resilient to errors in other off-the-shelf models.
arXiv Detail & Related papers (2024-09-11T13:43:53Z)
- Effective Adapter for Face Recognition in the Wild [72.75516495170199]
We tackle the challenge of face recognition in the wild, where images often suffer from low quality and real-world distortions.
Traditional approaches, either training models directly on degraded images or on counterparts enhanced with face restoration techniques, have proven ineffective.
We propose an effective adapter for augmenting existing face recognition models trained on high-quality facial datasets.
arXiv Detail & Related papers (2023-12-04T08:55:46Z)
- Controllable Inversion of Black-Box Face Recognition Models via Diffusion [8.620807177029892]
We tackle the task of inverting the latent space of pre-trained face recognition models without full model access.
We show that the conditional diffusion model loss naturally emerges and that we can effectively sample from the inverse distribution.
Our method is the first black-box face recognition model inversion method that offers intuitive control over the generation process.
arXiv Detail & Related papers (2023-03-23T03:02:09Z)
- Pose-disentangled Contrastive Learning for Self-supervised Facial Representation [12.677909048435408]
We propose a novel Pose-disentangled Contrastive Learning (PCL) method for general self-supervised facial representation.
Our PCL first devises a pose-disentangled decoder (PDD), which disentangles the pose-related features from the face-aware features.
We then introduce a pose-related contrastive learning scheme that learns pose-related information based on data augmentation of the same image.
arXiv Detail & Related papers (2022-11-24T09:30:51Z)
- PAM: Pose Attention Module for Pose-Invariant Face Recognition [3.0839245814393723]
We propose a lightweight and easy-to-implement attention block, named Pose Attention Module (PAM), for pose-invariant face recognition.
Specifically, PAM performs frontal-profile feature transformation in hierarchical feature space by learning residuals between pose variations with a soft gate mechanism.
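The soft gate mechanism that PAM uses for frontal-profile feature transformation is only sketched in the abstract. A minimal NumPy illustration of the general soft-gated residual pattern (all weight matrices and names here are hypothetical, not PAM's actual architecture) might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_gated_residual(feat, W_res, W_gate):
    """Illustrative soft-gated residual transform: a learned residual
    nudges profile features toward frontal ones, and a sigmoid gate
    controls how much of that residual is applied per channel."""
    residual = feat @ W_res        # candidate frontal-profile correction
    gate = sigmoid(feat @ W_gate)  # per-channel soft gate in (0, 1)
    return feat + gate * residual
```

The gate lets the block act near-identically on already-frontal features (small residual applied) while transforming profile features more aggressively, which matches the "learning residuals between pose variations" idea in the abstract.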
arXiv Detail & Related papers (2021-11-23T15:18:33Z)
- Attention-guided Progressive Mapping for Profile Face Recognition [12.792576041526289]
Cross-pose face recognition remains a significant challenge.
Learning pose-robust features by traversing to the feature space of frontal faces provides an effective and cheap way to alleviate this problem.
arXiv Detail & Related papers (2021-06-27T02:21:41Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D face only from images without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Pixel Sampling for Style Preserving Face Pose Editing [53.14006941396712]
We present a novel two-stage approach to solve the dilemma, where the task of face pose manipulation is cast into face inpainting.
By selectively sampling pixels from the input face and slightly adjusting their relative locations, the face editing result faithfully preserves the identity information as well as the image style.
With the 3D facial landmarks as guidance, our method is able to manipulate face pose in three degrees of freedom, i.e., yaw, pitch, and roll, resulting in more flexible face pose editing.
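The three degrees of freedom named above (yaw, pitch, and roll) compose into a single 3x3 rotation applied to the 3D landmarks. A short sketch, assuming a Z-Y-X composition order since the paper's exact convention is not stated here:

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Compose yaw (about the vertical axis), pitch (about the lateral
    axis), and roll (in the image plane) into one rotation matrix.
    Angles are in radians; the Z-Y-X order is an illustrative choice."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])  # roll
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])  # yaw
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])  # pitch
    return Rz @ Ry @ Rx
```

Rotating the guiding 3D landmarks with such a matrix is one standard way to specify a target pose in all three degrees of freedom before the inpainting stage fills in the edited face.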
arXiv Detail & Related papers (2021-06-14T11:29:29Z)
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.