Effective Adapter for Face Recognition in the Wild
- URL: http://arxiv.org/abs/2312.01734v2
- Date: Wed, 3 Apr 2024 18:11:54 GMT
- Title: Effective Adapter for Face Recognition in the Wild
- Authors: Yunhao Liu, Yu-Ju Tsai, Kelvin C. K. Chan, Xiangtai Li, Lu Qi, Ming-Hsuan Yang
- Abstract summary: We tackle the challenge of face recognition in the wild, where images often suffer from low quality and real-world distortions.
Traditional approaches, either training models directly on degraded images or on their enhanced counterparts produced by face restoration techniques, have proven ineffective.
We propose an effective adapter for augmenting existing face recognition models trained on high-quality facial datasets.
- Score: 72.75516495170199
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we tackle the challenge of face recognition in the wild, where images often suffer from low quality and real-world distortions. Traditional heuristic approaches, which either train models directly on these degraded images or on their enhanced counterparts produced by face restoration techniques, have proven ineffective, primarily due to the degradation of facial features and the discrepancy in image domains. To overcome these issues, we propose an effective adapter for augmenting existing face recognition models trained on high-quality facial datasets. The key to our adapter is to process both the unrefined and the enhanced images using two similar structures, one fixed and the other trainable. This design confers two benefits. First, the dual-input system minimizes the domain gap while providing varied perspectives for the face recognition model, where the enhanced image can be regarded as a complex non-linear transformation of the original one by the restoration model. Second, both similar structures can be initialized with pre-trained models without dropping past knowledge. Extensive experiments in zero-shot settings show the effectiveness of our method, which surpasses baselines by about 3%, 4%, and 7% on three datasets. Our code will be publicly available.
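The abstract's core design, two copies of a pre-trained encoder initialized identically, one frozen branch processing the unrefined image and one trainable branch processing the restored image, with the two feature streams fused, can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the linear encoder, average fusion, toy MSE objective, and restoration-as-denoising stand-in are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained face recognition encoder (here: one linear layer).
W_pretrained = rng.standard_normal((128, 512)) * 0.01

# Both branches start from the same pre-trained weights, so no past
# knowledge is dropped; only one branch will receive gradient updates.
W_fixed = W_pretrained.copy()      # frozen branch: sees the unrefined image
W_trainable = W_pretrained.copy()  # trainable branch: sees the restored image

def adapter_forward(x_raw, x_enhanced):
    """Fuse features from the fixed and trainable branches (average fusion)."""
    f_fixed = W_fixed @ x_raw            # frozen view of the degraded input
    f_train = W_trainable @ x_enhanced   # trainable view of the enhanced input
    return 0.5 * (f_fixed + f_train)

# Toy inputs: a degraded face vector and its "restored" counterpart.
x_raw = rng.standard_normal(512)
x_enhanced = x_raw + 0.1 * rng.standard_normal(512)

target = rng.standard_normal(128)
lr = 0.1

# One SGD step on the trainable branch only (MSE against a toy target).
feat = adapter_forward(x_raw, x_enhanced)
grad_feat = 2.0 * (feat - target) / feat.size
# d(feat)/d(W_trainable) contributes through the 0.5-weighted enhanced path.
W_trainable -= lr * 0.5 * np.outer(grad_feat, x_enhanced)

assert np.array_equal(W_fixed, W_pretrained)          # fixed branch untouched
assert not np.array_equal(W_trainable, W_pretrained)  # trainable branch adapted
```

The final assertions make the paper's second claimed benefit concrete: after training, the frozen branch still carries the original pre-trained knowledge bit-for-bit, while only its sibling has moved.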
Related papers
- Face Anonymization Made Simple [44.24233169815565]
Current face anonymization techniques often depend on identity loss calculated by face recognition models, which can be inaccurate and unreliable.
In contrast, our approach uses diffusion models with only a reconstruction loss, eliminating the need for facial landmarks or masks.
Our model achieves state-of-the-art performance in three key areas: identity anonymization, facial preservation, and image quality.
arXiv Detail & Related papers (2024-11-01T17:45:21Z) - FaceChain-FACT: Face Adapter with Decoupled Training for Identity-preserved Personalization [24.600720169589334]
The adapter-based method gains the ability to customize and generate portraits through text-to-image training on facial data.
However, there is often a significant decrease in text-following ability, controllability, and diversity of the generated faces compared to the base model.
We propose the Face Adapter with deCoupled Training (FACT) framework, focusing on both model architecture and training strategy.
arXiv Detail & Related papers (2024-10-16T07:25:24Z) - DSL-FIQA: Assessing Facial Image Quality via Dual-Set Degradation Learning and Landmark-Guided Transformer [23.70791030264281]
Generic Face Image Quality Assessment (GFIQA) evaluates the perceptual quality of facial images.
We present a novel transformer-based method for GFIQA, which is aided by two unique mechanisms.
arXiv Detail & Related papers (2024-06-13T23:11:25Z) - Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images while, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z) - MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative to face morphing and demonstrate its superiority to StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z) - Thinking the Fusion Strategy of Multi-reference Face Reenactment [4.1509697008011175]
We show that simple extension by using multiple reference images significantly improves generation quality.
We show this by 1) conducting the reconstruction task on a publicly available dataset, 2) conducting facial motion transfer on our original dataset, which consists of multiple people's head-movement video sequences, and 3) using a newly proposed evaluation metric to validate that our method achieves better quantitative results.
arXiv Detail & Related papers (2022-02-22T09:17:26Z) - RestoreFormer: High-Quality Blind Face Restoration From Undegraded Key-Value Pairs [48.33214614798882]
We propose RestoreFormer, which explores fully-spatial attentions to model contextual information.
It learns fully-spatial interactions between corrupted queries and high-quality key-value pairs.
It outperforms advanced state-of-the-art methods on one synthetic dataset and three real-world datasets.
arXiv Detail & Related papers (2022-01-17T12:21:55Z) - Joint Face Image Restoration and Frontalization for Recognition [79.78729632975744]
In real-world scenarios, many factors may harm face recognition performance, e.g., large pose, bad illumination, low resolution, blur, and noise.
Previous efforts usually first restore the low-quality faces to high-quality ones and then perform face recognition.
We propose a Multi-Degradation Face Restoration model to restore frontalized high-quality faces from the given low-quality ones.
arXiv Detail & Related papers (2021-05-12T03:52:41Z) - Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative adversarial network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z) - Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel method for joint deep learning of facial expression synthesis and recognition for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
arXiv Detail & Related papers (2020-02-06T10:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.