Enhancing Quality of Pose-varied Face Restoration with Local Weak Feature Sensing and GAN Prior
- URL: http://arxiv.org/abs/2205.14377v3
- Date: Thu, 15 Jun 2023 02:24:58 GMT
- Title: Enhancing Quality of Pose-varied Face Restoration with Local Weak Feature Sensing and GAN Prior
- Authors: Kai Hu, Yu Liu, Renhe Liu, Wei Lu, Gang Yu, Bin Fu
- Abstract summary: We propose a carefully designed blind face restoration network with a generative facial prior.
Our model outperforms prior art on face restoration and face super-resolution tasks.
- Score: 29.17397958948725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial semantic guidance (including facial landmarks, facial heatmaps, and
facial parsing maps) and facial generative adversarial network (GAN) priors have
been widely used in blind face restoration (BFR) in recent years. Although
existing BFR methods achieve good performance in ordinary cases, they have
limited resilience on real-world face images with severe degradation and varied
poses (e.g., looking right, looking left, or laughing). In this work, we propose
a carefully designed blind face restoration network with a generative facial
prior. The proposed network consists mainly of an asymmetric codec and a
StyleGAN2 prior network. In the asymmetric codec, we adopt a mixed multi-path
residual block (MMRB) to gradually extract weak texture features from input
images, which better preserves the original facial features and avoids
hallucinating spurious details. The MMRB is plug-and-play and can also be used
in other networks. Furthermore, thanks to the rich and diverse facial priors of
the StyleGAN2 model, we adopt it as the primary generator in our method and
design a novel self-supervised training strategy that fits the learned
distribution more closely to the target and flexibly restores natural,
realistic facial details. Extensive experiments on synthetic and real-world
datasets demonstrate that our model outperforms prior art on face restoration
and face super-resolution tasks.
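As a rough illustration of the multi-path residual idea described above, the PyTorch sketch below mixes parallel convolution paths with different receptive fields, fuses them, and adds a residual connection that preserves the input features. The paper does not specify the MMRB's internals, so the class name MultiPathResidualBlock and every layer choice (path count, kernel sizes, activation, fusion) are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a mixed multi-path residual block (MMRB-style).
# All design choices below are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class MultiPathResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Parallel paths with different receptive fields to sense weak local textures.
        self.path1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.path3 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.path5 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Fuse the concatenated paths back to the input channel width.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mixed = torch.cat([self.path1(x), self.path3(x), self.path5(x)], dim=1)
        # Residual connection preserves the original facial features.
        return x + self.fuse(mixed)

if __name__ == "__main__":
    block = MultiPathResidualBlock(64)
    feat = torch.randn(1, 64, 128, 128)
    print(block(feat).shape)  # torch.Size([1, 64, 128, 128])
```

Because the block keeps the channel and spatial dimensions unchanged, it can be dropped into other encoder or decoder stages, which is the sense in which such a block is plug-and-play.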
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- Towards Real-World Blind Face Restoration with Generative Diffusion Prior [69.84480964328465]
Blind face restoration is an important task in computer vision and has gained significant attention due to its wide range of applications.
We propose BFRffusion, which is designed to effectively extract features from low-quality face images.
We also build a privacy-preserving face dataset called PFHQ with balanced attributes like race, gender, and age.
arXiv Detail & Related papers (2023-12-25T14:16:24Z)
- FaceFormer: Scale-aware Blind Face Restoration with Transformers [18.514630131883536]
We propose a novel scale-aware blind face restoration framework, named FaceFormer, which formulates facial feature restoration as scale-aware transformation.
Our proposed method, trained on a synthetic dataset, generalizes better to natural low-quality images than current state-of-the-art methods.
arXiv Detail & Related papers (2022-07-20T10:08:34Z)
- Multi-Prior Learning via Neural Architecture Search for Blind Face Restoration [61.27907052910136]
Blind Face Restoration (BFR) aims to recover high-quality face images from low-quality ones.
Current methods still suffer from two major difficulties: 1) how to derive a powerful network architecture without extensive hand tuning; 2) how to capture complementary information from multiple facial priors in one network to improve restoration performance.
We propose a Face Restoration Searching Network (FRSNet) to adaptively search the suitable feature extraction architecture within our specified search space.
arXiv Detail & Related papers (2022-06-28T12:29:53Z)
- Reconstruct Face from Features Using GAN Generator as a Distribution Constraint [17.486032607577577]
Face recognition based on deep convolutional neural networks (CNNs) achieves superior accuracy thanks to the highly discriminative features it extracts.
Yet, the security and privacy of the features extracted by deep learning models (deep features) have often been overlooked.
This paper proposes the reconstruction of face images from deep features without accessing the CNN network configurations.
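The underlying idea of keeping the reconstruction on a face GAN's image manifold while matching a target deep feature can be sketched as a latent-optimization loop. In the sketch below, generator and feature_extractor stand in for a pretrained face GAN and the (black-box) recognition model, and the cosine loss, step count, and optimizer settings are assumptions rather than the paper's actual method.

```python
# Hypothetical latent-optimization sketch: search the GAN's latent space for an
# image whose deep feature matches the target. `generator` and `feature_extractor`
# are placeholders for pretrained models; all hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def reconstruct_from_feature(generator, feature_extractor, target_feature,
                             latent_dim=512, steps=500, lr=0.05):
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        image = generator(z)                # stay on the GAN's face-image manifold
        feature = feature_extractor(image)  # query the recognition model as a black box
        # Drive the generated face's deep feature toward the target feature.
        loss = 1.0 - F.cosine_similarity(feature, target_feature, dim=-1).mean()
        loss.backward()
        optimizer.step()
    return generator(z).detach()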
arXiv Detail & Related papers (2022-06-09T06:11:59Z)
- TANet: A new Paradigm for Global Face Super-resolution via Transformer-CNN Aggregation Network [72.41798177302175]
We propose a novel paradigm based on the self-attention mechanism (i.e., the core of the Transformer) to fully explore the representation capacity of facial structure features.
Specifically, we design a Transformer-CNN aggregation network (TANet) consisting of two paths: one path uses CNNs to restore fine-grained facial details, while the other uses the Transformer to capture global facial structure.
By aggregating the features from the above two paths, the consistency of global facial structure and fidelity of local facial detail restoration are strengthened simultaneously.
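A minimal sketch of such a two-path aggregation is given below; the class name TwoPathAggregation, the attention configuration, and the concatenation-based fusion are illustrative assumptions rather than TANet's actual architecture.

```python
# Illustrative two-path aggregation: a CNN path for fine-grained local detail and
# a self-attention path for global facial structure, fused by concatenation.
# All layer choices are assumptions, not the authors' code.
import torch
import torch.nn as nn

class TwoPathAggregation(nn.Module):
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        # Local path: plain convolutions restore fine-grained facial details.
        self.cnn_path = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Global path: self-attention over spatial positions models facial structure.
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        # Fuse the two paths back to the original channel width.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.cnn_path(x)
        tokens = x.flatten(2).transpose(1, 2)             # (B, H*W, C)
        global_feat, _ = self.attn(tokens, tokens, tokens)  # self-attention
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, global_feat], dim=1))

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    print(TwoPathAggregation()(x).shape)  # torch.Size([1, 64, 32, 32])
```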
arXiv Detail & Related papers (2021-09-16T18:15:07Z)
- Pro-UIGAN: Progressive Face Hallucination from Occluded Thumbnails [53.080403912727604]
We propose a multi-stage Progressive Upsampling and Inpainting Generative Adversarial Network, dubbed Pro-UIGAN.
It exploits facial geometry priors to replenish and upsample (8×) occluded and tiny faces.
Pro-UIGAN produces visually pleasing HR faces and achieves superior performance on downstream tasks.
arXiv Detail & Related papers (2021-08-02T02:29:24Z)
- Towards Real-World Blind Face Restoration with Generative Facial Prior [19.080349401153097]
Blind face restoration usually relies on facial priors, such as facial geometry prior or reference prior, to restore realistic and faithful details.
We propose GFP-GAN that leverages rich and diverse priors encapsulated in a pretrained face GAN for blind face restoration.
Our method achieves superior performance to prior art on both synthetic and real-world datasets.
arXiv Detail & Related papers (2021-01-11T17:54:38Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)