Network Architecture Search for Face Enhancement
- URL: http://arxiv.org/abs/2105.06528v1
- Date: Thu, 13 May 2021 19:46:05 GMT
- Title: Network Architecture Search for Face Enhancement
- Authors: Rajeev Yasarla, Hamid Reza Vaezi Joze, and Vishal M Patel
- Abstract summary: We present a multi-task face restoration network, called Network Architecture Search for Face Enhancement (NASFE).
NASFE can enhance poor-quality face images containing a single degradation (i.e., noise or blur) or multiple degradations (noise+blur+low-light).
- Score: 82.25775020564654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various factors such as ambient lighting conditions, noise, motion blur, etc.
affect the quality of captured face images. Poor quality face images often
reduce the performance of face analysis and recognition systems. Hence, it is
important to enhance the quality of face images collected in such conditions.
We present a multi-task face restoration network, called Network Architecture
Search for Face Enhancement (NASFE), which can enhance poor quality face images
containing a single degradation (i.e. noise or blur) or multiple degradations
(noise+blur+low-light). During training, NASFE uses clean face images of a
person present in the degraded image to extract the identity information in
terms of features for restoring the image. Furthermore, the network is guided
by an identity loss so that the identity information is maintained in the
restored image. Additionally, we propose a network architecture search-based
fusion network in NASFE which fuses the task-specific features that are
extracted using the task-specific encoders. We introduce FFT-op and deveiling
operators in the fusion network to efficiently fuse the task-specific features.
Comprehensive experiments on synthetic and real images demonstrate that the
proposed method outperforms many recent state-of-the-art face restoration and
enhancement methods in terms of quantitative and visual performance.
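The abstract describes two training signals that a short sketch may help make concrete: identity features extracted from clean reference images of the same person, and an identity loss that keeps that identity in the restored output. Below is a minimal sketch of such an identity loss, assuming a PyTorch-style setup and a hypothetical pretrained face-recognition embedding network `face_embedder`; the cosine-distance form and loss weighting are assumptions for illustration, not the exact NASFE formulation.

```python
# Minimal sketch of an identity loss for face restoration (assumed PyTorch).
# `face_embedder` stands in for any pretrained face-recognition backbone that
# maps a batch of face images to a batch of embedding vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F


class IdentityLoss(nn.Module):
    def __init__(self, face_embedder: nn.Module):
        super().__init__()
        self.embedder = face_embedder.eval()
        for p in self.embedder.parameters():
            p.requires_grad_(False)  # keep the recognition network fixed

    def forward(self, restored: torch.Tensor, clean_ref: torch.Tensor) -> torch.Tensor:
        # Embed the restored face and a clean reference face of the same
        # person, then penalize the cosine distance between the embeddings.
        e_restored = F.normalize(self.embedder(restored), dim=1)
        e_clean = F.normalize(self.embedder(clean_ref), dim=1)
        return (1.0 - (e_restored * e_clean).sum(dim=1)).mean()


# Hypothetical training objective combining a pixel loss with the identity loss:
# total_loss = pixel_loss(restored, target) + lambda_id * identity_loss(restored, clean_ref)
```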
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- W-Net: A Facial Feature-Guided Face Super-Resolution Network [8.037821981254389]
Face Super-Resolution aims to recover high-resolution (HR) face images from low-resolution (LR) ones.
Existing approaches are not ideal due to their low reconstruction efficiency and insufficient utilization of prior information.
This paper proposes a novel network architecture called W-Net to address this challenge.
arXiv Detail & Related papers (2024-06-02T09:05:40Z)
- Multi-Prior Learning via Neural Architecture Search for Blind Face Restoration [61.27907052910136]
Blind Face Restoration (BFR) aims to recover high-quality face images from low-quality ones.
Current methods still suffer from two major difficulties: 1) how to derive a powerful network architecture without extensive hand tuning; 2) how to capture complementary information from multiple facial priors in one network to improve restoration performance.
We propose a Face Restoration Searching Network (FRSNet) to adaptively search the suitable feature extraction architecture within our specified search space.
arXiv Detail & Related papers (2022-06-28T12:29:53Z)
- Detecting High-Quality GAN-Generated Face Images using Neural Networks [23.388645531702597]
We propose a new strategy to differentiate GAN-generated images from authentic images by leveraging spectral band discrepancies.
In particular, cross-band co-occurrence matrices and spatial co-occurrence matrices are extracted from the face images and used as features for detection (a minimal sketch of such co-occurrence features follows this list).
We show that the performance boost is particularly significant, achieving more than 92% accuracy under different post-processing conditions.
arXiv Detail & Related papers (2022-03-03T13:53:27Z)
- LTT-GAN: Looking Through Turbulence by Inverting GANs [86.25869403782957]
We propose the first turbulence mitigation method that makes use of visual priors encapsulated by a well-trained GAN.
Based on the visual priors, we propose to learn to preserve the identity of restored images on a periodic contextual distance.
Our method significantly outperforms prior art in both the visual quality and face verification accuracy of restored results.
arXiv Detail & Related papers (2021-12-04T16:42:13Z)
- JDSR-GAN: Constructing A Joint and Collaborative Learning Network for Masked Face Super-Resolution [28.022800882214803]
Face images obtained in most video surveillance scenarios are both low-resolution and occluded by masks.
Most previous face super-resolution methods cannot handle both tasks in a single model.
We construct a joint and collaborative learning network, called JDSR-GAN, for the masked face super-resolution task.
arXiv Detail & Related papers (2021-03-25T08:50:40Z)
- Learning Spatial Attention for Face Super-Resolution [28.60619685892613]
General image super-resolution techniques have difficulty recovering detailed face structures when applied to low-resolution face images.
Recent deep learning based methods tailored for face images achieve improved performance by being jointly trained with additional tasks such as face parsing and landmark prediction.
We introduce a novel SPatial Attention Residual Network (SPARNet) built on our newly proposed Face Attention Units (FAUs) for face super-resolution.
arXiv Detail & Related papers (2020-12-02T13:54:25Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
- Exploiting Semantics for Face Image Deblurring [121.44928934662063]
We propose an effective and efficient face deblurring algorithm by exploiting semantic cues via deep convolutional neural networks.
We incorporate face semantic labels as input priors and propose an adaptive structural loss to regularize facial local structures.
The proposed method restores sharp images with more accurate facial features and details.
arXiv Detail & Related papers (2020-01-19T13:06:27Z)
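The GAN-detection entry above mentions cross-band and spatial co-occurrence matrices as features. Below is a minimal sketch of how such features can be computed, assuming NumPy and 8-bit image channels; the quantization level and pixel offset are illustrative assumptions, not that paper's exact settings.

```python
# Illustrative co-occurrence features for image forensics (assumed NumPy, 8-bit input).
import numpy as np


def quantize(channel: np.ndarray, levels: int = 64) -> np.ndarray:
    # Map 0..255 intensities to 0..levels-1 bins.
    return (channel.astype(np.int64) * levels) // 256


def spatial_cooccurrence(channel: np.ndarray, offset=(0, 1), levels: int = 64) -> np.ndarray:
    # Joint histogram of pixel values separated by the given (dy, dx) offset
    # within a single band.
    q = quantize(channel, levels)
    dy, dx = offset
    a = q[max(dy, 0):q.shape[0] + min(dy, 0), max(dx, 0):q.shape[1] + min(dx, 0)]
    b = q[max(-dy, 0):q.shape[0] + min(-dy, 0), max(-dx, 0):q.shape[1] + min(-dx, 0)]
    m = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(m, (a.ravel(), b.ravel()), 1.0)
    return m / max(m.sum(), 1.0)  # normalize to a joint distribution


def crossband_cooccurrence(band_a: np.ndarray, band_b: np.ndarray, levels: int = 64) -> np.ndarray:
    # Joint histogram of co-located values from two different color bands.
    qa, qb = quantize(band_a, levels), quantize(band_b, levels)
    m = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(m, (qa.ravel(), qb.ravel()), 1.0)
    return m / max(m.sum(), 1.0)
```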
This list is automatically generated from the titles and abstracts of the papers in this site.