Face Super-Resolution with Progressive Embedding of Multi-scale Face Priors
- URL: http://arxiv.org/abs/2210.06002v1
- Date: Wed, 12 Oct 2022 08:16:52 GMT
- Title: Face Super-Resolution with Progressive Embedding of Multi-scale Face Priors
- Authors: Chenggong Zhang and Zhilei Liu
- Abstract summary: We propose a novel recurrent convolutional network-based framework for face super-resolution.
We take full advantage of the intermediate outputs of the recurrent network, extracting landmark information and facial action unit (AU) information from them.
Our proposed method significantly outperforms state-of-the-art FSR methods in terms of image quality and facial details restoration.
- Score: 4.649637261351803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The face super-resolution (FSR) task is to reconstruct high-resolution face
images from low-resolution inputs. Recent works have achieved success on this
task by utilizing facial priors such as facial landmarks. Most existing methods
pay more attention to global shape and structure information than to local
texture information, which prevents them from recovering local details well. In
this paper, we propose a novel recurrent convolutional network-based framework
for face super-resolution, which progressively introduces both global shape and
local texture information. We take full advantage of the intermediate outputs
of the recurrent network: landmark information and facial action unit (AU)
information are extracted from the outputs of the first and second steps,
respectively, rather than from the low-resolution input. Moreover, we introduce
AU classification results as a novel quantitative metric for facial details
restoration. Extensive experiments show that our proposed method significantly
outperforms state-of-the-art FSR methods in terms of image quality and facial
details restoration.
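The progressive prior-embedding loop described in the abstract can be sketched structurally as follows. This is a minimal illustrative sketch, not the paper's implementation: all function names, the toy 2x2 "image", and the blending rule are assumptions standing in for learned network components. What it preserves is the key control flow: priors are extracted from intermediate outputs of the recurrent loop (landmarks after step one, AUs after step two), not from the low-resolution input.

```python
# Structural sketch of progressive multi-scale prior embedding.
# All functions below are hypothetical stand-ins for learned modules.

def extract_landmarks(img):
    """Stand-in for a landmark predictor (global shape prior): row means."""
    return [sum(row) / len(row) for row in img]

def extract_aus(img):
    """Stand-in for an action-unit predictor (local texture prior): row maxima."""
    return [max(row) for row in img]

def refine(img, priors):
    """Stand-in for one recurrent refinement step: blend estimate with priors."""
    if not priors:
        return [row[:] for row in img]
    out = []
    for i, row in enumerate(img):
        bias = sum(p[i] for p in priors) / len(priors)
        out.append([0.8 * v + 0.2 * bias for v in row])
    return out

def progressive_fsr(lr_img, steps=3):
    est = [row[:] for row in lr_img]
    priors = []
    for step in range(steps):
        est = refine(est, priors)
        if step == 0:    # landmarks come from the FIRST intermediate output
            priors.append(extract_landmarks(est))
        elif step == 1:  # AUs come from the SECOND intermediate output
            priors.append(extract_aus(est))
    return est

lr = [[0.1, 0.5], [0.4, 0.2]]
sr = progressive_fsr(lr)
print(len(sr), len(sr[0]))  # 2 2
```

The point of the ordering is that each prior is estimated from an already-partially-restored face, so the shape prior benefits from one refinement pass and the texture prior from two.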
Related papers
- Prior Knowledge Distillation Network for Face Super-Resolution [25.188937155619886]
The purpose of face super-resolution (FSR) is to reconstruct high-resolution (HR) face images from low-resolution (LR) inputs.
We propose a prior knowledge distillation network (PKDN) for FSR, which involves transferring prior information from the teacher network to the student network.
arXiv Detail & Related papers (2024-09-22T09:58:20Z) - W-Net: A Facial Feature-Guided Face Super-Resolution Network [8.037821981254389]
Face Super-Resolution aims to recover high-resolution (HR) face images from low-resolution (LR) ones.
Existing approaches are not ideal due to their low reconstruction efficiency and insufficient utilization of prior information.
This paper proposes a novel network architecture called W-Net to address this challenge.
arXiv Detail & Related papers (2024-06-02T09:05:40Z) - Super-Resolving Face Image by Facial Parsing Information [52.1267613768555]
Face super-resolution is a technology that transforms a low-resolution face image into the corresponding high-resolution one.
We build a novel parsing map guided face super-resolution network which extracts the face prior from low-resolution face image.
High-resolution features contain more precise spatial information while low-resolution features provide strong contextual information.
arXiv Detail & Related papers (2023-04-06T08:19:03Z) - A Survey of Deep Face Restoration: Denoise, Super-Resolution, Deblur, Artifact Removal [177.21001709272144]
Face Restoration (FR) aims to restore High-Quality (HQ) faces from Low-Quality (LQ) input images.
This paper comprehensively surveys recent advances in deep learning techniques for face restoration.
arXiv Detail & Related papers (2022-11-05T07:08:15Z) - Multi-Prior Learning via Neural Architecture Search for Blind Face Restoration [61.27907052910136]
Blind Face Restoration (BFR) aims to recover high-quality face images from low-quality ones.
Current methods still suffer from two major difficulties: 1) how to derive a powerful network architecture without extensive hand tuning; 2) how to capture complementary information from multiple facial priors in one network to improve restoration performance.
We propose a Face Restoration Searching Network (FRSNet) to adaptively search the suitable feature extraction architecture within our specified search space.
arXiv Detail & Related papers (2022-06-28T12:29:53Z) - TANet: A new Paradigm for Global Face Super-resolution via Transformer-CNN Aggregation Network [72.41798177302175]
We propose a novel paradigm based on the self-attention mechanism (i.e., the core of Transformer) to fully explore the representation capacity of the facial structure feature.
Specifically, we design a Transformer-CNN aggregation network (TANet) consisting of two paths, in which one path uses CNNs responsible for restoring fine-grained facial details.
By aggregating the features from the above two paths, the consistency of global facial structure and fidelity of local facial detail restoration are strengthened simultaneously.
arXiv Detail & Related papers (2021-09-16T18:15:07Z) - Deep Learning-based Face Super-resolution: A Survey [78.11274281686246]
Face super-resolution, also known as face hallucination, is a domain-specific image super-resolution problem.
To date, few summaries of the studies on the deep learning-based face super-resolution are available.
In this survey, we present a comprehensive review of deep learning techniques in face super-resolution in a systematic manner.
arXiv Detail & Related papers (2021-01-11T08:17:11Z) - Deep Face Super-Resolution with Iterative Collaboration between Attentive Recovery and Landmark Estimation [92.86123832948809]
We propose a deep face super-resolution (FSR) method with iterative collaboration between two recurrent networks.
In each recurrent step, the recovery branch utilizes the prior knowledge of landmarks to yield higher-quality images.
A new attentive fusion module is designed to strengthen the guidance of landmark maps.
arXiv Detail & Related papers (2020-03-29T16:04:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.