RestoreFormer: High-Quality Blind Face Restoration From Undegraded Key-Value Pairs
- URL: http://arxiv.org/abs/2201.06374v1
- Date: Mon, 17 Jan 2022 12:21:55 GMT
- Title: RestoreFormer: High-Quality Blind Face Restoration From Undegraded Key-Value Pairs
- Authors: Zhouxia Wang, Jiawei Zhang, Runjian Chen, Wenping Wang and Ping Luo
- Abstract summary: We propose RestoreFormer, which explores fully-spatial attentions to model contextual information.
It learns fully-spatial interactions between corrupted queries and high-quality key-value pairs.
It outperforms advanced state-of-the-art methods on one synthetic dataset and three real-world datasets.
- Score: 48.33214614798882
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blind face restoration aims to recover a high-quality face image from
one with unknown degradations. Since face images contain abundant contextual
information, we propose a method, RestoreFormer, which explores fully-spatial
attention to model this contextual information and surpasses existing works that
rely on local operators. RestoreFormer has several benefits compared to prior art. First,
unlike the conventional multi-head self-attention in previous Vision
Transformers (ViTs), RestoreFormer incorporates a multi-head cross-attention
layer to learn fully-spatial interactions between corrupted queries and
high-quality key-value pairs. Second, the key-value pairs in RestoreFormer are
sampled from a reconstruction-oriented high-quality dictionary, whose elements
are rich in high-quality facial features specifically aimed at face
reconstruction, leading to superior restoration results. Third, RestoreFormer
outperforms advanced state-of-the-art methods on one synthetic dataset and
three real-world datasets, as well as produces images with better visual
quality.
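The abstract's central mechanism is a multi-head cross-attention layer in which queries come from the corrupted (degraded) features while keys and values are drawn from a high-quality dictionary. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the random projection matrices stand in for learned weights, and the feature dimensions, head count, and dictionary size are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_cross_attention(q_feats, kv_feats, n_heads=4, seed=0):
    """Cross-attention sketch: queries from degraded features, keys/values
    from a high-quality dictionary. Random projections stand in for the
    learned weight matrices of a trained model."""
    rng = np.random.default_rng(seed)
    n_q, d = q_feats.shape
    n_kv, _ = kv_feats.shape
    d_head = d // n_heads
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    # Project, then split the feature dimension across heads: (heads, tokens, d_head).
    Q = (q_feats @ Wq).reshape(n_q, n_heads, d_head).transpose(1, 0, 2)
    K = (kv_feats @ Wk).reshape(n_kv, n_heads, d_head).transpose(1, 0, 2)
    V = (kv_feats @ Wv).reshape(n_kv, n_heads, d_head).transpose(1, 0, 2)
    # Each corrupted query attends over every dictionary entry (fully spatial).
    attn = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_head), axis=-1)
    out = (attn @ V).transpose(1, 0, 2).reshape(n_q, d)
    return out, attn

# Toy example: 16 corrupted query vectors attend over a 32-entry dictionary.
degraded = np.random.default_rng(1).standard_normal((16, 64))
dictionary = np.random.default_rng(2).standard_normal((32, 64))
restored, attn = multi_head_cross_attention(degraded, dictionary)
```

In a trained RestoreFormer the dictionary entries would be learned high-quality facial features; here they are random vectors used only to show the shapes and the attention flow from degraded queries to undegraded key-value pairs.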
Related papers
- Overcoming False Illusions in Real-World Face Restoration with Multi-Modal Guided Diffusion Model [55.46927355649013]
We introduce a novel Multi-modal Guided Real-World Face Restoration technique.
MGFR can mitigate the generation of false facial attributes and identities.
We present the Reface-HQ dataset, comprising over 23,000 high-resolution facial images across 5,000 identities.
arXiv Detail & Related papers (2024-10-05T13:46:56Z)
- Effective Adapter for Face Recognition in the Wild [72.75516495170199]
We tackle the challenge of face recognition in the wild, where images often suffer from low quality and real-world distortions.
Traditional approaches, whether training models directly on degraded images or on counterparts enhanced by face restoration techniques, have proven ineffective.
We propose an effective adapter for augmenting existing face recognition models trained on high-quality facial datasets.
arXiv Detail & Related papers (2023-12-04T08:55:46Z)
- RestoreFormer++: Towards Real-World Blind Face Restoration from Undegraded Key-Value Pairs [63.991802204929485]
Blind face restoration aims at recovering high-quality face images from those with unknown degradations.
Current algorithms mainly introduce priors to complement high-quality details and achieve impressive progress.
We propose RestoreFormer++, which introduces fully-spatial attention mechanisms to model the contextual information and the interplay with the priors.
We show that RestoreFormer++ outperforms state-of-the-art algorithms on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-08-14T16:04:53Z)
- Learning Dual Memory Dictionaries for Blind Face Restoration [75.66195723349512]
Recent works mainly treat the two aspects, i.e., generic and specific restoration, separately.
This paper suggests a DMDNet by explicitly memorizing the generic and specific features through dual dictionaries.
A new high-quality dataset, termed CelebRef-HQ, is constructed to promote the exploration of specific face restoration in the high-resolution space.
arXiv Detail & Related papers (2022-10-15T01:55:41Z)
- Deep Face Super-Resolution with Iterative Collaboration between Attentive Recovery and Landmark Estimation [92.86123832948809]
We propose a deep face super-resolution (FSR) method with iterative collaboration between two recurrent networks.
In each recurrent step, the recovery branch utilizes the prior knowledge of landmarks to yield higher-quality images.
A new attentive fusion module is designed to strengthen the guidance of landmark maps.
arXiv Detail & Related papers (2020-03-29T16:04:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.