Learning Dual Memory Dictionaries for Blind Face Restoration
- URL: http://arxiv.org/abs/2210.08160v1
- Date: Sat, 15 Oct 2022 01:55:41 GMT
- Title: Learning Dual Memory Dictionaries for Blind Face Restoration
- Authors: Xiaoming Li, Shiguang Zhang, Shangchen Zhou, Lei Zhang, Wangmeng Zuo
- Abstract summary: Recent works mainly treat the two aspects, i.e., generic and specific restoration, separately.
This paper suggests a DMDNet by explicitly memorizing the generic and specific features through dual dictionaries.
A new high-quality dataset, termed CelebRef-HQ, is constructed to promote the exploration of specific face restoration in the high-resolution space.
- Score: 75.66195723349512
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: To improve the performance of blind face restoration, recent works mainly
treat the two aspects, i.e., generic and specific restoration, separately. In
particular, generic restoration attempts to restore results through a general
facial structure prior, but it can neither generalize to real-world degraded
observations, owing to the limited capability of direct CNN mappings in
learning blind restoration, nor exploit identity-specific details. In
contrast, specific restoration incorporates identity features from a reference
of the same identity, but the requirement of a proper reference severely
limits the application scenarios. In general, it is challenging to improve the
photo-realistic performance of blind restoration and to handle the generic and
specific restoration scenarios adaptively with a single unified model. Instead
of implicitly learning the mapping from a low-quality image to its high-quality
counterpart, this paper proposes DMDNet, which explicitly memorizes generic
and specific features through dual dictionaries. First, the generic dictionary
learns general facial priors from high-quality images of any identity,
while the specific dictionary stores the identity-belonging features for each
person individually. Second, to handle a degraded input with or without a
specific reference, a dictionary transform module is proposed to read the
relevant details from the dual dictionaries, which are subsequently fused into
the input features. Finally, multi-scale dictionaries are leveraged to benefit
coarse-to-fine restoration. Moreover, a new high-quality dataset, termed
CelebRef-HQ, is constructed to promote the exploration of specific face
restoration in the high-resolution space.
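The read-and-fuse behavior of the dual dictionaries can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the function names (`attention_read`, `dmd_restore_step`), the average fusion, and the 1-D feature vectors standing in for multi-scale spatial feature maps are all simplifying assumptions.

```python
import math

def attention_read(query, dictionary):
    # Soft read: softmax-weighted combination of dictionary entries,
    # scored by dot-product similarity with the input feature.
    scores = [sum(q * k for q, k in zip(query, entry)) for entry in dictionary]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    return [sum(w * entry[i] for w, entry in zip(weights, dictionary))
            for i in range(len(query))]

def dmd_restore_step(feat, generic_dict, specific_dicts, identity=None):
    # Always read the generic dictionary; read the specific one only when
    # a reference identity is available, then fuse both reads.
    read = attention_read(feat, generic_dict)
    if identity is not None and identity in specific_dicts:
        spec = attention_read(feat, specific_dicts[identity])
        read = [(g + s) / 2 for g, s in zip(read, spec)]  # simple average fusion
    return [f + r for f, r in zip(feat, read)]  # residual fusion into the input
```

The key design point this sketch captures is that the specific branch is optional, so the same model degrades gracefully to generic restoration when no reference of the identity exists.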
Related papers
- Overcoming False Illusions in Real-World Face Restoration with Multi-Modal Guided Diffusion Model [55.46927355649013]
We introduce MGFR, a novel Multi-modal Guided Real-World Face Restoration technique.
MGFR can mitigate the generation of false facial attributes and identities.
We present the Reface-HQ dataset, comprising over 23,000 high-resolution facial images across 5,000 identities.
arXiv Detail & Related papers (2024-10-05T13:46:56Z) - Personalized Restoration via Dual-Pivot Tuning [18.912158172904654]
We propose a simple, yet effective, method for personalized restoration, called Dual-Pivot Tuning.
Our key observation is that for optimal personalization, the generative model should be tuned around a fixed text pivot.
This approach ensures that personalization does not interfere with the restoration process, resulting in a natural appearance with high fidelity to the person's identity and the attributes of the degraded image.
arXiv Detail & Related papers (2023-12-28T18:57:49Z) - SPIRE: Semantic Prompt-Driven Image Restoration [66.26165625929747]
We develop SPIRE, a Semantic and restoration Prompt-driven Image Restoration framework.
Our approach is the first framework that supports fine-level instruction through language-based quantitative specification of the restoration strength.
Our experiments demonstrate the superior restoration performance of SPIRE compared to the state of the art.
arXiv Detail & Related papers (2023-12-18T17:02:30Z) - RestoreFormer++: Towards Real-World Blind Face Restoration from Undegraded Key-Value Pairs [63.991802204929485]
Blind face restoration aims at recovering high-quality face images from those with unknown degradations.
Current algorithms mainly introduce priors to complement high-quality details and achieve impressive progress.
We propose RestoreFormer++, which introduces fully-spatial attention mechanisms to model the contextual information and the interplay with the priors.
We show that RestoreFormer++ outperforms state-of-the-art algorithms on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-08-14T16:04:53Z) - RestoreFormer: High-Quality Blind Face Restoration From Undegraded Key-Value Pairs [48.33214614798882]
We propose RestoreFormer, which explores fully-spatial attentions to model contextual information.
It learns fully-spatial interactions between corrupted queries and high-quality key-value pairs.
It outperforms advanced state-of-the-art methods on one synthetic dataset and three real-world datasets.
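The interaction between corrupted queries and high-quality key-value pairs described above is, at its core, cross-attention. A minimal single-head sketch in plain Python follows; the function name is hypothetical and the flat lists of vectors stand in for the spatial feature maps the paper actually uses.

```python
import math

def fully_spatial_attention(queries, keys, values):
    # Each corrupted query position attends over every high-quality
    # key-value pair (single-head scaled dot-product attention).
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        # The output mixes high-quality *values*, so undegraded detail is
        # injected wherever a corrupted query matches a key.
        out.append([sum(wi / z * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```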
arXiv Detail & Related papers (2022-01-17T12:21:55Z) - Orthonormal Product Quantization Network for Scalable Face Image Retrieval [14.583846619121427]
This paper integrates product quantization with orthonormal constraints into an end-to-end deep learning framework to retrieve face images.
A novel scheme that uses predefined orthonormal vectors as codewords is proposed to enhance the quantization informativeness and reduce codewords' redundancy.
Experiments are conducted on four commonly-used face datasets under both seen- and unseen-identity retrieval settings.
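A minimal sketch of product quantization with predefined orthonormal codewords, assuming the simplest orthonormal set (the standard basis) and nearest-codeword assignment; the paper's learned deep encoder and any rotation of the codebook are omitted, and both function names are hypothetical.

```python
def make_orthonormal_codebook(dim):
    # Predefined orthonormal codewords: the standard basis vectors are
    # trivially orthonormal (unit length, mutually perpendicular), which
    # removes redundancy between codewords.
    return [[1.0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]

def product_quantize(feature, num_subspaces):
    # Split the feature into subvectors and snap each subvector to its
    # nearest orthonormal codeword (minimum squared Euclidean distance).
    sub_dim = len(feature) // num_subspaces
    codes = []
    for m in range(num_subspaces):
        sub = feature[m * sub_dim:(m + 1) * sub_dim]
        book = make_orthonormal_codebook(sub_dim)
        dists = [sum((s - c) ** 2 for s, c in zip(sub, cw)) for cw in book]
        codes.append(dists.index(min(dists)))
    return codes
```

For example, quantizing a 4-D feature into 2 subspaces yields one code index per subspace, so storage per face drops from floats to a few small integers.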
arXiv Detail & Related papers (2021-07-01T09:30:39Z) - Blind Face Restoration via Deep Multi-scale Component Dictionaries [75.02640809505277]
We propose a deep face dictionary network (termed as DFDNet) to guide the restoration process of degraded observations.
DFDNet generates deep dictionaries for perceptually significant face components from high-quality images.
Component AdaIN is leveraged to eliminate the style diversity between the input and dictionary features.
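The AdaIN step mentioned above aligns the statistics of the degraded component feature with those of the matched dictionary feature. A minimal 1-D sketch, assuming per-feature mean/std statistics rather than the per-channel spatial statistics used on real feature maps:

```python
import math

def adain(content, style, eps=1e-5):
    # Adaptive instance normalization: normalize the input component
    # feature, then rescale it with the dictionary feature's mean/std,
    # so the two share one "style" before fusion.
    n = len(content)
    c_mean = sum(content) / n
    c_std = math.sqrt(sum((x - c_mean) ** 2 for x in content) / n + eps)
    s_mean = sum(style) / n
    s_std = math.sqrt(sum((x - s_mean) ** 2 for x in style) / n + eps)
    return [s_std * (x - c_mean) / c_std + s_mean for x in content]
```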
arXiv Detail & Related papers (2020-08-02T07:02:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.