Towards Real-World Blind Face Restoration with Generative Diffusion Prior
- URL: http://arxiv.org/abs/2312.15736v2
- Date: Mon, 18 Mar 2024 12:23:48 GMT
- Title: Towards Real-World Blind Face Restoration with Generative Diffusion Prior
- Authors: Xiaoxu Chen, Jingfan Tan, Tao Wang, Kaihao Zhang, Wenhan Luo, Xiaochun Cao
- Abstract summary: Blind face restoration is an important task in computer vision and has gained significant attention due to its wide-ranging applications.
We propose BFRffusion, which is designed to effectively extract features from low-quality face images.
We also build a privacy-preserving face dataset called PFHQ with balanced attributes such as race, gender, and age.
- Score: 69.84480964328465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blind face restoration is an important task in computer vision and has gained significant attention due to its wide-ranging applications. Previous works mainly exploit facial priors to restore face images and have demonstrated high-quality results. However, generating faithful facial details remains a challenging problem due to the limited prior knowledge obtained from finite data. In this work, we delve into the potential of leveraging the pretrained Stable Diffusion for blind face restoration. We propose BFRffusion, which is designed to effectively extract features from low-quality face images and can restore realistic and faithful facial details with the generative prior of the pretrained Stable Diffusion. In addition, we build a privacy-preserving face dataset called PFHQ with balanced attributes such as race, gender, and age. This dataset can serve as a viable alternative for training blind face restoration networks, effectively addressing the privacy and bias concerns usually associated with real face datasets. Through an extensive series of experiments, we demonstrate that our BFRffusion achieves state-of-the-art performance on both synthetic and real-world public testing datasets for blind face restoration, and that our PFHQ dataset is a viable resource for training blind face restoration networks. The code, pretrained models, and dataset are released at https://github.com/chenxx89/BFRffusion.
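The abstract describes the general recipe: extract conditioning features from the low-quality face, then run a pretrained diffusion model's reverse process guided by those features. The toy sketch below illustrates only that high-level idea with stand-in components; `extract_features` and `toy_denoiser` are hypothetical placeholders, not BFRffusion's actual modules, and a real system would use Stable Diffusion's noise predictor instead.

```python
import numpy as np

def extract_features(lq_image: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for BFRffusion's feature-extraction module:
    # here we keep only global per-channel colour statistics.
    return lq_image.mean(axis=(0, 1), keepdims=True)

def toy_denoiser(x_t: np.ndarray, t: int, cond: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for the pretrained diffusion U-Net's noise
    # prediction; it simply pulls the sample toward the conditioning stats.
    return x_t - cond

def restore(lq_image: np.ndarray, steps: int = 50, seed: int = 0) -> np.ndarray:
    """Toy DDPM-style reverse process conditioned on low-quality features."""
    rng = np.random.default_rng(seed)
    cond = extract_features(lq_image)
    x = rng.standard_normal(lq_image.shape)  # start from pure Gaussian noise
    betas = np.linspace(1e-4, 0.02, steps)   # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    for t in reversed(range(steps)):
        eps = toy_denoiser(x, t, cond)
        # DDPM posterior mean update using the predicted noise
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add sampling noise on all but the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

restored = restore(np.zeros((8, 8, 3)))
print(restored.shape)
```

This is a sketch under stated assumptions: the actual paper conditions a full Stable Diffusion prior on learned multi-scale features, whereas here the "prior" is a single analytic update rule chosen only to keep the loop self-contained.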
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- AuthFace: Towards Authentic Blind Face Restoration with Face-oriented Generative Diffusion Prior [13.27748226506837]
Blind face restoration (BFR) is a fundamental and challenging problem in computer vision.
Recent research endeavors rely on facial image priors from the powerful pretrained text-to-image (T2I) diffusion models.
We propose AuthFace, which achieves highly authentic face restoration results by exploring a face-oriented generative diffusion prior.
arXiv Detail & Related papers (2024-10-13T14:56:13Z)
- DiffusionFace: Towards a Comprehensive Dataset for Diffusion-Based Face Forgery Analysis [71.40724659748787]
DiffusionFace is the first diffusion-based face forgery dataset.
It covers various forgery categories, including unconditional and text-guided facial image generation, Img2Img, Inpaint, and diffusion-based facial exchange algorithms.
It provides essential metadata and a real-world internet-sourced forgery facial image dataset for evaluation.
arXiv Detail & Related papers (2024-03-27T11:32:44Z)
- Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pretrained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images whilst, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z)
- A Survey of Deep Face Restoration: Denoise, Super-Resolution, Deblur, Artifact Removal [177.21001709272144]
Face Restoration (FR) aims to restore High-Quality (HQ) faces from Low-Quality (LQ) input images.
This paper comprehensively surveys recent advances in deep learning techniques for face restoration.
arXiv Detail & Related papers (2022-11-05T07:08:15Z)
- Multi-Prior Learning via Neural Architecture Search for Blind Face Restoration [61.27907052910136]
Blind Face Restoration (BFR) aims to recover high-quality face images from low-quality ones.
Current methods still face two major difficulties: 1) how to derive a powerful network architecture without extensive hand-tuning; 2) how to capture complementary information from multiple facial priors in one network to improve restoration performance.
We propose a Face Restoration Searching Network (FRSNet) to adaptively search the suitable feature extraction architecture within our specified search space.
arXiv Detail & Related papers (2022-06-28T12:29:53Z)
- Enhancing Quality of Pose-varied Face Restoration with Local Weak Feature Sensing and GAN Prior [29.17397958948725]
We propose a well-designed blind face restoration network with generative facial prior.
Our model performs better than prior art on face restoration and face super-resolution tasks.
arXiv Detail & Related papers (2022-05-28T09:23:48Z)
- Towards Real-World Blind Face Restoration with Generative Facial Prior [19.080349401153097]
Blind face restoration usually relies on facial priors, such as facial geometry prior or reference prior, to restore realistic and faithful details.
We propose GFP-GAN that leverages rich and diverse priors encapsulated in a pretrained face GAN for blind face restoration.
Our method achieves performance superior to prior art on both synthetic and real-world datasets.
arXiv Detail & Related papers (2021-01-11T17:54:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.