Reference-Guided Identity Preserving Face Restoration
- URL: http://arxiv.org/abs/2505.21905v1
- Date: Wed, 28 May 2025 02:46:34 GMT
- Title: Reference-Guided Identity Preserving Face Restoration
- Authors: Mo Zhou, Keren Ye, Viraj Shah, Kangfu Mei, Mauricio Delbracio, Peyman Milanfar, Vishal M. Patel, Hossein Talebi
- Abstract summary: Preserving face identity is a critical yet persistent challenge in diffusion-based image restoration. This paper introduces a novel approach that maximizes reference face utility for improved face restoration and identity preservation.
- Score: 54.10295747851343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Preserving face identity is a critical yet persistent challenge in diffusion-based image restoration. While reference faces offer a path forward, existing reference-based methods often fail to fully exploit their potential. This paper introduces a novel approach that maximizes reference face utility for improved face restoration and identity preservation. Our method makes three key contributions: 1) Composite Context, a comprehensive representation that fuses multi-level (high- and low-level) information from the reference face, offering richer guidance than prior singular representations. 2) Hard Example Identity Loss, a novel loss function that leverages the reference face to address the identity learning inefficiencies found in the existing identity loss. 3) A training-free method to adapt the model to multi-reference inputs during inference. The proposed method demonstrably restores high-quality faces and achieves state-of-the-art identity preserving restoration on benchmarks such as FFHQ-Ref and CelebA-Ref-Test, consistently outperforming previous work.
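To make the identity-preservation objective concrete, below is a minimal PyTorch sketch of a plain reference-guided identity loss. It is not the paper's Hard Example Identity Loss or Composite Context: `StubFaceEmbedder` is a stand-in for a frozen, pretrained face-recognition network (e.g., ArcFace), and the layer sizes and cosine-distance formulation are illustrative assumptions only.

```python
# Minimal sketch (not the authors' code) of a reference-guided identity loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StubFaceEmbedder(nn.Module):
    """Placeholder for a frozen, pretrained identity embedding network."""
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-norm embedding so cosine similarity is a simple dot product.
        return F.normalize(self.net(x), dim=-1)

def reference_identity_loss(restored, reference, embedder):
    """Penalize identity drift between the restored face and a reference face.

    This is a plain cosine-distance identity loss; the paper's Hard Example
    Identity Loss additionally targets identity-learning inefficiencies,
    which is omitted here.
    """
    e_rest = embedder(restored)
    e_ref = embedder(reference)
    return (1.0 - (e_rest * e_ref).sum(dim=-1)).mean()

if __name__ == "__main__":
    embedder = StubFaceEmbedder().eval()
    for p in embedder.parameters():
        p.requires_grad_(False)          # identity network stays frozen
    restored = torch.rand(2, 3, 128, 128, requires_grad=True)
    reference = torch.rand(2, 3, 128, 128)
    loss = reference_identity_loss(restored, reference, embedder)
    loss.backward()                      # gradients flow back to the restored image
    print(f"identity loss: {loss.item():.4f}")
```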
Related papers
- RefSTAR: Blind Facial Image Restoration with Reference Selection, Transfer, and Reconstruction [75.00967931348409]
We present a novel blind facial image restoration method that considers reference selection, transfer, and reconstruction. Experiments on various backbone models demonstrate superior performance, showing better identity preservation ability and reference feature transfer quality.
arXiv Detail & Related papers (2025-07-14T16:50:29Z)
- HonestFace: Towards Honest Face Restoration with One-Step Diffusion Model [36.36629793211904]
HonestFace is a novel approach designed to restore faces with a strong emphasis on honesty. A masked face alignment method is presented to enhance fine-grained details and textural authenticity. Our approach surpasses existing state-of-the-art methods, achieving superior performance in both visual quality and quantitative assessments.
arXiv Detail & Related papers (2025-05-24T02:19:20Z)
- DiffusionReward: Enhancing Blind Face Restoration through Reward Feedback Learning [40.641049729447175]
We introduce a reward feedback learning (ReFL) framework, named DiffusionReward, to the blind face restoration task for the first time. The core of our framework is the Face Reward Model (FRM), which is trained using carefully annotated data. Experiments on synthetic and wild datasets demonstrate that our method outperforms state-of-the-art methods.
arXiv Detail & Related papers (2025-05-23T13:53:23Z)
- FaceMe: Robust Blind Face Restoration with Personal Identification [27.295878867436688]
We propose a personalized face restoration method, FaceMe, based on a diffusion model. Given a single or a few reference images, we use an identity encoder to extract identity-related features, which serve as prompts to guide the diffusion model in restoring high-quality facial images. Experimental results demonstrate that FaceMe can restore high-quality facial images while maintaining identity consistency, achieving excellent performance and robustness.
arXiv Detail & Related papers (2025-01-09T11:52:54Z)
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration. We propose OSDFace, a novel one-step diffusion model for face restoration. Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- Overcoming False Illusions in Real-World Face Restoration with Multi-Modal Guided Diffusion Model [55.46927355649013]
We introduce a novel Multi-modal Guided Real-World Face Restoration (MGFR) technique. MGFR can mitigate the generation of false facial attributes and identities. We present the Reface-HQ dataset, comprising over 21,000 high-resolution facial images across 4800 identities.
arXiv Detail & Related papers (2024-10-05T13:46:56Z)
- Learning Dual Memory Dictionaries for Blind Face Restoration [75.66195723349512]
Recent works mainly treat the two aspects, i.e., generic and specific restoration, separately.
This paper proposes DMDNet, which explicitly memorizes the generic and specific features through dual dictionaries.
A new high-quality dataset, termed CelebRef-HQ, is constructed to promote the exploration of specific face restoration in the high-resolution space.
arXiv Detail & Related papers (2022-10-15T01:55:41Z)
- RestoreFormer: High-Quality Blind Face Restoration From Undegraded Key-Value Pairs [48.33214614798882]
We propose RestoreFormer, which explores fully-spatial attentions to model contextual information.
It learns fully-spatial interactions between corrupted queries and high-quality key-value pairs.
It outperforms advanced state-of-the-art methods on one synthetic dataset and three real-world datasets.
arXiv Detail & Related papers (2022-01-17T12:21:55Z)