Self-Supervised Selective-Guided Diffusion Model for Old-Photo Face Restoration
- URL: http://arxiv.org/abs/2510.12114v1
- Date: Tue, 14 Oct 2025 03:34:15 GMT
- Title: Self-Supervised Selective-Guided Diffusion Model for Old-Photo Face Restoration
- Authors: Wenjie Li, Xiangyi Wang, Heng Guo, Guangwei Gao, Zhanyu Ma
- Abstract summary: Old-photo face restoration poses significant challenges due to compounded degradations such as breakage, fading, and severe blur. We propose Self-Supervised Selective-Guided Diffusion, which leverages pseudo-reference faces generated by a pre-trained diffusion model under weak guidance. SSDiff outperforms existing GAN-based and diffusion-based methods in perceptual quality, fidelity, and regional controllability.
- Score: 43.12011252251526
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Old-photo face restoration poses significant challenges due to compounded degradations such as breakage, fading, and severe blur. Existing pre-trained diffusion-guided methods either rely on explicit degradation priors or global statistical guidance, which struggle with localized artifacts or face color. We propose Self-Supervised Selective-Guided Diffusion (SSDiff), which leverages pseudo-reference faces generated by a pre-trained diffusion model under weak guidance. These pseudo-labels exhibit structurally aligned contours and natural colors, enabling region-specific restoration via staged supervision: structural guidance applied throughout the denoising process and color refinement in later steps, aligned with the coarse-to-fine nature of diffusion. By incorporating face parsing maps and scratch masks, our method selectively restores breakage regions while avoiding identity mismatch. We further construct VintageFace, a 300-image benchmark of real old face photos with varying degradation levels. SSDiff outperforms existing GAN-based and diffusion-based methods in perceptual quality, fidelity, and regional controllability. Code link: https://github.com/PRIS-CV/SSDiff.
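The staged supervision described in the abstract (structural guidance throughout denoising, color refinement only in later steps, gated by a scratch mask) can be illustrated with a toy guided-sampling loop. This is a hypothetical sketch, not the authors' released implementation: the function name, the placeholder denoiser, and the guidance weights are all illustrative assumptions.

```python
import numpy as np

def staged_guided_sampling(x_T, pseudo_ref, scratch_mask, num_steps=50,
                           color_start_frac=0.7, w_struct=0.1, w_color=0.1):
    """Toy sketch of SSDiff-style staged guidance (hypothetical).

    - Structural guidance: applied at every step, restricted by scratch_mask
      to the breakage regions, pulling them toward the pseudo-reference.
    - Color guidance: applied only in the last (1 - color_start_frac) of
      steps, matching the coarse-to-fine schedule the abstract describes.
    A real implementation would use a trained denoiser; here a simple
    shrinkage step stands in for it.
    """
    x = x_T.copy()
    for t in range(num_steps):
        # Placeholder "denoiser": shrink the current sample toward zero.
        x = 0.9 * x
        # Structural guidance throughout denoising, only in damaged regions.
        x += w_struct * scratch_mask * (pseudo_ref - x)
        # Color refinement only in the later, fine-detail steps: nudge the
        # per-channel mean toward the pseudo-reference's mean color.
        if t >= int(color_start_frac * num_steps):
            color_target = pseudo_ref.mean(axis=(0, 1), keepdims=True)
            x += w_color * (color_target - x.mean(axis=(0, 1), keepdims=True))
    return x
```

In this sketch the mask confines identity-sensitive structural edits to scratched regions, while the late-stage color term acts globally, mirroring the paper's claim that color belongs to the fine stages of the coarse-to-fine diffusion trajectory.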
Related papers
- Unlocking the Potential of Diffusion Priors in Blind Face Restoration [63.419272650578165]
In this work, we use a unified network FLIPNET that switches between two modes to resolve specific gaps. In Restoration mode, the model gradually integrates BFR-oriented features and face embeddings from LQ images to achieve authentic and faithful face restoration. In Degradation mode, the model synthesizes real-world-like degraded images based on the knowledge learned from real-world degradation datasets.
arXiv Detail & Related papers (2025-08-12T01:50:55Z)
- Harnessing Diffusion-Yielded Score Priors for Image Restoration [29.788482710572307]
Deep image restoration models aim to learn a mapping from degraded image space to natural image space. Three major classes of methods have emerged, including MSE-based, GAN-based, and diffusion-based methods. We propose a novel method, HYPIR, to address these challenges.
arXiv Detail & Related papers (2025-07-28T07:55:34Z)
- DynFaceRestore: Balancing Fidelity and Quality in Diffusion-Guided Blind Face Restoration with Dynamic Blur-Level Mapping and Guidance [29.961208234237635]
Blind Face Restoration aims to recover high-fidelity, detail-rich facial images from unknown degraded inputs. We propose DynFaceRestore, a novel blind face restoration approach that learns to map any blindly degraded input to blurry images. DynFaceRestore achieves state-of-the-art performance in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2025-07-18T10:16:08Z)
- LAFR: Efficient Diffusion-based Blind Face Restoration via Latent Codebook Alignment Adapter [52.93785843453579]
Blind face restoration from low-quality (LQ) images is a challenging task that requires high-fidelity image reconstruction and the preservation of facial identity. We propose LAFR, a novel codebook-based latent space adapter that aligns the latent distribution of LQ images with that of HQ counterparts. We show that lightweight finetuning of a diffusion prior on just 0.9% of the FFHQ dataset is sufficient to achieve results comparable to state-of-the-art methods.
arXiv Detail & Related papers (2025-05-29T14:11:16Z)
- Restoring Real-World Images with an Internal Detail Enhancement Diffusion Model [9.520471615470267]
Restoring real-world degraded images, such as old photographs or low-resolution images, presents a significant challenge. Recent data-driven approaches have struggled with achieving high-fidelity restoration and providing object-level control over colorization. We propose an internal detail-preserving diffusion model for high-fidelity restoration of real-world degraded images.
arXiv Detail & Related papers (2025-05-24T12:32:53Z)
- DiffMAC: Diffusion Manifold Hallucination Correction for High Generalization Blind Face Restoration [62.44659039265439]
We propose a Diffusion-Information-Diffusion framework to tackle blind face restoration.
DiffMAC achieves high-generalization face restoration in diverse degraded scenes and heterogeneous domains.
Results demonstrate the superiority of DiffMAC over state-of-the-art methods.
arXiv Detail & Related papers (2024-03-15T08:44:15Z)
- CLR-Face: Conditional Latent Refinement for Blind Face Restoration Using Score-Based Diffusion Models [57.9771859175664]
Recent generative-prior-based methods have shown promising blind face restoration performance.
Generating fine-grained facial details faithful to inputs remains a challenging problem.
We introduce a diffusion-based-prior inside a VQGAN architecture that focuses on learning the distribution over uncorrupted latent embeddings.
arXiv Detail & Related papers (2024-02-08T23:51:49Z)
- PGDiff: Guiding Diffusion Models for Versatile Face Restoration via Partial Guidance [65.5618804029422]
Previous works have achieved noteworthy success by limiting the solution space using explicit degradation models.
We propose PGDiff by introducing partial guidance, a fresh perspective that is more adaptable to real-world degradations.
Our method not only outperforms existing diffusion-prior-based approaches but also competes favorably with task-specific models.
arXiv Detail & Related papers (2023-09-19T17:51:33Z)
- ADIR: Adaptive Diffusion for Image Reconstruction [42.90778718695398]
Denoising diffusion models have recently achieved remarkable success in image generation, capturing rich information about natural image statistics. We introduce a conditional sampling framework that leverages the powerful priors learned by diffusion models while enforcing consistency with the available measurements. We employ LoRA-based adaptation using images that are semantically and visually similar to the degraded input, efficiently retrieved from a large and diverse dataset.
arXiv Detail & Related papers (2022-12-06T18:39:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.