Analysis and Benchmarking of Extending Blind Face Image Restoration to Videos
- URL: http://arxiv.org/abs/2410.11828v1
- Date: Tue, 15 Oct 2024 17:53:25 GMT
- Title: Analysis and Benchmarking of Extending Blind Face Image Restoration to Videos
- Authors: Zhouxia Wang, Jiawei Zhang, Xintao Wang, Tianshui Chen, Ying Shan, Wenping Wang, Ping Luo
- Abstract summary: We first introduce a Real-world Low-Quality Face Video benchmark (RFV-LQ) to evaluate leading image-based face restoration algorithms.
We then conduct a thorough, systematic analysis of the benefits and challenges of extending blind face image restoration algorithms to degraded face videos.
Our analysis identifies several key issues, primarily categorized into two aspects: significant jitters in facial components and noise-shape flickering between frames.
- Score: 99.42805906884499
- License:
- Abstract: Recent progress in blind face restoration has produced high-quality results for static images. However, efforts to extend these advances to video have been limited, partly because of the absence of benchmarks that allow a comprehensive and fair comparison. In this work, we first present a fair evaluation benchmark: we introduce a Real-world Low-Quality Face Video benchmark (RFV-LQ), evaluate several leading image-based face restoration algorithms on it, and conduct a thorough, systematic analysis of the benefits and challenges of extending blind face image restoration algorithms to degraded face videos. Our analysis identifies several key issues, primarily in two categories: significant jitters in facial components and noise-shape flickering between frames. To address these issues, we propose a Temporal Consistency Network (TCN), combined with alignment smoothing, to reduce jitters and flickers in restored videos. TCN is a flexible component that can be seamlessly plugged into the most advanced face image restoration algorithms, preserving the quality of the image-based restoration as closely as possible. Extensive experiments evaluate the effectiveness and efficiency of the proposed TCN and alignment smoothing operation. Project page: https://wzhouxiff.github.io/projects/FIR2FVR/FIR2FVR.
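To make the two failure modes concrete, below is a minimal, hedged sketch (not the paper's evaluation protocol) that quantifies jitter as the frame-to-frame displacement of facial landmarks predicted on a restored video by any off-the-shelf face-alignment model; the array shapes and the `landmark_jitter` helper are assumptions for illustration only.

```python
# Illustrative only: a simple proxy for the "jitter" issue, measured as the
# frame-to-frame displacement of facial landmarks in a restored video.
import numpy as np

def landmark_jitter(landmarks: np.ndarray) -> dict:
    """Jitter statistics for a landmark trajectory of shape (T, N, 2),
    i.e. T frames, N landmark points, (x, y) coordinates."""
    # Per-point displacement between consecutive frames -> shape (T-1, N).
    disp = np.linalg.norm(np.diff(landmarks, axis=0), axis=-1)
    return {
        "mean_px": float(disp.mean()),
        "std_px": float(disp.std()),
        "max_px": float(disp.max()),
    }

# Toy usage: a static face with small random landmark noise added per frame.
rng = np.random.default_rng(0)
base = rng.uniform(0, 256, size=(1, 68, 2))
print(landmark_jitter(base + rng.normal(scale=1.5, size=(30, 68, 2))))
```

The recipe the abstract describes (a per-frame image restorer plus a plug-in temporal component and alignment smoothing) could be wired up roughly as follows. This is an illustrative PyTorch sketch under stated assumptions, not the authors' implementation: the fusion layers, the moving-average smoothing of alignment parameters, and the frame-by-frame loop are placeholders for the actual TCN and alignment smoothing operation.

```python
# Hedged sketch: per-frame restoration followed by a small plug-in temporal
# module that fuses each restored frame with the previous stabilized output.
# Layer sizes, fusion design, and the smoothing below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalConsistencyModule(nn.Module):
    """Predicts a residual correction from (current, previous) restored frames."""
    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, current: torch.Tensor, previous: torch.Tensor) -> torch.Tensor:
        # Residual fusion keeps the per-frame restoration quality largely intact.
        return current + self.fuse(torch.cat([current, previous], dim=1))

def smooth_alignment_params(params: torch.Tensor, window: int = 5) -> torch.Tensor:
    """Moving-average smoothing of per-frame face-alignment parameters
    (e.g. similarity-transform coefficients), shape (T, D) -> (T, D)."""
    kernel = torch.ones(1, 1, window) / window
    x = params.T.unsqueeze(1)                                  # (D, 1, T)
    x = F.pad(x, (window // 2, window - 1 - window // 2), mode="replicate")
    return F.conv1d(x, kernel).squeeze(1).T                    # (T, D)

def restore_video(frames, image_restorer, temporal_module):
    """frames: iterable of (1, 3, H, W) tensors; image_restorer: any
    image-based blind face restoration model, treated as a black box."""
    outputs, previous = [], None
    for frame in frames:
        restored = image_restorer(frame)
        if previous is not None:
            restored = temporal_module(restored, previous)
        outputs.append(restored)
        previous = restored.detach()
    return outputs
```

In such a setup, the smoothed alignment parameters would be used to produce temporally stable face crops before per-frame restoration, while the temporal module suppresses residual flicker in the restored frames.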
Related papers
- Towards Real-world Video Face Restoration: A New Benchmark [33.01372704755186]
We introduce new real-world datasets named FOS with a taxonomy of "Full, Occluded, and Side" faces.
FOS datasets cover more diverse degradations and involve face samples from more complex scenarios.
We benchmark both state-of-the-art BFR methods and video super-resolution (VSR) methods to comprehensively study current approaches.
arXiv Detail & Related papers (2024-04-30T12:37:01Z)
- Survey on Deep Face Restoration: From Non-blind to Blind and Beyond [79.1398990834247]
Face restoration (FR) is a specialized field within image restoration that aims to recover low-quality (LQ) face images into high-quality (HQ) face images.
Recent advances in deep learning technology have led to significant progress in FR methods.
arXiv Detail & Related papers (2023-09-27T08:39:03Z)
- A Survey of Deep Face Restoration: Denoise, Super-Resolution, Deblur, Artifact Removal [177.21001709272144]
Face Restoration (FR) aims to restore High-Quality (HQ) faces from Low-Quality (LQ) input images.
This paper comprehensively surveys recent advances in deep learning techniques for face restoration.
arXiv Detail & Related papers (2022-11-05T07:08:15Z)
- Multi-Prior Learning via Neural Architecture Search for Blind Face Restoration [61.27907052910136]
Blind Face Restoration (BFR) aims to recover high-quality face images from low-quality ones.
Current methods still suffer from two major difficulties: 1) how to derive a powerful network architecture without extensive hand tuning; 2) how to capture complementary information from multiple facial priors in one network to improve restoration performance.
We propose a Face Restoration Searching Network (FRSNet) to adaptively search the suitable feature extraction architecture within our specified search space.
arXiv Detail & Related papers (2022-06-28T12:29:53Z)
- Deep Tiny Network for Recognition-Oriented Face Image Quality Assessment [26.792481400792376]
In many face recognition (FR) scenarios, face images are acquired from a video sequence and exhibit large intra-variations.
We present an efficient non-reference image quality assessment for FR that directly links image quality assessment (IQA) and FR.
Based on the proposed quality measurement, we propose a deep Tiny Face Quality network (tinyFQnet) to learn a quality prediction function from data.
arXiv Detail & Related papers (2021-06-09T07:20:54Z)
- Network Architecture Search for Face Enhancement [82.25775020564654]
We present a multi-task face restoration network, called Network Architecture Search for Face Enhancement (NASFE).
NASFE can enhance poor-quality face images containing a single degradation (i.e., noise or blur) or multiple degradations (noise + blur + low-light).
arXiv Detail & Related papers (2021-05-13T19:46:05Z)
- iSeeBetter: Spatio-temporal video super-resolution using recurrent generative back-projection networks [0.0]
We present iSeeBetter, a novel GAN-based spatio-temporal approach to video super-resolution (VSR).
iSeeBetter extracts spatial and temporal information from the current and neighboring frames using the concept of recurrent back-projection networks as its generator.
Our results demonstrate that iSeeBetter offers superior VSR fidelity and surpasses state-of-the-art performance.
arXiv Detail & Related papers (2020-06-13T01:36:30Z)
- Deep Face Super-Resolution with Iterative Collaboration between Attentive Recovery and Landmark Estimation [92.86123832948809]
We propose a deep face super-resolution (FSR) method with iterative collaboration between two recurrent networks.
In each recurrent step, the recovery branch utilizes the prior knowledge of landmarks to yield higher-quality images.
A new attentive fusion module is designed to strengthen the guidance of landmark maps.
arXiv Detail & Related papers (2020-03-29T16:04:48Z)