Dual Associated Encoder for Face Restoration
- URL: http://arxiv.org/abs/2308.07314v2
- Date: Sun, 21 Jan 2024 04:07:12 GMT
- Title: Dual Associated Encoder for Face Restoration
- Authors: Yu-Ju Tsai, Yu-Lun Liu, Lu Qi, Kelvin C.K. Chan, Ming-Hsuan Yang
- Abstract summary: We propose a novel dual-branch framework named DAEFR to restore facial details from low-quality (LQ) images.
Our method introduces an auxiliary LQ branch that extracts crucial information from the LQ inputs.
We evaluate the effectiveness of DAEFR on both synthetic and real-world datasets.
- Score: 68.49568459672076
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Restoring facial details from low-quality (LQ) images has remained a
challenging problem due to its ill-posedness induced by various degradations in
the wild. The existing codebook prior mitigates the ill-posedness by leveraging
an autoencoder and learned codebook of high-quality (HQ) features, achieving
remarkable quality. However, existing approaches in this paradigm frequently
depend on a single encoder pre-trained on HQ data for restoring HQ images,
disregarding the domain gap between LQ and HQ images. As a result, the encoding
of LQ inputs may be insufficient, resulting in suboptimal performance. To
tackle this problem, we propose a novel dual-branch framework named DAEFR. Our
method introduces an auxiliary LQ branch that extracts crucial information from
the LQ inputs. Additionally, we incorporate association training to promote
effective synergy between the two branches, enhancing code prediction and
output quality. We evaluate the effectiveness of DAEFR on both synthetic and
real-world datasets, demonstrating its superior performance in restoring facial
details. Project page: https://liagm.github.io/DAEFR/
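For a concrete picture of the pipeline the abstract describes, the following is a minimal sketch, not the authors' implementation: it assumes simple convolutional encoders and decoder, a nearest-neighbour code lookup, and a plain feature average standing in for DAEFR's association step. All module names, shapes, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch (not the DAEFR code) of a dual-branch codebook-prior
# restorer: two encoders read the LQ face, their features are fused, each
# fused feature is replaced by its nearest entry in a learned HQ codebook,
# and a decoder reconstructs the face from the quantized features.

import torch
import torch.nn as nn


class DualBranchCodebookRestorer(nn.Module):
    def __init__(self, feat_dim=256, num_codes=1024):
        super().__init__()
        # HQ branch: encoder assumed pre-trained on high-quality faces.
        self.hq_encoder = nn.Sequential(nn.Conv2d(3, feat_dim, 4, stride=4), nn.ReLU())
        # Auxiliary LQ branch: encoder assumed trained on degraded faces.
        self.lq_encoder = nn.Sequential(nn.Conv2d(3, feat_dim, 4, stride=4), nn.ReLU())
        # Learned codebook of discrete HQ feature entries.
        self.codebook = nn.Embedding(num_codes, feat_dim)
        # Decoder mapping quantized features back to an image.
        self.decoder = nn.Sequential(nn.ConvTranspose2d(feat_dim, 3, 4, stride=4), nn.Tanh())

    def forward(self, lq_image):
        # Encode the LQ input with both branches and fuse them; the simple
        # average below is only a stand-in for the association step described
        # in the abstract.
        feats = 0.5 * (self.hq_encoder(lq_image) + self.lq_encoder(lq_image))
        b, c, h, w = feats.shape
        flat = feats.permute(0, 2, 3, 1).reshape(-1, c)      # (B*H*W, C)
        # Code prediction by nearest-neighbour lookup in the HQ codebook.
        dists = torch.cdist(flat, self.codebook.weight)      # (B*H*W, K)
        indices = dists.argmin(dim=1)
        quant = self.codebook(indices).reshape(b, h, w, c).permute(0, 3, 1, 2)
        return self.decoder(quant)


# Example usage with a random 512x512 stand-in for an LQ face.
restored = DualBranchCodebookRestorer()(torch.rand(1, 3, 512, 512))
```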
Related papers
- CLR-Face: Conditional Latent Refinement for Blind Face Restoration Using Score-Based Diffusion Models [57.9771859175664]
Recent generative-prior-based methods have shown promising blind face restoration performance.
Generating fine-grained facial details faithful to inputs remains a challenging problem.
We introduce a diffusion-based prior inside a VQGAN architecture that focuses on learning the distribution over uncorrupted latent embeddings.
arXiv Detail & Related papers (2024-02-08T23:51:49Z)
- VQCNIR: Clearer Night Image Restoration with Vector-Quantized Codebook [16.20461368096512]
Night photography often struggles with challenges like low light and blurring, stemming from dark environments and prolonged exposures.
We believe in the strength of data-driven high-quality priors and strive to offer a reliable and consistent prior, circumventing the restrictions of manual priors.
We propose Clearer Night Image Restoration with Vector-Quantized Codebook (VQCNIR) to achieve remarkable and consistent restoration outcomes on real-world and synthetic benchmarks.
arXiv Detail & Related papers (2023-12-14T02:16:27Z)
- HQG-Net: Unpaired Medical Image Enhancement with High-Quality Guidance [45.84780456554191]
Unpaired Medical Image Enhancement (UMIE) aims to transform a low-quality (LQ) medical image into a high-quality (HQ) one without relying on paired images for training.
We propose a novel UMIE approach that avoids the above limitation of existing methods by directly encoding HQ cues into the LQ enhancement process.
We train the enhancement network adversarially with a discriminator to ensure the generated HQ image falls into the HQ domain.
arXiv Detail & Related papers (2023-07-15T15:26:25Z)
- Collaborative Auto-encoding for Blind Image Quality Assessment [17.081262827258943]
Blind image quality assessment (BIQA) is a challenging problem with important real-world applications.
Recent efforts attempting to exploit powerful representations by deep neural networks (DNN) are hindered by the lack of subjectively annotated data.
This paper presents a novel BIQA method which overcomes this fundamental obstacle.
arXiv Detail & Related papers (2023-05-24T03:45:03Z)
- Towards Accurate Image Coding: Improved Autoregressive Image Generation with Dynamic Vector Quantization [73.52943587514386]
Existing vector quantization (VQ) based autoregressive models follow a two-stage generation paradigm.
We propose a novel two-stage framework: (1) Dynamic-Quantization VAE (DQ-VAE), which encodes image regions into variable-length codes based on their information densities for accurate representation.
arXiv Detail & Related papers (2023-05-19T14:56:05Z)
- RIDCP: Revitalizing Real Image Dehazing via High-Quality Codebook Priors [14.432465539590481]
Existing dehazing approaches struggle to process real-world hazy images owing to the lack of paired real data and robust priors.
We present a new paradigm for real image dehazing from the perspective of synthesizing more realistic hazy data.
arXiv Detail & Related papers (2023-04-08T12:12:24Z)
- Towards Robust Blind Face Restoration with Codebook Lookup Transformer [94.48731935629066]
Blind face restoration is a highly ill-posed problem that often requires auxiliary guidance.
We show that a learned discrete codebook prior in a small proxy space casts blind face restoration as a code prediction task.
We propose a Transformer-based prediction network, named CodeFormer, to model the global composition and context of low-quality faces (a minimal sketch of this code-prediction formulation appears after this list).
arXiv Detail & Related papers (2022-06-22T17:58:01Z)
- VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder [83.63843671885716]
We propose a VQ-based face restoration method -- VQFR.
VQFR takes advantage of high-quality low-level feature banks extracted from high-quality faces.
To further fuse low-level features from the inputs while not "contaminating" the realistic details generated from the VQ codebook, we propose a parallel decoder.
arXiv Detail & Related papers (2022-05-13T17:54:40Z)
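Several of the related papers above, CodeFormer in particular, cast restoration as predicting discrete codebook indices from LQ features rather than regressing pixels directly. The sketch below illustrates that formulation only; the layer sizes, the toy Transformer classifier, and the training target are assumptions, not the published implementation.

```python
# Illustrative sketch of code prediction as classification: a Transformer reads
# LQ features as a token sequence and classifies each position into one of K
# codebook entries. All shapes and layer choices are assumptions.

import torch
import torch.nn as nn


class CodePredictor(nn.Module):
    def __init__(self, feat_dim=256, num_codes=1024, num_layers=4, num_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=num_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        # One logit per codebook entry at every spatial token.
        self.to_logits = nn.Linear(feat_dim, num_codes)

    def forward(self, lq_tokens):
        # lq_tokens: (B, H*W, C) features from an LQ encoder (assumed given).
        return self.to_logits(self.transformer(lq_tokens))   # (B, H*W, K)


# Assumed training signal: cross-entropy against the code indices obtained by
# nearest-neighbour matching of the corresponding HQ features to the codebook.
predictor = CodePredictor()
lq_tokens = torch.rand(2, 16 * 16, 256)
gt_indices = torch.randint(0, 1024, (2, 16 * 16))
logits = predictor(lq_tokens)
loss = nn.functional.cross_entropy(logits.reshape(-1, 1024), gt_indices.reshape(-1))
```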
This list is automatically generated from the titles and abstracts of the papers in this site.