ENTED: Enhanced Neural Texture Extraction and Distribution for
Reference-based Blind Face Restoration
- URL: http://arxiv.org/abs/2401.06978v1
- Date: Sat, 13 Jan 2024 04:54:59 GMT
- Title: ENTED: Enhanced Neural Texture Extraction and Distribution for
Reference-based Blind Face Restoration
- Authors: Yuen-Fui Lau, Tianjia Zhang, Zhefan Rao, Qifeng Chen
- Abstract summary: We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
- Score: 51.205673783866146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present ENTED, a new framework for blind face restoration that aims to
restore high-quality and realistic portrait images. Our method involves
repairing a single degraded input image using a high-quality reference image.
We utilize a texture extraction and distribution framework to transfer
high-quality texture features between the degraded input and reference image.
However, the StyleGAN-like architecture in our framework requires high-quality
latent codes to generate realistic images. The latent code extracted from the
degraded input image often contains corrupted features, making it difficult to
align the semantic information from the input with the high-quality textures
from the reference. To overcome this challenge, we employ two special
techniques. The first technique, inspired by vector quantization, replaces
corrupted semantic features with high-quality code words. The second technique
generates style codes that carry photorealistic texture information from a more
informative latent space developed using the high-quality features in the
reference image's manifold. Extensive experiments conducted on synthetic and
real-world datasets demonstrate that our method produces results with more
realistic contextual details and outperforms state-of-the-art methods. A
thorough ablation study confirms the effectiveness of each proposed module.
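The two techniques named in the abstract lend themselves to a brief illustration. Below is a minimal PyTorch sketch under stated assumptions: the module names, codebook size, and straight-through trick are hypothetical, since the abstract does not specify them; the sketch only shows the general shape of vector-quantized feature replacement and of mapping reference features to style codes, not ENTED's actual implementation.

```python
# Hedged sketch of the two techniques described in the abstract (PyTorch assumed).
# All names, shapes, and hyperparameters are illustrative, not ENTED's code.
import torch
import torch.nn as nn

class CodewordReplacement(nn.Module):
    """Technique 1 (sketch): swap corrupted semantic features for the
    nearest entries of a codebook of high-quality code words."""

    def __init__(self, num_codewords: int = 1024, dim: int = 256):
        super().__init__()
        self.codebook = nn.Embedding(num_codewords, dim)  # learned code words

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) semantic features from the degraded input.
        b, c, h, w = feats.shape
        flat = feats.permute(0, 2, 3, 1).reshape(-1, c)   # (B*H*W, C)
        dists = torch.cdist(flat, self.codebook.weight)   # distance to every code word
        idx = dists.argmin(dim=1)                         # nearest code word per position
        quant = self.codebook(idx).reshape(b, h, w, c).permute(0, 3, 1, 2)
        # Straight-through estimator: quantized values forward, identity gradients back.
        return feats + (quant - feats).detach()

class StyleFromReference(nn.Module):
    """Technique 2 (sketch): map high-quality reference features to a
    style code that conditions a StyleGAN-like decoder."""

    def __init__(self, dim: int = 256, style_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(dim, style_dim), nn.LeakyReLU(0.2),
            nn.Linear(style_dim, style_dim),
        )

    def forward(self, ref_feats: torch.Tensor) -> torch.Tensor:
        # ref_feats: (B, C, H, W) features extracted from the reference image.
        return self.mlp(ref_feats)  # (B, style_dim) style code
```

The straight-through estimator is the standard way to pass gradients through the non-differentiable nearest-codeword lookup; whether ENTED uses this exact form, and how its codebook and mapping network are trained, is not stated in the abstract.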
Related papers
- Toward Scalable Image Feature Compression: A Content-Adaptive and Diffusion-Based Approach [44.03561901593423]
This paper introduces a content-adaptive diffusion model for scalable image compression.
The proposed method encodes fine textures through a diffusion process, enhancing perceptual quality.
Experiments demonstrate the effectiveness of the proposed framework in both image reconstruction and downstream machine vision tasks.
arXiv Detail & Related papers (2024-10-08T15:48:34Z)
- DaLPSR: Leverage Degradation-Aligned Language Prompt for Real-World Image Super-Resolution [19.33582308829547]
This paper proposes to leverage degradation-aligned language prompt for accurate, fine-grained, and high-fidelity image restoration.
The proposed method achieves a new state-of-the-art perceptual quality level.
arXiv Detail & Related papers (2024-06-24T09:30:36Z)
- Multi-Modality Deep Network for JPEG Artifacts Reduction [33.02405073842042]
We propose a multimodal fusion learning method for text-guided JPEG artifacts reduction.
Our method can obtain better deblocking results compared to the state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T11:54:02Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Semantic Image Translation for Repairing the Texture Defects of Building Models [16.764719266178655]
We introduce a novel approach for synthesizing facade texture images that authentically reflect the architectural style from a structured label map.
Our proposed method is also capable of synthesizing texture images with specific styles for facades that lack pre-existing textures.
arXiv Detail & Related papers (2023-03-30T14:38:53Z)
- Unsupervised Structure-Consistent Image-to-Image Translation [6.282068591820945]
The Swapping Autoencoder achieved state-of-the-art performance in deep image manipulation and image-to-image translation.
We improve this work by introducing a simple yet effective auxiliary module based on gradient reversal layers.
The auxiliary module's loss forces the generator to learn to reconstruct an image with an all-zero texture code.
arXiv Detail & Related papers (2022-08-24T13:47:15Z)
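The gradient reversal layer on which the auxiliary module in the entry above is based is a generic building block; a minimal PyTorch sketch follows. It illustrates only the mechanism and is not the paper's code.

```python
# Minimal gradient reversal layer (generic sketch, PyTorch assumed).
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in backward."""

    @staticmethod
    def forward(ctx, x: torch.Tensor, lambd: float = 1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        # Reverse the gradient so the upstream network is pushed *against*
        # the auxiliary objective.
        return -ctx.lambd * grad_output, None

def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
    return GradReverse.apply(x, lambd)
```

Passing features through grad_reverse before an auxiliary head means that minimizing the auxiliary loss pushes the generator in the opposite direction, which is how such a loss can discourage texture information from leaking into the structure code; the all-zero texture code objective itself is the paper's own design and is not reproduced here.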
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
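SAWN's exact formulation is not given in this one-line summary; what follows is a generic, hypothetical sketch of the operation it names, warping spatial modulation parameters with a learned flow field via grid sampling.

```python
# Generic flow-field warping of modulation parameter maps (sketch, PyTorch assumed).
import torch
import torch.nn.functional as F

def warp_modulation(params: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp modulation parameters with a learned flow field.

    params: (B, C, H, W) modulation parameters (e.g., scale or bias maps).
    flow:   (B, 2, H, W) learned (dx, dy) offsets in pixels.
    """
    b, _, h, w = params.shape
    # Base sampling grid in normalized [-1, 1] coordinates, x before y.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=params.device),
        torch.linspace(-1, 1, w, device=params.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Convert pixel offsets to normalized offsets and displace the grid.
    offset = flow.permute(0, 2, 3, 1) / torch.tensor(
        [max(w - 1, 1) / 2.0, max(h - 1, 1) / 2.0], device=params.device
    )
    return F.grid_sample(params, base + offset, align_corners=True)
```

A normalization block would then use the warped maps to modulate its activations; how SAWN combines this with the self-training part-replacement strategy mentioned above is beyond what the summary states.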
- Bridging Composite and Real: Towards End-to-end Deep Image Matting [88.79857806542006]
We study the roles of semantics and details for image matting.
We propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders.
Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-30T10:57:13Z)
- Region-adaptive Texture Enhancement for Detailed Person Image Synthesis [86.69934638569815]
RATE-Net is a novel framework for synthesizing person images with sharp texture details.
The proposed framework leverages an additional texture enhancing module to extract appearance information from the source image.
Experiments conducted on the DeepFashion benchmark dataset demonstrate the superiority of our framework over existing networks.
arXiv Detail & Related papers (2020-05-26T02:33:21Z)
- Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement [78.58603635621591]
Training an unpaired synthetic-to-real translation network in image space is severely under-constrained.
We propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image.
Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets.
arXiv Detail & Related papers (2020-03-27T21:45:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.