Facial Attribute Capsules for Noise Face Super Resolution
- URL: http://arxiv.org/abs/2002.06518v1
- Date: Sun, 16 Feb 2020 06:22:28 GMT
- Title: Facial Attribute Capsules for Noise Face Super Resolution
- Authors: Jingwei Xin, Nannan Wang, Xinrui Jiang, Jie Li, Xinbo Gao, Zhifeng Li
- Abstract summary: Existing face super-resolution (SR) methods mainly assume the input image to be noise-free.
We propose a Facial Attribute Capsules Network (FACN) to deal with the problem of high-scale super-resolution of noisy face image.
Our method achieves superior hallucination results and outperforms the state of the art for very low-resolution (LR) noisy face image super-resolution.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing face super-resolution (SR) methods mainly assume the input image to
be noise-free. Their performance degrades drastically when applied to
real-world scenarios where the input image is always contaminated by noise. In
this paper, we propose a Facial Attribute Capsules Network (FACN) to deal with
the problem of high-scale super-resolution of noisy face images. A capsule is a
group of neurons whose activity vector models different properties of the same
entity. Inspired by the concept of capsules, we propose an integrated
representation model of facial information, which we name the Facial Attribute
Capsule (FAC). In SR processing, we first generate a group of FACs from
the input LR face, and then reconstruct the HR face from this group of FACs.
To effectively improve the robustness of FACs to noise, we generate FACs
in semantic, probabilistic, and facial-attribute manners by means of an integrated
learning strategy. Each FAC can be divided into two sub-capsules: a Semantic
Capsule (SC) and a Probabilistic Capsule (PC). They describe an explicit facial
attribute in detail from two aspects: semantic representation and probability
distribution. The group of FACs models an image as a combination of facial
attribute information in the semantic and probabilistic spaces in an
attribute-disentangling way. The diverse FACs can better combine face
prior information to generate face images with fine-grained semantic
attributes. Extensive benchmark experiments show that our method achieves
superior hallucination results and outperforms the state of the art for very
low-resolution (LR) noisy face image super-resolution.
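The abstract's split of each FAC into a Semantic Capsule (an embedding of one facial attribute) and a Probabilistic Capsule (a distribution over that attribute's states) can be sketched as follows. This is a minimal illustrative toy, not the paper's architecture: the projection layers, dimensions, `tanh` activation for the SC, and softmax parameterization of the PC are all assumptions made here for clarity.

```python
import numpy as np

def make_fac(features, sc_dim=16, pc_dim=16, seed=0):
    """Toy sketch of one Facial Attribute Capsule (FAC).

    Maps an input feature vector to a Semantic Capsule (SC), an
    embedding describing one facial attribute, and a Probabilistic
    Capsule (PC), a probability distribution over that attribute's
    states. All shapes and activations here are illustrative
    assumptions, not the FACN architecture.
    """
    rng = np.random.default_rng(seed)
    d = features.shape[0]
    w_sc = rng.standard_normal((sc_dim, d)) / np.sqrt(d)  # semantic projection
    w_pc = rng.standard_normal((pc_dim, d)) / np.sqrt(d)  # probabilistic projection

    sc = np.tanh(w_sc @ features)        # semantic representation of the attribute
    logits = w_pc @ features
    pc = np.exp(logits - logits.max())   # numerically stable softmax
    pc /= pc.sum()                       # probability distribution over states
    return sc, pc

x = np.random.default_rng(1).standard_normal(64)  # stand-in for an LR face feature
sc, pc = make_fac(x)
```

In the paper's pipeline, a group of such capsules would be produced from the LR input and consumed by a decoder that reconstructs the HR face; the two sub-capsules give the decoder both a semantic description and an uncertainty-aware distribution for each attribute.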
Related papers
- Text-Guided Face Recognition using Multi-Granularity Cross-Modal
Contrastive Learning [0.0]
We introduce text-guided face recognition (TGFR) to analyze the impact of integrating facial attributes in the form of natural language descriptions.
TGFR demonstrates remarkable improvements, particularly on low-quality images, over existing face recognition models.
arXiv Detail & Related papers (2023-12-14T22:04:22Z)
- LR-to-HR Face Hallucination with an Adversarial Progressive Attribute-Induced Network [67.64536397027229]
Face super-resolution is a challenging and highly ill-posed problem.
We propose an end-to-end progressive learning framework incorporating facial attributes.
We show that the proposed approach can yield satisfactory face hallucination images outperforming other state-of-the-art approaches.
arXiv Detail & Related papers (2021-09-29T19:50:45Z)
- Pro-UIGAN: Progressive Face Hallucination from Occluded Thumbnails [53.080403912727604]
We propose a multi-stage Progressive Upsampling and Inpainting Generative Adversarial Network, dubbed Pro-UIGAN.
It exploits facial geometry priors to replenish and upsample (8×) the occluded and tiny faces.
Pro-UIGAN achieves visually pleasing HR faces, reaching superior performance in downstream tasks.
arXiv Detail & Related papers (2021-08-02T02:29:24Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D faces from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- SuperFront: From Low-resolution to High-resolution Frontal Face Synthesis [65.35922024067551]
We propose a generative adversarial network (GAN)-based model to generate high-quality, identity-preserving frontal faces.
Specifically, we propose SuperFront-GAN to synthesize a high-resolution (HR), frontal face from one-to-many LR faces with various poses.
We integrate a super-resolution side-view module into SF-GAN to preserve identity information and fine details of the side-views in HR space.
arXiv Detail & Related papers (2020-12-07T23:30:28Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN)
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.