Face Hallucination via Split-Attention in Split-Attention Network
- URL: http://arxiv.org/abs/2010.11575v3
- Date: Wed, 7 Jul 2021 10:08:19 GMT
- Title: Face Hallucination via Split-Attention in Split-Attention Network
- Authors: Tao Lu, Yuanzhi Wang, Yanduo Zhang, Yu Wang, Wei Liu, Zhongyuan Wang,
Junjun Jiang
- Abstract summary: Convolutional neural networks (CNNs) have been widely employed to promote face hallucination.
We propose a novel external-internal split attention group (ESAG) to take into account the overall facial profile and fine texture details simultaneously.
By fusing the features from these two paths, the consistency of facial structure and the fidelity of facial details are strengthened.
- Score: 58.30436379218425
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, convolutional neural networks (CNNs) have been widely
employed for face hallucination owing to their ability to predict
high-frequency details from a large number of samples. However, most of them
fail to take into
account the overall facial profile and fine texture details simultaneously,
resulting in reduced naturalness and fidelity of the reconstructed face, and
further impairing the performance of downstream tasks (e.g., face detection,
facial recognition). To tackle this issue, we propose a novel external-internal
split attention group (ESAG), which encompasses two paths responsible for
facial structure information and facial texture details, respectively. By
fusing the features from these two paths, the consistency of facial structure
and the fidelity of facial details are strengthened at the same time. Then, we
propose a split-attention in split-attention network (SISN) to reconstruct
photorealistic high-resolution facial images by cascading several ESAGs.
Experimental results on face hallucination and face recognition show that the
proposed method not only significantly improves the clarity of hallucinated
faces but also substantially boosts downstream face recognition performance.
Code has been released at
https://github.com/mdswyz/SISN-Face-Hallucination.
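The abstract describes the architecture only at a high level; the released code
at the link above is the authoritative reference. Still, a minimal PyTorch
sketch of the two-path split-attention idea could look like the following. All
module names, layer choices, and hyper-parameters (SplitAttention, ESAG, SISN,
radix, n_groups, the 4x upsampler) are illustrative assumptions, not the
authors' actual implementation.

```python
# Minimal sketch of the two-path "external-internal split attention group"
# idea described in the abstract. Everything here is an assumption for
# illustration; see the authors' released code for the real design.
import torch
import torch.nn as nn


class SplitAttention(nn.Module):
    """Split channels into `radix` branches, then reweight the branches with
    a softmax attention computed from globally pooled features (in the spirit
    of ResNeSt-style split attention)."""

    def __init__(self, channels: int, radix: int = 2, reduction: int = 4):
        super().__init__()
        self.radix = radix
        inner = max(channels // reduction, 8)
        self.conv = nn.Conv2d(channels, channels * radix, 3, padding=1, groups=radix)
        self.fc1 = nn.Conv2d(channels, inner, 1)
        self.fc2 = nn.Conv2d(inner, channels * radix, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        b, c, h, w = x.shape
        splits = self.conv(x).view(b, self.radix, c, h, w)      # radix branches
        gap = splits.sum(dim=1).mean(dim=(2, 3), keepdim=True)  # global context
        attn = self.fc2(self.relu(self.fc1(gap)))
        attn = attn.view(b, self.radix, c, 1, 1).softmax(dim=1) # weight branches
        return (attn * splits).sum(dim=1)


class ESAG(nn.Module):
    """Two paths, one for the overall facial structure and one for fine
    texture details, fused by addition as the abstract describes."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.structure_path = SplitAttention(channels)  # overall facial profile
        self.texture_path = nn.Sequential(              # fine texture details
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.structure_path(x) + self.texture_path(x)


class SISN(nn.Module):
    """Cascade several ESAGs between a shallow feature extractor and a
    pixel-shuffle upsampler (4x here, purely for illustration)."""

    def __init__(self, channels: int = 64, n_groups: int = 4, scale: int = 4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.Sequential(*[ESAG(channels) for _ in range(n_groups)])
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr):
        return self.tail(self.body(self.head(lr)))


if __name__ == "__main__":
    sr = SISN()(torch.randn(1, 3, 32, 32))
    print(sr.shape)  # torch.Size([1, 3, 128, 128])
```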
Related papers
- Identity-Preserving Pose-Robust Face Hallucination Through Face Subspace Prior [14.353574903736343]
This paper introduces a novel face super-resolution approach in which the hallucinated face is forced to lie in a subspace spanned by the available training faces.
A 3D dictionary alignment scheme is also presented, through which the algorithm becomes capable of dealing with low-resolution faces taken in uncontrolled conditions.
In extensive experiments carried out on several well-known face datasets, the proposed algorithm shows remarkable performance by generating detailed and close to ground truth results.
arXiv Detail & Related papers (2021-11-20T17:08:38Z)
- TANet: A new Paradigm for Global Face Super-resolution via Transformer-CNN Aggregation Network [72.41798177302175]
We propose a novel paradigm based on the self-attention mechanism (i.e., the core of Transformer) to fully explore the representation capacity of the facial structure feature.
Specifically, we design a Transformer-CNN aggregation network (TANet) consisting of two paths, one of which uses CNNs to restore fine-grained facial details (a rough sketch of the two-path idea follows this entry).
By aggregating the features from the above two paths, the consistency of global facial structure and fidelity of local facial detail restoration are strengthened simultaneously.
arXiv Detail & Related papers (2021-09-16T18:15:07Z)
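The TANet entry above describes a two-path design: a self-attention path for
global facial structure and a CNN path for local detail. A rough PyTorch sketch
of such an aggregation block follows; the module name, head count, and fusion
layer are assumptions for illustration, not TANet's actual architecture.

```python
# Illustrative Transformer-CNN two-path aggregation block; names and sizes
# are assumptions, not TANet's published implementation.
import torch
import torch.nn as nn


class TransformerCNNAggregation(nn.Module):
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        # Global path: self-attention over spatial positions captures the
        # overall facial structure via long-range dependencies.
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        # Local path: plain convolutions restore fine-grained facial details.
        self.cnn = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)           # (b, h*w, c)
        global_feat, _ = self.attn(tokens, tokens, tokens)
        global_feat = global_feat.transpose(1, 2).view(b, c, h, w)
        local_feat = self.cnn(x)
        # Aggregate the two paths, keeping a residual connection to the input.
        return x + self.fuse(torch.cat([global_feat, local_feat], dim=1))
```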
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover corrupted features in the deep convolutional feature maps and to clean them with dynamically learned masks (sketched after this entry).
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
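The masking idea in the entry above is simple to sketch: predict a soft mask
from the deep features themselves and multiply it in to suppress
occlusion-corrupted entries. The mask decoder below is a hypothetical stand-in,
not FROM's published mask module.

```python
# Minimal sketch of the "mask corrupted features" idea; the decoder and its
# placement in the recognition backbone are assumptions for illustration.
import torch
import torch.nn as nn


class FeatureMasking(nn.Module):
    """Predict a soft mask from the feature map itself and use it to
    suppress feature elements corrupted by occlusion."""

    def __init__(self, channels: int = 512):
        super().__init__()
        self.mask_decoder = nn.Sequential(
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1),
            nn.Sigmoid(),  # values near 0 drop occluded features
        )

    def forward(self, feat):
        return feat * self.mask_decoder(feat)
```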
- Pro-UIGAN: Progressive Face Hallucination from Occluded Thumbnails [53.080403912727604]
We propose a multi-stage Progressive Upsampling and Inpainting Generative Adversarial Network, dubbed Pro-UIGAN.
It exploits facial geometry priors to replenish and upsample (8x) the occluded and tiny faces.
Pro-UIGAN achieves visually pleasing HR faces, reaching superior performance in downstream tasks.
arXiv Detail & Related papers (2021-08-02T02:29:24Z)
- Towards NIR-VIS Masked Face Recognition [47.00916333095693]
Near-infrared to visible (NIR-VIS) face recognition is the most common case in heterogeneous face recognition.
We propose a novel training method to maximize the mutual information shared by the face representation of two domains.
In addition, a 3D face reconstruction based approach is employed to synthesize masked faces from existing NIR images.
arXiv Detail & Related papers (2021-04-14T10:40:09Z)
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
- Face Hallucination with Finishing Touches [65.14864257585835]
We present a novel Vivid Face Hallucination Generative Adversarial Network (VividGAN) for simultaneously super-resolving and frontalizing tiny non-frontal face images.
VividGAN consists of coarse-level and fine-level Face Hallucination Networks (FHnet) and two discriminators, i.e., Coarse-D and Fine-D.
Experiments demonstrate that our VividGAN achieves photo-realistic frontal HR faces, reaching superior performance in downstream tasks.
arXiv Detail & Related papers (2020-02-09T07:33:48Z)