AGA-GAN: Attribute Guided Attention Generative Adversarial Network with
U-Net for Face Hallucination
- URL: http://arxiv.org/abs/2111.10591v1
- Date: Sat, 20 Nov 2021 13:43:03 GMT
- Title: AGA-GAN: Attribute Guided Attention Generative Adversarial Network with
U-Net for Face Hallucination
- Authors: Abhishek Srivastava, Sukalpa Chanda, Umapada Pal
- Abstract summary: We propose an Attribute Guided Attention Generative Adversarial Network which employs attribute guided attention (AGA) modules to identify and focus the generation process on various facial features in the image.
The AGA-GAN and AGA-GAN+U-Net frameworks outperform several state-of-the-art face hallucination methods.
- Score: 15.010153819096056
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The performance of facial super-resolution methods relies on their ability to
recover facial structures and salient features effectively. Even though the
convolutional neural network and generative adversarial network-based methods
deliver impressive performance on face hallucination tasks, their ability to use
attributes associated with the low-resolution images to improve performance remains
unsatisfactory. In this paper, we propose an Attribute Guided Attention
Generative Adversarial Network which employs novel attribute guided attention
(AGA) modules to identify and focus the generation process on various facial
features in the image. Stacking multiple AGA modules enables the recovery of
both high and low-level facial structures. We design the discriminator to learn
discriminative features exploiting the relationship between the high-resolution
image and their corresponding facial attribute annotations. We then explore the
use of U-Net based architecture to refine existing predictions and synthesize
further facial details. Extensive experiments across several metrics show that
our AGA-GAN and AGA-GAN+U-Net frameworks outperform several state-of-the-art face
hallucination methods. We also demonstrate the viability of our method when not
every attribute descriptor is known, thereby establishing its applicability in
real-world scenarios.
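To make the attribute-guided attention idea concrete, below is a minimal, hypothetical sketch (in PyTorch) of how a facial attribute vector could be projected into the feature space and used to predict a spatial attention map that modulates image features. The class name AGABlock, the attribute dimensionality, and the residual fusion scheme are illustrative assumptions, not the authors' actual architecture.
```python
# Hypothetical sketch of an attribute-guided attention (AGA) block.
# Shapes, names, and the fusion scheme are assumptions for illustration only.
import torch
import torch.nn as nn


class AGABlock(nn.Module):
    """Modulates image features with a spatial attention map predicted from
    a facial attribute vector (e.g. a binary attribute annotation)."""

    def __init__(self, channels: int, num_attributes: int):
        super().__init__()
        # Project the attribute vector into the image-feature channel space.
        self.attr_proj = nn.Sequential(
            nn.Linear(num_attributes, channels),
            nn.ReLU(inplace=True),
        )
        # Predict a single-channel spatial attention map from fused features.
        self.attention = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, attributes: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        # Broadcast the projected attribute embedding over the spatial grid.
        attr = self.attr_proj(attributes).view(b, c, 1, 1).expand(b, c, h, w)
        attn = self.attention(torch.cat([feat, attr], dim=1))
        # Residual connection preserves low-level structure; the attention map
        # focuses the update on attribute-relevant facial regions.
        return feat + self.refine(feat * attn)


if __name__ == "__main__":
    # Toy usage: 64-channel features from a 16x16 LR face, 38 attributes.
    block = AGABlock(channels=64, num_attributes=38)
    feats = torch.randn(2, 64, 16, 16)
    attrs = torch.randint(0, 2, (2, 38)).float()
    print(block(feats, attrs).shape)  # torch.Size([2, 64, 16, 16])
```
Stacking several such blocks, as the abstract describes, would let early blocks attend to coarse structure and later blocks to finer attribute-specific detail.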
Related papers
- W-Net: A Facial Feature-Guided Face Super-Resolution Network [8.037821981254389] (2024-06-02)
  Face super-resolution aims to recover high-resolution (HR) face images from low-resolution (LR) ones.
  Existing approaches fall short due to low reconstruction efficiency and insufficient use of prior information.
  This paper proposes a novel network architecture called W-Net to address this challenge.
- TransFA: Transformer-based Representation for Face Attribute Evaluation [87.09529826340304] (2022-07-12)
  We propose a novel transformer-based representation for attribute evaluation method (TransFA).
  The proposed TransFA achieves superior performance compared with state-of-the-art methods.
- MGRR-Net: Multi-level Graph Relational Reasoning Network for Facial Action Units Detection [16.261362598190807] (2022-04-04)
  The Facial Action Coding System (FACS) encodes the action units (AUs) in facial images.
  We argue that encoding AU features from only one perspective may not capture the rich contextual information between regional and global face features.
  We propose a novel Multi-level Graph Relational Reasoning Network (termed MGRR-Net) for facial AU detection.
- Deep Collaborative Multi-Modal Learning for Unsupervised Kinship Estimation [53.62256887837659] (2021-09-07)
  Kinship verification is a long-standing research challenge in computer vision.
  We propose a novel deep collaborative multi-modal learning (DCML) method to integrate the underlying information presented in facial properties.
  Our DCML method consistently outperforms state-of-the-art kinship verification methods.
- Network Architecture Search for Face Enhancement [82.25775020564654] (2021-05-13)
  We present a multi-task face restoration network, called Network Architecture Search for Face Enhancement (NASFE).
  NASFE can enhance poor-quality face images containing a single degradation (i.e. noise or blur) or multiple degradations (noise + blur + low light).
- Face Anti-Spoofing Via Disentangled Representation Learning [90.90512800361742] (2020-08-19)
  Face anti-spoofing is crucial to the security of face recognition systems.
  We propose a novel perspective on face anti-spoofing that disentangles liveness features from content features in images.
- Generative Hierarchical Features from Synthesizing Images [65.66756821069124] (2020-07-20)
  We show that learning to synthesize images can bring remarkable hierarchical visual features that are generalizable across a wide range of applications.
  The visual feature produced by our encoder, termed Generative Hierarchical Feature (GH-Feat), has strong transferability to both generative and discriminative tasks.
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694] (2020-02-17)
  We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
  Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
  A novel face-attention-based discriminator is applied to emphasize local features of face regions.