Feature Super-Resolution Based Facial Expression Recognition for
Multi-scale Low-Resolution Faces
- URL: http://arxiv.org/abs/2004.02234v1
- Date: Sun, 5 Apr 2020 15:38:47 GMT
- Title: Feature Super-Resolution Based Facial Expression Recognition for
Multi-scale Low-Resolution Faces
- Authors: Wei Jing, Feng Tian, Jizhong Zhang, Kuo-Ming Chao, Zhenxin Hong, Xu
Liu
- Abstract summary: Super-resolution methods are often used to enhance low-resolution images, but their benefit to the FER task is limited on images of very low resolution.
In this work, inspired by feature super-resolution methods for object detection, we propose a novel generative adversarial network-based feature-level super-resolution method for robust facial expression recognition.
- Score: 7.634398926381845
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial Expression Recognition (FER) on low-resolution images is
necessary for applications such as group expression recognition in crowd
scenarios (stations, classrooms, etc.). Classifying a small facial image into
the correct expression category remains a challenging task, mainly because
discriminative features are lost as resolution decreases. Image
super-resolution is often used to enhance low-resolution inputs, but its
benefit to FER is limited at very low resolutions. In this work, inspired by
feature super-resolution methods for object detection, we propose a novel
generative adversarial network-based feature-level super-resolution method for
robust facial expression recognition (FSR-FER). In particular, a pre-trained
FER model is employed as the feature extractor, and a generator network G and
a discriminator network D are trained on features extracted from
low-resolution and original high-resolution images. The generator G transforms
the features of low-resolution images into more discriminative ones by pushing
them closer to the features of the corresponding high-resolution images. For
better classification performance, we also propose an effective
classification-aware loss re-weighting strategy, based on the classification
probability computed by a fixed FER model, that makes our model focus on
samples that are easily misclassified. Experimental results on the Real-World
Affective Faces (RAF) database show that our method achieves satisfactory
results across various down-sampling factors with a single model, and
outperforms pipelines that apply image super-resolution and expression
recognition separately on low-resolution images.
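As a rough illustration of the pipeline described in the abstract, the following PyTorch-style sketch shows how a feature-level super-resolution GAN with classification-aware loss re-weighting could be trained. It is a minimal sketch under stated assumptions: `fer_model.features` and `fer_model.classifier` stand in for the frozen FER backbone and head, the `(1 - p_true) ** gamma` weighting and the `lambda_adv` / `lambda_cls` coefficients are illustrative placeholders, and none of this is the authors' released implementation.

```python
# Hypothetical sketch of one FSR-FER training step (PyTorch).
# fer_model.features / fer_model.classifier, gamma, lambda_adv, lambda_cls
# are illustrative assumptions, not the paper's released code.
import torch
import torch.nn.functional as F

def classification_aware_weights(logits_sr, labels, gamma=1.0):
    """Weight each sample by how poorly the fixed FER model classifies it,
    so easily misclassified samples contribute more to the loss."""
    probs = F.softmax(logits_sr, dim=1)
    p_true = probs.gather(1, labels.unsqueeze(1)).squeeze(1)  # prob. of true class
    return (1.0 - p_true).detach() ** gamma                   # low confidence -> large weight

def fsr_fer_step(fer_model, generator, discriminator, x_lr, x_hr, labels,
                 opt_g, opt_d, lambda_adv=0.1, lambda_cls=1.0):
    # The pre-trained FER model is frozen; it only extracts features and scores them.
    with torch.no_grad():
        feat_hr = fer_model.features(x_hr)   # "real" features from high-resolution faces
        feat_lr = fer_model.features(x_lr)   # degraded features from low-resolution faces

    # --- discriminator update: real = HR features, fake = super-resolved LR features ---
    feat_sr = generator(feat_lr)
    d_real = discriminator(feat_hr)
    d_fake = discriminator(feat_sr.detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- generator update: match HR features, fool D, keep features discriminative ---
    feat_sr = generator(feat_lr)
    logits_sr = fer_model.classifier(feat_sr)        # fixed FER head scores the SR features
    weights = classification_aware_weights(logits_sr, labels)

    loss_feat = (weights * F.mse_loss(feat_sr, feat_hr, reduction='none')
                 .flatten(1).mean(dim=1)).mean()     # re-weighted feature reconstruction
    d_out = discriminator(feat_sr)
    loss_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    loss_cls = (weights * F.cross_entropy(logits_sr, labels, reduction='none')).mean()

    loss_g = loss_feat + lambda_adv * loss_adv + lambda_cls * loss_cls
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Note that in this sketch the discriminator only ever sees feature vectors, never pixels, which is what distinguishes feature-level super-resolution from running image super-resolution and expression recognition as separate stages.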
Related papers
- Dynamic Resolution Guidance for Facial Expression Recognition [2.0456513832600884]
This paper introduces a practical method called Dynamic Resolution Guidance for Facial Expression Recognition (DRGFER)
Our framework comprises two main components: the Resolution Recognition Network (RRN) and the Multi-Resolution Adaptation Facial Expression Recognition Network (MRAFER)
The proposed framework is robust to resolution variations in facial expression images, offering a promising solution for real-world applications.
arXiv Detail & Related papers (2024-04-09T15:02:01Z)
- ACDMSR: Accelerated Conditional Diffusion Models for Single Image Super-Resolution [84.73658185158222]
We propose a diffusion model-based super-resolution method called ACDMSR.
Our method adapts the standard diffusion model to perform super-resolution through a deterministic iterative denoising process.
Our approach generates more visually realistic counterparts for low-resolution images, emphasizing its effectiveness in practical scenarios.
arXiv Detail & Related papers (2023-07-03T06:49:04Z)
- Learning from Multi-Perception Features for Real-World Image Super-resolution [87.71135803794519]
We propose a novel SR method called MPF-Net that leverages multiple perceptual features of input images.
Our method incorporates a Multi-Perception Feature Extraction (MPFE) module to extract diverse perceptual information.
We also introduce a contrastive regularization term (CR) that improves the model's learning capability.
arXiv Detail & Related papers (2023-05-26T07:35:49Z)
- Cross-resolution Face Recognition via Identity-Preserving Network and Knowledge Distillation [12.090322373964124]
Cross-resolution face recognition is a challenging problem for modern deep face recognition systems.
This paper proposes a new approach that enforces the network to focus on the discriminative information stored in the low-frequency components of a low-resolution image.
arXiv Detail & Related papers (2023-03-15T14:52:46Z)
- MrSARP: A Hierarchical Deep Generative Prior for SAR Image Super-resolution [0.5161531917413706]
We present a novel hierarchical deep-generative model MrSARP for SAR imagery.
MrSARP is trained in conjunction with a critic that scores multi-resolution images jointly to decide if they are realistic images of a target at different resolutions.
We show how this deep generative model can be used to retrieve the high spatial resolution image from low resolution images of the same target.
arXiv Detail & Related papers (2022-11-30T19:12:21Z)
- Semantic Encoder Guided Generative Adversarial Face Ultra-Resolution Network [15.102899995465041]
We propose a novel face super-resolution method, namely Semantic guided Generative Adversarial Face Ultra-Resolution Network (SEGA-FURN)
The proposed network is composed of a novel semantic encoder that has the ability to capture the embedded semantics to guide adversarial learning and a novel generator that uses a hierarchical architecture named Residual in Internal Block (RIDB)
Experiments on large face datasets have proved that the proposed method can achieve superior super-resolution results and significantly outperform other state-of-the-art methods in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2022-11-18T23:16:57Z)
- Hierarchical Similarity Learning for Aliasing Suppression Image Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and suppresses aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z)
- Single Image Internal Distribution Measurement Using Non-Local Variational Autoencoder [11.985083962982909]
This paper proposes a novel image-specific solution, namely the non-local variational autoencoder (NLVAE)
NLVAE is introduced as a self-supervised strategy that reconstructs high-resolution images using disentangled information from the non-local neighbourhood.
Experimental results from seven benchmark datasets demonstrate the effectiveness of the NLVAE model.
arXiv Detail & Related papers (2022-04-02T18:43:55Z)
- Hierarchical Deep CNN Feature Set-Based Representation Learning for Robust Cross-Resolution Face Recognition [59.29808528182607]
Cross-resolution face recognition (CRFR) is important in intelligent surveillance and biometric forensics.
Existing shallow learning-based and deep learning-based methods focus on mapping the HR-LR face pairs into a joint feature space.
In this study, we desire to fully exploit the multi-level deep convolutional neural network (CNN) feature set for robust CRFR.
arXiv Detail & Related papers (2021-03-25T14:03:42Z)
- Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)
- PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
arXiv Detail & Related papers (2020-03-08T16:44:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.