Adaptive Multiscale Illumination-Invariant Feature Representation for
Undersampled Face Recognition
- URL: http://arxiv.org/abs/2004.03153v1
- Date: Tue, 7 Apr 2020 06:48:44 GMT
- Title: Adaptive Multiscale Illumination-Invariant Feature Representation for
Undersampled Face Recognition
- Authors: Yang Zhang, Changhui Hu, Xiaobo Lu
- Abstract summary: This paper presents an illumination-invariant feature representation approach that eliminates the effect of varying illumination in undersampled face recognition.
A new illumination level classification technique based on Singular Value Decomposition (SVD) is proposed to judge the illumination level of the input image.
The experimental results demonstrate that the JLEF-feature and AJLEF-face outperform other related approaches for undersampled face recognition under varying illumination.
- Score: 29.002873450422083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel illumination-invariant feature representation
approach used to eliminate the effect of varying illumination in undersampled
face recognition. Firstly, a new illumination level classification technique
based on Singular Value Decomposition (SVD) is proposed to judge the
illumination level of the input image. Secondly, we construct the logarithm
edgemaps feature (LEF) based on the Lambertian model and the local near-neighbor
feature of the face image, applied to local regions at multiple scales.
Then, the illumination level is used to construct the high-performance
LEF and to realize adaptive fusion of the multiscale LEFs for the face
image, yielding the JLEF-feature. In addition, a constraint operation is used to
remove useless high-frequency interference, disentangling useful facial
feature edges and constructing the AJLEF-face. Finally, our
methods and other state-of-the-art algorithms, including deep learning methods,
are tested on the Extended Yale B, CMU PIE, and AR databases as well as our self-built Driver
database (SDB). The experimental results demonstrate that the JLEF-feature and
AJLEF-face outperform other related approaches for undersampled face
recognition under varying illumination.
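The two core ingredients of the abstract — an SVD-based illumination-level cue and logarithm edgemaps justified by the Lambertian model — can be sketched as follows. The paper's exact classification criterion, thresholds, and multiscale fusion rules are not reproduced here; the singular-value energy ratio and the simple log-gradient edge operator below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def illumination_level(image, threshold=0.9):
    """Estimate an illumination-level cue via SVD.

    The leading singular value of a grayscale face image captures its
    dominant low-frequency intensity structure; its share of the total
    singular-value energy grows under strong or uneven lighting.
    The 0.9 threshold is an illustrative assumption, not the paper's.
    """
    s = np.linalg.svd(image.astype(np.float64), compute_uv=False)
    ratio = s[0] / s.sum()
    return "high" if ratio > threshold else "normal"

def log_edgemap(image, eps=1.0):
    """Toy logarithm edgemap under the Lambertian model.

    With I = R * L (reflectance times illumination), log I = log R + log L,
    so gradients of the log image suppress the slowly varying illumination
    term log L and preserve the reflectance (facial) edges.
    """
    log_img = np.log(image.astype(np.float64) + eps)
    gx = np.diff(log_img, axis=1, append=log_img[:, -1:])
    gy = np.diff(log_img, axis=0, append=log_img[-1:, :])
    return np.hypot(gx, gy)

# Toy usage: a pure horizontal illumination ramp (rank-1 image).
ramp = np.outer(np.ones(32), np.linspace(10.0, 250.0, 32))
print(illumination_level(ramp))  # "high": a rank-1 ramp is all illumination
edges = log_edgemap(ramp)
```

In the paper such edgemaps are built over local regions at multiple scales and fused adaptively according to the estimated illumination level; here a single global edgemap stands in for that pipeline.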
Related papers
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Learning a Single Convolutional Layer Model for Low Light Image Enhancement [43.411846299085575]
Low-light image enhancement (LLIE) aims to improve the illuminance of images due to insufficient light exposure.
A single convolutional layer model (SCLM) is proposed that provides global low-light enhancement as the coarsely enhanced results.
Experimental results demonstrate that the proposed method performs favorably against the state-of-the-art LLIE methods in both objective metrics and subjective visual effects.
arXiv Detail & Related papers (2023-05-23T13:12:00Z)
- Occlusion-Robust FAU Recognition by Mining Latent Space of Masked Autoencoders [23.39566752915331]
Facial action units (FAUs) are critical for fine-grained facial expression analysis.
The approach takes advantage of the rich information in the latent space of a masked autoencoder (MAE) and transforms it into FAU features.
These features achieve performance comparable to state-of-the-art methods under normal conditions.
arXiv Detail & Related papers (2022-12-08T01:57:48Z)
- Boosting Few-shot Fine-grained Recognition with Background Suppression and Foreground Alignment [53.401889855278704]
Few-shot fine-grained recognition (FS-FGR) aims to recognize novel fine-grained categories with the help of limited available samples.
We propose a two-stage background suppression and foreground alignment framework, which is composed of a background activation suppression (BAS) module, a foreground object alignment (FOA) module, and a local to local (L2L) similarity metric.
Experiments conducted on multiple popular fine-grained benchmarks demonstrate that our method outperforms the existing state-of-the-art by a large margin.
arXiv Detail & Related papers (2022-10-04T07:54:40Z)
- A Synthesis-Based Approach for Thermal-to-Visible Face Verification [105.63410428506536]
This paper presents an algorithm that achieves state-of-the-art performance on the ARL-VTF and TUFTS multi-spectral face datasets.
We also present MILAB-VTF(B), a challenging multi-spectral face dataset composed of paired thermal and visible videos.
arXiv Detail & Related papers (2021-08-21T17:59:56Z)
- Unconstrained Face Recognition using ASURF and Cloud-Forest Classifier optimized with VLAD [0.0]
The paper posits a computationally efficient algorithm for multi-class facial image classification in which images are subject to translation, rotation, scale, color, illumination, and affine distortion.
The proposed method aims at improving the accuracy and the time taken for face recognition systems.
arXiv Detail & Related papers (2021-04-02T01:26:26Z)
- Hierarchical Deep CNN Feature Set-Based Representation Learning for Robust Cross-Resolution Face Recognition [59.29808528182607]
Cross-resolution face recognition (CRFR) is important in intelligent surveillance and biometric forensics.
Existing shallow learning-based and deep learning-based methods focus on mapping the HR-LR face pairs into a joint feature space.
In this study, we aim to fully exploit the multi-level deep convolutional neural network (CNN) feature set for robust CRFR.
arXiv Detail & Related papers (2021-03-25T14:03:42Z)
- Unsupervised Learning Facial Parameter Regressor for Action Unit Intensity Estimation via Differentiable Renderer [51.926868759681014]
We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
arXiv Detail & Related papers (2020-08-20T09:49:13Z)
- Recurrent Exposure Generation for Low-Light Face Detection [113.25331155337759]
We propose a novel Recurrent Exposure Generation (REG) module and a Multi-Exposure Detection (MED) module.
REG progressively and efficiently produces intermediate images corresponding to various exposure settings.
Such pseudo-exposures are then fused by MED to detect faces across different lighting conditions.
arXiv Detail & Related papers (2020-07-21T17:30:51Z)
- FakeLocator: Robust Localization of GAN-Based Face Manipulations [19.233930372590226]
We propose a novel approach, termed FakeLocator, to obtain high localization accuracy, at full resolution, on manipulated facial images.
This is the very first attempt to solve the GAN-based fake localization problem with a gray-scale fakeness map.
Experimental results on the popular FaceForensics++ and DFFD datasets and on seven different state-of-the-art GAN-based face generation methods show the effectiveness of our method.
arXiv Detail & Related papers (2020-01-27T06:15:01Z)
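Among the related papers above, the Recurrent Exposure Generation entry describes a generate-and-fuse pipeline: render the input at several pseudo-exposures, then fuse detections across them. The learned REG and MED modules are not reproduced here; the gamma-curve exposure stand-in and score-max box fusion below are illustrative assumptions only.

```python
import numpy as np

def pseudo_exposures(image, gammas=(0.5, 1.0, 2.0)):
    """Stand-in for REG: render the input at several synthetic exposure
    levels via gamma curves (the real module generates these progressively
    with a learned recurrent network)."""
    norm = np.clip(image.astype(np.float64) / 255.0, 0.0, 1.0)
    return [np.power(norm, g) * 255.0 for g in gammas]

def fuse_detections(per_exposure_boxes):
    """Stand-in for MED: keep each face box at its best score across
    exposures (the real module fuses detector features, not final boxes).

    per_exposure_boxes: list of [(box_tuple, score), ...], one list per
    pseudo-exposure. Returns boxes sorted by descending score.
    """
    best = {}
    for boxes in per_exposure_boxes:
        for box, score in boxes:
            if score > best.get(box, -1.0):
                best[box] = score
    return sorted(best.items(), key=lambda kv: -kv[1])

# Toy usage: the same face scores better in the brighter pseudo-exposure.
dark = [((0, 0, 10, 10), 0.3)]
bright = [((0, 0, 10, 10), 0.8), ((20, 20, 30, 30), 0.9)]
fused = fuse_detections([dark, bright])
```

The point of the sketch is the control flow, not the models: low-light faces missed at the native exposure can survive in one of the synthetic exposures and win at fusion time.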
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.