Towards End-to-End Neural Face Authentication in the Wild - Quantifying
and Compensating for Directional Lighting Effects
- URL: http://arxiv.org/abs/2104.03854v1
- Date: Thu, 8 Apr 2021 15:58:09 GMT
- Title: Towards End-to-End Neural Face Authentication in the Wild - Quantifying
and Compensating for Directional Lighting Effects
- Authors: Viktor Varkarakis, Wang Yao, Peter Corcoran
- Abstract summary: This work examines the effects of directional lighting on a State-of-the-Art (SoA) neural face recognizer.
Top lighting and its variants are found to have minimal effect on accuracy, while bottom-left or bottom-right directional lighting has the most pronounced effects.
- Score: 2.4493299476776778
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent availability of low-power neural accelerator hardware, combined
with improvements in end-to-end neural facial recognition algorithms, provides
enabling technology for on-device facial authentication. The present research
work examines the effects of directional lighting on a State-of-the-Art (SoA) neural
face recognizer. A synthetic re-lighting technique is used to augment data
samples due to the lack of public data-sets with sufficient directional
lighting variations. Top lighting and its variants (top-left, top-right) are
found to have minimal effect on accuracy, while bottom-left or bottom-right
directional lighting has the most pronounced effects. Following the fine-tuning
of network weights, the face recognition model is shown to achieve close to the
original Receiver Operating Characteristic (ROC) curve performance across all
lighting conditions and demonstrates an ability to generalize beyond the
lighting augmentations used in the fine-tuning data-set. This work shows that
an SoA neural face recognition model can be tuned to compensate for directional
lighting effects, removing the need for a pre-processing step before applying
facial recognition.
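The synthetic re-lighting augmentation described in the abstract can be approximated with a simple brightness gradient applied across a face crop. The sketch below is an illustrative approximation only, not the paper's actual re-lighting technique; the direction names mirror those studied in the paper (top, top-left, top-right, bottom-left, bottom-right), while the `strength` parameter and gradient formulation are assumptions for demonstration.

```python
import numpy as np

def directional_relight(image, direction="bottom_left", strength=0.6):
    """Darken a face crop away from a chosen light direction.

    Illustrative approximation of directional re-lighting augmentation;
    not the synthetic re-lighting method used in the paper.
    image: float array in [0, 1], shape (H, W) or (H, W, C).
    """
    h, w = image.shape[:2]
    ys = np.linspace(0.0, 1.0, h)[:, None]   # 0 at top row, 1 at bottom row
    xs = np.linspace(0.0, 1.0, w)[None, :]   # 0 at left column, 1 at right
    gradients = {
        "top":          (1.0 - ys) + 0.0 * xs,
        "top_left":     ((1.0 - ys) + (1.0 - xs)) / 2.0,
        "top_right":    ((1.0 - ys) + xs) / 2.0,
        "bottom_left":  (ys + (1.0 - xs)) / 2.0,
        "bottom_right": (ys + xs) / 2.0,
    }
    mask = gradients[direction]
    # Full brightness where the "light" falls, progressively darker elsewhere.
    gain = (1.0 - strength) + strength * mask
    if image.ndim == 3:
        gain = gain[:, :, None]
    return np.clip(image * gain, 0.0, 1.0)

# Hypothetical usage: augment samples before fine-tuning a recognizer.
face = np.full((112, 112, 3), 0.8)
lit = directional_relight(face, direction="bottom_left")
```

In a fine-tuning pipeline of the kind the paper describes, such a transform would be applied to training samples across all directions so the network learns lighting-invariant embeddings, with ROC curves compared per lighting condition afterwards.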
Related papers
- Zero-Shot Enhancement of Low-Light Image Based on Retinex Decomposition [4.175396687130961]
We propose ZERRINNet, a new learning-based zero-shot low-light enhancement method based on Retinex decomposition.
Our method is a zero-reference enhancement method that is not affected by the training data of paired and unpaired datasets.
arXiv Detail & Related papers (2023-11-06T09:57:48Z) - TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
TensoRF is a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z) - NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient
Illumination [28.433403714053103]
We propose a framework named neural ambient illumination (NeAI)
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate the superior performance of novel-view rendering compared to previous works.
arXiv Detail & Related papers (2023-04-18T06:32:30Z) - Self-Aligned Concave Curve: Illumination Enhancement for Unsupervised
Adaptation [36.050270650417325]
We propose a learnable illumination enhancement model for high-level vision.
Inspired by real camera response functions, we assume that the illumination enhancement function should be a concave curve.
Our model architecture and training designs mutually benefit each other, forming a powerful unsupervised normal-to-low light adaptation framework.
arXiv Detail & Related papers (2022-10-07T19:32:55Z) - Learning to Relight Portrait Images via a Virtual Light Stage and
Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z) - Toward Fast, Flexible, and Robust Low-Light Image Enhancement [87.27326390675155]
We develop a new Self-Calibrated Illumination (SCI) learning framework for fast, flexible, and robust brightening images in real-world low-light scenarios.
Considering the computational burden of the cascaded pattern, we construct the self-calibrated module which realizes the convergence between results of each stage.
We make comprehensive explorations to SCI's inherent properties including operation-insensitive adaptability and model-irrelevant generality.
arXiv Detail & Related papers (2022-04-21T14:40:32Z) - Lightness Modulated Deep Inverse Tone Mapping [18.31269649436267]
Single-image HDR reconstruction or inverse tone mapping (iTM) is a challenging task.
We present a deep learning based iTM method that takes advantage of the feature extraction and mapping power of deep convolutional neural networks (CNNs)
We present experimental results to demonstrate the effectiveness of the new technique.
arXiv Detail & Related papers (2021-07-16T13:56:20Z) - Uncalibrated Neural Inverse Rendering for Photometric Stereo of General
Surfaces [103.08512487830669]
This paper presents an uncalibrated deep neural network framework for the photometric stereo problem.
Existing neural network-based methods either require exact light directions or ground-truth surface normals of the object or both.
We propose an uncalibrated neural inverse rendering approach to this problem.
arXiv Detail & Related papers (2020-12-12T10:33:08Z) - LEGAN: Disentangled Manipulation of Directional Lighting and Facial
Expressions by Leveraging Human Perceptual Judgements [7.5603864775031004]
We propose LEGAN, a novel synthesis framework that leverages perceptual quality judgments for jointly manipulating lighting and expressions in face images.
LEGAN disentangles the lighting and expression subspaces and performs transformations in the feature space before upscaling to the desired output image.
We also conduct a perceptual study using images synthesized by LEGAN and other GAN models and show the correlation between our quality estimation and visual fidelity.
arXiv Detail & Related papers (2020-10-04T01:56:54Z) - Learning Flow-based Feature Warping for Face Frontalization with
Illumination Inconsistent Supervision [73.18554605744842]
Flow-based Feature Warping Model (FFWM) learns to synthesize photo-realistic and illumination preserving frontal images.
An Illumination Preserving Module (IPM) is proposed to learn illumination preserving image synthesis.
A Warp Attention Module (WAM) is introduced to reduce the pose discrepancy in the feature level.
arXiv Detail & Related papers (2020-08-16T06:07:00Z) - Object-based Illumination Estimation with Rendering-aware Neural
Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.