Towards End-to-End Neural Face Authentication in the Wild - Quantifying
and Compensating for Directional Lighting Effects
- URL: http://arxiv.org/abs/2104.03854v1
- Date: Thu, 8 Apr 2021 15:58:09 GMT
- Title: Towards End-to-End Neural Face Authentication in the Wild - Quantifying
and Compensating for Directional Lighting Effects
- Authors: Viktor Varkarakis, Wang Yao, Peter Corcoran
- Abstract summary: This work examines the effects of directional lighting on a state-of-the-art (SoA) neural face recognizer.
Top lighting and its variants are found to have minimal effect on accuracy, while bottom-left or bottom-right directional lighting has the most pronounced effects.
- Score: 2.4493299476776778
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent availability of low-power neural accelerator hardware, combined
with improvements in end-to-end neural facial recognition algorithms, provides
an enabling technology for on-device facial authentication. The present research
work examines the effects of directional lighting on a state-of-the-art (SoA) neural
face recognizer. A synthetic re-lighting technique is used to augment data
samples due to the lack of public data-sets with sufficient directional
lighting variations. Top lighting and its variants (top-left, top-right) are
found to have minimal effect on accuracy, while bottom-left or bottom-right
directional lighting has the most pronounced effects. Following the fine-tuning
of network weights, the face recognition model is shown to achieve close to the
original Receiver Operating Characteristic (ROC) curve performance across all
lighting conditions and demonstrates an ability to generalize beyond the
lighting augmentations used in the fine-tuning data-set. This work shows that
an SoA neural face recognition model can be tuned to compensate for directional
lighting effects, removing the need for a pre-processing step before applying
facial recognition.
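The ROC evaluation described in the abstract compares similarity scores of genuine (same-identity) and impostor (different-identity) face-embedding pairs. A minimal sketch of that evaluation is given below; the embeddings and score distributions are synthetic stand-ins for illustration, not the paper's model or data:

```python
import numpy as np

def cosine_scores(emb_a, emb_b):
    # Cosine similarity between row-aligned pairs of face embeddings.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

def roc_auc(genuine, impostor):
    # Area under the ROC curve, computed as the probability that a random
    # genuine score exceeds a random impostor score (ties count as 0.5).
    g = np.asarray(genuine, dtype=float)[:, None]
    i = np.asarray(impostor, dtype=float)[None, :]
    return float(np.mean((g > i) + 0.5 * (g == i)))

# Synthetic score distributions standing in for a recognizer's outputs
# under some lighting condition (assumed values, not from the paper).
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 1000)   # same-identity pair scores
impostor = rng.normal(0.2, 0.1, 1000)  # different-identity pair scores
print(roc_auc(genuine, impostor))      # close to 1.0 when scores separate well
```

Comparing such AUC values (or full ROC curves) per lighting direction, before and after fine-tuning, is one way to quantify the accuracy degradation and recovery the abstract reports.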
Related papers
- Unsupervised Low-light Image Enhancement with Lookup Tables and Diffusion Priors [38.96909959677438]
Low-light image enhancement (LIE) aims at precisely and efficiently recovering an image degraded in poor illumination environments.
Recent advanced LIE techniques use deep neural networks, which require large numbers of paired low-/normal-light images, network parameters, and computational resources.
We devise a novel unsupervised LIE framework based on diffusion priors and lookup tables to achieve efficient low-light image recovery.
arXiv Detail & Related papers (2024-09-27T16:37:27Z)
- Generalizable Non-Line-of-Sight Imaging with Learnable Physical Priors [52.195637608631955]
Non-line-of-sight (NLOS) imaging has attracted increasing attention due to its potential applications.
Existing NLOS reconstruction approaches are constrained by the reliance on empirical physical priors.
We introduce a novel learning-based solution comprising two key designs: Learnable Path Compensation (LPC) and Adaptive Phasor Field (APF).
arXiv Detail & Related papers (2024-09-21T04:39:45Z)
- ALEN: A Dual-Approach for Uniform and Non-Uniform Low-Light Image Enhancement [6.191556429706728]
Inadequate illumination can lead to significant information loss and poor image quality, impacting various applications such as surveillance.
Current enhancement techniques often use specific datasets to enhance low-light images, but still present challenges when adapting to diverse real-world conditions.
The Adaptive Light Enhancement Network (ALEN) is introduced, whose main approach is the use of a classification mechanism to determine whether local or global illumination enhancement is required.
arXiv Detail & Related papers (2024-07-29T05:19:23Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields, building on TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient Illumination [28.433403714053103]
We propose a framework named neural ambient illumination (NeAI).
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate the superior performance of novel-view rendering compared to previous works.
arXiv Detail & Related papers (2023-04-18T06:32:30Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Toward Fast, Flexible, and Robust Low-Light Image Enhancement [87.27326390675155]
We develop a new Self-Calibrated Illumination (SCI) learning framework for fast, flexible, and robust brightening images in real-world low-light scenarios.
To limit the computational burden of the cascaded pattern, we construct a self-calibrated module that enforces convergence between the results of each stage.
We make comprehensive explorations to SCI's inherent properties including operation-insensitive adaptability and model-irrelevant generality.
arXiv Detail & Related papers (2022-04-21T14:40:32Z)
- LEGAN: Disentangled Manipulation of Directional Lighting and Facial Expressions by Leveraging Human Perceptual Judgements [7.5603864775031004]
We propose LEGAN, a novel synthesis framework that leverages perceptual quality judgments for jointly manipulating lighting and expressions in face images.
LEGAN disentangles the lighting and expression subspaces and performs transformations in the feature space before upscaling to the desired output image.
We also conduct a perceptual study using images synthesized by LEGAN and other GAN models and show the correlation between our quality estimation and visual fidelity.
arXiv Detail & Related papers (2020-10-04T01:56:54Z)
- Learning Flow-based Feature Warping for Face Frontalization with Illumination Inconsistent Supervision [73.18554605744842]
Flow-based Feature Warping Model (FFWM) learns to synthesize photo-realistic and illumination preserving frontal images.
An Illumination Preserving Module (IPM) is proposed to learn illumination preserving image synthesis.
A Warp Attention Module (WAM) is introduced to reduce the pose discrepancy in the feature level.
arXiv Detail & Related papers (2020-08-16T06:07:00Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.