Neural Fuzzy Extractors: A Secure Way to Use Artificial Neural Networks
for Biometric User Authentication
- URL: http://arxiv.org/abs/2003.08433v2
- Date: Tue, 19 Dec 2023 00:22:29 GMT
- Title: Neural Fuzzy Extractors: A Secure Way to Use Artificial Neural Networks
for Biometric User Authentication
- Authors: Abhishek Jana, Md Kamruzzaman Sarker, Monireh Ebrahimi, Pascal
Hitzler, George T Amariucai
- Abstract summary: Biometric user authentication (and identification) is rapidly becoming ubiquitous.
Modern approaches to biometric authentication, based on machine learning techniques, cannot avoid storing either trained-classifier details or explicit user biometric data.
We introduce a secure way to handle user-specific information involved with the use of vector-space classifiers or artificial neural networks for biometric authentication.
- Score: 2.0118004993739067
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Powered by new advances in sensor development and artificial intelligence,
the decreasing cost of computation, and the pervasiveness of handheld
computation devices, biometric user authentication (and identification) is
rapidly becoming ubiquitous. Modern approaches to biometric authentication,
based on sophisticated machine learning techniques, cannot avoid storing either
trained-classifier details or explicit user biometric data, thus exposing
users' credentials to falsification. In this paper, we introduce a secure way
to handle user-specific information involved with the use of vector-space
classifiers or artificial neural networks for biometric authentication. Our
proposed architecture, called a Neural Fuzzy Extractor (NFE), allows the
coupling of pre-existing classifiers with fuzzy extractors, through an
artificial-neural-network-based buffer called an expander, with minimal or no
performance degradation. The NFE thus offers all the performance advantages of
modern deep-learning-based classifiers, and all the security of standard fuzzy
extractors. We demonstrate the NFE retrofit to a classic artificial neural
network for a simple scenario of fingerprint-based user authentication.
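The pipeline the abstract describes has three stages: a pre-existing classifier produces a feature vector, an expander network maps it to a reproducible bit string, and a standard fuzzy extractor turns that string into a key without storing the biometric itself. The following is a minimal numpy sketch of that shape only; the weights, dimensions, thresholding expander, and repetition-code fuzzy extractor are illustrative assumptions, not the authors' construction.
```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))   # frozen "classifier" feature layer (assumed)
P = rng.standard_normal((60, 64))    # hypothetical binarizing expander projection

def stable_bits(x):
    """Classifier embedding followed by an expander that emits a bit string."""
    return (P @ np.tanh(W @ x) > 0).astype(np.uint8)

# Code-offset fuzzy extractor with a 5x repetition code (toy secure sketch).
REP = 5

def enroll(bits):
    k = rng.integers(0, 2, size=bits.size // REP).astype(np.uint8)  # secret
    helper = bits ^ np.repeat(k, REP)        # public helper data
    return hashlib.sha256(k.tobytes()).hexdigest(), helper

def reproduce(bits, helper):
    noisy = (bits ^ helper).reshape(-1, REP)             # codeword + bit errors
    k = (noisy.sum(axis=1) > REP // 2).astype(np.uint8)  # majority decode
    return hashlib.sha256(k.tobytes()).hexdigest()

x_enroll = rng.standard_normal(128)                     # enrollment biometric
x_probe = x_enroll + 0.05 * rng.standard_normal(128)    # noisy probe, same user

key, helper = enroll(stable_bits(x_enroll))
print(key == reproduce(stable_bits(x_probe), helper))   # True if noise absorbed
```
A real deployment would replace the repetition code with a proper secure sketch and train the expander so that same-user embeddings agree bitwise.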
Related papers
- Real-world Edge Neural Network Implementations Leak Private Interactions Through Physical Side Channel [7.693037237501675]
We introduce a generic physical side-channel attack, ScaAR, that extracts user interactions with neural networks by leveraging electromagnetic (EM) emissions of physical devices.
Our proposed attack is implementation-agnostic, meaning it does not require the adversary to possess detailed knowledge of the hardware or software implementations.
arXiv Detail & Related papers (2025-01-24T14:15:51Z)
- Evaluating Single Event Upsets in Deep Neural Networks for Semantic Segmentation: an embedded system perspective [1.474723404975345]
This paper examines the robustness of embedded Deep Neural Networks (DNNs).
By scrutinizing the layer-by-layer and bit-by-bit sensitivity of various encoder-decoder models to soft errors, this study thoroughly investigates the vulnerability of segmentation DNNs to SEUs.
We propose a set of practical, lightweight error mitigation techniques with no memory or computation overhead, suitable for resource-constrained deployments.
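Bit-by-bit sensitivity campaigns of this kind typically rest on single-bit fault injection into the IEEE-754 encoding of a weight; a minimal sketch of that primitive (an assumption about the setup, not the paper's code):
```python
import numpy as np

def flip_bit(weight, bit):
    """Flip one bit (0 = mantissa LSB, 31 = sign) of a float32 weight."""
    as_int = np.array([weight], dtype=np.float32).view(np.uint32)
    as_int ^= np.uint32(1 << bit)
    return float(as_int.view(np.float32)[0])

w = 0.75
for bit in (31, 30, 23, 0):   # sign, exponent MSB, exponent LSB, mantissa LSB
    print(f"bit {bit:2d}: {w} -> {flip_bit(w, bit)}")
# Exponent-bit flips typically dominate output corruption, which is what a
# layer-by-layer, bit-by-bit SEU campaign measures.
```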
arXiv Detail & Related papers (2024-12-04T18:28:38Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
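CSDP builds on forward-forward-style learning, in which each layer is trained with a purely local "goodness" objective and no cross-layer backpropagation; below is a minimal non-spiking numpy sketch of that idea (threshold, learning rate, and negative-sample recipe are assumptions, and the memristor and spiking aspects are omitted):
```python
import numpy as np

rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((32, 64))   # one layer, trained locally
THETA, LR = 2.0, 0.03                     # goodness threshold and step (assumed)

def layer(x):
    return np.maximum(0.0, W @ x)         # non-spiking ReLU stand-in

def local_update(x, positive):
    """Push this layer's goodness (mean squared activity) above THETA for
    positive data and below it for negative data, using only local signals."""
    global W
    h = layer(x)
    g = np.mean(h ** 2)
    p = 1.0 / (1.0 + np.exp(THETA - g))   # P(positive | goodness), logistic
    dg = (p - 1.0) if positive else p     # d(logistic loss)/d(goodness)
    W -= LR * dg * (2.0 / h.size) * np.outer(h, x)

x_pos = rng.standard_normal(64)           # "real" sample
x_neg = 0.2 * rng.standard_normal(64)     # negative sample (assumed recipe)
for _ in range(200):
    local_update(x_pos, True)
    local_update(x_neg, False)
print(np.mean(layer(x_pos) ** 2) > np.mean(layer(x_neg) ** 2))  # expect True
```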
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Biometrics Employing Neural Network [0.0]
Fingerprints, iris and retina patterns, facial recognition, hand shapes, palm prints, and voice recognition are frequently used forms of biometrics.
For systems to be effective and widely accepted, the error rate in recognition and verification must approach zero.
Artificial Neural Networks, which simulate the human brain's operations, present themselves as a promising approach.
arXiv Detail & Related papers (2024-02-01T03:59:04Z)
- AFR-Net: Attention-Driven Fingerprint Recognition Network [47.87570819350573]
We improve initial studies on the use of vision transformers (ViT) for biometric recognition, including fingerprint recognition.
We propose a realignment strategy that uses local embeddings extracted from intermediate feature maps within the networks to refine the global embeddings in low-certainty situations.
This strategy can be applied as a wrapper to any existing deep learning network (including attention-based, CNN-based, or both) to boost its performance.
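As a schematic reading of that wrapper (not the AFR-Net code), the refinement can be pictured as certainty-gated score fusion; the uncertainty band, blend weight, and embedding shapes below are all assumed:
```python
import numpy as np

LOW, HIGH, ALPHA = 0.3, 0.7, 0.5   # uncertainty band and blend weight (assumed)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_score(g1, g2, loc1, loc2):
    """Global cosine score, refined with local-embedding correspondences
    only when the global score falls in the low-certainty band."""
    score = cosine(g1, g2)
    if LOW < score < HIGH:
        sims = (loc1 @ loc2.T) / (
            np.linalg.norm(loc1, axis=1, keepdims=True)
            * np.linalg.norm(loc2, axis=1))
        score = (1 - ALPHA) * score + ALPHA * float(sims.max(axis=1).mean())
    return score

rng = np.random.default_rng(2)
g1, g2 = rng.standard_normal(256), rng.standard_normal(256)
loc1, loc2 = rng.standard_normal((49, 256)), rng.standard_normal((49, 256))
print(match_score(g1, g2, loc1, loc2))
```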
arXiv Detail & Related papers (2022-11-25T05:10:39Z)
- Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication [6.961253535504979]
We use a deep convolutional generative adversarial network (DC-GAN) to create adversarial samples.
We show that our deep learning model is surprisingly robust to such an attack scenario.
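A toy harness for this kind of robustness evaluation might measure the false-accept rate over generator output; the interfaces below are placeholders, not the paper's model or DC-GAN:
```python
import numpy as np

def false_accept_rate(authenticate, generate, n=1000, tau=0.5):
    """Fraction of generator-made gesture samples the model accepts;
    a low value indicates robustness to the attack."""
    return float(np.mean(authenticate(generate(n)) >= tau))

# Toy stand-ins: accept samples close to an enrolled template; the "GAN"
# here is just a noise source, unlike the trained DC-GAN in the paper.
rng = np.random.default_rng(6)
template = rng.standard_normal(100)
authenticate = lambda X: 1.0 / (1.0 + np.linalg.norm(X - template, axis=1))
generate = lambda n: rng.standard_normal((n, 100))
print(false_accept_rate(authenticate, generate))   # near 0 for this toy
```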
arXiv Detail & Related papers (2021-10-03T00:15:50Z)
- Towards a Safety Case for Hardware Fault Tolerance in Convolutional Neural Networks Using Activation Range Supervision [1.7968112116887602]
Convolutional neural networks (CNNs) have become an established part of numerous safety-critical computer vision applications.
We build a prototypical safety case for CNNs by demonstrating that range supervision represents a highly reliable fault detector.
We explore novel, non-uniform range restriction methods that effectively suppress the probability of silent data corruptions and uncorrectable errors.
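Range supervision of this kind is commonly implemented by recording per-layer activation bounds on fault-free data and clamping at inference; a minimal sketch under assumed shapes and a uniform restriction, rather than the paper's non-uniform variants:
```python
import numpy as np

class RangeSupervisor:
    """Records activation bounds on fault-free data, then clamps activations
    at inference; out-of-range values signal a likely hardware fault."""
    def __init__(self):
        self.lo, self.hi = np.inf, -np.inf

    def calibrate(self, act):
        self.lo = min(self.lo, float(act.min()))
        self.hi = max(self.hi, float(act.max()))

    def protect(self, act):
        faulty = (act < self.lo) | (act > self.hi)
        return np.clip(act, self.lo, self.hi), bool(faulty.any())

rs = RangeSupervisor()
rng = np.random.default_rng(3)
for _ in range(100):                  # calibration on clean activations
    rs.calibrate(rng.normal(0.0, 1.0, 512))

act = rng.normal(0.0, 1.0, 512)
act[7] = 3.4e38                       # exponent-bit-flip-like corruption
safe, flagged = rs.protect(act)
print(flagged, safe[7] <= rs.hi)      # detected and suppressed
```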
arXiv Detail & Related papers (2021-08-16T11:13:55Z)
- Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples.
Our code will be made open source so that future work can compare against it.
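The detection criterion fits in a few lines; in the sketch below the ASV scorer and neural vocoder are crude stand-ins (a high-frequency-energy score and a moving-average filter), chosen only so the toy runs end to end:
```python
import numpy as np

def detect_adversarial(audio, score, vocoder, tau=0.1):
    """Flag a sample as adversarial when re-synthesis through the vocoder
    shifts its score by more than tau (tau would be tuned on held-out data)."""
    return abs(score(audio) - score(vocoder(audio))) > tau

# Toy stand-ins: smoothing destroys a fragile high-frequency perturbation
# but barely changes already-smooth genuine audio.
smooth = lambda x: np.convolve(x, np.ones(8) / 8, mode="same")
score = lambda x: float(np.mean(np.diff(x) ** 2))   # toy scalar "ASV score"

rng = np.random.default_rng(4)
genuine = smooth(rng.standard_normal(16000))
adversarial = genuine + 0.3 * rng.standard_normal(16000)
print(detect_adversarial(genuine, score, smooth))       # expect False
print(detect_adversarial(adversarial, score, smooth))   # expect True
```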
arXiv Detail & Related papers (2021-07-01T08:58:16Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
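One schematic reading of ANV is train-time noise injected into hidden responses; the sketch below assumes a Gaussian form and scale, which may differ from the paper's formulation:
```python
import numpy as np

rng = np.random.default_rng(5)

def variable_forward(x, W, sigma=0.05, train=True):
    """Forward pass with artificial neural variability: hidden responses are
    jittered with Gaussian noise during training (an implicit regularizer)
    and left deterministic at test time. sigma is an assumed scale."""
    h = W @ x
    if train:
        h = h + sigma * rng.standard_normal(h.shape)
    return np.maximum(0.0, h)

W = 0.1 * rng.standard_normal((16, 32))
x = rng.standard_normal(32)
print(np.allclose(variable_forward(x, W, train=False),
                  variable_forward(x, W, train=False)))   # True: test is stable
```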
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also well suited to ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
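The rectified linear PSP kernel itself is simple: an input spike at time t_j contributes max(0, t - t_j) to the membrane potential thereafter. A toy sketch of a neuron's first spike time under this kernel (weights, threshold, and time grid are illustrative):
```python
import numpy as np

def rel_psp(t, t_spike):
    """Rectified linear postsynaptic potential: zero before the input spike,
    then growing linearly with time."""
    return np.maximum(0.0, t - t_spike)

def first_spike_time(w, t_in, theta=1.0, t_max=10.0, dt=0.01):
    """Time at which sum(w_j * rel_psp(t, t_j)) first crosses threshold theta
    (numerical scan over a toy time grid)."""
    for t in np.arange(0.0, t_max, dt):
        if float(w @ rel_psp(t, t_in)) >= theta:
            return t
    return None   # neuron stays silent

w = np.array([0.6, 0.4, 0.3])      # synaptic weights (illustrative)
t_in = np.array([0.5, 1.0, 1.5])   # presynaptic spike times
print(first_spike_time(w, t_in))   # earlier/stronger inputs -> earlier spike
```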
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.