On the Resilience of Biometric Authentication Systems against Random Inputs
- URL: http://arxiv.org/abs/2001.04056v2
- Date: Fri, 24 Jan 2020 03:00:27 GMT
- Title: On the Resilience of Biometric Authentication Systems against Random Inputs
- Authors: Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Mohamed Ali Kaafar
- Abstract summary: We assess the security of machine learning based biometric authentication systems against an attacker who submits uniform random inputs.
In particular, for one reconstructed biometric system with an average FPR of 0.03, the success rate was as high as 0.78.
- Score: 6.249167635929514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We assess the security of machine learning based biometric authentication
systems against an attacker who submits uniform random inputs, either as
feature vectors or raw inputs, in order to find an accepting sample of a target
user. The average false positive rate (FPR) of the system, i.e., the rate at
which an impostor is incorrectly accepted as the legitimate user, may be
interpreted as a measure of the success probability of such an attack. However,
we show that the success rate is often higher than the FPR. In particular, for
one reconstructed biometric system with an average FPR of 0.03, the success
rate was as high as 0.78. This has implications for the security of the system,
as an attacker with only the knowledge of the length of the feature space can
impersonate the user with less than 2 attempts on average. We provide detailed
analysis of why the attack is successful, and validate our results using four
different biometric modalities and four different machine learning classifiers.
Finally, we propose mitigation techniques that render such attacks ineffective,
with little to no effect on the accuracy of the system.
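To make the gap between the FPR and the random-input success rate concrete, here is a minimal sketch, not the paper's code: it trains a two-class verifier whose acceptance region is an unbounded half-space, so uniform random feature vectors are accepted far more often than genuine impostor samples. The Gaussian feature distributions, the dimension d = 64, and the scikit-learn logistic-regression classifier are illustrative assumptions (the paper evaluates four modalities and four classifiers). The mitigation at the end, adding uniform random vectors as extra negative training samples, is likewise one plausible defence in the spirit of the paper's mitigations, not necessarily the authors' exact technique.

```python
# Minimal sketch (not the authors' code) of the random-input attack on a
# reconstructed biometric verifier. All distributions and the classifier
# choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64  # assumed length of the feature space, features scaled to [0, 1]

# Enrolment: target user's samples vs. a population of other users.
user = rng.normal(0.6, 0.05, size=(200, d)).clip(0, 1)
others = rng.normal(0.4, 0.05, size=(200, d)).clip(0, 1)
X, y = np.vstack([user, others]), np.r_[np.ones(200), np.zeros(200)]
clf = LogisticRegression(max_iter=1000).fit(X, y)

# FPR: how often held-out impostor samples are (wrongly) accepted.
impostors = rng.normal(0.4, 0.05, size=(2000, d)).clip(0, 1)
fpr = (clf.predict(impostors) == 1).mean()

# Attack: uniform random vectors over the same feature space.
randoms = rng.uniform(0, 1, size=(2000, d))
asr = (clf.predict(randoms) == 1).mean()
print(f"FPR on impostors:          {fpr:.3f}")
print(f"random-input success rate: {asr:.3f}")  # typically far above FPR

# Hedged mitigation sketch: augment the negative class with uniform random
# vectors so the acceptance region no longer covers most of the cube.
X_aug = np.vstack([X, rng.uniform(0, 1, size=(200, d))])
y_aug = np.r_[y, np.zeros(200)]
hardened = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
print(f"success rate after mitigation:  "
      f"{(hardened.predict(randoms) == 1).mean():.3f}")
print(f"user accuracy after mitigation: "
      f"{(hardened.predict(user) == 1).mean():.3f}")
```

Since the attacker's attempts are independent, the number of submissions until the first acceptance is geometric with mean 1/p; for the abstract's success rate p = 0.78 this gives 1/0.78 ≈ 1.28, i.e., fewer than 2 attempts on average.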
Related papers
- EaTVul: ChatGPT-based Evasion Attack Against Software Vulnerability Detection [19.885698402507145]
Adversarial examples can exploit vulnerabilities within deep neural networks.
This study showcases the susceptibility of deep learning models to adversarial attacks, which can achieve a 100% attack success rate.
arXiv Detail & Related papers (2024-07-27T09:04:54Z)
- Corpus Poisoning via Approximate Greedy Gradient Descent [48.5847914481222]
We propose Approximate Greedy Gradient Descent, a new attack on dense retrieval systems based on the widely used HotFlip method for generating adversarial passages.
We show that our method achieves a high attack success rate on several datasets and using several retrievers, and can generalize to unseen queries and new domains.
arXiv Detail & Related papers (2024-06-07T17:02:35Z)
- DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z)
- t-EER: Parameter-Free Tandem Evaluation of Countermeasures and Biometric Comparators [27.452032643800223]
Presentation attack (spoofing) detection (PAD) typically operates alongside biometric verification to improve reliability in the face of spoofing attacks.
We introduce a new metric for the joint evaluation of PAD solutions operating in situ with biometric verification.
arXiv Detail & Related papers (2023-09-21T16:30:40Z)
- Untargeted Near-collision Attacks on Biometrics: Real-world Bounds and Theoretical Limits [0.0]
We focus on untargeted attacks that can be carried out both online and offline, and in both identification and verification modes.
We use the False Match Rate (FMR) and the False Positive Identification Rate (FPIR) to address the security of these systems.
Studying this metric space and the system parameters yields the complexity of untargeted attacks and the probability of a near-collision.
arXiv Detail & Related papers (2023-04-04T07:17:31Z)
- Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition [56.69875958980474]
This work considers approaches to defending learned systems and how security defenses result in performance inequities across different sub-populations.
We find that many methods that have been proposed can cause direct harm, like false rejection and unequal benefits from robustness training.
We present a comparison of equality between two rejection-based defenses: randomized smoothing and neural rejection, finding randomized smoothing more equitable due to the sampling mechanism for minority groups.
arXiv Detail & Related papers (2023-02-17T16:19:26Z)
- Analysis of Master Vein Attacks on Finger Vein Recognition Systems [42.63580709376905]
Finger vein recognition (FVR) systems have been commercially used, especially in ATMs, for customer verification.
It is essential to measure their robustness against various attack methods, especially when a hand-crafted FVR system is used without any countermeasure methods.
We are the first in the literature to introduce master vein attacks in which we craft a vein-looking image so that it can falsely match with as many identities as possible.
arXiv Detail & Related papers (2022-10-18T06:36:59Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., the single sample attack (SSA) and the triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Security and Privacy Enhanced Gait Authentication with Random Representation Learning and Digital Lockers [3.3549957463189095]
Gait data captured by inertial sensors have demonstrated promising results on user authentication.
Most existing approaches store the enrolled gait pattern insecurely for matching, posing critical security and privacy issues.
We present a gait cryptosystem that generates from gait data the random key for user authentication, meanwhile, secures the gait pattern.
arXiv Detail & Related papers (2021-08-05T06:34:42Z)
- Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking [83.48804199140758]
We propose a learning-to-mis-rank formulation to perturb the ranking of the system output.
We also perform a black-box attack by developing a novel multi-stage network architecture.
Our method can control the number of malicious pixels by using differentiable multi-shot sampling.
arXiv Detail & Related papers (2020-04-08T18:48:29Z)
- Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD-attack overcoming failures due to suboptimal step size and problems of the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.