Identity Deepfake Threats to Biometric Authentication Systems: Public and Expert Perspectives
- URL: http://arxiv.org/abs/2506.06825v1
- Date: Sat, 07 Jun 2025 15:02:23 GMT
- Title: Identity Deepfake Threats to Biometric Authentication Systems: Public and Expert Perspectives
- Authors: Shijing He, Yaxiong Lei, Zihan Zhang, Yuzhou Sun, Shujun Li, Chi Zhang, Juan Ye
- Abstract summary: Generative AI (Gen-AI) deepfakes pose a rapidly evolving threat to biometric authentication. Experts express grave concerns about the spoofing of static modalities like face and voice recognition. We introduce a novel Deepfake Kill Chain model to map the specific attack vectors used by malicious actors against biometric systems.
- Score: 18.51811906909478
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative AI (Gen-AI) deepfakes pose a rapidly evolving threat to biometric authentication, yet a significant gap exists between expert understanding of these risks and public perception. This disconnect creates critical vulnerabilities in systems trusted by millions. To bridge this gap, we conducted a comprehensive mixed-method study, surveying 408 professionals across key sectors and conducting in-depth interviews with 37 participants (25 experts, 12 general public [non-experts]). Our findings reveal a paradox: while the public increasingly relies on biometrics for convenience, experts express grave concerns about the spoofing of static modalities like face and voice recognition. We found significant demographic and sector-specific divides in awareness and trust, with finance professionals, for example, showing heightened skepticism. To systematically analyze these threats, we introduce a novel Deepfake Kill Chain model, adapted from Hutchins et al.'s cyber kill chain framework, to map the specific attack vectors used by malicious actors against biometric systems. Based on this model and our empirical findings, we propose a tri-layer mitigation framework that prioritizes dynamic biometric signals (e.g., eye movements), robust privacy-preserving data governance, and targeted educational initiatives. This work provides the first empirically grounded roadmap for defending against AI-generated identity threats by aligning technical safeguards with human-centered insights.
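The abstract names the Deepfake Kill Chain but does not enumerate its stages. The sketch below is purely illustrative: it assumes stages adapted directly from Hutchins et al.'s Cyber Kill Chain and maps hypothetical biometric attack vectors onto them; none of the stage names or vectors are taken from the paper itself.

```python
from enum import Enum
from dataclasses import dataclass

class KillChainStage(Enum):
    """Stages adapted from Hutchins et al.'s Cyber Kill Chain.

    The paper's Deepfake Kill Chain is not enumerated in the abstract;
    these stages are an assumption for illustration only.
    """
    RECONNAISSANCE = 1         # harvest the target's face/voice samples from public media
    WEAPONIZATION = 2          # train a Gen-AI model on the harvested samples
    DELIVERY = 3               # present the deepfake to the sensor or API
    EXPLOITATION = 4           # spoof the matcher into a false accept
    ACTIONS_ON_OBJECTIVES = 5  # account takeover, fraud, impersonation

@dataclass
class AttackVector:
    stage: KillChainStage
    modality: str        # e.g. "face", "voice", "eye movement"
    description: str

# Hypothetical mapping of example attack vectors to stages.
EXAMPLE_VECTORS = [
    AttackVector(KillChainStage.RECONNAISSANCE, "face",
                 "scrape social-media photos of the target"),
    AttackVector(KillChainStage.WEAPONIZATION, "voice",
                 "clone the target's voice from a short audio clip"),
    AttackVector(KillChainStage.DELIVERY, "face",
                 "replay a synthesized video to a liveness check"),
]

def vectors_at(stage: KillChainStage) -> list[AttackVector]:
    """Return the catalogued vectors for one kill-chain stage."""
    return [v for v in EXAMPLE_VECTORS if v.stage is stage]

if __name__ == "__main__":
    for v in vectors_at(KillChainStage.RECONNAISSANCE):
        print(v.modality, "-", v.description)
```

A staged catalogue like this is what lets defenses be matched to the earliest feasible stage, e.g., limiting public sample exposure at reconnaissance rather than only hardening the matcher at exploitation.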
Related papers
- Identity Theft in AI Conference Peer Review [50.18240135317708]
We discuss newly uncovered cases of identity theft in the scientific peer-review process within artificial intelligence (AI) research. We detail how dishonest researchers exploit the peer-review system by creating fraudulent reviewer profiles to manipulate paper evaluations.
arXiv Detail & Related papers (2025-08-06T02:36:52Z)
- Beyond Vulnerabilities: A Survey of Adversarial Attacks as Both Threats and Defenses in Computer Vision Systems [5.787505062263962]
Adversarial attacks against computer vision systems have emerged as a critical research area that challenges the fundamental assumptions about neural network robustness and security. This comprehensive survey examines the evolving landscape of adversarial techniques, revealing their dual nature as both sophisticated security threats and valuable defensive tools.
arXiv Detail & Related papers (2025-08-03T17:02:05Z)
- Real-Time Detection of Insider Threats Using Behavioral Analytics and Deep Evidential Clustering [0.0]
We propose a novel framework for real-time detection of insider threats using behavioral analytics combined with deep evidential clustering. Our system captures and analyzes user activities, applies context-rich behavioral features, and classifies potential threats. We evaluate our framework on benchmark insider threat datasets such as CERT and TWOS, achieving an average detection accuracy of 94.7% and a 38% reduction in false positives compared to traditional clustering methods.
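The summary names deep evidential clustering without specifying its formulation, so the sketch below only illustrates the general evidential idea: score per-sample uncertainty from predicted Dirichlet evidence (the standard evidential-deep-learning recipe) and flag behavior windows whose cluster assignment is too uncertain. The function names, threshold, and toy data are hypothetical, not the paper's.

```python
import numpy as np

def evidential_uncertainty(evidence: np.ndarray) -> np.ndarray:
    """Per-sample uncertainty from non-negative class evidence.

    Standard evidential-deep-learning formulation: Dirichlet
    parameters alpha = evidence + 1, uncertainty u = K / sum(alpha),
    where K is the number of classes. u lies in (0, 1]; higher means
    the model has little evidence either way.
    """
    alpha = evidence + 1.0                     # Dirichlet parameters, shape (n, K)
    return alpha.shape[1] / alpha.sum(axis=1)  # u = K / S

def flag_for_review(evidence: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Flag behavior windows whose cluster assignment is too uncertain."""
    return evidential_uncertainty(evidence) > threshold

# Toy example: 3 behavior windows scored against K=4 behavior clusters.
evidence = np.array([
    [9.0, 0.1, 0.2, 0.1],   # strong evidence for cluster 0 -> low uncertainty
    [0.5, 0.6, 0.4, 0.5],   # weak, spread-out evidence -> high uncertainty
    [0.0, 0.0, 7.5, 0.3],   # strong evidence for cluster 2 -> low uncertainty
])
print(flag_for_review(evidence))  # [False  True False]
```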
arXiv Detail & Related papers (2025-05-21T11:21:33Z)
- Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems [13.830575255066773]
Face recognition pipelines have been widely deployed in mission-critical systems for trustworthy, equitable, and responsible AI applications.
The emergence of adversarial attacks has threatened the security of the entire recognition pipeline.
We propose an effective yet easy-to-launch physical adversarial attack, named AdvColor, against black-box face recognition pipelines.
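The summary does not describe AdvColor's mechanism, so the sketch below is not a reconstruction of it; it only illustrates the black-box threat model the paper targets, via a generic query-based random search that perturbs a global color tint until a stand-in match score drops below an acceptance threshold. The pipeline stub and all parameters are invented for illustration.

```python
import numpy as np

def match_score(image: np.ndarray) -> float:
    """Stand-in for a black-box face recognition pipeline returning a
    match score in [0, 1]. In the black-box setting this is the only
    access the adversary has: query in, score out."""
    return float(np.clip(1.0 - np.abs(image.mean() - 0.5), 0.0, 1.0))

def random_search_tint_attack(image, threshold=0.8, queries=200, step=0.05, rng=None):
    """Generic query-based black-box attack (not AdvColor itself):
    randomly perturb a per-channel color tint and keep any change that
    lowers the pipeline's match score, stopping once it would reject."""
    rng = rng or np.random.default_rng(0)
    tint = np.zeros(3)
    best = match_score(np.clip(image + tint, 0, 1))
    for _ in range(queries):
        candidate = tint + rng.normal(0.0, step, size=3)
        score = match_score(np.clip(image + candidate, 0, 1))
        if score < best:        # greedy: keep tints that hurt the match
            tint, best = candidate, score
        if best < threshold:    # pipeline would now reject the probe
            break
    return tint, best

img = np.full((32, 32, 3), 0.5)  # toy probe image
tint, score = random_search_tint_attack(img)
print(f"final score {score:.3f} with tint {tint.round(3)}")
```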
arXiv Detail & Related papers (2024-07-11T13:58:09Z)
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in general-purpose AI systems (GPAIS).
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities of FRS to adversarial attacks (e.g., adversarial patches) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- Eroding Trust In Aerial Imagery: Comprehensive Analysis and Evaluation of Adversarial Attacks in Geospatial Systems [24.953306643091484]
We show how adversarial attacks can degrade confidence in geospatial systems.
We empirically demonstrate the threat to remote sensing systems using high-quality SpaceNet datasets.
arXiv Detail & Related papers (2023-12-12T16:05:12Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
The vulnerability of deep neural networks to adversarial perturbations is widely recognized in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition of natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
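Implementing the Krawtchouk decomposition is beyond a short sketch, so the example below plainly swaps in a 2-D DCT in place of the paper's Krawtchouk basis to convey the spatial-frequency intuition: adversarial perturbations tend to add high-frequency energy that natural images lack, so an energy-ratio statistic can separate the two. The cutoff and threshold are assumptions that would be calibrated on clean data.

```python
import numpy as np
from scipy.fft import dctn

def high_freq_ratio(gray: np.ndarray, cutoff: int = 8) -> float:
    """Fraction of spectral energy outside the low-frequency corner
    of the 2-D DCT. Broadband adversarial noise typically raises this."""
    spec = dctn(gray, norm="ortho") ** 2
    total = spec.sum()
    low = spec[:cutoff, :cutoff].sum()
    return float((total - low) / total)

def looks_adversarial(gray: np.ndarray, threshold: float = 0.05) -> bool:
    """Crude detector: flag images with unusually heavy high-frequency
    energy. The threshold would be calibrated on clean data."""
    return high_freq_ratio(gray) > threshold

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # smooth image
noisy = clean + rng.normal(0, 0.1, clean.shape)                 # "perturbed"
print(looks_adversarial(clean), looks_adversarial(noisy))       # False True
```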
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to knowledge graph reasoning (KGR) according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoned knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
- Differential Anomaly Detection for Facial Images [15.54185745912878]
Identity attacks pose a serious security threat as they can be used to gain unauthorised access and spread misinformation.
Most algorithms for detecting identity attacks generalise poorly to attack types that are unknown at training time.
We introduce a differential anomaly detection framework in which deep face embeddings are first extracted from pairs of images.
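A minimal sketch of the differential idea as described in the summary: embed both images of an (enrolment, probe) pair and score the pair by how far the embedding difference deviates from differences observed for bona fide pairs. The random-projection "network" and the simple distance-to-mean score are stand-ins; the paper's actual embedding model and anomaly scorer are not given in the summary.

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(256, 64))  # fixed "network" weights

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for a deep face embedding network; here a fixed random
    projection of a 16x16 image, for illustration only."""
    v = image.ravel() @ W
    return v / np.linalg.norm(v)

def differential_score(reference, probe, mean_diff) -> float:
    """Anomaly score for an (enrolment, probe) pair: distance of the
    embedding difference from the mean difference of bona fide pairs.
    A real system would fit a richer one-class model here."""
    d = embed(reference) - embed(probe)
    return float(np.linalg.norm(d - mean_diff))

# Calibrate on bona fide pairs: same face, mild capture noise.
faces = [rng.random((16, 16)) for _ in range(50)]
diffs = [embed(f) - embed(f + rng.normal(0, 0.01, f.shape)) for f in faces]
mean_diff = np.mean(diffs, axis=0)

genuine = differential_score(faces[0], faces[0] + rng.normal(0, 0.01, (16, 16)), mean_diff)
attack = differential_score(faces[0], rng.random((16, 16)), mean_diff)  # identity attack
print(genuine < attack)  # True: the swapped-in probe deviates far more
```

Because the score is learned from bona fide pair behavior rather than from examples of specific attacks, a detector of this shape degrades more gracefully on attack types unseen at training time, which is the generalization gap the paper targets.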
arXiv Detail & Related papers (2021-10-07T13:45:13Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
This paper is rooted in two observations: (i) the robustness of unsupervised domain adaptation (UDA) methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision (e.g., rotation and jigsaw) benefits image tasks such as classification and recognition, it fails to provide the critical supervision signals needed to learn discriminative representations for segmentation tasks.
We propose adversarial self-supervision UDA (or ASSUDA), which maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
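To make the output-space agreement objective concrete, here is a hedged sketch using an InfoNCE-style contrastive loss, one common instantiation and not necessarily the paper's exact loss: clean and adversarial predictions of the same sample form positive pairs, while all other predictions in the batch serve as negatives.

```python
import numpy as np

def info_nce_agreement(clean_logits: np.ndarray, adv_logits: np.ndarray,
                       temperature: float = 0.1) -> float:
    """Contrastive agreement loss in the output space (InfoNCE-style
    illustration). Each clean prediction should be most similar to the
    adversarial prediction of the *same* sample (positive pair) and
    dissimilar to all others in the batch (negatives)."""
    a = clean_logits / np.linalg.norm(clean_logits, axis=1, keepdims=True)
    b = adv_logits / np.linalg.norm(adv_logits, axis=1, keepdims=True)
    sim = a @ b.T / temperature                 # (n, n) similarity matrix
    sim -= sim.max(axis=1, keepdims=True)       # numerical stability
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))  # positives on the diagonal

# Toy check: matched clean/adversarial outputs yield a lower loss than mismatched.
rng = np.random.default_rng(0)
clean = rng.normal(size=(8, 19))                # e.g. 19 segmentation classes
matched = clean + rng.normal(0, 0.01, clean.shape)
shuffled = rng.permutation(matched)
print(info_nce_agreement(clean, matched) < info_nce_agreement(clean, shuffled))  # True
```

Minimizing this loss pushes the model to give the same output on an image and its adversarial counterpart, which is the agreement objective the abstract describes.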
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
- Biometrics: Trust, but Verify [49.9641823975828]
Biometric recognition has exploded into a plethora of different applications around the globe.
There are a number of outstanding problems and concerns pertaining to the various sub-modules of biometric recognition systems.
arXiv Detail & Related papers (2021-05-14T03:07:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.