Swap It Like Its Hot: Segmentation-based spoof attacks on eye-tracking images
- URL: http://arxiv.org/abs/2404.13827v1
- Date: Mon, 22 Apr 2024 01:59:48 GMT
- Title: Swap It Like Its Hot: Segmentation-based spoof attacks on eye-tracking images
- Authors: Anish S. Narkar, Brendan David-John
- Abstract summary: Biometric authentication is susceptible to spoofing through physical or digital manipulation.
Liveness detection classifies gaze data as real or fake, which is sufficient to detect physical presentation attacks.
We propose IrisSwap as a novel attack on gaze-based liveness detection.
- Score: 1.4732811715354455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video-based eye trackers capture the iris biometric and enable authentication to secure user identity. However, biometric authentication is susceptible to spoofing another user's identity through physical or digital manipulation. The current standard to identify physical spoofing attacks on eye-tracking sensors uses liveness detection. Liveness detection classifies gaze data as real or fake, which is sufficient to detect physical presentation attacks. However, such defenses cannot detect a spoofing attack when real eye image inputs are digitally manipulated to swap the iris pattern of another person. We propose IrisSwap as a novel attack on gaze-based liveness detection. IrisSwap allows attackers to segment and digitally swap in a victim's iris pattern to fool iris authentication. Both offline and online attacks produce gaze data that deceives the current state-of-the-art defense models at rates up to 58% and motivates the need to develop more advanced authentication methods for eye trackers.
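For intuition, below is a minimal sketch of a segmentation-and-swap pipeline in the spirit of IrisSwap. It is not the authors' implementation: it substitutes a simple Hough-circle iris localization for the learned segmentation the paper relies on, and the function names (`locate_iris`, `iris_swap`) are illustrative rather than taken from the paper.

```python
# Hedged sketch of an iris-swap attack: localize the iris in both images,
# crop the victim's iris, and blend it into the attacker's eye image.
import cv2
import numpy as np

def locate_iris(gray):
    """Return (x, y, r) of the most prominent circular region, or None."""
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=gray.shape[0],
        param1=100, param2=30, minRadius=10, maxRadius=80)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    return int(x), int(y), int(r)

def iris_swap(attacker_eye, victim_eye):
    """Paste the victim's iris region onto the attacker's eye image (BGR arrays)."""
    loc_a = locate_iris(cv2.cvtColor(attacker_eye, cv2.COLOR_BGR2GRAY))
    loc_v = locate_iris(cv2.cvtColor(victim_eye, cv2.COLOR_BGR2GRAY))
    if loc_a is None or loc_v is None:
        raise ValueError("iris not found in one of the images")
    xa, ya, ra = loc_a
    xv, yv, rv = loc_v
    # Crop the victim's iris and rescale it to the attacker's iris radius.
    patch = victim_eye[yv - rv:yv + rv, xv - rv:xv + rv]
    patch = cv2.resize(patch, (2 * ra, 2 * ra))
    # Circular mask so only iris pixels (not eyelid or sclera) are transferred.
    mask = np.zeros(patch.shape[:2], dtype=np.uint8)
    cv2.circle(mask, (ra, ra), ra, 255, -1)
    # Blend the victim's iris into the attacker's frame to reduce visible seams.
    return cv2.seamlessClone(patch, attacker_eye, mask, (xa, ya), cv2.NORMAL_CLONE)
```

An online variant of such an attack would apply the swap per frame of the eye-tracking video stream; per the abstract, both offline and online attacks deceive state-of-the-art liveness defenses at rates up to 58%.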
Related papers
- Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z)
- Biometrics Employing Neural Network [0.0]
Fingerprints, iris and retina patterns, facial recognition, hand shapes, palm prints, and voice recognition are frequently used forms of biometrics.
For systems to be effective and widely accepted, the error rate in recognition and verification must approach zero.
Artificial Neural Networks, which simulate the human brain's operations, present themselves as a promising approach.
arXiv Detail & Related papers (2024-02-01T03:59:04Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat) as the first attempt at 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat outperforms conventional attacks in both cross-detector transferability and robustness to defenses.
arXiv Detail & Related papers (2023-09-03T07:01:34Z)
- A Novel Active Solution for Two-Dimensional Face Presentation Attack Detection [0.0]
We survey the state of the art to cover the challenges and solutions related to presentation attack detection.
A presentation attack is an attempt to present a non-live face, such as a photo, video, mask, and makeup, to the camera.
We introduce an efficient active presentation attack detection approach that overcomes weaknesses in the existing literature.
arXiv Detail & Related papers (2022-12-14T00:30:09Z)
- Hierarchical Perceptual Noise Injection for Social Media Fingerprint Privacy Protection [106.5308793283895]
Fingerprint leakage from social media raises a strong desire for anonymizing shared images.
To guard against fingerprint leakage, adversarial attacks emerge as a solution by adding imperceptible perturbations to images.
We propose FingerSafe, a hierarchical perceptual protective noise injection framework to address the mentioned problems.
arXiv Detail & Related papers (2022-08-23T02:20:46Z)
- An Ensemble Model for Face Liveness Detection [2.322052136673525]
We present a passive method to detect face presentation attacks using an ensemble deep learning technique.
We propose an ensemble method where multiple features of the face and background regions are learned to predict whether the user is a bonafide or an attacker.
arXiv Detail & Related papers (2022-01-19T12:43:39Z)
- Direct attacks using fake images in iris verification [59.68607707427014]
A database of fake iris images has been created from real iris images of the BioSec baseline database.
We show that the system is vulnerable to direct attacks, pointing out the importance of having countermeasures.
arXiv Detail & Related papers (2021-10-30T05:01:06Z)
- Differential Anomaly Detection for Facial Images [15.54185745912878]
Identity attacks pose a big security threat as they can be used to gain unauthorised access and spread misinformation.
Most algorithms for detecting identity attacks generalise poorly to attack types that are unknown at training time.
We introduce a differential anomaly detection framework in which deep face embeddings are first extracted from pairs of images.
arXiv Detail & Related papers (2021-10-07T13:45:13Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
- An Overview of Fingerprint-Based Authentication: Liveness Detection and Beyond [0.0]
We focus on methods to detect physical liveness, defined as techniques that can be used to ensure that a living human user is attempting to authenticate on a system.
We analyze how effective these methods are at preventing attacks where a malicious entity tries to trick a fingerprint-based authentication system to accept a fake finger as a real one.
arXiv Detail & Related papers (2020-01-24T20:07:53Z)