Differential Anomaly Detection for Facial Images
- URL: http://arxiv.org/abs/2110.03464v1
- Date: Thu, 7 Oct 2021 13:45:13 GMT
- Title: Differential Anomaly Detection for Facial Images
- Authors: Mathias Ibsen, Lázaro J. González-Soler, Christian Rathgeb, Pawel Drozdowski, Marta Gomez-Barrero, Christoph Busch
- Abstract summary: Identity attacks pose a serious security threat as they can be used to gain unauthorised access and spread misinformation.
Most algorithms for detecting identity attacks generalise poorly to attack types that are unknown at training time.
We introduce a differential anomaly detection framework in which deep face embeddings are first extracted from pairs of images.
- Score: 15.54185745912878
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to their convenience and high accuracy, face recognition systems are
widely employed in governmental and personal security applications to
automatically recognise individuals. Despite recent advances, face recognition
systems have been shown to be particularly vulnerable to identity attacks (i.e.,
digital manipulations and attack presentations). Identity attacks pose a serious
security threat as they can be used to gain unauthorised access and spread
misinformation. In this context, most algorithms for detecting identity attacks
generalise poorly to attack types that are unknown at training time. To tackle
this problem, we introduce a differential anomaly detection framework in which
deep face embeddings are first extracted from pairs of images (i.e., reference
and probe) and then combined for identity attack detection. The experimental
evaluation conducted over several databases shows a high generalisation
capability of the proposed method for detecting unknown attacks in both the
digital and physical domains.
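The abstract only outlines the pipeline: deep face embeddings are extracted from a reference/probe pair, combined into a single feature, and scored by an anomaly detector trained on bona fide data. The minimal Python sketch below shows one way such a differential detector could be assembled; the concatenation-plus-difference combination rule, the OneClassSVM detector, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a differential anomaly-detection pipeline for identity
# attack detection. Embeddings are assumed to come from any pre-trained
# face recognition model; the combination rule and one-class detector are
# illustrative choices, not the paper's exact method.
import numpy as np
from sklearn.svm import OneClassSVM


def combine_embeddings(reference: np.ndarray, probe: np.ndarray) -> np.ndarray:
    """Fuse a reference/probe embedding pair into one differential feature.

    Concatenating both embeddings with their element-wise difference is one
    plausible combination; the abstract only states that the pair is combined.
    """
    return np.concatenate([reference, probe, reference - probe])


def train_detector(bona_fide_pairs):
    """Fit a one-class model on bona fide pairs only, so attack types unseen
    at training time surface as anomalies at test time."""
    features = np.stack([combine_embeddings(r, p) for r, p in bona_fide_pairs])
    detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
    detector.fit(features)
    return detector


def is_identity_attack(detector, reference, probe) -> bool:
    """OneClassSVM.predict returns -1 for outliers, interpreted here as a
    suspected identity attack (digital manipulation or attack presentation)."""
    feature = combine_embeddings(reference, probe).reshape(1, -1)
    return detector.predict(feature)[0] == -1
```

Training only on bona fide reference/probe pairs is what gives this setup its generalisation to unknown attacks: any attack type, known or not, is simply whatever the one-class model flags as anomalous.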
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- TetraLoss: Improving the Robustness of Face Recognition against Morphing Attacks [7.092869001331781]
Face recognition systems are widely deployed in high-security applications.
Digital manipulations, such as face morphing, pose a security threat to face recognition systems.
We present a novel method for adapting deep learning-based face recognition systems to be more robust against face morphing attacks.
arXiv Detail & Related papers (2024-01-21T21:04:05Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies produce highly realistic faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- A Novel Active Solution for Two-Dimensional Face Presentation Attack Detection [0.0]
We survey the state of the art to cover the challenges and solutions related to presentation attack detection.
A presentation attack is an attempt to present a non-live face, such as a photo, video, mask, or makeup, to the camera.
We introduce an efficient active presentation attack detection approach that overcomes weaknesses in the existing literature.
arXiv Detail & Related papers (2022-12-14T00:30:09Z)
- Face Presentation Attack Detection [59.05779913403134]
Face recognition technology has been widely used in daily interactive applications such as check-in and mobile payment.
However, its vulnerability to presentation attacks (PAs) limits its reliable use in ultra-secure application scenarios.
arXiv Detail & Related papers (2022-12-07T14:51:17Z)
- Psychophysical Evaluation of Human Performance in Detecting Digital Face Image Manipulations [14.63266615325105]
This work introduces a web-based, remote visual discrimination experiment on the basis of principles adopted from the field of psychophysics.
We examine human proficiency in detecting different types of digitally manipulated face images, specifically face swapping, morphing, and retouching.
arXiv Detail & Related papers (2022-01-28T12:45:33Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a protection success rate of over 95% against various state-of-the-art face recognition models; a generic sketch of this kind of iterative mask generation follows after this entry.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
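As referenced in the TIP-IM entry above, generating an adversarial identity mask amounts to iteratively perturbing an image so that a face-embedding network no longer matches the original identity. The PGD-style sketch below is a generic illustration under that assumption, not the TIP-IM algorithm; the `embed` network, step sizes, and cosine-similarity loss are all placeholders.

```python
# Generic PGD-style sketch of building an adversarial "identity mask".
# `embed` is assumed to be a differentiable face-embedding network that maps
# an image tensor to an identity embedding; it is NOT part of TIP-IM.
import torch
import torch.nn.functional as F


def generate_identity_mask(embed, image, steps=10, eps=8 / 255, alpha=2 / 255):
    """Return a bounded perturbation for `image` (a float tensor in [0, 1]).

    Each step decreases the cosine similarity between the perturbed image's
    embedding and the original embedding, so recognition systems stop
    matching the protected image to its true identity.
    """
    original = embed(image).detach()
    mask = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        protected = (image + mask).clamp(0, 1)
        # Similarity to the original identity; we step downhill on it.
        loss = F.cosine_similarity(embed(protected), original, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            mask -= alpha * mask.grad.sign()   # signed gradient step
            mask.clamp_(-eps, eps)             # keep the mask imperceptible
        mask.grad.zero_()
    return mask.detach()
```

A targeted variant would additionally pull the embedding towards a chosen surrogate identity; the untargeted form above only pushes it away from the true one.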