Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning
- URL: http://arxiv.org/abs/2408.14829v1
- Date: Tue, 27 Aug 2024 07:26:10 GMT
- Title: Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning
- Authors: Moritz Finke, Alexandra Dmitrienko
- Abstract summary: Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
- Score: 50.79277723970418
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Facial recognition systems have become an integral part of the modern world. These methods accomplish the task of human identification in an automatic, fast, and non-interfering way. Past research has uncovered high vulnerability to simple imitation attacks that could lead to erroneous identification and subsequent authentication of attackers. Similar to face recognition, imitation attacks can also be detected with Machine Learning. Attack detection systems use a variety of facial features and advanced machine learning models for uncovering the presence of attacks. In this work, we assess existing work on liveness detection and propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
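The abstract names two ingredients: rotation-invariant local binary patterns (LBP) as a texture feature and time-aware deep learning over video frames. As a rough illustration of the first ingredient, below is a minimal NumPy sketch of rotation-invariant LBP; the neighbourhood size P, radius R, and nearest-neighbour sampling are assumptions made for brevity, not the paper's stated configuration.

```python
import numpy as np

def rotation_invariant_lbp(img, P=8, R=1):
    """Rotation-invariant LBP: build a P-bit neighbour code for every
    interior pixel, then map each code to its minimum over all circular
    bit rotations so the code no longer depends on orientation."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    center = img[R:h - R, R:w - R]
    angles = 2.0 * np.pi * np.arange(P) / P
    code = np.zeros(center.shape, dtype=np.int32)
    for k in range(P):
        # Nearest-neighbour sampling keeps the sketch short; bilinear
        # interpolation is the more common choice in practice.
        oy = int(round(-R * np.sin(angles[k])))
        ox = int(round(R * np.cos(angles[k])))
        neigh = img[R + oy:h - R + oy, R + ox:w - R + ox]
        code |= (neigh >= center).astype(np.int32) << k
    mask = (1 << P) - 1
    ri_code = code.copy()
    for r in range(1, P):
        rotated = ((code >> r) | (code << (P - r))) & mask
        ri_code = np.minimum(ri_code, rotated)
    return ri_code
```

A histogram of these codes (36 distinct values exist for P=8) yields a fixed-length descriptor per frame. One plausible reading of "time-aware deep learning" is a recurrent model over such per-frame descriptors; the PyTorch classifier below is a hypothetical sketch of that idea, not the architecture described in the paper.

```python
import torch
import torch.nn as nn

class TimeAwareSpoofClassifier(nn.Module):
    """Hypothetical time-aware model: an LSTM reads one LBP histogram
    per video frame; a linear head outputs bona-fide vs. attack logits."""
    def __init__(self, hist_bins=36, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(hist_bins, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):            # x: (batch, frames, hist_bins)
        _, (h_n, _) = self.lstm(x)   # final hidden state summarises the clip
        return self.head(h_n[-1])
```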
Related papers
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and are not limited to forgery-specific artifacts, and thus generalize better.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies produce highly realistic faces, which has raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Detect & Reject for Transferability of Black-box Adversarial Attacks Against Network Intrusion Detection Systems [0.0]
We investigate the transferability of adversarial network traffic against machine learning-based intrusion detection systems.
We examine Detect & Reject as a defensive mechanism to limit the effect of this transferability property.
arXiv Detail & Related papers (2021-12-22T17:54:54Z)
- Differential Anomaly Detection for Facial Images [15.54185745912878]
Identity attacks pose a serious security threat, as they can be used to gain unauthorised access and to spread misinformation.
Most algorithms for detecting identity attacks generalise poorly to attack types that are unknown at training time.
We introduce a differential anomaly detection framework in which deep face embeddings are first extracted from pairs of images (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-10-07T13:45:13Z)
- Detection and Continual Learning of Novel Face Presentation Attacks [23.13064343026656]
State-of-the-art face anti-spoofing systems are still vulnerable to novel types of attacks that are never seen during training.
In this paper, we enable a deep neural network to detect anomalies in the observed input data points as potential new types of attacks.
We then use experience replay to update the model to incorporate knowledge about new types of attacks without forgetting the past learned attack types.
arXiv Detail & Related papers (2021-08-27T01:33:52Z)
- Fortify Machine Learning Production Systems: Detect and Classify Adversarial Attacks [0.0]
In this work, we propose one piece of the production protection system: detecting an incoming adversarial attack and its characteristics.
The underlying model can be trained in a structured manner to be robust against such attacks.
The adversarial image classification space is explored for models commonly used in transfer learning.
arXiv Detail & Related papers (2021-02-19T00:47:16Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network (GAN)-based architecture to generate semantically plausible, high-quality adversarial gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (User and Entity Behaviour Analytics, UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
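The differential anomaly detection entry above extracts deep face embeddings from pairs of images; the sketch below illustrates one common way to realize the "differential" step, scoring the embedding difference against a distribution fitted on genuine pairs. The Mahalanobis scoring and the fitted parameters mu / inv_cov are assumptions for illustration, not the paper's actual detector.

```python
import numpy as np

def pair_anomaly_score(ref_emb, probe_emb, mu, inv_cov):
    """Mahalanobis distance of the embedding difference from the
    distribution of genuine-pair differences; mu and inv_cov are assumed
    to be fitted beforehand on bona-fide reference/probe pairs."""
    d = np.asarray(ref_emb) - np.asarray(probe_emb)
    return float(np.sqrt((d - mu) @ inv_cov @ (d - mu)))
```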
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.