Unknown Presentation Attack Detection against Rational Attackers
- URL: http://arxiv.org/abs/2010.01592v2
- Date: Fri, 2 Jul 2021 22:37:17 GMT
- Title: Unknown Presentation Attack Detection against Rational Attackers
- Authors: Ali Khodabakhsh, Zahid Akhtar
- Abstract summary: Presentation attack detection and multimedia forensics systems are still vulnerable to attacks in real-life settings.
Some of the challenges for existing solutions are the detection of unknown attacks, the ability to perform in adversarial settings, few-shot learning, and explainability.
A new optimization criterion is proposed and a set of requirements is defined for improving the performance of these systems in real-life settings.
- Score: 6.351869353952288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the impressive progress in the field of presentation attack detection
and multimedia forensics over the last decade, these systems are still
vulnerable to attacks in real-life settings. Some of the challenges for
existing solutions are the detection of unknown attacks, the ability to perform
in adversarial settings, few-shot learning, and explainability. In this study,
these limitations are approached by reliance on a game-theoretic view for
modeling the interactions between the attacker and the detector. Consequently,
a new optimization criterion is proposed and a set of requirements is defined
for improving the performance of these systems in real-life settings.
Furthermore, a novel detection technique is proposed using generator-based
feature sets that are not biased towards any specific attack species. To
further optimize the performance on known attacks, a new loss function coined
categorical margin maximization loss (C-marmax) is proposed which gradually
improves the performance against the most powerful attack. The proposed
approach provides a more balanced performance across known and unknown attacks
and achieves state-of-the-art performance in known and unknown attack detection
cases against rational attackers. Lastly, the few-shot learning potential of
the proposed approach is studied as well as its ability to provide pixel-level
explainability.
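The abstract names a categorical margin maximization loss (C-marmax) but does not give its formula here. As a rough illustration only, the following PyTorch-style sketch shows one way a categorical margin-maximization objective could look, assuming the goal is to widen the gap between the true-class logit and the strongest competing class (the "most powerful attack" in this simplified view); the function name and the margin hyperparameter are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a categorical margin-maximization loss in the
# spirit of the C-marmax idea described above. The paper's exact formulation
# may differ; `margin` and all names here are hypothetical.
import torch
import torch.nn.functional as F

def categorical_margin_loss(logits: torch.Tensor,
                            targets: torch.Tensor,
                            margin: float = 1.0) -> torch.Tensor:
    """logits: (batch, num_classes); targets: (batch,) integer class labels."""
    true_logit = logits.gather(1, targets.unsqueeze(1)).squeeze(1)
    # Mask out the true class, then take the strongest remaining class
    # (the hardest competitor for each sample).
    masked = logits.clone()
    masked.scatter_(1, targets.unsqueeze(1), float('-inf'))
    hardest_other = masked.max(dim=1).values
    # Hinge-style penalty whenever the desired margin is not yet reached,
    # so training pressure concentrates on the currently strongest attack class.
    return F.relu(margin - (true_logit - hardest_other)).mean()
```

In practice such a hinge term would likely be combined with a standard cross-entropy loss; that balance is a design choice and is not specified by the abstract.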
Related papers
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- MEAD: A Multi-Armed Approach for Evaluation of Adversarial Examples Detectors [24.296350262025552]
We propose a novel framework, called MEAD, for evaluating detectors based on several attack strategies.
Among them, we make use of three new objectives to generate attacks.
The proposed performance metric is based on the worst-case scenario (an illustrative worst-case aggregation sketch appears after this list).
arXiv Detail & Related papers (2022-06-30T17:05:45Z)
- Balancing detectability and performance of attacks on the control channel of Markov Decision Processes [77.66954176188426]
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the recent interest of the research community in adversarial and poisoning attacks applied to MDPs and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z)
- Detection and Continual Learning of Novel Face Presentation Attacks [23.13064343026656]
State-of-the-art face antispoofing systems are still vulnerable to novel types of attacks that are never seen during training.
In this paper, we enable a deep neural network to detect anomalies in the observed input data points as potential new types of attacks.
We then use experience replay to update the model so that it incorporates knowledge about new attack types without forgetting previously learned ones (an illustrative replay sketch appears after this list).
arXiv Detail & Related papers (2021-08-27T01:33:52Z)
- Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples [29.385242714424624]
Evaluating the robustness of machine-learning models to adversarial examples is a challenging problem.
We define a set of quantitative indicators which unveil common failures in the optimization of gradient-based attacks.
Our experimental analysis shows that the proposed indicators of failure can be used to visualize, debug and improve current adversarial robustness evaluations.
arXiv Detail & Related papers (2021-06-18T06:57:58Z)
- Selective and Features based Adversarial Example Detection [12.443388374869745]
Security-sensitive applications that rely on Deep Neural Networks (DNNs) are vulnerable to small perturbations crafted to generate Adversarial Examples (AEs).
We propose a novel unsupervised detection mechanism that uses the selective prediction, processing of model layers' outputs, and knowledge transfer concepts in a multi-task learning setting.
Experimental results show that the proposed approach achieves results comparable to the state-of-the-art methods against tested attacks in the white-box scenario and better results in the black-box and gray-box scenarios.
arXiv Detail & Related papers (2021-03-09T11:06:15Z)
- Adversarial Attacks against Face Recognition: A Comprehensive Study [3.766020696203255]
Face recognition (FR) systems have demonstrated outstanding verification performance.
Recent studies show that (deep) FR systems exhibit an intriguing vulnerability to imperceptible or perceptible but natural-looking adversarial input images.
arXiv Detail & Related papers (2020-07-22T22:46:00Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
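The MEAD entry above scores a detector by its worst case across several attack strategies; as referenced there, a minimal illustration follows. The aggregation rule and all names are assumptions made for illustration, not the paper's exact metric.

```python
# Illustrative sketch only: worst-case aggregation of per-attack detection
# rates, in the spirit of the MEAD entry above. The actual MEAD metric may be
# defined differently; the attack names and rates below are made up.
def worst_case_detection_rate(per_attack_rates: dict) -> float:
    """per_attack_rates maps an attack strategy name to the detector's detection rate on it."""
    return min(per_attack_rates.values())

# A detector that looks strong on average can still score poorly under the
# worst-case view, which is the point of multi-armed evaluation.
rates = {"pgd": 0.94, "cw": 0.91, "semantic": 0.55}
print(worst_case_detection_rate(rates))  # 0.55
```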
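The continual-learning entry above relies on experience replay to add new attack types without forgetting earlier ones; as referenced there, a minimal sketch follows. Buffer capacity, replay batch size, and all names are hypothetical and not taken from that paper.

```python
# Illustrative sketch only: experience replay for updating a detector on a new
# attack type while rehearsing previously seen samples. All names, the buffer
# capacity, and the replay batch size are hypothetical.
import random
import torch

class ReplayBuffer:
    def __init__(self, capacity: int = 2048):
        self.capacity = capacity
        self.samples = []  # list of (input_tensor, label) pairs

    def add(self, x: torch.Tensor, y: int) -> None:
        if len(self.samples) >= self.capacity:
            # Evict a random stored sample when the buffer is full.
            self.samples.pop(random.randrange(len(self.samples)))
        self.samples.append((x, y))

    def sample(self, k: int):
        return random.sample(self.samples, min(k, len(self.samples)))

def update_on_new_attack(model, optimizer, loss_fn, new_batch, buffer, replay_k=32):
    """One gradient step mixing new-attack samples with replayed old samples."""
    xs, ys = new_batch  # tensors holding inputs and labels for the new attack type
    replayed = buffer.sample(replay_k)
    if replayed:
        xs = torch.cat([xs, torch.stack([x for x, _ in replayed])])
        ys = torch.cat([ys, torch.tensor([y for _, y in replayed])])
    optimizer.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()
    optimizer.step()
    # Remember the new-attack samples for future rehearsal.
    for x, y in zip(*new_batch):
        buffer.add(x, int(y))
    return loss.item()
```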