Unified Detection of Digital and Physical Face Attacks
- URL: http://arxiv.org/abs/2104.02156v1
- Date: Mon, 5 Apr 2021 21:08:28 GMT
- Title: Unified Detection of Digital and Physical Face Attacks
- Authors: Debayan Deb, Xiaoming Liu, Anil K. Jain
- Abstract summary: State-of-the-art defense mechanisms against face attacks achieve near perfect accuracies within one of three attack categories, namely adversarial, digital manipulation, or physical spoofs.
We propose a unified attack detection framework, namely UniFAD, that can automatically cluster 25 coherent attack types belonging to the three categories.
- Score: 61.6674266994173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art defense mechanisms against face attacks achieve near perfect
accuracies within one of three attack categories, namely adversarial, digital
manipulation, or physical spoofs, however, they fail to generalize well when
tested across all three categories. Poor generalization can be attributed to
learning incoherent attacks jointly. To overcome this shortcoming, we propose a
unified attack detection framework, namely UniFAD, that can automatically
cluster 25 coherent attack types belonging to the three categories. Using a
multi-task learning framework along with k-means clustering, UniFAD learns
joint representations for coherent attacks, while uncorrelated attack types are
learned separately. The proposed UniFAD outperforms prevailing defense methods and
their fusion with an overall TDR = 94.73% @ 0.2% FDR on a large fake face
dataset consisting of 341K bona fide images and 448K attack images of 25 types
across all 3 categories. The proposed method can detect an attack within 3
milliseconds on an Nvidia 2080Ti. UniFAD can also identify the attack types and
categories with 75.81% and 97.37% accuracies, respectively.
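The headline metric, TDR at a fixed FDR, can be computed directly from detector scores. Below is a minimal, self-contained sketch; it is not the paper's evaluation code, and the function names and synthetic score distributions are hypothetical:
```python
import numpy as np

def tdr_at_fdr(bonafide_scores, attack_scores, target_fdr=0.002):
    """True Detection Rate at a fixed False Detection Rate.

    Scores are 'attack-ness' scores: higher means the detector is more
    confident the sample is an attack.
    """
    # Pick the threshold so that only `target_fdr` of bona fide samples
    # score above it (i.e., are wrongly flagged as attacks).
    threshold = np.quantile(bonafide_scores, 1.0 - target_fdr)
    # TDR is the fraction of attack samples flagged at that threshold.
    return float(np.mean(attack_scores > threshold))

# Synthetic scores whose sizes mirror the dataset in the abstract.
rng = np.random.default_rng(0)
bonafide = rng.normal(0.2, 0.10, 341_000)
attacks = rng.normal(0.8, 0.15, 448_000)
print(f"TDR @ 0.2% FDR: {tdr_at_fdr(bonafide, attacks):.2%}")
```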
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Unraveling Adversarial Examples against Speaker Identification -- Techniques for Attack Detection and Victim Model Classification [24.501269108193412]
Adversarial examples have proven to threaten speaker identification systems.
We propose a method to detect the presence of adversarial examples.
We also introduce a method for identifying the victim model on which the adversarial attack is carried out.
arXiv Detail & Related papers (2024-02-29T17:06:52Z)
- MultiRobustBench: Benchmarking Robustness Against Multiple Attacks [86.70417016955459]
We present the first unified framework for considering multiple attacks against machine learning (ML) models.
Our framework can model different levels of the learner's knowledge about the test-time adversary.
We evaluate the performance of 16 defended models for robustness against a set of 9 different attack types.
arXiv Detail & Related papers (2023-02-21T20:26:39Z)
- PARL: Enhancing Diversity of Ensemble Networks to Resist Adversarial Attacks via Pairwise Adversarially Robust Loss Function [13.417003144007156]
Adversarial attacks tend to rely on the principle of transferability.
Ensemble methods against adversarial attacks demonstrate that an adversarial example is less likely to mislead multiple classifiers.
Recent ensemble methods have either been shown to be vulnerable to stronger adversaries or shown to lack an end-to-end evaluation.
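PARL's exact loss is not reproduced in this summary. As a rough sketch of a pairwise diversity objective in the same spirit, the PyTorch snippet below penalizes alignment between ensemble members' input gradients; all names are assumptions, and this is not the paper's loss function:
```python
import torch
import torch.nn.functional as F

def pairwise_gradient_diversity(models, x, y):
    """Penalize cosine similarity between the input gradients of
    ensemble members, so a perturbation crafted against one member is
    less likely to transfer to the others (illustrative only)."""
    grads = []
    for model in models:
        x_in = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_in), y)
        # create_graph=True keeps the penalty differentiable w.r.t. the
        # model parameters, so it can be trained end to end.
        (g,) = torch.autograd.grad(loss, x_in, create_graph=True)
        grads.append(g.flatten(1))
    penalty = x.new_zeros(())
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            penalty = penalty + F.cosine_similarity(
                grads[i], grads[j], dim=1).abs().mean()
    return penalty  # add, weighted, to the ensemble's task loss
```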
arXiv Detail & Related papers (2021-12-09T14:26:13Z)
- Attacking Adversarial Attacks as A Defense [40.8739589617252]
Adversarial attacks can fool deep neural networks with imperceptible perturbations.
On adversarially-trained models, perturbing adversarial examples with a small random noise may invalidate their misled predictions.
We propose to counter attacks by crafting more effective defensive perturbations.
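The observation above (small random noise can invalidate an adversarial example's prediction on an adversarially trained model) suggests a simple instability check. A minimal PyTorch sketch, not the paper's actual defense, which instead crafts more effective defensive perturbations:
```python
import torch

def noise_flip_rate(model, x, sigma=0.02, n_trials=8):
    """Fraction of random-noise trials that flip each input's predicted
    label; a high flip rate suggests the input is adversarial."""
    model.eval()
    with torch.no_grad():
        base = model(x).argmax(dim=1)
        flips = torch.zeros(len(x), device=x.device)
        for _ in range(n_trials):
            noisy = x + sigma * torch.randn_like(x)
            flips += (model(noisy).argmax(dim=1) != base).float()
    return flips / n_trials
```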
arXiv Detail & Related papers (2021-06-09T09:31:10Z)
- Adversarial Attack and Defense in Deep Ranking [100.17641539999055]
We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks.
Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets.
arXiv Detail & Related papers (2021-06-07T13:41:45Z)
- Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series [0.0]
We have performed untargeted, targeted and universal adversarial attacks on UCR time series datasets.
Our results show that deep learning based time series classification models are vulnerable to these attacks.
We also show that universal adversarial attacks generalize well, as they need only a fraction of the training data.
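As an illustration of fitting a single universal perturbation from a small data fraction, here is a generic gradient-ascent sketch in PyTorch; it is not the paper's exact procedure, and all names are assumptions:
```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, eps=0.1, lr=0.01, epochs=5):
    """Learn one perturbation `delta` that raises the classification
    loss across many series at once (untargeted, L-inf bounded)."""
    delta = None
    for _ in range(epochs):
        for x, y in loader:  # yields (batch, length) time series
            if delta is None:
                delta = torch.zeros_like(x[:1], requires_grad=True)
            loss = F.cross_entropy(model(x + delta), y)
            loss.backward()  # ascend: we want to *increase* the loss
            with torch.no_grad():
                delta += lr * delta.grad.sign()
                delta.clamp_(-eps, eps)  # keep the perturbation small
                delta.grad.zero_()
    return delta.detach()
```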
arXiv Detail & Related papers (2021-01-13T13:00:51Z)
- FaceGuard: A Self-Supervised Defense Against Adversarial Face Images [59.656264895721215]
We propose a new self-supervised adversarial defense framework, namely FaceGuard, that can automatically detect, localize, and purify a wide variety of adversarial faces.
During training, FaceGuard automatically synthesizes challenging and diverse adversarial attacks, enabling a classifier to learn to distinguish them from real faces.
Experimental results on the LFW dataset show that FaceGuard can achieve 99.81% detection accuracy on six unseen adversarial attack types.
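A toy version of the train-time idea, with one-step FGSM standing in for FaceGuard's learned generator of diverse attacks; every name here is hypothetical:
```python
import torch
import torch.nn.functional as F

def fgsm(face_model, x, y, eps=4 / 255):
    """Cheap stand-in attack synthesizer (FaceGuard instead learns a
    generator that produces diverse, challenging perturbations)."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(face_model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def detector_step(detector, face_model, x, y, optimizer):
    """One training step of a binary real-vs-adversarial classifier on
    clean faces paired with freshly synthesized adversarial ones."""
    x_adv = fgsm(face_model, x, y)
    inputs = torch.cat([x, x_adv])
    labels = torch.cat([torch.zeros(len(x)),          # 0 = real face
                        torch.ones(len(x_adv))]).long().to(x.device)
    optimizer.zero_grad()
    loss = F.cross_entropy(detector(inputs), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```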
arXiv Detail & Related papers (2020-11-28T21:18:46Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)