Detecting Adversarial Faces Using Only Real Face Self-Perturbations
- URL: http://arxiv.org/abs/2304.11359v2
- Date: Thu, 4 May 2023 01:40:39 GMT
- Title: Detecting Adversarial Faces Using Only Real Face Self-Perturbations
- Authors: Qian Wang, Yongqin Xian, Hefei Ling, Jinyuan Zhang, Xiaorui Lin, Ping
Li, Jiazhong Chen, Ning Yu
- Abstract summary: Adversarial attacks aim to disturb the functionality of a target system by adding specific noise to the input samples.
Existing defense techniques achieve high accuracy in detecting some specific adversarial faces (adv-faces).
New attack methods, especially GAN-based attacks with completely different noise patterns, circumvent them and reach a higher attack success rate.
- Score: 36.26178169550577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks aim to disturb the functionality of a target system by
adding specific noise to the input samples, bringing potential threats to
security and robustness when applied to facial recognition systems. Although
existing defense techniques achieve high accuracy in detecting some specific
adversarial faces (adv-faces), new attack methods, especially GAN-based attacks
with completely different noise patterns, circumvent them and reach a higher
attack success rate. Even worse, existing techniques require attack data before
implementing the defense, making it impractical to defend newly emerging
attacks that are unseen to defenders. In this paper, we investigate the
intrinsic generality of adv-faces and propose to generate pseudo adv-faces by
perturbing real faces with three heuristically designed noise patterns. We are
the first to train an adv-face detector using only real faces and their
self-perturbations, agnostic to victim facial recognition systems, and agnostic
to unseen attacks. By regarding adv-faces as out-of-distribution data, we then
naturally introduce a novel cascaded system for adv-face detection, which
consists of training data self-perturbations, decision boundary regularization,
and a max-pooling-based binary classifier focusing on abnormal local color
aberrations. Experiments conducted on LFW and CelebA-HQ datasets with eight
gradient-based and two GAN-based attacks validate that our method generalizes
to a variety of unseen adversarial attacks.
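To make the approach concrete, below is a minimal, hypothetical sketch of the self-perturbation idea in Python: real faces are perturbed with hand-designed noise patterns to synthesize pseudo adv-faces, and a binary detector can then be trained without ever seeing real attack data. The three noise functions are illustrative stand-ins for the paper's three heuristically designed patterns, and all names and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def additive_noise(img, eps=8 / 255):
    # Uniform bounded noise, loosely mimicking gradient-based attacks.
    return np.clip(img + rng.uniform(-eps, eps, img.shape), 0.0, 1.0)

def local_color_aberration(img, patch=16, strength=0.1):
    # Per-channel color shift inside one random patch: the kind of
    # abnormal local color aberration the detector focuses on.
    # Assumes the face image is larger than the patch size.
    h, w, _ = img.shape
    y = int(rng.integers(0, h - patch))
    x = int(rng.integers(0, w - patch))
    out = img.copy()
    out[y:y + patch, x:x + patch] += rng.normal(0.0, strength, 3)
    return np.clip(out, 0.0, 1.0)

def low_frequency_noise(img, scale=8, strength=0.05):
    # Smooth, upsampled noise, loosely mimicking GAN-style artifacts.
    h, w, c = img.shape
    coarse = rng.normal(0.0, strength,
                        ((h + scale - 1) // scale, (w + scale - 1) // scale, c))
    up = np.kron(coarse, np.ones((scale, scale, 1)))[:h, :w, :]
    return np.clip(img + up, 0.0, 1.0)

PERTURBATIONS = [additive_noise, local_color_aberration, low_frequency_noise]

def make_training_batch(real_faces):
    # Real faces -> label 0; self-perturbed copies -> label 1.
    # No adversarial attack data is needed anywhere; each image is a
    # float array of shape (h, w, 3) with values in [0, 1].
    xs, ys = [], []
    for img in real_faces:
        xs.append(img)
        ys.append(0)
        f = PERTURBATIONS[int(rng.integers(len(PERTURBATIONS)))]
        xs.append(f(img))
        ys.append(1)
    return np.stack(xs), np.array(ys)
```

Per the abstract, the classifier trained on such batches is a max-pooling-based binary network combined with decision boundary regularization; at test time, genuine adv-faces fall outside the real-face distribution and are flagged.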
Related papers
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and are not limited to forgery-specific artifacts, and therefore generalize better.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z)
- Detecting Adversarial Data via Perturbation Forgery [28.637963515748456]
Adversarial detection aims to identify and filter out adversarial data from the data flow, based on discrepancies in distribution and noise patterns between natural and adversarial data.
New attacks based on generative models with imbalanced and anisotropic noise patterns evade detection.
We propose Perturbation Forgery, which includes noise distribution perturbation, sparse mask generation, and pseudo-adversarial data production, to train an adversarial detector capable of detecting unseen gradient-based, generative-model-based, and physical adversarial attacks (a minimal sketch of the sparse-mask idea follows this entry).
arXiv Detail & Related papers (2024-05-25T13:34:16Z)
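A minimal, hypothetical sketch of the sparse-mask forging step described above (function names and parameters are assumptions, not the authors' code): a clean image is perturbed only where a sparse binary mask is active, producing the imbalanced, anisotropic noise the detector is trained on.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_mask(shape, density=0.05):
    # Binary mask active on only a small fraction of pixels,
    # broadcastable over the color channels.
    return (rng.random(shape[:2]) < density).astype(np.float32)[..., None]

def forge_pseudo_adversarial(img, strength=0.1):
    # Perturb a clean (h, w, 3) image in [0, 1] only inside the sparse
    # mask, yielding imbalanced, anisotropic pseudo-adversarial noise.
    noise = rng.normal(0.0, strength, img.shape)
    return np.clip(img + sparse_mask(img.shape) * noise, 0.0, 1.0)
```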
- Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection [62.595450266262645]
This paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.
By embedding backdoors into models, attackers can deceive detectors into producing erroneous predictions for forged faces.
We propose the Poisoned Forgery Face framework, which enables clean-label backdoor attacks on face forgery detectors.
arXiv Detail & Related papers (2024-02-18T06:31:05Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies produce vivid, realistic faces, which has raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- A Random-patch based Defense Strategy Against Physical Attacks for Face Recognition Systems [3.6202815454709536]
We propose a random-patch based defense strategy to robustly detect physical attacks on Face Recognition Systems (FRS).
Our method can be easily applied to real-world face recognition systems and extended to other defense methods to boost detection performance (a minimal sketch follows this entry).
arXiv Detail & Related papers (2023-04-16T16:11:56Z)
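A minimal, hypothetical sketch of the random-patch idea above (the `detector` callable and all parameters are assumptions): scoring several randomly placed patches and aggregating makes a localized physical attack, such as an adversarial sticker, harder to hide.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_patch_score(img, detector, n_patches=8, size=32):
    # Score randomly located patches and aggregate; a physical attack
    # corrupts only some patches, so the mean score is hard to fool.
    # `detector` maps a (size, size, 3) crop to an attack score.
    h, w, _ = img.shape
    scores = []
    for _ in range(n_patches):
        y = int(rng.integers(0, h - size))
        x = int(rng.integers(0, w - size))
        scores.append(detector(img[y:y + size, x:x + size]))
    return float(np.mean(scores))
```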
- RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely Limited Queries [2.8532545355403123]
Recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they still lag far behind human perception and recognition.
In this paper, we propose automatic face warping, which needs only an extremely limited number of queries to fool the target model.
We evaluate the robustness of proposed method in the decision-based black-box attack setting.
arXiv Detail & Related papers (2022-07-04T00:22:45Z)
- FaceHack: Triggering backdoored facial recognition systems using facial characteristics [16.941198804770607]
Recent advances in Machine Learning have opened up new avenues for its extensive use in real-world applications.
Recent work demonstrated that Deep Neural Networks (DNNs), typically used in facial recognition systems, are susceptible to backdoor attacks.
In this work, we demonstrate that specific changes to facial characteristics may also be used to trigger malicious behavior in an ML model.
arXiv Detail & Related papers (2020-06-20T17:39:23Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network (GAN) based architecture to semantically generate adversarial, high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically (a minimal sketch follows this entry).
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
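A minimal, hypothetical sketch of the temporally sparse scheme above (the `perturb` callable stands in for the paper's GAN generator; all names are assumptions): only a small random fraction of frames in the sequence is modified.

```python
import numpy as np

rng = np.random.default_rng(0)

def temporally_sparse_attack(frames, perturb, fraction=1 / 40):
    # Perturb only a small random subset of frames in a gait sequence
    # (a numpy array of shape (n, h, w)); the rest stay untouched.
    n = len(frames)
    k = max(1, int(n * fraction))
    idx = rng.choice(n, size=k, replace=False)
    out = frames.copy()
    for i in idx:
        out[i] = perturb(frames[i])
    return out
```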