FaceGuard: A Self-Supervised Defense Against Adversarial Face Images
- URL: http://arxiv.org/abs/2011.14218v2
- Date: Mon, 5 Apr 2021 20:37:56 GMT
- Title: FaceGuard: A Self-Supervised Defense Against Adversarial Face Images
- Authors: Debayan Deb, Xiaoming Liu, Anil K. Jain
- Abstract summary: We propose a new self-supervised adversarial defense framework, namely FaceGuard, that can automatically detect, localize, and purify a wide variety of adversarial faces.
During training, FaceGuard automatically synthesizes challenging and diverse adversarial attacks, enabling a classifier to learn to distinguish them from real faces.
Experimental results on LFW dataset show that FaceGuard can achieve 99.81% detection accuracy on six unseen adversarial attack types.
- Score: 59.656264895721215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prevailing defense mechanisms against adversarial face images tend to overfit
to the adversarial perturbations in the training set and fail to generalize to
unseen adversarial attacks. We propose a new self-supervised adversarial
defense framework, namely FaceGuard, that can automatically detect, localize,
and purify a wide variety of adversarial faces without utilizing pre-computed
adversarial training samples. During training, FaceGuard automatically
synthesizes challenging and diverse adversarial attacks, enabling a classifier
to learn to distinguish them from real faces, while a purifier attempts to remove
the adversarial perturbations in the image space. Experimental results on LFW
dataset show that FaceGuard can achieve 99.81% detection accuracy on six unseen
adversarial attack types. In addition, the proposed method can enhance the face
recognition performance of ArcFace from 34.27% TAR @ 0.1% FAR under no defense
to 77.46% TAR @ 0.1% FAR.
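The framework described above pairs a self-supervised attack generator with a detector and a purifier. Below is a minimal PyTorch-style sketch of that detect-and-purify training idea; the architectures, loss weights, and names (generator, detector, purifier) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a detect-and-purify training step in the spirit of FaceGuard.
# Architectures, loss weights, and names are illustrative assumptions only.
import torch
import torch.nn as nn

generator = nn.Sequential(                      # synthesizes perturbations from real faces
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
detector = nn.Sequential(                       # real vs. adversarial classifier
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
purifier = nn.Sequential(                       # removes perturbations in image space
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1))

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt = torch.optim.Adam(list(detector.parameters()) + list(purifier.parameters()), lr=1e-4)

real = torch.rand(8, 3, 112, 112)               # stand-in for a batch of real face crops
with torch.no_grad():                           # in the full framework the generator is
    adv = (real + 0.03 * generator(real)).clamp(0, 1)  # trained adversarially; fixed here

logits = detector(torch.cat([real, adv]))       # detector: distinguish real from adversarial
labels = torch.cat([torch.zeros(8, 1), torch.ones(8, 1)])
loss = bce(logits, labels) + l1(purifier(adv), real)   # purifier: recover the real face
opt.zero_grad(); loss.backward(); opt.step()
```

At inference, the detector score would flag suspected adversarial inputs and the purified image could be passed to a downstream matcher such as ArcFace.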
Related papers
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technologies have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- PuFace: Defending against Facial Cloaking Attacks for Facial Recognition Models [1.455585466338228]
Recently proposed facial cloaking attacks add invisible perturbations (cloaks) to facial images to protect users from being recognized by unauthorized facial recognition models.
This paper introduces PuFace, an image purification system leveraging the generalization ability of neural networks to diminish the impact of cloaks.
Our empirical experiment shows PuFace can effectively defend against two state-of-the-art facial cloaking attacks and reduces the attack success rate from 69.84% to 7.61% on average.
arXiv Detail & Related papers (2024-06-04T12:19:09Z)
- Generalized Attacks on Face Verification Systems [2.4259557752446637]
Face verification (FV) using deep neural network models has made tremendous progress in recent years.
FV systems are vulnerable to adversarial attacks, which manipulate input images to deceive these systems in ways that are usually unnoticeable to humans.
We introduce the DodgePersonation Attack, which formulates the creation of face images that impersonate a set of given identities.
arXiv Detail & Related papers (2023-09-12T00:00:24Z)
- Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat), the first attempt to craft 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat outperforms conventional attacks in both cross-detector transferability and robustness to defenses.
arXiv Detail & Related papers (2023-09-03T07:01:34Z)
- Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems [19.259372985094235]
Malicious applications of deep learning systems pose a serious threat to individuals' privacy and reputation.
We propose a novel two-tier protection method named Information-containing Adversarial Perturbation (IAP).
We use an encoder to map a facial image and its identity message to a cross-model adversarial example which can disrupt multiple facial manipulation systems.
arXiv Detail & Related papers (2023-03-21T06:48:14Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Unified Detection of Digital and Physical Face Attacks [61.6674266994173]
State-of-the-art defense mechanisms against face attacks achieve near-perfect accuracy within one of three attack categories, namely adversarial, digital manipulation, or physical spoofs.
We propose a unified attack detection framework, namely UniFAD, that can automatically cluster 25 coherent attack types belonging to the three categories.
arXiv Detail & Related papers (2021-04-05T21:08:28Z)
- Vulnerability of Face Recognition Systems Against Composite Face Reconstruction Attack [3.3707422585608953]
Rounding the confidence score is considered a trivial yet simple and effective countermeasure against gradient-descent-based image reconstruction attacks.
In this paper, we prove that face reconstruction attacks based on composite faces can reveal the inefficacy of the rounding policy as a countermeasure.
arXiv Detail & Related papers (2020-08-23T03:37:51Z)
- Encryption Inspired Adversarial Defense for Visual Classification [17.551718914117917]
We propose a new adversarial defense inspired by image encryption methods.
The proposed method utilizes a block-wise pixel shuffling with a secret key.
It achieves high accuracy (91.55% on clean images and 89.66% on adversarial examples with a noise distance of 8/255 on the CIFAR-10 dataset).
arXiv Detail & Related papers (2020-05-16T14:18:07Z)
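The key-based block-wise pixel shuffling mentioned in the last entry can be sketched in a few lines; the block size, NumPy implementation, and function name below are assumptions for illustration, not the paper's code.

```python
# Sketch of block-wise pixel shuffling with a secret key, in the spirit of the
# "Encryption Inspired Adversarial Defense" abstract. Details are illustrative only.
import numpy as np

def block_shuffle(img: np.ndarray, key: int, block: int = 4) -> np.ndarray:
    """Shuffle pixels inside every block x block tile using a key-seeded permutation."""
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0, "image must tile evenly"
    perm = np.random.default_rng(key).permutation(block * block)
    out = img.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block].reshape(-1, c)
            out[y:y + block, x:x + block] = tile[perm].reshape(block, block, c)
    return out

# Training and inference use the same key, so the classifier sees a consistent
# transform while an attacker without the key optimizes against a mismatched input.
encrypted = block_shuffle(np.zeros((32, 32, 3), dtype=np.uint8), key=12345)
```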
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.