Similarity-based Gray-box Adversarial Attack Against Deep Face
Recognition
- URL: http://arxiv.org/abs/2201.04011v2
- Date: Wed, 12 Jan 2022 09:51:13 GMT
- Title: Similarity-based Gray-box Adversarial Attack Against Deep Face
Recognition
- Authors: Hanrui Wang, Shuo Wang, Zhe Jin, Yandan Wang, Cunjian Chen, Massimo
Tistarelli
- Abstract summary: We propose a similarity-based gray-box adversarial attack (SGADV) technique with a newly developed objective function.
We conduct experiments on face datasets of LFW, CelebA, and CelebA-HQ against deep face recognition models of FaceNet and InsightFace.
The results suggest that the proposed method significantly outperforms the existing adversarial attack techniques in the gray-box setting.
- Score: 11.397740896235089
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The majority of adversarial attack techniques perform well against deep face
recognition when the full knowledge of the system is revealed
(\emph{white-box}). However, such techniques fail in the gray-box
setting, where the face templates are unknown to the attackers. In this work, we
propose a similarity-based gray-box adversarial attack (SGADV) technique with a
newly developed objective function. SGADV utilizes the dissimilarity score to
produce the optimized adversarial example, i.e., similarity-based adversarial
attack. This technique applies to both white-box and gray-box attacks against
authentication systems that determine genuine or imposter users using the
dissimilarity score. To validate the effectiveness of SGADV, we conduct
extensive experiments on face datasets of LFW, CelebA, and CelebA-HQ against
deep face recognition models of FaceNet and InsightFace in both white-box and
gray-box settings. The results suggest that the proposed method significantly
outperforms the existing adversarial attack techniques in the gray-box setting.
We hence conclude that similarity-based approaches to developing adversarial
examples can satisfactorily cater to gray-box attack scenarios for
de-authentication.
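For intuition, below is a minimal PyTorch sketch of a similarity-based attack in this spirit: a PGD-style loop that descends on the dissimilarity score (1 minus cosine similarity) between the adversarial embedding and a target template. The paper's exact SGADV objective differs; the feature extractor `model`, the L-inf budget, and the step schedule are illustrative assumptions.

```python
# Minimal similarity-based attack sketch (not the authors' exact SGADV
# objective). Assumes `model` maps an image batch in [0, 1] to face
# embeddings (e.g., FaceNet-style); hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def similarity_attack(model, x, target_emb, eps=8/255, alpha=1/255, steps=50):
    """Perturb x within an L-inf ball so its embedding approaches target_emb."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Dissimilarity score: 1 - cosine similarity to the target template.
        loss = (1 - F.cosine_similarity(model(x_adv), target_emb)).mean()
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()       # descend on dissimilarity
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep a valid image
    return x_adv.detach()
```

An attack of this shape succeeds once the dissimilarity to the enrolled template falls below the system's decision threshold, which is why the same loop applies to both the white-box and gray-box settings described above.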
Related papers
- LISArD: Learning Image Similarity to Defend Against Gray-box Adversarial Attacks [13.154512864498912]
Adversarial Training (AT) and Adversarial Distillation (AD) include adversarial examples during the training phase.
This paper considers an even more realistic evaluation scenario: gray-box attacks, which assume that the attacker knows the architecture and the dataset used to train the target network, but cannot access its gradients.
We provide empirical evidence that models are vulnerable to gray-box attacks and propose LISArD, a defense mechanism that does not increase computational and temporal costs but provides robustness against gray-box and white-box attacks without including AT.
arXiv Detail & Related papers (2025-02-27T22:02:06Z)
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a $>99\%$ detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
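As a rough illustration of the detection side (ACPT's actual contribution is how the CLIP image encoder is tuned), query-based attacks issue many near-duplicate queries, so flagging a new query whose embedding is very close to a recent one is the downstream check. The encoder handle, buffer size, and threshold below are assumptions.

```python
# Sketch of the downstream detection step only: the tuned encoder, buffer
# size, and similarity threshold are illustrative assumptions.
import collections
import torch
import torch.nn.functional as F

class QueryAttackDetector:
    def __init__(self, encoder, buffer_size=1000, threshold=0.95):
        self.encoder = encoder            # e.g., an ACPT-tuned image encoder
        self.buffer = collections.deque(maxlen=buffer_size)
        self.threshold = threshold

    @torch.no_grad()
    def is_attack_query(self, image):
        """image: CHW tensor; returns True if it resembles a recent query."""
        emb = F.normalize(self.encoder(image.unsqueeze(0)), dim=1).squeeze(0)
        flagged = any(torch.dot(emb, past).item() > self.threshold
                      for past in self.buffer)
        self.buffer.append(emb)
        return flagged
```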
arXiv Detail & Related papers (2024-08-04T09:53:50Z) - Counter-Samples: A Stateless Strategy to Neutralize Black Box Adversarial Attacks [2.9815109163161204]
Our paper presents a novel defence against black box attacks, where attackers use the victim model as an oracle to craft their adversarial examples.
Unlike traditional preprocessing defences that rely on sanitizing input samples, our strategy counters the attack process itself.
We demonstrate that our approach is remarkably effective against state-of-the-art black box attacks and outperforms existing defences for both the CIFAR-10 and ImageNet datasets.
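A hedged sketch of the counter-sample idea as summarized above: rather than sanitizing the input, answer each query with the output for a version of the input nudged toward the model's own high-confidence region, which corrupts the gradient signal a black-box attacker estimates from repeated queries. The step count and step size are assumptions, not the paper's settings.

```python
# Hedged sketch: the feedback returned to the querier comes from a
# "counter-sample" rather than the raw query. Hyperparameters are assumed.
import torch
import torch.nn.functional as F

def counter_sample_logits(model, x, steps=5, alpha=0.01):
    x_cs = x.clone().detach()
    y_hat = model(x_cs).argmax(dim=1)          # current prediction, kept fixed
    for _ in range(steps):
        x_cs.requires_grad_(True)
        loss = F.cross_entropy(model(x_cs), y_hat)
        grad, = torch.autograd.grad(loss, x_cs)
        with torch.no_grad():
            x_cs = (x_cs - alpha * grad).clamp(0, 1)  # toward high confidence
    return model(x_cs.detach())                # feedback given to the querier
```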
arXiv Detail & Related papers (2024-03-14T10:59:54Z)
- A Random Ensemble of Encrypted Vision Transformers for Adversarially Robust Defense [6.476298483207895]
Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs).
We propose a novel method using the vision transformer (ViT) that is a random ensemble of encrypted models for enhancing robustness against both white-box and black-box attacks.
In experiments, the method was demonstrated to be robust against not only white-box attacks but also black-box ones in an image classification task.
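A simplified sketch of the random-ensemble idea, with a keyed block permutation standing in for the paper's block-wise encryption and each model assumed to have been trained under its own key; the keys, block size, and model list are assumptions.

```python
# Each model in `keyed_models` is assumed trained on images transformed with
# its own secret key; inference picks one (key, model) pair at random so a
# fixed attack cannot adapt to a single transform.
import random
import torch

def block_permute(x, key, block=16):
    """Keyed shuffle of non-overlapping blocks; H and W must divide by block."""
    b, c, h, w = x.shape
    nh, nw = h // block, w // block
    blocks = x.unfold(2, block, block).unfold(3, block, block)
    blocks = blocks.reshape(b, c, nh * nw, block, block)
    order = list(range(nh * nw))
    random.Random(key).shuffle(order)            # secret permutation
    blocks = blocks[:, :, order]
    blocks = blocks.reshape(b, c, nh, nw, block, block)
    return blocks.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h, w)

def ensemble_predict(keyed_models, x):
    key, model = random.choice(keyed_models)     # list of (key, model) pairs
    return model(block_permute(x, key))
```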
arXiv Detail & Related papers (2024-02-11T12:35:28Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer [10.245536402327096]
We propose RSTAM, a new method for attacking face recognition models and systems.
RSTAM enables an effective black-box impersonation attack using an adversarial mask printed by a mobile and compact printer.
The performance of the attacks is also evaluated on state-of-the-art commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft.
arXiv Detail & Related papers (2022-06-25T08:16:55Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to the inaccessible black-box DeepFake models.
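A sketch of the query-free transfer step: the perturbation is crafted against a substitute reconstruction model (assumed here to be an autoencoder trained on faces) and then applied unchanged to the black-box DeepFake model; the loss and perturbation budget below are illustrative.

```python
# Craft on the substitute, transfer to the black box; no queries are issued.
# `substitute` is an assumed face-reconstruction autoencoder.
import torch
import torch.nn.functional as F

def craft_on_substitute(substitute, x, eps=8/255, alpha=1/255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Maximize the substitute's reconstruction error to disrupt the
        # features face-synthesis pipelines rely on.
        loss = F.mse_loss(substitute(x_adv), x)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x + (x_adv + alpha * grad.sign() - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()   # feed this to the inaccessible DeepFake model
```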
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Adversarial Defense via Image Denoising with Chaotic Encryption [65.48888274263756]
We propose a novel defense that assumes everything but a private key will be made available to the attacker.
Our framework uses an image denoising procedure coupled with encryption via a discretized Baker map.
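The encryption half can be made concrete: the discretized Baker map is an invertible chaotic pixel shuffle, and iterating it a secret number of times plays the role of the private key. The sketch below uses the simple two-equal-strip variant on a square image and omits the paper's denoising stage.

```python
# Two-strip discretized Baker map: a bijective chaotic pixel shuffle.
# The iteration count acts as the secret key; the denoising stage is omitted.
import numpy as np

def baker_map_once(img):
    """One step of the map on an N x N image, N even."""
    n = img.shape[0]
    out = np.empty_like(img)
    for x in range(n):
        for y in range(n):
            if x < n // 2:       # left strip stretches into the bottom half
                out[2 * x + y % 2, y // 2] = img[x, y]
            else:                # right strip stretches into the top half
                out[2 * x - n + y % 2, y // 2 + n // 2] = img[x, y]
    return out

def baker_encrypt(img, key_iterations):
    for _ in range(key_iterations):
        img = baker_map_once(img)
    return img
```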
arXiv Detail & Related papers (2022-03-19T10:25:02Z)
- Art-Attack: Black-Box Adversarial Attack via Evolutionary Art [5.760976250387322]
Deep neural networks (DNNs) have achieved state-of-the-art performance in many tasks but have shown extreme vulnerabilities to attacks generated by adversarial examples.
This paper proposes a gradient-free attack by using a concept of evolutionary art to generate adversarial examples.
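In the spirit of evolutionary art, a candidate adversarial example can be encoded as a small set of translucent shapes painted over the image and evolved against the model's confidence, with no gradients needed. The genome layout, population schedule, and mutation scale below are assumptions for illustration.

```python
# Gradient-free evolutionary sketch: candidates are sets of translucent
# rectangles, selected by how much they lower the true label's probability.
import numpy as np

def paint(img, genes):
    """img: HxWx3 float array in [0, 1]; one rectangle per gene row."""
    out, (hh, ww) = img.copy(), img.shape[:2]
    for x, y, w, h, r, g, b, a in genes:
        x = int(np.clip(x, 0, ww - 1)); y = int(np.clip(y, 0, hh - 1))
        w = int(np.clip(w, 1, ww - x)); h = int(np.clip(h, 1, hh - y))
        a = float(np.clip(a, 0.0, 1.0))
        color = np.clip([r, g, b], 0.0, 1.0)
        out[y:y + h, x:x + w] = (1 - a) * out[y:y + h, x:x + w] + a * color
    return out

def evolve_attack(query_prob, img, label, pop=20, shapes=10, gens=100):
    """query_prob(image) -> class-probability vector (the only model access)."""
    hh, ww = img.shape[:2]
    rng = np.random.default_rng(0)
    scale = np.array([ww, hh, ww / 4, hh / 4, 1, 1, 1, 0.3])
    population = [rng.random((shapes, 8)) * scale for _ in range(pop)]
    for _ in range(gens):
        # Lower true-label probability == fitter individual.
        population.sort(key=lambda g: query_prob(paint(img, g))[label])
        elite = population[: pop // 4]
        population = [g + rng.normal(0, 0.05, g.shape) * scale
                      for g in elite for _ in range(pop // len(elite))]
    best = min(population, key=lambda g: query_prob(paint(img, g))[label])
    return paint(img, best)
```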
arXiv Detail & Related papers (2022-03-07T12:54:09Z)
- Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection [89.08832589750003]
We propose a Parallel Rectangle Flip Attack (PRFA) via random search to avoid sub-optimal detection near the attacked region.
Our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.
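A simplified, classification-flavored sketch of the rectangle-flip random search (the paper targets object detectors and proposes rectangles in parallel): keep a sign perturbation at the budget, flip its sign inside a random rectangle, and accept only improvements. The score interface, rectangle sizes, and query budget are assumptions.

```python
# Greedy random search over rectangle sign flips, score-based setting assumed.
import numpy as np

def rectangle_flip_attack(query_score, img, eps=8/255, queries=1000, max_side=32):
    """query_score(image) -> true-class score; lower means a stronger attack."""
    h, w = img.shape[:2]
    rng = np.random.default_rng(0)
    delta = eps * rng.choice([-1.0, 1.0], size=img.shape)
    best = query_score(np.clip(img + delta, 0, 1))
    for _ in range(queries):
        rh = int(rng.integers(1, min(max_side, h)))
        rw = int(rng.integers(1, min(max_side, w)))
        y = int(rng.integers(0, h - rh + 1))
        x = int(rng.integers(0, w - rw + 1))
        cand = delta.copy()
        cand[y:y + rh, x:x + rw] *= -1.0              # flip the rectangle's sign
        score = query_score(np.clip(img + cand, 0, 1))
        if score < best:                               # greedy acceptance
            best, delta = score, cand
    return np.clip(img + delta, 0, 1)
```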
arXiv Detail & Related papers (2022-01-22T06:00:17Z)
- Self-Supervised Iterative Contextual Smoothing for Efficient Adversarial Defense against Gray- and Black-Box Attack [24.66829920826166]
We propose a novel input-transformation-based adversarial defense method against gray- and black-box attacks.
Our defense is free of computationally expensive adversarial training, yet, can approach its robust accuracy via input transformation.
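The paper's transform is self-supervised and contextual; as a generic stand-in for the input-transformation family it belongs to, the sketch below iteratively blurs the input and re-mixes it with the current iterate before classification. The iteration count, kernel, and mix weight are assumptions.

```python
# Generic iterative-smoothing input transform (a stand-in, not the paper's
# exact self-supervised procedure); applied to inputs before the classifier.
import torch
import torchvision.transforms.functional as TF

@torch.no_grad()
def iterative_smooth(x, iters=3, mix=0.5, kernel=5, sigma=1.0):
    """x: image batch in [0, 1]; returns the transformed batch."""
    out = x
    for _ in range(iters):
        out = mix * TF.gaussian_blur(out, kernel_size=kernel, sigma=sigma) \
              + (1 - mix) * out
    return out
```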
arXiv Detail & Related papers (2021-06-22T09:51:51Z)
- QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval [56.51916317628536]
We study the query-based attack against image retrieval to evaluate its robustness against adversarial examples under the black-box setting.
A new relevance-based loss is designed to quantify the attack effects by measuring the set similarity on the top-k retrieval results before and after attacks.
Experiments show that the proposed attack achieves a high attack success rate with few queries against the image retrieval systems under the black-box setting.
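The relevance-based loss can be made concrete as top-k set overlap; `retrieve_topk` (returning ranked item ids from the black-box retrieval system) and k are assumptions, and QAIR pairs a loss of this shape with a query-efficient optimizer.

```python
# Relevance loss as top-k set overlap: 0 means the ranking is fully disrupted.
# `retrieve_topk(image, k) -> list of item ids` is an assumed interface.
def relevance_loss(retrieve_topk, query_img, adv_img, k=10):
    before = set(retrieve_topk(query_img, k))
    after = set(retrieve_topk(adv_img, k))
    return len(before & after) / k
```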
arXiv Detail & Related papers (2021-03-04T10:18:43Z)