Geometrically Adaptive Dictionary Attack on Face Recognition
- URL: http://arxiv.org/abs/2111.04371v1
- Date: Mon, 8 Nov 2021 10:26:28 GMT
- Title: Geometrically Adaptive Dictionary Attack on Face Recognition
- Authors: Junyoung Byun, Hyojun Go, Changick Kim
- Abstract summary: We propose a strategy for query-efficient black-box attacks on face recognition.
Our core idea is to create an adversarial perturbation in the UV texture map and project it onto the face in the image.
We show overwhelming performance improvement in the experiments on the LFW and CPLFW datasets.
- Score: 23.712389625037442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: CNN-based face recognition models have brought remarkable performance
improvement, but they are vulnerable to adversarial perturbations. Recent
studies have shown that adversaries can fool the models even if they can only
access the models' hard-label output. However, since many queries are needed to
find imperceptible adversarial noise, reducing the number of queries is crucial
for these attacks. In this paper, we point out two limitations of existing
decision-based black-box attacks. We observe that they waste queries for
background noise optimization, and they do not take advantage of adversarial
perturbations generated for other images. We exploit 3D face alignment to
overcome these limitations and propose a general strategy for query-efficient
black-box attacks on face recognition named Geometrically Adaptive Dictionary
Attack (GADA). Our core idea is to create an adversarial perturbation in the UV
texture map and project it onto the face in the image. It greatly improves
query efficiency by limiting the perturbation search space to the facial area
and effectively recycling previous perturbations. We apply the GADA strategy to
two existing attack methods and show overwhelming performance improvement in
the experiments on the LFW and CPLFW datasets. Furthermore, we also present a
novel attack strategy that can circumvent query similarity-based stateful
detection that identifies the process of query-based black-box attacks.
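The core idea above, creating the perturbation in UV texture space and projecting it onto the facial area, can be illustrated as a texture lookup. Below is a minimal NumPy sketch, not the paper's implementation: it assumes a per-pixel UV coordinate map and face mask are already available from a 3D face alignment model, and all function and variable names are hypothetical.

```python
import numpy as np

def project_uv_perturbation(image, perturbation_uv, uv_coords, face_mask):
    """Project a perturbation defined in UV texture space onto the face
    region of an image (hypothetical sketch of the GADA projection step).

    image:            (H, W, 3) float array in [0, 1], the clean image
    perturbation_uv:  (Th, Tw, 3) float array, perturbation in UV space
    uv_coords:        (H, W, 2) float array in [0, 1], per-pixel UV coords
                      (assumed output of a 3D face alignment model)
    face_mask:        (H, W) bool array, True where a face pixel exists
    """
    th, tw = perturbation_uv.shape[:2]
    # Nearest-neighbor lookup from each image pixel into the UV texture map
    u = np.clip((uv_coords[..., 0] * (tw - 1)).round().astype(int), 0, tw - 1)
    v = np.clip((uv_coords[..., 1] * (th - 1)).round().astype(int), 0, th - 1)
    adv = image.copy()
    # Only facial pixels are perturbed; the background is left untouched,
    # which is how the search space gets restricted to the face
    adv[face_mask] += perturbation_uv[v[face_mask], u[face_mask]]
    return np.clip(adv, 0.0, 1.0)
```

Because the perturbation lives in the pose-independent UV space, the same texture-space perturbation could in principle be reprojected onto a different face image, which is what makes recycling perturbations across images plausible.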
Related papers
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
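The detection idea underlying this line of work, flagging queries whose embeddings are suspiciously close to recently seen ones, can be sketched as a simple stateful check. This is a hypothetical illustration, not the ACPT method; the encoder, threshold, and history size are all assumed parameters.

```python
import numpy as np
from collections import deque

class QuerySimilarityDetector:
    """Hypothetical stateful detector: flag a query whose embedding is
    unusually similar to a recently seen query, the pattern produced by
    iterative query-based black-box attacks."""

    def __init__(self, encoder, threshold=0.95, history=100):
        self.encoder = encoder        # maps an input to an embedding vector
        self.threshold = threshold    # cosine-similarity alarm threshold
        self.recent = deque(maxlen=history)

    def check(self, image):
        emb = np.asarray(self.encoder(image), dtype=float)
        emb = emb / (np.linalg.norm(emb) + 1e-12)   # unit-normalize
        # Alarm if any recent query embedding is too close to this one
        flagged = any(float(emb @ prev) > self.threshold for prev in self.recent)
        self.recent.append(emb)
        return flagged
```

An attacker circumventing such a defense, as the GADA paper above proposes, would need to keep successive queries dissimilar under the detector's embedding.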
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- AICAttack: Adversarial Image Captioning Attack with Attention-Based Optimization [13.045125782574306]
This paper presents a novel adversarial attack strategy, AICAttack, designed to attack image captioning models through subtle perturbations on images.
Operating within a black-box attack scenario, our algorithm requires no access to the target model's architecture, parameters, or gradient information.
We demonstrate AICAttack's effectiveness through extensive experiments on benchmark datasets against multiple victim models.
arXiv Detail & Related papers (2024-02-19T08:27:23Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model trained for face reconstruction, and it transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval [56.51916317628536]
We study the query-based attack against image retrieval to evaluate its robustness against adversarial examples under the black-box setting.
A new relevance-based loss is designed to quantify the attack effects by measuring the set similarity on the top-k retrieval results before and after attacks.
Experiments show that the proposed attack achieves a high attack success rate with few queries against the image retrieval systems under the black-box setting.
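A relevance-based loss of the kind described above, quantifying attack effect by the overlap of top-k retrieval sets before and after the attack, could look like the following minimal sketch. This is a hypothetical formulation for illustration, not QAIR's exact loss.

```python
def relevance_loss(topk_before, topk_after):
    """Hypothetical relevance-based loss: 1 minus the fraction of the
    original top-k retrieval results that survive the attack. A fully
    successful attack (no shared results) drives the loss to 1."""
    before, after = set(topk_before), set(topk_after)
    k = max(len(before), 1)               # guard against an empty result set
    return 1.0 - len(before & after) / k
```

A black-box attacker would then perturb the query image to maximize this loss using only the returned top-k lists, with no access to the retrieval model's internals.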
arXiv Detail & Related papers (2021-03-04T10:18:43Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We have proposed a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.