RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely
Limited Queries
- URL: http://arxiv.org/abs/2207.01149v1
- Date: Mon, 4 Jul 2022 00:22:45 GMT
- Title: RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely
Limited Queries
- Authors: Keshav Kasichainula, Hadi Mansourifar, Weidong Shi
- Abstract summary: Recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they still fall far behind human intelligence in perception and recognition.
In this paper, we propose automatic face warping, which needs an extremely limited number of queries to fool the target model.
We evaluate the robustness of the proposed method in the decision-based black-box attack setting.
- Score: 2.8532545355403123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent successful adversarial attacks on face recognition show that, despite
the remarkable progress of face recognition models, they still fall far behind
human intelligence in perception and recognition. This reveals the
vulnerability of deep convolutional neural networks (CNNs), the state-of-the-art
building block of face recognition models, to adversarial examples, which
can have serious consequences for secure systems. Gradient-based adversarial
attacks have been widely studied and proven successful against face
recognition models. However, finding an optimized perturbation for each face
requires submitting a significant number of queries to the target model. In
this paper, we propose a recursive adversarial attack on face recognition using
automatic face warping, which needs an extremely limited number of queries to fool
the target model. Instead of a random face warping procedure, the warping
functions are applied to specific detected regions of the face, such as the
eyebrows, nose, and lips. We evaluate the robustness of the proposed method in
the decision-based black-box attack setting, where the attacker has no access
to the model parameters and gradients, but hard-label predictions and
confidence scores are provided by the target model.
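The following is a minimal, hypothetical sketch of the query-efficient idea described above, not the authors' implementation: `query_model` stands in for the target model's hard-label API, `region_boxes` stands in for the boxes returned by any facial landmark detector, and the Gaussian-bump displacement is just one plausible choice of smooth local warp.

```python
# Minimal sketch (not the authors' code): warp detected facial regions with
# increasing strength and query the black-box model until the hard-label
# prediction flips. `query_model` and `region_boxes` are hypothetical
# stand-ins for the target API and a facial landmark detector.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_region(img, box, strength):
    """Smooth radial warp inside box = (y0, y1, x0, x1) of an HxWx3 image."""
    y0, y1, x0, x1 = box
    patch = img[y0:y1, x0:x1].astype(np.float64)
    h, w = patch.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # A Gaussian bump makes the displacement fade smoothly toward the edges.
    bump = np.exp(-(((yy - cy) / (h / 3.0)) ** 2 + ((xx - cx) / (w / 3.0)) ** 2))
    src_y = yy + strength * bump * (yy - cy)
    src_x = xx + strength * bump * (xx - cx)
    warped = np.stack(
        [map_coordinates(patch[..., c], [src_y, src_x], order=1, mode="nearest")
         for c in range(patch.shape[-1])], axis=-1)
    out = img.copy()
    out[y0:y1, x0:x1] = warped.astype(img.dtype)
    return out

def recursive_warp_attack(img, query_model, region_boxes, max_queries=20):
    """Try regions (e.g. eyebrows, nose, lips) in turn, compounding warps."""
    true_label = query_model(img)          # one query for the clean face
    queries, adv = 1, img
    for box in region_boxes:
        for strength in (0.1, 0.2, 0.4):   # coarse schedule of warp strengths
            candidate = warp_region(adv, box, strength)
            queries += 1
            if query_model(candidate) != true_label:
                return candidate, queries  # label flipped with few queries
            if queries >= max_queries:
                return None, queries       # query budget exhausted
            adv = candidate                # keep this warp and recurse deeper
    return None, queries
```

The point of the sketch is only the query accounting: each warped candidate costs exactly one hard-label query, which is why region-wise warping can stay within a far smaller budget than gradient-estimation attacks.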
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) that can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model that constrains the generated perturbations to local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Detecting Adversarial Faces Using Only Real Face Self-Perturbations [36.26178169550577]
Adversarial attacks aim to disturb the functionality of a target system by adding specific noise to the input samples.
Existing defense techniques achieve high accuracy in detecting some specific adversarial faces (adv-faces).
However, new attack methods, especially GAN-based attacks with completely different noise patterns, circumvent them and reach a higher attack success rate.
arXiv Detail & Related papers (2023-04-22T09:55:48Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- On the (Limited) Generalization of MasterFace Attacks and Its Relation to the Capacity of Face Representations [11.924504853735645]
We study the generalizability of MasterFace attacks in empirical and theoretical investigations.
We estimate the face capacity and the maximum MasterFace coverage under the assumption that identities in the face space are well separated.
We conclude that MasterFaces should not be seen as a threat to face recognition systems but as a tool to enhance the robustness of face recognition models.
arXiv Detail & Related papers (2022-03-23T13:02:41Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns about the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- Vulnerability of Face Recognition Systems Against Composite Face Reconstruction Attack [3.3707422585608953]
Rounding the confidence score is considered a trivial yet simple and effective countermeasure to stop gradient-descent-based image reconstruction attacks; a minimal illustrative sketch of this rounding policy appears after this list.
In this paper, we prove that face reconstruction attacks based on composite faces can reveal the inefficiency of the rounding policy as a countermeasure.
arXiv Detail & Related papers (2020-08-23T03:37:51Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
- On the Robustness of Face Recognition Algorithms Against Attacks and Bias [78.68458616687634]
Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real-world applications.
Despite the enhanced accuracies, robustness of these algorithms against attacks and bias has been challenged.
This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged.
arXiv Detail & Related papers (2020-02-07T18:21:59Z)
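To make the rounding countermeasure from the composite-face-reconstruction entry above concrete, here is a tiny illustrative sketch; the function and probe values are hypothetical, not taken from that paper.

```python
# Illustrative only: rounding the confidence score hides the fine-grained
# feedback that gradient-based reconstruction attacks rely on.
def rounded_confidence(score: float, decimals: int = 1) -> float:
    """The score an attacker sees when the system rounds its output."""
    return round(score, decimals)

# Two nearby probes become indistinguishable, so finite-difference gradient
# estimates collapse to zero...
assert rounded_confidence(0.8731) == rounded_confidence(0.8794) == 0.9
# ...whereas a composite-face attack needs only coarse comparisons between
# candidate faces, which is why the cited paper argues rounding is insufficient.
```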
This list is automatically generated from the titles and abstracts of the papers on this site.