Examining the Human Perceptibility of Black-Box Adversarial Attacks on
Face Recognition
- URL: http://arxiv.org/abs/2107.09126v1
- Date: Mon, 19 Jul 2021 19:45:44 GMT
- Title: Examining the Human Perceptibility of Black-Box Adversarial Attacks on
Face Recognition
- Authors: Benjamin Spetter-Goldstein, Nataniel Ruiz, Sarah Adel Bargal
- Abstract summary: Adversarial attacks are a promising way to grant users privacy from Face Recognition systems.
We show how the $\ell_2$ norm and other metrics do not correlate with human perceptibility in a linear fashion.
- Score: 14.557859576234621
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The modern open internet contains billions of public images of human faces
across the web, especially on social media websites used by half the world's
population. In this context, Face Recognition (FR) systems have the potential
to match faces to specific names and identities, creating glaring privacy
concerns. Adversarial attacks are a promising way to grant users privacy from
FR systems by disrupting their capability to recognize faces. Yet, such attacks
can be perceptible to human observers, especially under the more challenging
black-box threat model. In the literature, the justification for the
imperceptibility of such attacks hinges on bounding metrics such as $\ell_p$
norms. However, there is not much research on how these norms match up with
human perception. Through examining and measuring both the effectiveness of
recent black-box attacks in the face recognition setting and their
corresponding human perceptibility through survey data, we demonstrate the
trade-offs in perceptibility that occur as attacks become more aggressive. We
also show how the $\ell_2$ norm and other metrics do not correlate with human
perceptibility in a linear fashion, thus making these norms suboptimal at
measuring adversarial attack perceptibility.
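As a concrete illustration of why a single $\ell_p$ value is a coarse proxy for perceptibility, the minimal Python sketch below (not from the paper; the image size and perturbation shapes are illustrative assumptions) constructs two perturbations with identical $\ell_2$ norms: one spread thinly across every pixel and one concentrated in a small patch. A human observer sees the two very differently, yet the $\ell_2$ metric cannot distinguish them.
```python
# Minimal sketch (not the authors' code): two perturbations with the same
# l2 norm can differ greatly in how visible they are. Image size (112x112x3)
# and perturbation shapes are illustrative assumptions.
import numpy as np

def perturbation_norms(clean, adversarial):
    """Common l_p norms of the perturbation between two images."""
    delta = (adversarial - clean).ravel()
    return {
        "l0": int(np.count_nonzero(delta)),   # number of changed values
        "l2": float(np.linalg.norm(delta)),   # Euclidean magnitude
        "linf": float(np.abs(delta).max()),   # largest single change
    }

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(112, 112, 3)).astype(np.float64)

# (a) tiny noise spread over every pixel
d_spread = rng.normal(0.0, 1.0, size=clean.shape)

# (b) a large change confined to an 8x8 patch, rescaled so that its
# l2 norm exactly matches the spread perturbation's l2 norm
d_patch = np.zeros_like(clean)
d_patch[:8, :8, :] = 40.0
d_patch *= np.linalg.norm(d_spread) / np.linalg.norm(d_patch)

print(perturbation_norms(clean, clean + d_spread))  # same l2, small linf
print(perturbation_norms(clean, clean + d_patch))   # same l2, large linf
```
The paper's survey data makes the analogous point empirically: attacks with similar norm bounds can sit at very different points on the human-perceptibility scale.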
Related papers
- Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems [13.830575255066773]
Face recognition pipelines have been widely deployed in mission-critical systems for trustworthy, equitable, and responsible AI applications.
The emergence of adversarial attacks has threatened the security of the entire recognition pipeline.
We propose an effective yet easy-to-launch physical adversarial attack, named AdvColor, against black-box face recognition pipelines.
arXiv Detail & Related papers (2024-07-11T13:58:09Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Face Encryption via Frequency-Restricted Identity-Agnostic Attacks [25.198662208981467]
Malicious collectors use deep face recognition systems to easily steal biometric information.
We propose a frequency-restricted identity-agnostic (FRIA) framework to encrypt face images against unauthorized face recognition.
arXiv Detail & Related papers (2023-08-11T07:38:46Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- Toward Face Biometric De-identification using Adversarial Examples [12.990538405629453]
Face recognition has endangered the privacy of internet users, particularly on social media.
In this paper, we assess the effectiveness of using two widely known adversarial methods for de-identifying personal images.
arXiv Detail & Related papers (2023-02-07T18:17:41Z)
- Is Face Recognition Safe from Realizable Attacks? [1.7214499647717132]
Face recognition is a popular form of biometric authentication and due to its widespread use, attacks have become more common as well.
Recent studies show that Face Recognition Systems are vulnerable to attacks and can lead to erroneous identification of faces.
We propose an attack scheme in which the attacker generates realistic synthesized face images with subtle perturbations and physically realizes them on his face to attack black-box face recognition systems.
arXiv Detail & Related papers (2022-10-15T03:52:53Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Fairness Properties of Face Recognition and Obfuscation Systems [19.195705814819306]
Face obfuscation systems generate imperceptible perturbations that, when added to an image, cause the facial recognition system to misidentify the user.
This dependence of face obfuscation on metric embedding networks, which are known to be unfair in the context of facial recognition, surfaces the question of demographic fairness.
We find that metric embedding networks are demographically aware; they cluster faces in the embedding space based on their demographic attributes.
arXiv Detail & Related papers (2021-08-05T16:18:15Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
- On the Robustness of Face Recognition Algorithms Against Attacks and Bias [78.68458616687634]
Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real world applications.
Despite the enhanced accuracies, robustness of these algorithms against attacks and bias has been challenged.
This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged.
arXiv Detail & Related papers (2020-02-07T18:21:59Z)