Adversarial Attacks against Face Recognition: A Comprehensive Study
- URL: http://arxiv.org/abs/2007.11709v3
- Date: Sat, 6 Feb 2021 14:46:56 GMT
- Title: Adversarial Attacks against Face Recognition: A Comprehensive Study
- Authors: Fatemeh Vakhshiteh, Ahmad Nickabadi and Raghavendra Ramachandra
- Abstract summary: Face recognition (FR) systems have demonstrated outstanding verification performance.
Recent studies show that (deep) FR systems exhibit an intriguing vulnerability to imperceptible or perceptible but natural-looking adversarial input images.
- Score: 3.766020696203255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition (FR) systems have demonstrated outstanding verification
performance, suggesting suitability for real-world applications ranging from
photo tagging in social media to automated border control (ABC). In an advanced
FR system with a deep learning-based architecture, however, improving recognition
accuracy alone is not sufficient; the system should also withstand the kinds of
attacks designed to undermine its performance. Recent
studies show that (deep) FR systems exhibit an intriguing vulnerability to
imperceptible or perceptible but natural-looking adversarial input images that
drive the model to incorrect output predictions. In this article, we present a
comprehensive survey of adversarial attacks against FR systems and elaborate on
the effectiveness of new countermeasures against them. Further, we propose a
taxonomy of existing attack and defense methods based on different criteria. We
compare attack methods by their orientation and attributes, and defense
approaches by category. Finally, we explore remaining challenges and potential
research directions.
Related papers
- Robust Image Classification: Defensive Strategies against FGSM and PGD Adversarial Attacks [0.0]
Adversarial attacks pose significant threats to the robustness of deep learning models in image classification.
This paper explores and refines defense mechanisms against these attacks to enhance the resilience of neural networks.
arXiv Detail & Related papers (2024-08-20T02:00:02Z)
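The entry above names FGSM and PGD, the two gradient-based attacks that most defenses are benchmarked against. The following is a minimal PyTorch sketch of untargeted versions of both, assuming a generic differentiable classifier and illustrative epsilon/step-size values; it is not code from the paper.

```python
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, eps=8 / 255):
    """Single-step FGSM: move x along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return torch.clamp(x_adv + eps * grad.sign(), 0.0, 1.0).detach()


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative PGD: repeated FGSM-style steps projected into an L_inf ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto the eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
    return x_adv
```

Adversarial training, the defense most commonly refined against these attacks, essentially replaces clean batches with `pgd_attack` outputs during training.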
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
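Based on the summary above, the MirrorCheck idea can be read as a caption-and-resynthesize consistency test. The sketch below is an assumption-laden illustration rather than the authors' implementation: `caption_fn`, `t2i_fn`, and `embed_fn` are hypothetical callables standing in for the target VLM, a text-to-image model, and an image encoder, and the threshold is arbitrary.

```python
import torch.nn.functional as F


def mirror_style_check(image, caption_fn, t2i_fn, embed_fn, threshold=0.7):
    """Flag `image` as suspicious if it disagrees with its own re-synthesis."""
    caption = caption_fn(image)        # target VLM describes the input
    reconstructed = t2i_fn(caption)    # T2I model renders that description
    sim = F.cosine_similarity(embed_fn(image), embed_fn(reconstructed), dim=-1)
    # A clean image and its re-synthesis should depict the same content;
    # a large mismatch suggests the caption was steered by a perturbation.
    return bool(sim.mean().item() < threshold)
```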
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification [22.078088272837068]
Federated Learning (FL) systems are susceptible to adversarial attacks.
Current defense methods are often impractical for real-world FL systems.
We propose a novel anomaly detection strategy that is designed for real-world FL systems.
arXiv Detail & Related papers (2023-10-06T07:09:05Z)
- Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook [2.1771693754641013]
We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristics and distinguishing features.
Our article delves into various physical adversarial attack methods, categorized according to their target tasks in different applications.
We assess the performance of these attack methods in terms of their effectiveness, stealthiness, and robustness.
arXiv Detail & Related papers (2023-08-11T15:02:19Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems [49.577302852655144]
FACESEC is a framework for fine-grained robustness evaluation of face recognition systems.
We study five face recognition systems in both closed-set and open-set settings.
We find that accurate knowledge of neural architecture is significantly more important than knowledge of the training data in black-box attacks.
arXiv Detail & Related papers (2021-04-08T23:00:25Z)
- Unknown Presentation Attack Detection against Rational Attackers [6.351869353952288]
Presentation attack detection and multimedia forensics are still vulnerable to attacks in real-life settings.
Some of the challenges for existing solutions are the detection of unknown attacks, the ability to perform in adversarial settings, few-shot learning, and explainability.
A new optimization criterion is proposed, and a set of requirements is defined for improving the performance of these systems in real-life settings.
arXiv Detail & Related papers (2020-10-04T14:37:10Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
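The DFANet entry above describes dropout applied inside the surrogate's convolutional layers as a way to diversify the models an attack is computed against. The sketch below captures that intuition only, under assumed hook placement and hyperparameters; it is not the paper's exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def add_feature_dropout(model, p=0.1):
    """Drop conv activations even in eval mode; keep handles to remove later."""
    def hook(_module, _inputs, output):
        return F.dropout(output, p=p, training=True)
    return [m.register_forward_hook(hook)
            for m in model.modules() if isinstance(m, nn.Conv2d)]


def diverse_surrogate_grad(model, x, y, passes=5):
    """Average input gradients over several stochastic surrogate passes."""
    grad = torch.zeros_like(x)
    for _ in range(passes):
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad += torch.autograd.grad(loss, x_adv)[0]
    return grad / passes  # feed into an FGSM/PGD-style step for transfer attacks
```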
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.