Measurement-driven Security Analysis of Imperceptible Impersonation Attacks
- URL: http://arxiv.org/abs/2008.11772v1
- Date: Wed, 26 Aug 2020 19:27:27 GMT
- Title: Measurement-driven Security Analysis of Imperceptible Impersonation Attacks
- Authors: Shasha Li, Karim Khalil, Rameswar Panda, Chengyu Song, Srikanth V.
Krishnamurthy, Amit K. Roy-Chowdhury, Ananthram Swami
- Abstract summary: We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
- Score: 54.727945432381716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of Internet of Things (IoT) brings about new security
challenges at the intersection of cyber and physical spaces. One prime example
is the vulnerability of Face Recognition (FR) based access control in IoT
systems. While previous research has shown that Deep Neural Network (DNN)-based
FR systems (FRS) are potentially susceptible to imperceptible impersonation
attacks, the potency of such attacks in a wide set of scenarios has not been
thoroughly investigated. In this paper, we present the first systematic,
wide-ranging measurement study of the exploitability of DNN-based FR systems
using a large-scale dataset. We find that arbitrary impersonation attacks,
wherein an arbitrary attacker impersonates an arbitrary target, are hard if
imperceptibility is an auxiliary goal. Specifically, we show that factors such
as skin color, gender, and age impact the ability to carry out an attack on a
specific target victim to different extents. We also study the feasibility of
constructing universal attacks that are robust to different poses or views of
the attacker's face. Our results show that finding a universal perturbation is
a much harder problem from the attacker's perspective. Finally, we find that
the perturbed images do not generalize well across different DNN models. This
suggests security countermeasures that can dramatically reduce the
exploitability of DNN-based FR systems.
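As a rough illustration of the threat model measured above (a hedged sketch, not the authors' exact optimization), the snippet below crafts a single L-infinity-bounded perturbation shared across several poses of the attacker, pushing every perturbed view toward the target victim's identity in embedding space. The `embedder` argument is a placeholder for any PyTorch face-embedding network (e.g., a FaceNet-style model), and `epsilon`, `alpha`, and `steps` are illustrative values rather than settings from the paper.

```python
# Hedged sketch of an imperceptible impersonation attack on a DNN-based FR
# system: a PGD-style targeted perturbation, shared across multiple views of
# the attacker, bounded in L-infinity norm to stay visually imperceptible.
# `embedder` is a placeholder face-embedding model; nothing here reproduces
# the paper's exact procedure or hyperparameters.
import torch
import torch.nn.functional as F

def impersonation_attack(embedder, attacker_views, target_face,
                         epsilon=4 / 255, alpha=1 / 255, steps=50):
    """attacker_views: (n_views, C, H, W) crops of the attacker's face.
    target_face: (C, H, W) crop of the victim. Returns the perturbed views."""
    embedder.eval()
    with torch.no_grad():
        target_emb = embedder(target_face.unsqueeze(0))          # (1, d)

    delta = torch.zeros_like(attacker_views[0])                  # one shared perturbation
    for _ in range(steps):
        delta.requires_grad_(True)
        adv = torch.clamp(attacker_views + delta, 0.0, 1.0)      # keep valid pixel range
        emb = embedder(adv)                                       # (n_views, d)
        # Average cosine distance to the victim over all poses, which is what
        # makes the single perturbation robust to the attacker's viewpoint.
        loss = (1.0 - F.cosine_similarity(emb, target_emb)).mean()
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = (delta - alpha * grad.sign()).clamp(-epsilon, epsilon)
        delta = delta.detach()
    return torch.clamp(attacker_views + delta, 0.0, 1.0)
```

Under this framing, an attempt counts as an impersonation only if a perturbed view crosses the FR system's verification threshold for the victim while `epsilon` stays small enough to remain imperceptible; a complementary cross-model transferability check is sketched after the related-papers list below.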
Related papers
- Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- Problem space structural adversarial attacks for Network Intrusion Detection Systems based on Graph Neural Networks [8.629862888374243]
We propose the first formalization of adversarial attacks specifically tailored for GNN in network intrusion detection.
We outline and model the problem space constraints that attackers need to consider to carry out feasible structural attacks in real-world scenarios.
Our findings demonstrate the increased robustness of the models against classical feature-based adversarial attacks.
arXiv Detail & Related papers (2024-03-18T14:40:33Z)
- Detecting Unknown Attacks in IoT Environments: An Open Set Classifier for Enhanced Network Intrusion Detection [5.787704156827843]
In this paper, we introduce a framework aimed at mitigating the open set recognition (OSR) problem in the realm of Network Intrusion Detection Systems (NIDS) tailored for IoT environments.
Our framework capitalizes on image-based representations of packet-level data, extracting spatial and temporal patterns from network traffic.
The empirical results demonstrate the framework's efficacy, achieving an 88% detection rate for previously unseen attacks.
arXiv Detail & Related papers (2023-09-14T06:41:45Z)
- Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks [5.024667090792856]
Deep neural networks (DNNs) have gained prominence in various applications, such as classification, recognition, and prediction.
A fundamental attribute of traditional DNNs is their vulnerability to modifications in input data, which has resulted in the investigation of adversarial attacks.
This study aims to challenge the efficacy and generalization of contemporary defense mechanisms against adversarial attacks.
arXiv Detail & Related papers (2023-08-12T05:21:34Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids [7.351477761427584]
False data injection attacks (FDIAs) pose a significant security threat to power system state estimation.
Recent studies have proposed machine learning (ML) techniques, particularly deep neural networks (DNNs), to detect such attacks.
arXiv Detail & Related papers (2021-02-17T22:26:34Z)
- Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative, image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
arXiv Detail & Related papers (2020-10-07T09:08:51Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
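Tying the abstract's finding on limited cross-model generalization to the transferable-attack work listed above, the following minimal sketch (same placeholder embedding models as before, with a hypothetical `threshold`) measures how often adversarial faces crafted against one surrogate model still pass verification under a different, unseen model.

```python
# Hedged sketch of a transferability check: evaluate adversarial faces, crafted
# against a surrogate embedder, on an unseen victim model. `threshold` is a
# hypothetical verification threshold, not a value from the paper.
import torch
import torch.nn.functional as F

@torch.no_grad()
def transfer_success_rate(victim_model, adv_faces, target_face, threshold=0.6):
    """Fraction of adversarial faces whose embedding under the unseen model is
    more similar to the target victim than the verification threshold."""
    victim_model.eval()
    target_emb = victim_model(target_face.unsqueeze(0))           # (1, d)
    adv_emb = victim_model(adv_faces)                              # (n, d)
    sims = F.cosine_similarity(adv_emb, target_emb)                # (n,)
    return (sims > threshold).float().mean().item()
```

Used with images produced by `impersonation_attack` against a surrogate, this rate is typically far lower than the success rate against the surrogate itself, which is the limited cross-model generalization the abstract reports and one motivation for the countermeasures it suggests.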