Robust Attacks on Deep Learning Face Recognition in the Physical World
- URL: http://arxiv.org/abs/2011.13526v1
- Date: Fri, 27 Nov 2020 02:24:43 GMT
- Title: Robust Attacks on Deep Learning Face Recognition in the Physical World
- Authors: Meng Shen, Hao Yu, Liehuang Zhu, Ke Xu, Qi Li, Xiaojiang Du
- Abstract summary: FaceAdv is a physical-world attack that crafts adversarial stickers to deceive FR systems.
It mainly consists of a sticker generator and a transformer, where the former can craft several stickers with different shapes.
We conduct extensive experiments to evaluate the effectiveness of FaceAdv on attacking 3 typical FR systems.
- Score: 48.909604306342544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have been increasingly used in face recognition
(FR) systems. Recent studies, however, show that DNNs are vulnerable to
adversarial examples, which can potentially mislead the FR systems using DNNs
in the physical world. Existing attacks on these systems either generate
perturbations working merely in the digital world, or rely on customized
equipment to generate perturbations and are not robust in varying physical
environments. In this paper, we propose FaceAdv, a physical-world attack that
crafts adversarial stickers to deceive FR systems. It mainly consists of a
sticker generator and a transformer, where the former can craft several
stickers with different shapes and the latter digitally attaches stickers to
human faces and provides feedback to the generator to improve the
effectiveness of the stickers. We conduct extensive experiments to
evaluate the effectiveness of FaceAdv on attacking 3 typical FR systems (i.e.,
ArcFace, CosFace and FaceNet). The results show that compared with a
state-of-the-art attack, FaceAdv can significantly improve the success rates of
both dodging and impersonation attacks. We also conduct comprehensive
evaluations to
demonstrate the robustness of FaceAdv.
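To make the generator-transformer feedback loop described in the abstract more concrete, below is a minimal, illustrative PyTorch sketch of how an adversarial sticker could be optimized against a face-recognition embedder. Every name here (StickerGenerator, attach_sticker, the stand-in embedder, image sizes, and the cosine-similarity losses) is an assumption for illustration only; the paper's actual generator architecture, its transformer (which handles physical placement and environmental variation), and its loss formulation are more involved.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StickerGenerator(nn.Module):
    """Maps a latent vector to a small RGB sticker with pixel values in [0, 1]."""
    def __init__(self, latent_dim=64, sticker_size=32):
        super().__init__()
        self.sticker_size = sticker_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * sticker_size * sticker_size),
            nn.Sigmoid(),
        )

    def forward(self, z):
        out = self.net(z)
        return out.view(-1, 3, self.sticker_size, self.sticker_size)


def attach_sticker(face, sticker, top=40, left=48):
    """Differentiably paste the sticker onto a fixed facial region
    (a stand-in for the paper's transformer, which also models placement,
    pose, and environmental variation)."""
    _, _, height, width = face.shape
    s = sticker.shape[-1]
    pad = (left, width - left - s, top, height - top - s)
    mask = F.pad(torch.ones_like(sticker), pad)   # 1 inside the sticker region
    placed = F.pad(sticker, pad)                  # sticker moved to its position
    return face * (1.0 - mask) + placed * mask


# Stand-in embedder; a real attack would target ArcFace, CosFace, or FaceNet.
embedder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))
embedder.eval()
for p in embedder.parameters():
    p.requires_grad_(False)

generator = StickerGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)

face = torch.rand(1, 3, 112, 112)                     # attacker's face (placeholder)
victim_emb = F.normalize(embedder(face), dim=1)       # attacker's own identity
target_emb = F.normalize(torch.randn(1, 128), dim=1)  # impersonation target

impersonate = True  # False -> dodging attack
for step in range(200):
    z = torch.randn(1, 64)
    adv_face = attach_sticker(face, generator(z))
    emb = F.normalize(embedder(adv_face), dim=1)
    if impersonate:
        # Pull the embedding toward the target identity.
        loss = 1.0 - F.cosine_similarity(emb, target_emb).mean()
    else:
        # Push the embedding away from the attacker's own identity.
        loss = F.cosine_similarity(emb, victim_emb).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The two branches of the loss mirror the attack goals evaluated in the paper: dodging drives the embedding away from the attacker's own identity, while impersonation pulls it toward a chosen target identity.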
Related papers
- Attack Anything: Blind DNNs via Universal Background Adversarial Attack [17.73886733971713]
It has been widely substantiated that deep neural networks (DNNs) are vulnerable to adversarial perturbations.
We propose a background adversarial attack framework that can attack anything, with attack efficacy that generalizes well across diverse objects, models, and tasks.
We conduct comprehensive and rigorous experiments in both the digital and physical domains across various objects, models, and tasks, demonstrating the effectiveness of the proposed attack-anything method.
arXiv Detail & Related papers (2024-08-17T12:46:53Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer [10.245536402327096]
We propose RSTAM, a new method for attacking face recognition models and systems.
RSTAM enables an effective black-box impersonation attack using an adversarial mask printed by a mobile and compact printer.
The performance of the attacks is also evaluated on state-of-the-art commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft.
arXiv Detail & Related papers (2022-06-25T08:16:55Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack that uses full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)