Real-World Adversarial Examples involving Makeup Application
- URL: http://arxiv.org/abs/2109.03329v1
- Date: Sat, 4 Sep 2021 05:29:28 GMT
- Title: Real-World Adversarial Examples involving Makeup Application
- Authors: Chang-Sheng Lin, Chia-Yi Hsu, Pin-Yu Chen, Chia-Mu Yu
- Abstract summary: We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
- Score: 58.731070632586594
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have developed rapidly and have achieved outstanding
performance in several tasks, such as image classification and natural language
processing. However, recent studies have indicated that both digital and
physical adversarial examples can fool neural networks. Face-recognition
systems are used in various applications that involve security threats from
physical adversarial examples. Herein, we propose a physical adversarial attack
with the use of full-face makeup. Because makeup is a plausible presence on a
human face, it can increase the imperceptibility of an attack. In our attack
framework, we combine a cycle-consistent generative adversarial network
(CycleGAN) with a victimized classifier: the CycleGAN generates adversarial
makeup, and the victimized classifier uses the VGG-16 architecture. Our
experimental results show that our attack can effectively
overcome manual errors in makeup application, such as color and
position-related errors. We also demonstrate that the approaches used to train
the models can influence physical attacks; the adversarial perturbations
crafted from the pre-trained model are affected by the corresponding training
data.
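The sketch below illustrates the general recipe described in the abstract, written in PyTorch: a makeup generator (a tiny stand-in for a pretrained CycleGAN generator) is fine-tuned against a frozen VGG-16 victim classifier so that its makeup output misleads the identity prediction. The stand-in generator, the number of identities, and the loss weighting are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the authors' code): fine-tune a makeup generator so that
    # its output fools a VGG-16 face classifier while staying visually close to makeup.
    import torch
    import torch.nn as nn
    from torchvision.models import vgg16

    class TinyGenerator(nn.Module):
        """Stand-in for a pretrained CycleGAN makeup generator (hypothetical)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
            )
        def forward(self, x):
            return torch.clamp(x + 0.1 * self.net(x), 0.0, 1.0)

    num_identities = 10                                  # assumed size of the face-ID label set
    victim = vgg16(num_classes=num_identities).eval()    # victimized classifier (VGG-16)
    for p in victim.parameters():
        p.requires_grad_(False)

    generator = TinyGenerator()
    opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    ce = nn.CrossEntropyLoss()

    def attack_step(face, true_id, makeup_weight=1.0):
        """One fine-tuning step: keep the makeup subtle, break the classification."""
        made_up = generator(face)
        logits = victim(made_up)
        adv_loss = -ce(logits, true_id)            # push prediction away from the true identity
        fidelity = (made_up - face).abs().mean()   # crude stand-in for CycleGAN's cycle loss
        loss = adv_loss + makeup_weight * fidelity
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Toy usage with random tensors in place of real face crops.
    face = torch.rand(1, 3, 224, 224)
    true_id = torch.tensor([3])
    for _ in range(5):
        attack_step(face, true_id)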
Related papers
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial example, in which the perturbation is generated by a very common natural phenomenon: the shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
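As a rough illustration of the shadow-based attack idea summarized above, the toy sketch below darkens a random triangular region of an image and keeps sampling until the classifier's prediction changes. The `predict` callable and the random-search loop are assumptions for illustration, not the paper's optimization procedure.

    # Toy sketch of the "shadow as perturbation" idea (not the paper's algorithm):
    # randomly search for a triangular shadow that changes a classifier's prediction.
    import numpy as np

    def shadow_mask(h, w, tri, darkness=0.6):
        """Per-pixel multiplier that darkens pixels inside triangle `tri`."""
        ys, xs = np.mgrid[0:h, 0:w]
        def sign(p1, p2):
            return (xs - p2[0]) * (p1[1] - p2[1]) - (p1[0] - p2[0]) * (ys - p2[1])
        d1, d2, d3 = sign(tri[0], tri[1]), sign(tri[1], tri[2]), sign(tri[2], tri[0])
        inside = ~(((d1 < 0) | (d2 < 0) | (d3 < 0)) & ((d1 > 0) | (d2 > 0) | (d3 > 0)))
        return np.where(inside, darkness, 1.0)

    def shadow_attack(image, label, predict, trials=200, seed=0):
        """`image`: HxWx3 float array in [0,1]; `predict`: array -> class id (assumed)."""
        rng = np.random.default_rng(seed)
        h, w = image.shape[:2]
        for _ in range(trials):
            tri = [(rng.integers(0, w), rng.integers(0, h)) for _ in range(3)]
            shadowed = image * shadow_mask(h, w, tri)[..., None]
            if predict(shadowed) != label:
                return shadowed, tri    # found a shadow that flips the prediction
        return None, None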
- Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition [20.34296242635234]
We propose a unified adversarial face generation method - Adv-Makeup.
Adv-Makeup realizes imperceptible and transferable attacks under the black-box setting.
It significantly improves the attack success rate in this setting, even when attacking commercial systems.
arXiv Detail & Related papers (2021-05-07T11:00:35Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these activation profiles can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Robust Attacks on Deep Learning Face Recognition in the Physical World [48.909604306342544]
FaceAdv is a physical-world attack that crafts adversarial stickers to deceive FR systems.
It mainly consists of a sticker generator and a transformer, where the former can craft several stickers with different shapes.
We conduct extensive experiments to evaluate the effectiveness of FaceAdv in attacking three typical FR systems.
arXiv Detail & Related papers (2020-11-27T02:24:43Z)
- AdvFoolGen: Creating Persistent Troubles for Deep Classifiers [17.709146615433458]
We present a new black-box attack, termed AdvFoolGen, which generates attack images from the same feature space as natural images.
We demonstrate the effectiveness and robustness of our attack in the face of state-of-the-art defense techniques.
arXiv Detail & Related papers (2020-07-20T21:27:41Z)
- Detection of Makeup Presentation Attacks based on Deep Face Representations [16.44565034551196]
The application of makeup can be abused to launch so-called makeup presentation attacks.
It is shown that makeup presentation attacks might seriously impact the security of face recognition systems.
We propose an attack detection scheme which distinguishes makeup presentation attacks from genuine authentication attempts.
arXiv Detail & Related papers (2020-06-09T06:53:58Z)
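The sketch below illustrates one plausible form of such a detection scheme: pair a reference and a probe deep face representation and train a binary classifier to separate makeup presentation attacks from bona fide attempts. The embedding dimensionality, the random placeholder data, and the SVM detector are assumptions, not the paper's exact pipeline.

    # Minimal sketch (assumptions, not the paper's pipeline): train a binary detector
    # on deep face representations to separate makeup presentation attacks (label 1)
    # from bona fide authentication attempts (label 0).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    def pair_features(ref_emb, probe_emb):
        """Combine reference/probe embeddings; here simply their absolute difference."""
        return np.abs(ref_emb - probe_emb)

    # Placeholder data: in practice ref_emb/probe_emb would come from a face-recognition
    # network (e.g., a 512-d embedding per image); random vectors are used here.
    rng = np.random.default_rng(0)
    n, d = 400, 512
    ref_emb = rng.normal(size=(n, d))
    probe_emb = rng.normal(size=(n, d))
    labels = rng.integers(0, 2, size=n)          # 1 = makeup presentation attack

    X = pair_features(ref_emb, probe_emb)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

    detector = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
    print(classification_report(y_te, detector.predict(X_te)))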
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely undermine the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
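As a rough illustration of the dropout idea behind DFANet as summarized above, the sketch below applies dropout to a surrogate's convolutional feature maps at every step of an iterative FGSM-style attack, so each gradient step sees a slightly different model. The surrogate architecture, dropout placement and rate, and attack hyperparameters are assumptions, not DFANet's exact settings.

    # Illustrative sketch (not DFANet's exact recipe): randomize a surrogate model with
    # dropout on conv feature maps while crafting a transferable perturbation.
    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg16

    surrogate = vgg16(num_classes=10).eval()     # assumed surrogate face classifier
    for p in surrogate.parameters():
        p.requires_grad_(False)

    def noisy_forward(model, x, drop_p=0.1):
        """Forward pass that drops conv feature maps after each ReLU (assumed placement)."""
        for layer in model.features:
            x = layer(x)
            if isinstance(layer, torch.nn.ReLU):
                x = F.dropout2d(x, p=drop_p, training=True)
        x = model.avgpool(x)
        x = torch.flatten(x, 1)
        return model.classifier(x)

    def craft(face, true_id, eps=8 / 255, step=2 / 255, iters=20):
        """Iterative FGSM-style attack driven by the randomized surrogate."""
        adv = face.clone().detach()
        for _ in range(iters):
            adv = adv.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(noisy_forward(surrogate, adv), true_id)
            grad = torch.autograd.grad(loss, adv)[0]
            with torch.no_grad():
                adv = adv + step * grad.sign()             # ascend the loss
                adv = face + (adv - face).clamp(-eps, eps)  # stay within the eps-ball
                adv = adv.clamp(0, 1)                       # keep a valid image
        return adv.detach()

    # Toy usage with a random tensor in place of a real face crop.
    adv_face = craft(torch.rand(1, 3, 224, 224), torch.tensor([0]))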
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.