Adv-Makeup: A New Imperceptible and Transferable Attack on Face
Recognition
- URL: http://arxiv.org/abs/2105.03162v1
- Date: Fri, 7 May 2021 11:00:35 GMT
- Title: Adv-Makeup: A New Imperceptible and Transferable Attack on Face
Recognition
- Authors: Bangjie Yin, Wenxuan Wang, Taiping Yao, Junfeng Guo, Zelun Kong,
Shouhong Ding, Jilin Li and Cong Liu
- Abstract summary: We propose a unified adversarial face generation method, Adv-Makeup.
Adv-Makeup can realize imperceptible and transferable attacks under the black-box setting.
It can significantly improve the attack success rate under the black-box setting, even when attacking commercial systems.
- Score: 20.34296242635234
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks, particularly face recognition models, have been
shown to be vulnerable to both digital and physical adversarial examples.
However, existing adversarial examples against face recognition systems
either lack transferability to black-box models or cannot be implemented in
practice. In this paper, we propose a unified adversarial face generation
method, Adv-Makeup, which can realize imperceptible and transferable attacks
under the black-box setting. Adv-Makeup develops a task-driven makeup
generation method with a blending module to synthesize imperceptible eye
shadow over the orbital region of faces. To achieve transferability,
Adv-Makeup implements a fine-grained meta-learning adversarial attack
strategy to learn more general attack features from various models. Compared
to existing techniques, visualization results demonstrate that Adv-Makeup
generates far more imperceptible attacks in both digital and physical
scenarios. Meanwhile, extensive quantitative experiments show that
Adv-Makeup can significantly improve the attack success rate under the
black-box setting, even when attacking commercial systems.
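The two components named in the abstract are concrete enough to sketch. Below is a minimal, hypothetical PyTorch rendering of the training loop, not the authors' released code: the blending box coordinates, the generator interface, and the cosine-similarity impersonation loss are all illustrative assumptions (the sketch also assumes PyTorch 2.x for torch.func.functional_call).

```python
import torch
import torch.nn.functional as F

def blend(face, patch, box=(60, 30, 168, 80), alpha=0.9):
    """Paste a generated eye-shadow patch into a fixed orbital-region box
    (x, y, w, h); a real system would locate the region via face landmarks.
    The generator is assumed to emit a patch matching the box size."""
    x, y, w, h = box
    out = face.clone()
    region = face[..., y:y + h, x:x + w]
    out[..., y:y + h, x:x + w] = (1 - alpha) * region + alpha * patch
    return out

def attack_loss(model, adv_face, target_emb):
    """Impersonation objective: pull the adversarial face's embedding
    toward the target identity's embedding."""
    emb = F.normalize(model(adv_face), dim=-1)
    return 1.0 - F.cosine_similarity(emb, target_emb, dim=-1).mean()

def meta_step(generator, models, face, target_emb, inner_lr=0.01):
    """One fine-grained meta-learning step: each surrogate takes a turn as
    the held-out meta-test model while the others form the meta-train set."""
    names, params = zip(*generator.named_parameters())
    meta_loss = 0.0
    for i, test_model in enumerate(models):
        train_models = models[:i] + models[i + 1:]
        adv_face = blend(face, generator(face))
        # Meta-train: average attack loss over the training surrogates.
        train_loss = sum(attack_loss(m, adv_face, target_emb)
                         for m in train_models) / len(train_models)
        # Simulated inner update of the generator parameters.
        grads = torch.autograd.grad(train_loss, params, create_graph=True)
        adapted = {n: p - inner_lr * g
                   for n, p, g in zip(names, params, grads)}
        adv_adapted = blend(
            face, torch.func.functional_call(generator, adapted, (face,)))
        # Meta-test: the adapted generator must also fool the held-out model.
        meta_loss = meta_loss + train_loss \
            + attack_loss(test_model, adv_adapted, target_emb)
    return meta_loss / len(models)
```

In a full pipeline, meta_loss would be backpropagated through an outer optimizer to update the generator; alternating which surrogate is held out as the meta-test model is what pushes the generator toward attack features that transfer to unseen black-box recognizers.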
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which constrains the generated perturbations to local semantic regions for stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection [60.73609509756533]
DiffAM is a novel approach to generate high-quality protected face images with adversarial makeup transferred from reference images.
Experiments demonstrate that DiffAM achieves higher visual quality and attack success rates, with a gain of 12.98% under the black-box setting.
arXiv Detail & Related papers (2024-05-16T08:05:36Z)
- 3D-Aware Adversarial Makeup Generation for Facial Privacy Protection [23.915259014651337]
The paper proposes a 3D-Aware Adversarial Makeup Generation GAN (3DAM-GAN).
A UV-based generator consisting of a novel Makeup Adjustment Module (MAM) and Makeup Transfer Module (MTM) is designed to render realistic and robust makeup.
Experiment results on several benchmark datasets demonstrate that 3DAM-GAN could effectively protect faces against various FR models.
arXiv Detail & Related papers (2023-06-26T12:27:59Z)
- RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer [10.245536402327096]
We propose RSTAM, a new method for attacking face recognition models and systems.
RSTAM enables an effective black-box impersonation attack using an adversarial mask printed by a mobile and compact printer.
The performance of the attacks is also evaluated on state-of-the-art commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft.
arXiv Detail & Related papers (2022-06-25T08:16:55Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving generalization to both unseen facial forgeries and unseen image/video qualities.
Since AI-based face manipulation often leaves high-frequency artifacts that models can easily spot but that generalize poorly, we propose a new adversarial training method that attempts to blur out these specific artifacts.
arXiv Detail & Related papers (2021-03-25T02:20:08Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method applied in convolutional layers, which increases the diversity of surrogate models (a sketch of this idea follows the list below).
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
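DFANet's core mechanism, as summarized in the entry above, is simple enough to make concrete: keep dropout active inside the surrogate's convolutional stack while attack gradients are computed, so each iteration effectively queries a slightly different surrogate. The sketch below is an illustrative approximation in PyTorch, not the paper's code; enable_feature_dropout, the loss, and the attack hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def enable_feature_dropout(model: nn.Module, p: float = 0.1) -> nn.Module:
    """Illustrative stand-in for DFANet's idea: attach 2D dropout after every
    convolution so each forward pass behaves like a different surrogate."""
    for name, child in list(model.named_children()):
        if isinstance(child, nn.Conv2d):
            setattr(model, name, nn.Sequential(child, nn.Dropout2d(p)))
        else:
            enable_feature_dropout(child, p)
    return model

def transfer_attack(model, face, target_emb, steps=50, eps=8 / 255, lr=1 / 255):
    """Iterative impersonation attack on a dropout-diversified surrogate."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout2d):
            m.train()  # keep only dropout stochastic; BN stays in eval mode
    delta = torch.zeros_like(face, requires_grad=True)
    for _ in range(steps):
        emb = F.normalize(model(face + delta), dim=-1)
        loss = 1.0 - F.cosine_similarity(emb, target_emb, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # minimize loss: impersonate target
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (face + delta).detach()
```

Averaging gradients over these randomized forward passes discourages overfitting to any single surrogate, which is the property the paper exploits to attack commercial APIs without queries.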
This list is automatically generated from the titles and abstracts of the papers on this site.