Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition
- URL: http://arxiv.org/abs/2303.15818v1
- Date: Tue, 28 Mar 2023 08:42:54 GMT
- Authors: Xiao Yang, Chang Liu, Longlong Xu, Yikai Wang, Yinpeng Dong, Ning
Chen, Hang Su, Jun Zhu
- Abstract summary: The goal of this work is to develop a more reliable technique that can carry out an end-to-end evaluation of adversarial robustness for commercial systems.
We design adversarial textured 3D meshes (AT3D) with an elaborate topology on a human face, which can be 3D-printed and pasted on the attacker's face to evade the defenses.
To deviate from the mesh-based space, we propose to perturb the low-dimensional coefficient space based on a 3D Morphable Model.
- Score: 42.60954035488262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition is a prevailing authentication solution in numerous
biometric applications. Physical adversarial attacks, as an important
surrogate, can identify the weaknesses of face recognition systems and evaluate
their robustness before deployment. However, most existing physical attacks are
either readily detectable or ineffective against commercial recognition
systems. The goal of this work is to develop a more reliable technique that can
carry out an end-to-end evaluation of adversarial robustness for commercial
systems. This requires that the technique simultaneously deceive black-box
recognition models and evade defensive mechanisms. To this end, we design
adversarial textured 3D meshes (AT3D) with an elaborate topology on a human
face, which can be 3D-printed and pasted on the attacker's face to evade the
defenses. However, the mesh-based optimization regime calculates gradients in
high-dimensional mesh space, and can be trapped into local optima with
unsatisfactory transferability. To deviate from the mesh-based space, we
propose to perturb the low-dimensional coefficient space based on a 3D Morphable
Model, which significantly improves black-box transferability while enjoying
faster search and better visual quality. Extensive
experiments in digital and physical scenarios show that our method effectively
explores the security vulnerabilities of multiple popular commercial services,
including three recognition APIs, four anti-spoofing APIs, two prevailing
mobile phones and two automated access control systems.
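The coefficient-space idea can be illustrated with a minimal numerical sketch. Everything here is a toy stand-in, not the paper's actual models: a random linear basis plays the role of the 3D Morphable Model, and a random linear map plays the role of a surrogate face-recognition embedding. The point is only that the attack optimizes a small perturbation over the low-dimensional coefficients rather than over raw mesh vertices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (illustrative assumptions, not the paper's models):
# a 3DMM maps a low-dimensional coefficient vector c to mesh vertices via
#   V = mean + B @ c,
# and a linear surrogate "recognizer" maps vertices to an embedding e = W @ V.
n_vertices, n_coeffs, emb_dim = 300, 20, 64
mean = rng.normal(size=n_vertices)
B = rng.normal(size=(n_vertices, n_coeffs))          # 3DMM basis
W = rng.normal(size=(emb_dim, n_vertices)) / np.sqrt(n_vertices)

def embedding(coeffs):
    return W @ (mean + B @ coeffs)

target = embedding(np.zeros(n_coeffs))               # victim identity's embedding
c_att = 0.1 * rng.normal(size=n_coeffs)              # attacker's own coefficients

# Optimize a perturbation delta in coefficient space (20 dims) rather than in
# vertex space (300 dims): gradient descent on the squared embedding distance
# to the target identity, with an L_inf budget on delta.
delta = np.zeros(n_coeffs)
lr, eps = 5e-3, 0.5
init_dist = np.linalg.norm(embedding(c_att) - target)
for _ in range(200):
    diff = embedding(c_att + delta) - target
    grad = B.T @ (W.T @ diff)                        # gradient of 0.5 * ||diff||^2
    delta -= lr * grad
    delta = np.clip(delta, -eps, eps)                # keep the perturbation small

final_dist = np.linalg.norm(embedding(c_att + delta) - target)
print(init_dist, final_dist)                         # the distance shrinks
```

Because the coefficient space has far fewer dimensions than the vertex space, each gradient step searches a much smaller, smoother landscape, which is the intuition behind the improved transferability and search efficiency claimed in the abstract.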
Related papers
- VoxAtnNet: A 3D Point Clouds Convolutional Neural Network for Generalizable Face Presentation Attack Detection [2.6118211807973157]
Face biometric systems are vulnerable to Presentation Attacks (PAs).
We propose a novel Presentation Attack Detection (PAD) algorithm based on 3D point clouds captured using the frontal camera of a smartphone.
arXiv Detail & Related papers (2024-04-19T07:30:36Z)
- Towards Transferable Targeted 3D Adversarial Attack in the Physical World [34.36328985344749]
Transferable targeted adversarial attacks could pose a greater threat to security-critical tasks.
We develop a novel framework named TT3D that rapidly reconstructs Transferable Targeted 3D textured meshes from a few multi-view images.
Experimental results show that TT3D not only exhibits superior cross-model transferability but also maintains considerable adaptability across different renderers and vision tasks.
arXiv Detail & Related papers (2023-12-15T06:33:14Z)
- AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z)
- M3FAS: An Accurate and Robust MultiModal Mobile Face Anti-Spoofing System [39.37647248710612]
Face presentation attacks (FPA) have brought increasing concerns to the public through various malicious applications.
We devise an accurate and robust MultiModal Mobile Face Anti-Spoofing system named M3FAS.
arXiv Detail & Related papers (2023-01-30T12:37:04Z)
- Face Presentation Attack Detection [59.05779913403134]
Face recognition technology has been widely used in daily interactive applications such as check-in and mobile payment.
However, its vulnerability to presentation attacks (PAs) limits its reliable use in ultra-secure applicational scenarios.
arXiv Detail & Related papers (2022-12-07T14:51:17Z)
- Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition [49.42127182149948]
Recent studies have revealed the vulnerability of face recognition models against physical adversarial patches.
We propose to simulate the complex transformations of faces in the physical world via 3D-face modeling.
We further propose a Face3DAdv method considering the 3D face transformations and realistic physical variations.
arXiv Detail & Related papers (2022-03-09T10:21:40Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.