NeRFTAP: Enhancing Transferability of Adversarial Patches on Face
Recognition using Neural Radiance Fields
- URL: http://arxiv.org/abs/2311.17332v1
- Date: Wed, 29 Nov 2023 03:17:14 GMT
- Title: NeRFTAP: Enhancing Transferability of Adversarial Patches on Face
Recognition using Neural Radiance Fields
- Authors: Xiaoliang Liu, Furao Shen, Feng Han, Jian Zhao, Changhai Nie
- Abstract summary: We propose a novel adversarial attack method that considers both the transferability to the FR model and the victim's face image.
We generate new-view face images for the source and target subjects to enhance the transferability of adversarial patches.
Our work provides valuable insights for enhancing the robustness of FR systems in practical adversarial settings.
- Score: 15.823538329365348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition (FR) technology plays a crucial role in various
applications, but its vulnerability to adversarial attacks poses significant
security concerns. Existing research primarily focuses on transferability to
different FR models, overlooking the direct transferability to the victim's face
images, which is a practical threat in real-world scenarios. In this study, we
propose a novel adversarial attack method that considers both the
transferability to the FR model and the victim's face image, called NeRFTAP.
Leveraging NeRF-based 3D-GAN, we generate new view face images for the source
and target subjects to enhance the transferability of adversarial patches. We
introduce a style consistency loss to ensure the visual similarity between the
adversarial UV map and the target UV map under a 0-1 mask, enhancing the
effectiveness and naturalness of the generated adversarial face images.
Extensive experiments and evaluations on various FR models demonstrate the
superiority of our approach over existing attack techniques. Our work provides
valuable insights for enhancing the robustness of FR systems in practical
adversarial settings.
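The abstract does not spell out the exact form of the style consistency loss. As a hedged illustration only, the sketch below assumes a Gram-matrix style loss computed on encoder features (e.g., from a VGG-style network) of the adversarial and target UV maps, restricted to the patch region by the 0-1 mask; the function names and the choice of feature encoder are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: (B, C, H, W) feature map; returns (B, C, C) Gram matrices.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_consistency_loss(adv_uv_feats, target_uv_feats, mask):
    # adv_uv_feats / target_uv_feats: lists of feature maps (e.g. from a
    # VGG-style encoder) extracted from the adversarial and target UV maps.
    # mask: 0-1 patch mask in UV space, shape (B, 1, H, W).
    # All names here are illustrative, not the paper's actual API.
    loss = 0.0
    for fa, ft in zip(adv_uv_feats, target_uv_feats):
        # Resize the mask to each feature map's spatial size and restrict
        # the style comparison to the masked patch region.
        m = F.interpolate(mask, size=fa.shape[-2:], mode="nearest")
        loss = loss + F.mse_loss(gram_matrix(fa * m), gram_matrix(ft * m))
    return loss
```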
Related papers
- DiffFAS: Face Anti-Spoofing via Generative Diffusion Models [27.533334690705733]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition (FR) systems from presentation attacks.
We propose DiffFAS framework, which quantifies quality as prior information input into the network to counter image quality shift.
We demonstrate the effectiveness of our framework on challenging cross-domain and cross-attack FAS datasets.
arXiv Detail & Related papers (2024-09-13T06:45:23Z) - Transferable Adversarial Facial Images for Privacy Protection [15.211743719312613]
We present a novel face privacy protection scheme with improved transferability while maintaining high visual quality.
We first exploit global adversarial latent search to traverse the latent space of the generative model.
We then introduce a key landmark regularization module to preserve the visual identity information.
arXiv Detail & Related papers (2024-07-18T02:16:11Z) - DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection [60.73609509756533]
DiffAM is a novel approach to generate high-quality protected face images with adversarial makeup transferred from reference images.
Experiments demonstrate that DiffAM achieves higher visual quality and attack success rates, with a gain of 12.98% under the black-box setting.
arXiv Detail & Related papers (2024-05-16T08:05:36Z) - Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent
Diffusion Model [61.53213964333474]
We propose a unified framework Adv-Diffusion that can generate imperceptible adversarial identity perturbations in the latent space but not the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z) - Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model pursuing face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - Protecting Facial Privacy: Generating Adversarial Identity Masks via
Style-robust Makeup Transfer [24.25863892897547]
Adversarial Makeup Transfer GAN (AMT-GAN) is a novel face protection method aiming at constructing adversarial face images.
In this paper, we introduce a new regularization module along with a joint training strategy to reconcile the conflicts between the adversarial noises and the cycle consistency loss in makeup transfer.
arXiv Detail & Related papers (2022-03-07T03:56:17Z) - Improving Transferability of Adversarial Patches on Face Recognition
with Generative Models [43.51625789744288]
We evaluate the robustness of face recognition models using adversarial patches based on transferability.
We show that the gaps between the responses of substitute models and the target models dramatically decrease, exhibiting better transferability.
arXiv Detail & Related papers (2021-06-29T02:13:05Z) - Face Anti-Spoofing Via Disentangled Representation Learning [90.90512800361742]
Face anti-spoofing is crucial to security of face recognition systems.
We propose a novel perspective of face anti-spoofing that disentangles the liveness features and content features from images.
arXiv Detail & Related papers (2020-08-19T03:54:23Z) - Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method applied in the convolutional layers of surrogate models to increase their diversity (an illustrative sketch follows this list).
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
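The DFANet summary above describes dropout applied in convolutional layers to diversify surrogate models during attack generation. As a rough sketch under stated assumptions, the wrapper below hooks dropout onto every Conv2d layer of a PyTorch surrogate backbone; the class name, drop probability, and hook placement are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDropoutWrapper(nn.Module):
    """Hypothetical wrapper applying dropout after each convolutional layer
    of a surrogate FR model at attack time, in the spirit of dropout-based
    surrogate diversification (drop probability is an assumption)."""

    def __init__(self, backbone: nn.Module, drop_prob: float = 0.1):
        super().__init__()
        self.backbone = backbone
        self.drop_prob = drop_prob
        # Register a forward hook on each conv layer that randomly zeroes
        # activations, so each forward pass behaves like a slightly
        # different surrogate model.
        for m in self.backbone.modules():
            if isinstance(m, nn.Conv2d):
                m.register_forward_hook(self._dropout_hook)

    def _dropout_hook(self, module, inputs, output):
        # training=True forces dropout even when the backbone is in eval mode.
        return F.dropout(output, p=self.drop_prob, training=True)

    def forward(self, x):
        return self.backbone(x)
```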