Face Reconstruction Transfer Attack as Out-of-Distribution Generalization
- URL: http://arxiv.org/abs/2407.02403v1
- Date: Tue, 2 Jul 2024 16:21:44 GMT
- Title: Face Reconstruction Transfer Attack as Out-of-Distribution Generalization
- Authors: Yoon Gyo Jung, Jaewoo Park, Xingbo Dong, Hojin Park, Andrew Beng Jin Teoh, Octavia Camps
- Abstract summary: We aim to reconstruct face images that are capable of transferring face attacks to unseen encoders.
Inspired by its OOD nature, we propose to solve Face Reconstruction Transfer Attack (FRTA) by Averaged Latent Search and Unsupervised Validation with pseudo target (ALSUV).
- Score: 15.258162177124317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the vulnerability of face recognition systems to malicious attacks is of critical importance. Previous works have focused on reconstructing face images that can penetrate a targeted verification system. Even in the white-box scenario, however, naively reconstructed images misrepresent the identity information, hence the attacks are easily neutralized once the face system is updated or changed. In this paper, we aim to reconstruct face images that are capable of transferring face attacks to unseen encoders. We term this problem Face Reconstruction Transfer Attack (FRTA) and show that it can be formulated as an out-of-distribution (OOD) generalization problem. Inspired by its OOD nature, we propose to solve FRTA by Averaged Latent Search and Unsupervised Validation with pseudo target (ALSUV). To strengthen the reconstruction attack on OOD unseen encoders, ALSUV reconstructs the face by searching the latent space of the amortized generator StyleGAN2 through multiple latent optimizations, latent optimization trajectory averaging, and unsupervised validation with a pseudo target. We demonstrate the efficacy and generalization of our method on widely used face datasets, accompanied by extensive ablation studies and visual, qualitative, and quantitative analyses. The source code will be released.
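The search procedure the abstract describes can be illustrated with a toy sketch. Everything here is a hypothetical stand-in, not the paper's implementation: `G` plays the role of the amortized generator (StyleGAN2 in the paper), `E` a white-box face encoder, and a numerical-gradient loop replaces backpropagation. The three ingredients named in the abstract are marked in comments: multiple latent optimizations (random restarts), trajectory averaging over the tail of each optimization, and validation of the candidates against a pseudo target.

```python
import numpy as np

# Toy stand-ins (hypothetical): a linear+tanh "generator" and a linear
# "encoder" replace StyleGAN2 and the real face encoder.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4))  # toy generator weights: latent dim 4 -> image dim 8
B = rng.normal(size=(4, 8))  # toy encoder weights: image dim 8 -> feature dim 4

def G(w):  # amortized generator
    return np.tanh(A @ w)

def E(x):  # face encoder, L2-normalized features
    v = B @ x
    return v / np.linalg.norm(v)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def alsuv_search(target_feat, n_restarts=5, steps=200, lr=0.05, avg_tail=50):
    """Sketch of ALSUV-style latent search: multi-restart optimization,
    trajectory averaging, and pseudo-target validation."""
    candidates = []
    for _ in range(n_restarts):              # (1) multiple latent optimizations
        w = rng.normal(size=4)
        tail = []
        for t in range(steps):
            # numerical gradient of cosine similarity w.r.t. the latent
            eps = 1e-4
            base = cosine(E(G(w)), target_feat)
            g = np.zeros_like(w)
            for i in range(len(w)):
                w2 = w.copy()
                w2[i] += eps
                g[i] = (cosine(E(G(w2)), target_feat) - base) / eps
            w = w + lr * g                   # ascend feature similarity
            if t >= steps - avg_tail:
                tail.append(w.copy())
        candidates.append(np.mean(tail, axis=0))  # (2) trajectory averaging
    # (3) unsupervised validation: pick the candidate scoring best against a
    # pseudo target (here simply the target feature itself, for illustration)
    scores = [cosine(E(G(w)), target_feat) for w in candidates]
    return candidates[int(np.argmax(scores))]

target = E(G(rng.normal(size=4)))  # feature of a "victim" face
w_star = alsuv_search(target)
print(cosine(E(G(w_star)), target))  # similarity of the reconstructed face
```

In the paper the validation step uses a pseudo target so that candidate selection needs no access to the unseen encoders; the toy version above collapses that distinction for brevity.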
Related papers
- DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided Stable Diffusion [94.46904504076124]
Deepfake technology has made face swapping highly realistic, raising concerns about the malicious use of fabricated facial content.
Existing methods often struggle to generalize to unseen domains due to the diverse nature of facial manipulations.
We introduce DiffusionFake, a novel framework that reverses the generative process of face forgeries to enhance the generalization of detection models.
arXiv Detail & Related papers (2024-10-06T06:22:43Z) - ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technology was proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z) - Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - Hierarchical Generative Network for Face Morphing Attacks [7.34597796509503]
Face morphing attacks circumvent face recognition systems (FRSs) by creating a morphed image that contains multiple identities.
We propose a novel morphing attack method to improve the quality of morphed images and better preserve the contributing identities.
arXiv Detail & Related papers (2024-03-17T06:09:27Z) - AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration [43.953370132140904]
Deep learning-based face restoration models have become targets for sophisticated backdoor attacks.
We introduce a unique degradation objective tailored for attacking restoration models.
We propose the Adaptive Selective Frequency Injection Backdoor Attack (AS-FIBA) framework.
arXiv Detail & Related papers (2024-03-11T04:44:26Z) - CLR-Face: Conditional Latent Refinement for Blind Face Restoration Using Score-Based Diffusion Models [57.9771859175664]
Recent generative-prior-based methods have shown promising blind face restoration performance.
Generating fine-grained facial details faithful to inputs remains a challenging problem.
We introduce a diffusion-based-prior inside a VQGAN architecture that focuses on learning the distribution over uncorrupted latent embeddings.
arXiv Detail & Related papers (2024-02-08T23:51:49Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model pursuing face reconstruction, and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - Improving Transferability of Adversarial Patches on Face Recognition with Generative Models [43.51625789744288]
We evaluate the robustness of face recognition models using adversarial patches based on transferability.
We show that the gaps between the responses of substitute models and the target models dramatically decrease, exhibiting better transferability.
arXiv Detail & Related papers (2021-06-29T02:13:05Z) - Black-Box Face Recovery from Identity Features [61.950765357647605]
We attack the state-of-the-art face recognition system (ArcFace) to test our algorithm.
Our algorithm requires significantly fewer queries than the state-of-the-art solution.
arXiv Detail & Related papers (2020-07-27T15:25:38Z) - OGAN: Disrupting Deepfakes with an Adversarial Attack that Survives Training [0.0]
We introduce a class of adversarial attacks that can disrupt face-swapping autoencoders.
We propose the Oscillating GAN (OGAN) attack, a novel attack optimized to be training-resistant.
These results demonstrate the existence of training-resistant adversarial attacks, potentially applicable to a wide range of domains.
arXiv Detail & Related papers (2020-06-17T17:18:29Z)