NullSwap: Proactive Identity Cloaking Against Deepfake Face Swapping
- URL: http://arxiv.org/abs/2503.18678v1
- Date: Mon, 24 Mar 2025 13:49:39 GMT
- Title: NullSwap: Proactive Identity Cloaking Against Deepfake Face Swapping
- Authors: Tianyi Wang, Harry Cheng, Xiao Zhang, Yinglong Wang
- Abstract summary: We analyze the essence of Deepfake face swapping and argue the necessity of protecting source identities rather than target images. We propose NullSwap, a novel proactive defense approach that cloaks source image identities and nullifies face swapping under a pure black-box scenario. Experiments demonstrate the outstanding ability of our approach to fool various identity recognition models.
- Score: 8.284351945561099
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As passive detection of high-quality Deepfake images hits performance bottlenecks due to the advancement of generative models, proactive perturbations offer a promising alternative that disables Deepfake manipulations by inserting signals into benign images. However, existing proactive perturbation approaches remain unsatisfactory in several aspects: 1) visual degradation due to direct element-wise addition; 2) limited effectiveness against face swapping manipulation; 3) unavoidable reliance on white- and grey-box settings to involve generative models during training. In this study, we analyze the essence of Deepfake face swapping, argue the necessity of protecting source identities rather than target images, and propose NullSwap, a novel proactive defense approach that cloaks source image identities and nullifies face swapping under a pure black-box scenario. We design an Identity Extraction module to obtain facial identity features from the source image, and a Perturbation Block to generate identity-guided perturbations accordingly. Meanwhile, a Feature Block extracts shallow-level image features, which are fused with the perturbation in the Cloaking Block for image reconstruction. Furthermore, to ensure adaptability across the different identity extractors used in face swapping algorithms, we propose Dynamic Loss Weighting to adaptively balance identity losses. Experiments demonstrate the outstanding ability of our approach to fool various identity recognition models, outperforming state-of-the-art proactive perturbations in preventing face swapping models from generating images with the correct source identity.
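A minimal sketch of how such a cloaking pipeline could be wired together is given below. The module names follow the abstract (Identity Extraction, Perturbation Block, Feature Block, Cloaking Block), but every layer choice, tensor size, and the stand-in identity extractor are assumptions for illustration; the paper's actual architecture and its Dynamic Loss Weighting scheme are not reproduced here.

```python
# Hypothetical NullSwap-style cloaking pipeline (PyTorch). Layer choices, sizes,
# and the dummy identity extractor are illustrative assumptions, not the paper's
# actual architecture.
import torch
import torch.nn as nn


class PerturbationBlock(nn.Module):
    """Maps a facial identity embedding to an identity-guided spatial perturbation."""
    def __init__(self, id_dim=512, channels=64, base=16):
        super().__init__()
        self.channels, self.base = channels, base
        self.fc = nn.Linear(id_dim, channels * base * base)
        self.refine = nn.Sequential(
            nn.Upsample(scale_factor=16, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )

    def forward(self, id_feat):
        x = self.fc(id_feat).view(-1, self.channels, self.base, self.base)
        return self.refine(x)                           # B x 64 x 256 x 256


class FeatureBlock(nn.Module):
    """Extracts shallow-level features from the source image."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )

    def forward(self, img):
        return self.conv(img)


class CloakingBlock(nn.Module):
    """Fuses image features with the perturbation and reconstructs the cloaked image."""
    def __init__(self, channels=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, feat, pert):
        return self.fuse(torch.cat([feat, pert], dim=1))


class NullSwapSketch(nn.Module):
    """Source image -> cloaked image. Training would balance identity losses from
    several recognition backbones (the paper's Dynamic Loss Weighting) against a
    visual-fidelity loss; both are omitted in this sketch."""
    def __init__(self, identity_extractor):
        super().__init__()
        self.identity_extractor = identity_extractor    # e.g. a frozen ArcFace-like net
        self.perturbation_block = PerturbationBlock()
        self.feature_block = FeatureBlock()
        self.cloaking_block = CloakingBlock()

    def forward(self, src_img):
        with torch.no_grad():
            id_feat = self.identity_extractor(src_img)  # Identity Extraction
        pert = self.perturbation_block(id_feat)         # identity-guided perturbation
        feat = self.feature_block(src_img)              # shallow image features
        return self.cloaking_block(feat, pert)          # cloaked reconstruction


# Smoke test with a dummy identity extractor standing in for a real recognizer.
dummy_id_net = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=4, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 512),
)
cloaked = NullSwapSketch(dummy_id_net)(torch.randn(2, 3, 256, 256))
print(cloaked.shape)  # torch.Size([2, 3, 256, 256])
```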
Related papers
- High-Fidelity Diffusion Face Swapping with ID-Constrained Facial Conditioning [39.09330483562798]
Face swapping aims to seamlessly transfer a source facial identity onto a target while preserving target attributes such as pose and expression.
Diffusion models, known for their superior generative capabilities, have recently shown promise in advancing face-swapping quality.
This paper addresses two key challenges in diffusion-based face swapping: the prioritized preservation of identity over target attributes and the inherent conflict between identity and attribute conditioning.
arXiv Detail & Related papers (2025-03-28T06:50:17Z) - iFADIT: Invertible Face Anonymization via Disentangled Identity Transform [51.123936665445356]
Face anonymization aims to conceal the visual identity of a face to safeguard the individual's privacy. This paper proposes a novel framework named iFADIT, an acronym for Invertible Face Anonymization via Disentangled Identity Transform.
arXiv Detail & Related papers (2025-01-08T10:08:09Z) - ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technologies have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z) - PortraitBooth: A Versatile Portrait Model for Fast Identity-preserved Personalization [92.90392834835751]
PortraitBooth is designed for high efficiency, robust identity preservation, and expression-editable text-to-image generation.
PortraitBooth eliminates computational overhead and mitigates identity distortion.
It incorporates emotion-aware cross-attention control for diverse facial expressions in generated images.
arXiv Detail & Related papers (2023-12-11T13:03:29Z) - HFORD: High-Fidelity and Occlusion-Robust De-identification for Face Privacy Protection [60.63915939982923]
Face de-identification is a practical way to solve the identity protection problem.
However, existing facial de-identification methods still exhibit several problems.
We present a High-Fidelity and Occlusion-Robust De-identification (HFORD) method to deal with these issues.
arXiv Detail & Related papers (2023-11-15T08:59:02Z) - Robust Identity Perceptual Watermark Against Deepfake Face Swapping [8.276177968730549]
Deepfake face swapping has caused critical privacy issues with the rapid development of deep generative models.
We propose the first robust identity perceptual watermarking framework that concurrently performs detection and source tracing against Deepfake face swapping.
arXiv Detail & Related papers (2023-11-02T16:04:32Z) - Controllable Inversion of Black-Box Face Recognition Models via Diffusion [8.620807177029892]
We tackle the task of inverting the latent space of pre-trained face recognition models without full model access.
We show that the conditional diffusion model loss naturally emerges and that we can effectively sample from the inverse distribution.
Our method is the first black-box face recognition model inversion method that offers intuitive control over the generation process.
arXiv Detail & Related papers (2023-03-23T03:02:09Z) - Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images whilst -- crucially -- better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
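As a rough illustration of this transfer idea (not the paper's actual attack), the sketch below runs a PGD-style attack against a locally accessible substitute network and reuses the resulting image against an unseen black-box model; the substitute_model interface, the cosine-similarity objective, and all hyperparameters are assumptions.

```python
# Illustrative PGD transfer attack on a local substitute model (PyTorch);
# objective, model interface, and hyperparameters are assumptions for this sketch.
import torch
import torch.nn.functional as F


def pgd_transfer_attack(substitute_model, src_img, steps=10, eps=8 / 255, alpha=2 / 255):
    """Perturb src_img so the substitute's features drift from the clean identity;
    the perturbed image is then handed to the black-box model without any queries."""
    clean_feat = substitute_model(src_img).detach()
    adv = src_img.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Maximize feature dissimilarity on the substitute (gradient ascent).
        loss = -F.cosine_similarity(substitute_model(adv), clean_feat, dim=-1).mean()
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = src_img + torch.clamp(adv - src_img, -eps, eps)  # project to L-inf ball
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()

# Usage (hypothetical): adv = pgd_transfer_attack(local_surrogate, face_batch),
# then feed adv to the inaccessible black-box face-swapping model.
```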
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth Uncertainty Learning [54.15303628138665]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks.
Existing face anti-spoofing datasets lack diversity owing to insufficient identities and insignificant variance.
We propose the Dual Spoof Disentanglement Generation framework to tackle this challenge by "anti-spoofing via generation".
arXiv Detail & Related papers (2021-12-01T15:36:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.