Beauty and the Beast: Imperceptible Perturbations Against Diffusion-Based Face Swapping via Directional Attribute Editing
- URL: http://arxiv.org/abs/2601.22744v1
- Date: Fri, 30 Jan 2026 09:24:47 GMT
- Title: Beauty and the Beast: Imperceptible Perturbations Against Diffusion-Based Face Swapping via Directional Attribute Editing
- Authors: Yilong Huang, Songze Li
- Abstract summary: Diffusion-based face swapping achieves state-of-the-art performance, yet it exacerbates the potential harm of malicious face swapping, which can violate portrait rights or undermine personal reputation. We propose FaceDefense, an enhanced proactive defense framework against diffusion-based face swapping. Our method introduces a new diffusion loss to strengthen the defensive efficacy of adversarial examples and employs directional facial attribute editing to restore perturbation-induced distortions.
- Score: 21.375408098632615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion-based face swapping achieves state-of-the-art performance, yet it also exacerbates the potential harm of malicious face swapping, which can violate portrait rights or undermine personal reputation. This has spurred the development of proactive defense methods. However, existing approaches face a core trade-off: large perturbations distort facial structures, while small ones weaken protection effectiveness. To address this, we propose FaceDefense, an enhanced proactive defense framework against diffusion-based face swapping. Our method introduces a new diffusion loss to strengthen the defensive efficacy of adversarial examples and employs directional facial attribute editing to restore perturbation-induced distortions, thereby enhancing visual imperceptibility. A two-phase alternating optimization strategy generates the final perturbed face images. Extensive experiments show that FaceDefense significantly outperforms existing methods in both imperceptibility and defense effectiveness, achieving a superior trade-off.
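The abstract does not spell out the optimization details, but the perturbation phase of such defenses is commonly a projected-gradient-style ascent on a loss computed through the target diffusion model. Below is a minimal, self-contained sketch of that idea under an L-infinity budget; `surrogate_diffusion_loss` and all names are hypothetical stand-ins (a real defense would score the denoiser's noise-prediction error), used here only so the sketch runs anywhere:

```python
import numpy as np

def surrogate_diffusion_loss(x):
    """Toy stand-in for a diffusion noise-prediction loss.
    A real defense would evaluate ||eps_theta(x_t, t) - eps||^2
    with the target model's denoiser; a smooth quadratic keeps
    this sketch dependency-free."""
    return float(np.sum(x ** 2))

def surrogate_loss_grad(x):
    # Analytic gradient of the toy loss above.
    return 2.0 * x

def pgd_perturb(image, eps=8 / 255, step=2 / 255, iters=10):
    """Projected gradient ascent: maximize the surrogate loss
    while keeping the perturbation inside an L-inf ball of eps,
    so the protected image stays visually close to the original."""
    x = image.copy()
    for _ in range(iters):
        g = surrogate_loss_grad(x)
        x = x + step * np.sign(g)                 # signed ascent step
        x = np.clip(x, image - eps, image + eps)  # project onto eps-ball
        x = np.clip(x, 0.0, 1.0)                  # keep valid pixel range
    return x

image = np.full((4, 4), 0.5)   # toy "face image" in [0, 1]
protected = pgd_perturb(image)
print(np.max(np.abs(protected - image)))  # bounded by eps = 8/255
```

The projection step is what enforces imperceptibility here; FaceDefense's contribution, per the abstract, is to alternate this kind of loss-driven phase with a directional attribute-editing phase that repairs visible distortions, a step this sketch does not model.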
Related papers
- Safeguarding Facial Identity against Diffusion-based Face Swapping via Cascading Pathway Disruption [21.37567715195999]
We propose VoidFace, a systemic defense method that views face swapping as a coupled identity pathway. We first introduce localization disruption and identity erasure to degrade physical regression and semantic embeddings, thereby impairing the accurate modeling of the source face. We then intervene in the generative domain by decoupling attention mechanisms to sever identity injection, and corrupting intermediate diffusion features to prevent the reconstruction of source identity.
arXiv Detail & Related papers (2026-01-21T07:52:56Z) - Towards Transferable Defense Against Malicious Image Edits [70.17363183107604]
Transferable Defense Against Malicious Image Edits (TDAE) is a novel bimodal framework that enhances image immunity against malicious edits. We introduce the FlatGrad Defense Mechanism (FDM), which incorporates gradient regularization into the adversarial objective. For textual enhancement protection, we propose Dynamic Prompt Defense (DPD), which periodically refines text embeddings to align the editing outcomes of immunized images with those of the original images.
arXiv Detail & Related papers (2025-12-16T12:10:16Z) - Towards Imperceptible Adversarial Defense: A Gradient-Driven Shield against Facial Manipulations [18.932757222449673]
Proactive defense strategies embed adversarial perturbations into facial images to counter deepfake manipulation. Existing methods often face a trade-off between imperceptibility and defense effectiveness: strong perturbations may disrupt forgeries but degrade visual fidelity. We propose a gradient-projection-based adversarial proactive defense (GRASP) method that effectively counters facial deepfakes while minimizing perceptual degradation.
arXiv Detail & Related papers (2025-10-02T06:09:46Z) - Diffusion-based Adversarial Identity Manipulation for Facial Privacy Protection [14.797807196805607]
Face recognition has led to serious privacy concerns due to potential unauthorized surveillance and user tracking on social networks. Existing methods for enhancing privacy fail to generate natural face images that can protect facial privacy. We propose DiffAIM to generate natural and highly transferable adversarial faces against malicious FR systems.
arXiv Detail & Related papers (2025-04-30T13:49:59Z) - DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing [103.40147707280585]
DiffusionGuard is a robust and effective defense method against unauthorized edits by diffusion-based image editing models. We introduce a novel objective that generates adversarial noise targeting the early stage of the diffusion process. We also introduce a mask-augmentation technique to enhance robustness against various masks during test time.
arXiv Detail & Related papers (2024-10-08T05:19:19Z) - ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a significant threat to civil rights. To prevent this fraud at its source, proactive defense has been proposed to disrupt the manipulation process. This paper proposes a universal framework for combating facial manipulation, termed ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z) - Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework Adv-Diffusion that can generate imperceptible adversarial identity perturbations in the latent space but not the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z) - Towards Prompt-robust Face Privacy Protection via Adversarial Decoupling Augmentation Framework [20.652130361862053]
We propose the Adversarial Decoupling Augmentation Framework (ADAF) to enhance the defensive performance of facial privacy protection algorithms.
ADAF introduces multi-level text-related augmentations for defense stability against various attacker prompts.
arXiv Detail & Related papers (2023-05-06T09:00:50Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model pursuing face reconstruction, and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
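Nearly every paper above evaluates the same two axes: imperceptibility of the protective perturbation and disruption of the forged identity. A minimal sketch of the two standard measurements follows; PSNR gauges imperceptibility, and cosine similarity between face embeddings gauges identity retention. The embedder is omitted here (a real evaluation would use a face-recognition network), so only the metric code itself is shown:

```python
import numpy as np

def psnr(clean, perturbed, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher values mean
    the perturbation is harder to see."""
    mse = np.mean((clean - perturbed) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def cosine_similarity(a, b):
    """Identity similarity between two face embeddings; a
    successful defense drives this down for the swapped face."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
clean = rng.random((64, 64))  # toy image in [0, 1]
perturbed = np.clip(clean + rng.uniform(-8 / 255, 8 / 255, clean.shape), 0, 1)
print(round(psnr(clean, perturbed), 1))  # typically above 30 dB at eps = 8/255
```

A "superior trade-off" in these papers' terms means raising PSNR (and similar perceptual scores) on the protected input while lowering the embedding similarity measured on the attacker's swapped output.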
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.