FaceSwapGuard: Safeguarding Facial Privacy from DeepFake Threats through Identity Obfuscation
- URL: http://arxiv.org/abs/2502.10801v1
- Date: Sat, 15 Feb 2025 13:45:19 GMT
- Title: FaceSwapGuard: Safeguarding Facial Privacy from DeepFake Threats through Identity Obfuscation
- Authors: Li Wang, Zheng Li, Xuhong Zhang, Shouling Ji, Shanqing Guo
- Abstract summary: FaceSwapGuard (FSG) is a black-box defense mechanism against deepfake face-swapping threats.
FSG introduces imperceptible perturbations to a user's facial image, disrupting the features extracted by identity encoders.
Extensive experiments demonstrate the effectiveness of FSG against multiple face-swapping techniques.
- Abstract: DeepFakes pose a significant threat to our society. One representative DeepFake application is face-swapping, which replaces the identity in a facial image with that of a victim. Although existing methods partially mitigate these risks by degrading the quality of swapped images, they often fail to disrupt the identity transformation effectively. To fill this gap, we propose FaceSwapGuard (FSG), a novel black-box defense mechanism against deepfake face-swapping threats. Specifically, FSG introduces imperceptible perturbations to a user's facial image, disrupting the features extracted by identity encoders. When shared online, these perturbed images mislead face-swapping techniques, causing them to generate facial images with identities significantly different from the original user. Extensive experiments demonstrate the effectiveness of FSG against multiple face-swapping techniques, reducing the face match rate from 90% (without defense) to below 10%. Both qualitative and quantitative studies further confirm its ability to confuse human perception, highlighting its practical utility. Additionally, we investigate key factors that may influence FSG and evaluate its robustness against various adaptive adversaries.
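For intuition, the sketch below illustrates the general recipe the abstract describes: a norm-bounded, imperceptible perturbation that pushes an image's identity embedding away from the original, crafted against a (surrogate) identity encoder. This is not the authors' FSG implementation; the toy encoder, function names, and hyperparameters are illustrative stand-ins (a real setup would use a pretrained face identity encoder such as ArcFace).
```python
# Minimal sketch of identity-obfuscating perturbations (PGD-style), assuming a
# generic PyTorch identity encoder. NOT the authors' FSG implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyIdentityEncoder(nn.Module):
    """Stand-in identity encoder; replace with a pretrained face embedder."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def obfuscate_identity(encoder, image, eps=8/255, alpha=2/255, steps=40):
    """Return a perturbed image whose identity embedding is pushed away from
    the original's, with the perturbation bounded by eps in L_inf."""
    encoder.eval()
    with torch.no_grad():
        target_emb = encoder(image)            # embedding to move away from
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Loss: cosine similarity to the original identity (to be minimized).
        loss = F.cosine_similarity(encoder(adv), target_emb, dim=-1).mean()
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv - alpha * grad.sign()    # step down the similarity
            adv = image + (adv - image).clamp(-eps, eps)
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()

if __name__ == "__main__":
    enc = ToyIdentityEncoder()
    face = torch.rand(1, 3, 112, 112)          # placeholder face image in [0, 1]
    protected = obfuscate_identity(enc, face)
    print(F.cosine_similarity(enc(face), enc(protected), dim=-1).item())
```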
Related papers
- iFADIT: Invertible Face Anonymization via Disentangled Identity Transform
Face anonymization aims to conceal the visual identity of a face to safeguard the individual's privacy.
This paper proposes a novel framework named iFADIT, an acronym for Invertible Face Anonymization via Disentangled Identity Transform.
arXiv Detail & Related papers (2025-01-08T10:08:09Z) - FaceTracer: Unveiling Source Identities from Swapped Face Images and Videos for Fraud Prevention
FaceTracer is a framework specifically designed to trace the identity of the source person from swapped face images or videos.
In experiments, FaceTracer successfully identified the source person in swapped content, enabling the tracing of malicious actors involved in fraudulent activities.
arXiv Detail & Related papers (2024-12-11T04:00:17Z) - ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technologies have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z) - Hierarchical Generative Network for Face Morphing Attacks
Face morphing attacks circumvent face recognition systems (FRSs) by creating a morphed image that contains multiple identities.
We propose a novel morphing attack method to improve the quality of morphed images and better preserve the contributing identities.
arXiv Detail & Related papers (2024-03-17T06:09:27Z) - FIVA: Facial Image and Video Anonymization and Anonymization Defense
We present a new approach for facial anonymization in images and videos, abbreviated as FIVA.
Our proposed method maintains consistent face anonymization across frames using the proposed identity tracking.
FIVA allows for 0 true positives for a false acceptance rate of 0.001.
arXiv Detail & Related papers (2023-09-08T09:34:48Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer
Adversarial Makeup Transfer GAN (AMT-GAN) is a novel face protection method aimed at constructing adversarial face images.
In this paper, we introduce a new regularization module along with a joint training strategy to reconcile the conflicts between the adversarial noises and the cycle consistency loss in makeup transfer.
arXiv Detail & Related papers (2022-03-07T03:56:17Z) - Towards Face Encryption by Generating Adversarial Identity Masks
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM achieves a protection success rate above 95% against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.