Scapegoat Generation for Privacy Protection from Deepfake
- URL: http://arxiv.org/abs/2303.02930v1
- Date: Mon, 6 Mar 2023 06:52:00 GMT
- Title: Scapegoat Generation for Privacy Protection from Deepfake
- Authors: Gido Kato, Yoshihiro Fukuhara, Mariko Isogawa, Hideki Tsunashima,
Hirokatsu Kataoka, Shigeo Morishima
- Abstract summary: We propose a new problem formulation for deepfake prevention: generating a ``scapegoat image'' by modifying the style of the original input.
Even in the case of malicious deepfake, the privacy of the users is still protected.
- Score: 21.169776378130635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To protect privacy and prevent malicious use of deepfake, current studies
propose methods that interfere with the generation process, such as detection
and destruction approaches. However, these methods suffer from sub-optimal
generalization performance to unseen models and add undesirable noise to the
original image. To address these problems, we propose a new problem formulation
for deepfake prevention: generating a ``scapegoat image'' by modifying the
style of the original input in a way that is recognizable as an avatar by the
user, but impossible to reconstruct the real face. Even in the case of
malicious deepfake, the privacy of the users is still protected. To achieve
this, we introduce an optimization-based editing method that utilizes GAN
inversion to discourage deepfake models from generating similar scapegoats. We
validate the effectiveness of our proposed method through quantitative and user
studies.
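The core idea above (start from a GAN-inverted latent, then optimize it so the output matches a desired avatar style while its identity features move away from the real face) can be illustrated with a minimal numeric sketch. This is not the authors' implementation: the pretrained generator and the identity feature extractor are replaced by toy linear maps, and all names (`W`, `F`, `scapegoat_loss`) are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))         # stand-in "generator": image x = W @ z
F = rng.standard_normal((3, 8)) / 2.0   # stand-in identity feature extractor
LAM = 0.02                              # weight of the identity-repulsion term

z_orig = rng.standard_normal(4)                          # latent from (assumed) GAN inversion
x_orig = W @ z_orig                                      # reconstruction of the original face
x_avatar = W @ (z_orig + 0.5 * rng.standard_normal(4))   # desired avatar-styled target

def scapegoat_loss(z):
    """Stay close to the avatar style; push identity features away from the original."""
    x = W @ z
    style_term = np.sum((x - x_avatar) ** 2)
    identity_term = np.sum((F @ (x - x_orig)) ** 2)
    return style_term - LAM * identity_term

def optimize_scapegoat(z0, lr=0.005, steps=2000):
    """Plain gradient descent on the latent code."""
    z = z0.copy()
    for _ in range(steps):
        x = W @ z
        grad_x = 2 * (x - x_avatar) - 2 * LAM * (F.T @ (F @ (x - x_orig)))
        z -= lr * (W.T @ grad_x)        # chain rule through x = W @ z
    return z

z_scape = optimize_scapegoat(z_orig)
x_scape = W @ z_scape
id_dist = np.linalg.norm(F @ (x_scape - x_orig))   # identity gap achieved
```

With the toy linear generator the loss is convex, so gradient descent reliably finds a latent whose output resembles the avatar target yet differs from the original in the identity feature space; in the paper this role is played by a real GAN and face features.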
Related papers
- DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing [93.45507533317405]
DiffusionGuard is a robust and effective defense method against unauthorized edits by diffusion-based image editing models.
We introduce a novel objective that generates adversarial noise targeting the early stage of the diffusion process.
We also introduce a mask-augmentation technique to enhance robustness against various masks during test time.
arXiv Detail & Related papers (2024-10-08T05:19:19Z)
- PixelFade: Privacy-preserving Person Re-identification with Noise-guided Progressive Replacement [41.05432008027312]
Online person re-identification services are exposed to privacy breaches from recovery attacks on potentially leaked data.
Previous privacy-preserving person re-identification methods are unable to resist recovery attacks and compromise accuracy.
We propose an iterative method (PixelFade) to protect pedestrian images.
arXiv Detail & Related papers (2024-08-10T12:52:54Z)
- IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI [52.90082445349903]
Diffusion-based image generation models can create artistic images that mimic the style of an artist or maliciously edit the original images for fake content.
Several attempts have been made to protect the original images from such unauthorized data usage by adding imperceptible perturbations.
In this work, we introduce a purification perturbation platform, named IMPRESS, to evaluate the effectiveness of imperceptible perturbations as a protective measure.
arXiv Detail & Related papers (2023-10-30T03:33:41Z)
- PrivacyGAN: robust generative image privacy [0.0]
We introduce a novel approach, PrivacyGAN, to safeguard privacy while maintaining image usability.
Drawing inspiration from Fawkes, our method entails shifting the original image within the embedding space towards a decoy image.
We demonstrate that our approach is effective even in unknown embedding transfer scenarios.
arXiv Detail & Related papers (2023-10-19T08:56:09Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- On the Vulnerability of DeepFake Detectors to Attacks Generated by Denoising Diffusion Models [0.5827521884806072]
We investigate the vulnerability of single-image deepfake detectors to black-box attacks created by the newest generation of generative methods.
Our experiments are run on FaceForensics++, a widely used deepfake benchmark consisting of manipulated images.
Our findings indicate that employing just a single denoising diffusion step in the reconstruction process of a deepfake can significantly reduce the likelihood of detection.
arXiv Detail & Related papers (2023-07-11T15:57:51Z)
- CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search [10.16904417057085]
Deep learning based face recognition systems can enable unauthorized tracking of users in the digital world.
Existing methods for enhancing privacy fail to generate naturalistic images that can protect facial privacy without compromising user experience.
We propose a novel two-step approach for facial privacy protection that relies on finding adversarial latent codes in the low-dimensional manifold of a pretrained generative model.
arXiv Detail & Related papers (2023-06-16T17:58:15Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technical standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model trained for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
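The targeted identity-mask idea in the TIP-IM entry above can be sketched as projected gradient descent toward a decoy identity's embedding. This is a toy illustration, not the TIP-IM algorithm itself: the face encoder is an assumed differentiable model, replaced here by a fixed linear map, and the L-infinity budget `eps` is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((16, 32)) / np.sqrt(32)   # stand-in face encoder

x = rng.standard_normal(32)       # original face (flattened)
x_tgt = rng.standard_normal(32)   # decoy identity image
eps = 0.1                         # L-infinity budget for the mask

def pgd_identity_mask(x, x_tgt, F, eps, lr=0.02, steps=200):
    """PGD that drives the encoding of x + delta toward the decoy's embedding,
    keeping the mask delta inside the L-infinity ball of radius eps."""
    f_tgt = F @ x_tgt
    delta = np.zeros_like(x)
    for _ in range(steps):
        # gradient of ||F(x + delta) - f_tgt||^2 with respect to delta
        grad = 2 * F.T @ (F @ (x + delta) - f_tgt)
        delta -= lr * grad
        delta = np.clip(delta, -eps, eps)   # projection step
    return delta

delta = pgd_identity_mask(x, x_tgt, F, eps)
d_before = np.linalg.norm(F @ x - F @ x_tgt)
d_after = np.linalg.norm(F @ (x + delta) - F @ x_tgt)
```

The masked image lands closer to the decoy identity in embedding space while the perturbation stays bounded, which is the mechanism that confounds recognition models in this line of work.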
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.