DeepBlur: A Simple and Effective Method for Natural Image Obfuscation
- URL: http://arxiv.org/abs/2104.02655v1
- Date: Wed, 31 Mar 2021 19:31:26 GMT
- Title: DeepBlur: A Simple and Effective Method for Natural Image Obfuscation
- Authors: Tao Li and Min Soo Choi
- Abstract summary: We present DeepBlur, a simple yet effective method for image obfuscation by blurring in the latent space of an unconditionally pre-trained generative model.
We compare it with existing methods in terms of efficiency and image quality, and evaluate it against both state-of-the-art deep learning models and industrial products.
- Score: 4.80165284612342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a growing privacy concern due to the popularity of social media and
surveillance systems, along with advances in face recognition software.
However, established image obfuscation techniques are either vulnerable to
re-identification attacks by humans or deep learning models, insufficient in
preserving image fidelity, or too computationally intensive to be practical. To
tackle these issues, we present DeepBlur, a simple yet effective method for
image obfuscation by blurring in the latent space of an unconditionally
pre-trained generative model that is able to synthesize photo-realistic facial
images. We compare it with existing methods in terms of efficiency and image quality,
and evaluate against both state-of-the-art deep learning models and industrial
products (e.g., Face++, Microsoft face service). Experiments show that our
method produces high quality outputs and is the strongest defense for most test
cases.
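As a rough illustration of the approach the abstract describes, the sketch below inverts a photo into the latent space of a pretrained StyleGAN-like generator and Gaussian-smooths the recovered code before re-synthesizing. The generator G, the w+ latent shape, and the 1-D filtering choice are all assumptions for illustration, not the paper's released implementation.

```python
# Minimal sketch of latent-space blurring, assuming a pretrained
# StyleGAN-like generator `G` that maps a w+ latent of shape (1, 18, 512)
# to an image. Names, shapes, and the 1-D filter are illustrative only.
import torch
import torch.nn.functional as F

def invert(G, image, steps=500, lr=0.01):
    """Recover a latent code whose synthesis approximates `image`."""
    w = torch.zeros(1, 18, 512, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.mse_loss(G(w), image).backward()    # pixel reconstruction loss
        opt.step()
    return w.detach()

def gaussian_kernel1d(size=9, sigma=2.0):
    x = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    k = torch.exp(-x ** 2 / (2 * sigma ** 2))
    return (k / k.sum()).view(1, 1, -1)

def deep_blur(G, image, sigma=2.0):
    """Obfuscate by smoothing the inverted latent, then re-synthesizing."""
    w = invert(G, image)                       # (1, 18, 512)
    k = gaussian_kernel1d(sigma=sigma)
    smoothed = F.conv1d(w.view(18, 1, 512), k, padding=k.shape[-1] // 2)
    return G(smoothed.view(1, 18, 512))        # realistic yet de-identified
```

Because the smoothed code still lies near the generator's manifold, the re-synthesized face remains photo-realistic, which is the property the abstract credits for combining image fidelity with defense strength.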
Related papers
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method that addresses the vulnerability of software-only protection, which leaves the raw captured image exposed.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
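The summary names only the framework's three inputs, so the following is a speculative sketch of that conditioning interface; the architecture and every identifier are hypothetical.

```python
# Speculative sketch: a generator conditioned on the privacy-preserving
# capture, a face heatmap, and a public reference face, as the summary
# describes. The layer stack is a placeholder, not the paper's network.
import torch
import torch.nn as nn

class AnonymizingGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 (protected capture) + 1 (heatmap) + 3 (reference face) channels in
        self.net = nn.Sequential(
            nn.Conv2d(7, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # synthesized new face
        )

    def forward(self, protected_img, heatmap, reference_face):
        x = torch.cat([protected_img, heatmap, reference_face], dim=1)
        return self.net(x)
```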
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods spot visual artifacts in generated images or learn discriminative features from both real and generated images through large-scale training.
This paper approaches the generated image detection problem from a new perspective: start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, generated images, regardless of which model produced them, are projected outside that subspace.
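A hedged sketch of this "real images only" idea, in the spirit of one-class training (e.g., Deep SVDD): pull real-image features toward a common center so anything far from it is flagged as generated. The encoder, center fitting, and threshold below are illustrative assumptions, not the paper's actual mapping.

```python
# Stand-in one-class detector trained on real images only.
import torch
import torch.nn as nn

encoder = nn.Sequential(                     # stand-in feature extractor
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 128),
)

def fit_real_subspace(real_batches, epochs=10, lr=1e-4):
    """`real_batches` is an iterable of real-image tensors only."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    with torch.no_grad():                    # fix a center from initial features
        center = torch.cat([encoder(x) for x in real_batches]).mean(0)
    for _ in range(epochs):
        for x in real_batches:
            opt.zero_grad()
            ((encoder(x) - center) ** 2).sum(1).mean().backward()
            opt.step()
    return center

def looks_generated(x, center, threshold):
    dist = (encoder(x) - center).norm(dim=1)
    return dist > threshold                  # outside the dense real region
```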
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search [10.16904417057085]
Deep learning based face recognition systems can enable unauthorized tracking of users in the digital world.
Existing methods for enhancing privacy fail to generate naturalistic images that can protect facial privacy without compromising user experience.
We propose a novel two-step approach for facial privacy protection that relies on finding adversarial latent codes in the low-dimensional manifold of a pretrained generative model.
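A minimal sketch of the adversarial latent search step, assuming a pretrained generator `G` and a face recognition embedder `face_embed`; the paper's text-guided makeup component is omitted. The loss pushes the synthesized face away from the user's identity while a proximity term keeps the latent near its starting point, i.e., on the generator's natural-image manifold.

```python
# Adversarial latent search (sketch): move the latent off-identity while
# staying close to the original code. `G` and `face_embed` are assumed.
import torch
import torch.nn.functional as F

def protect_latent(G, face_embed, w_orig, steps=200, lr=0.01, lam=0.1):
    target = face_embed(G(w_orig)).detach()        # identity to move away from
    w = w_orig.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        emb = face_embed(G(w))
        # Minimize identity similarity; penalize drift from the original latent.
        loss = F.cosine_similarity(emb, target, dim=-1).mean() \
               + lam * F.mse_loss(w, w_orig)
        loss.backward()
        opt.step()
    return G(w).detach()                           # natural-looking, protected
```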
arXiv Detail & Related papers (2023-06-16T17:58:15Z)
- Building an Invisible Shield for Your Portrait against Deepfakes [34.65356811439098]
We propose a novel framework, Integrity Encryptor, which aims to protect portraits through a proactive strategy.
Our methodology involves covertly encoding messages that are closely associated with key facial attributes into authentic images.
The modified facial attributes serve as a means of detecting manipulated images through a comparison of the decoded messages.
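A rough sketch of the decode-and-compare idea, with an untrained stand-in encoder/decoder pair; the actual framework ties the hidden message to facial attributes, which this toy version does not. All names are hypothetical.

```python
# Embed a secret message as a faint residual; flag manipulation when the
# decoded bits no longer match. Untrained stand-in modules, sketch only.
import torch
import torch.nn as nn

MSG_BITS = 32
msg_encoder = nn.Linear(MSG_BITS, 3 * 64 * 64)      # message -> image residual
msg_decoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, MSG_BITS))

def protect(image, message):
    """Embed `message` (floats in {0,1}, shape (B, 32)) near-invisibly."""
    residual = msg_encoder(message).view(-1, 3, 64, 64)
    return (image + 0.01 * residual).clamp(0, 1)

def is_manipulated(image, message, tol=0.9):
    """Flag the image if the decoded bits disagree with the original message."""
    decoded = torch.sigmoid(msg_decoder(image)) > 0.5
    agreement = (decoded == message.bool()).float().mean()
    return agreement < tol                           # broken message => tampered
```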
arXiv Detail & Related papers (2023-05-22T10:01:28Z)
- Face Morphing Attack Detection Using Privacy-Aware Training Data [0.991629944808926]
Images of morphed faces pose a serious threat to face recognition-based security systems.
Modern detection algorithms learn to identify such morphing attacks using authentic images of real individuals.
This approach raises various privacy concerns and limits the amount of publicly available training data.
arXiv Detail & Related papers (2022-07-02T19:00:48Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technical standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
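The summary suggests one perturbation shared by all of a person's photos. Below is a hedged sketch under that reading, assuming a recognition embedder `face_embed`; the published method's gradient and constraint choices may differ.

```python
# Person-specific universal mask (sketch): one bounded perturbation,
# optimized over many photos of the same person, that degrades identity
# matching on all of them.
import torch
import torch.nn.functional as F

def make_universal_mask(face_embed, person_images, steps=100, lr=0.01, eps=8/255):
    mask = torch.zeros_like(person_images[0:1], requires_grad=True)
    anchor = face_embed(person_images).mean(0, keepdim=True).detach()  # identity prototype
    opt = torch.optim.Adam([mask], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        emb = face_embed((person_images + mask).clamp(0, 1))
        # Push every photo's embedding away from the identity prototype.
        F.cosine_similarity(emb, anchor, dim=-1).mean().backward()
        opt.step()
        with torch.no_grad():
            mask.clamp_(-eps, eps)           # keep the mask visually invisible
    return mask.detach()
```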
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model trained for face reconstruction; adversarial examples crafted on the substitute transfer directly to inaccessible black-box DeepFake models.
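A sketch of the query-free recipe as summarized: run PGD against a local substitute's reconstruction loss, then publish the perturbed image. The substitute here is an untrained stand-in, and all names are illustrative.

```python
# Transfer attack via a local substitute; no queries to the black-box model.
import torch
import torch.nn as nn
import torch.nn.functional as F

substitute = nn.Sequential(                  # stand-in reconstruction model
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)

def craft_protection(image, steps=40, alpha=1/255, eps=8/255):
    """PGD on the substitute's reconstruction loss; the perturbation transfers."""
    adv = image.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.mse_loss(substitute(adv), image)    # break reconstruction
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + alpha * grad.sign()     # ascend the loss
        adv = image + (adv - image).clamp(-eps, eps) # stay within the budget
        adv = adv.clamp(0, 1).detach()
    return adv                                       # share this, not `image`
```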
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leaves high-frequency artifacts that models can spot easily but that generalize poorly, we propose a new adversarial training method that attempts to blur out these specific artifacts.
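A hedged sketch of the training idea, with a fixed Gaussian blur standing in for the paper's learned adversarial degradation: suppressing the easy high-frequency cues during training pushes the detector toward features that transfer across forgery methods and image qualities.

```python
# Train a deepfake detector on blurred inputs so it cannot rely on
# high-frequency artifacts. The fixed blur is a stand-in, sketch only.
import torch
import torch.nn.functional as F

def gaussian_blur(x, sigma=1.5, size=5):
    grid = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    k1d = torch.exp(-grid ** 2 / (2 * sigma ** 2))
    k2d = torch.outer(k1d, k1d)
    k2d = (k2d / k2d.sum()).view(1, 1, size, size).repeat(x.shape[1], 1, 1, 1)
    return F.conv2d(x, k2d, padding=size // 2, groups=x.shape[1])

def train_step(detector, opt, images, labels):
    opt.zero_grad()
    blurred = gaussian_blur(images)              # suppress high-freq artifacts
    loss = F.cross_entropy(detector(blurred), labels)
    loss.backward()
    opt.step()
    return loss.item()
```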
arXiv Detail & Related papers (2021-03-25T02:20:08Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
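In the spirit of the summary, a rough sketch of a targeted, iterative identity mask, assuming a recognition embedder `face_embed` and a decoy photo; the published method adds multi-target and naturalness constraints omitted here.

```python
# Targeted iterative mask (sketch): small signed-gradient steps that pull
# the photo's embedding toward a decoy identity within an L-inf budget.
import torch
import torch.nn.functional as F

def identity_mask(face_embed, image, decoy, steps=50, alpha=1/255, eps=8/255):
    target = face_embed(decoy).detach()              # decoy identity embedding
    adv = image.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = 1 - F.cosine_similarity(face_embed(adv), target, dim=-1).mean()
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() - alpha * grad.sign()     # descend toward the decoy
        adv = image + (adv - image).clamp(-eps, eps) # stay visually faithful
        adv = adv.clamp(0, 1).detach()
    return adv
```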
arXiv Detail & Related papers (2020-03-15T12:45:10Z)