Adversarial Privacy-preserving Filter
- URL: http://arxiv.org/abs/2007.12861v2
- Date: Tue, 4 Aug 2020 05:12:11 GMT
- Title: Adversarial Privacy-preserving Filter
- Authors: Jiaming Zhang, Jitao Sang, Xian Zhao, Xiaowen Huang, Yanfeng Sun,
Yongli Hu
- Abstract summary: Face recognition has been critically discussed regarding the malicious use of face images and the potential privacy problems.
Online photo sharing services unintentionally act as the main repository for malicious crawlers and face recognition applications.
This work aims to develop a privacy-preserving solution, called Adversarial Privacy-preserving Filter (APF), to protect the online shared face images from being maliciously used.
- Score: 33.957912657446485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While widely adopted in practical applications, face recognition has been
critically discussed regarding the malicious use of face images and the
potential privacy problems, e.g., deceiving payment systems and enabling personal
sabotage. Online photo sharing services unintentionally act as the main
repository for malicious crawlers and face recognition applications. This work
aims to develop a privacy-preserving solution, called Adversarial
Privacy-preserving Filter (APF), to protect online shared face images from
being maliciously used. We propose an end-cloud collaborated adversarial attack
solution to satisfy the requirements of privacy, utility and nonaccessibility.
Specifically, the solution consists of three modules: (1) image-specific
gradient generation, to extract an image-specific gradient on the user end with a
compressed probe model; (2) adversarial gradient transfer, to fine-tune the
image-specific gradient in the server cloud; and (3) universal adversarial
perturbation enhancement, to append an image-independent perturbation to derive
the final adversarial noise. Extensive experiments on three datasets validate
the effectiveness and efficiency of the proposed solution. A prototype
application is also released for further evaluation. We hope the end-cloud
collaborated attack framework can shed light on addressing the privacy
issues of online multimedia sharing from the user side.
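The three-module pipeline described in the abstract can be sketched in a few lines of code. Below is a minimal, hypothetical PyTorch rendering of the end-cloud collaborated flow, assuming 3x112x112 face crops and 128-d embeddings; the names probe_model, transfer_net, and universal_pert, as well as the toy stand-in networks, are illustrative assumptions and not the authors' released APF implementation.

```python
# Hypothetical sketch of the APF end-cloud pipeline; not the authors' code.
import torch
import torch.nn as nn

def image_specific_gradient(probe_model: nn.Module, image: torch.Tensor,
                            reference_embedding: torch.Tensor) -> torch.Tensor:
    """Module 1 (user end): extract an image-specific gradient with a
    compressed probe model, so the raw face image never leaves the device."""
    image = image.clone().requires_grad_(True)
    embedding = probe_model(image)
    # Push the probe embedding away from the identity's reference embedding.
    loss = -nn.functional.cosine_similarity(embedding, reference_embedding).mean()
    loss.backward()
    return image.grad.detach()

def transfer_gradient(transfer_net: nn.Module, grad: torch.Tensor) -> torch.Tensor:
    """Module 2 (server cloud): fine-tune the probe gradient so the resulting
    perturbation transfers to unseen recognition models."""
    with torch.no_grad():
        return transfer_net(grad)

def enhance_with_universal(grad: torch.Tensor, universal_pert: torch.Tensor,
                           epsilon: float = 8 / 255) -> torch.Tensor:
    """Module 3: append an image-independent universal perturbation and clip
    the combined noise to an L-infinity budget (epsilon is an assumed value)."""
    noise = grad.sign() * epsilon + universal_pert
    return noise.clamp(-epsilon, epsilon)

if __name__ == "__main__":
    # Toy stand-ins; the real probe and transfer models are learned.
    probe_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))
    transfer_net = nn.Identity()
    universal_pert = torch.zeros(1, 3, 112, 112)

    face = torch.rand(1, 3, 112, 112)
    reference = torch.randn(1, 128)

    g = image_specific_gradient(probe_model, face, reference)   # user end
    g = transfer_gradient(transfer_net, g)                       # server cloud
    noise = enhance_with_universal(g, universal_pert)            # server cloud
    protected = (face + noise).clamp(0, 1)                       # image to share
    print(protected.shape)
```

The split mirrors the stated privacy/utility/nonaccessibility requirements: only the gradient, not the image, is sent to the cloud, while the heavier transfer and universal-perturbation steps stay server-side.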
Related papers
- PersGuard: Preventing Malicious Personalization via Backdoor Attacks on Pre-trained Text-to-Image Diffusion Models [51.458089902581456]
We introduce PersGuard, a novel backdoor-based approach that prevents malicious personalization of specific images.
Our method significantly outperforms existing techniques, offering a more robust solution for privacy and copyright protection.
arXiv Detail & Related papers (2025-02-22T09:47:55Z) - A Survey on Facial Image Privacy Preservation in Cloud-Based Services [22.38855934169858]
Facial recognition models are increasingly employed by commercial enterprises, government agencies, and cloud service providers for identity verification, consumer services, and surveillance.
Users' facial images may be exploited without their consent, leading to potential data breaches and misuse.
This survey presents a comprehensive review of current methods aimed at preserving facial image privacy in cloud-based services.
arXiv Detail & Related papers (2025-01-15T09:00:32Z) - Transferable Adversarial Facial Images for Privacy Protection [15.211743719312613]
We present a novel face privacy protection scheme with improved transferability while maintaining high visual quality.
We first exploit global adversarial latent search to traverse the latent space of the generative model.
We then introduce a key landmark regularization module to preserve the visual identity information.
arXiv Detail & Related papers (2024-07-18T02:16:11Z) - Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to address this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - LDP-Feat: Image Features with Local Differential Privacy [10.306943706927006]
We propose two novel inversion attacks to show that it is possible to recover the original image features from embeddings.
We propose the first method to privatize image features via local differential privacy, which, unlike prior approaches, provides a guaranteed bound on privacy leakage; a generic noise-addition sketch illustrating this idea appears after this list.
arXiv Detail & Related papers (2023-08-22T06:28:55Z) - PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via
Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form, and generate the privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z) - CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via
Adversarial Latent Search [10.16904417057085]
Deep learning based face recognition systems can enable unauthorized tracking of users in the digital world.
Existing methods for enhancing privacy fail to generate naturalistic images that can protect facial privacy without compromising user experience.
We propose a novel two-step approach for facial privacy protection that relies on finding adversarial latent codes in the low-dimensional manifold of a pretrained generative model.
arXiv Detail & Related papers (2023-06-16T17:58:15Z) - OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technical standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z) - Privacy-Preserving Image Features via Adversarial Affine Subspace
Embeddings [72.68801373979943]
Many computer vision systems require users to upload image features to the cloud for processing and storage.
We propose a new privacy-preserving feature representation.
Compared to the original features, our approach makes it significantly more difficult for an adversary to recover private information.
arXiv Detail & Related papers (2020-06-11T17:29:48Z) - InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
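As referenced in the LDP-Feat entry above, the following is a minimal, generic sketch of privatizing an image feature embedding with the Laplace mechanism under local differential privacy. The clipping bound, per-dimension budget, and function names are illustrative assumptions; the paper's actual privatization mechanism and its leakage bound are its own construction.

```python
# Generic Laplace-mechanism sketch for feature privatization; not the
# LDP-Feat algorithm itself.
import numpy as np

def privatize_feature(feature: np.ndarray, epsilon: float, clip: float = 1.0) -> np.ndarray:
    """Clip each dimension to [-clip, clip], bounding its sensitivity to
    2*clip, then add Laplace noise calibrated to a per-dimension budget
    epsilon (the total budget composes across dimensions)."""
    clipped = np.clip(feature, -clip, clip)
    scale = 2.0 * clip / epsilon  # sensitivity / epsilon per dimension
    noise = np.random.laplace(loc=0.0, scale=scale, size=clipped.shape)
    return clipped + noise

# Example: privatize a hypothetical 128-d descriptor before uploading it.
descriptor = np.random.randn(128)
private_descriptor = privatize_feature(descriptor, epsilon=1.0)
print(private_descriptor[:5])
```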
This list is automatically generated from the titles and abstracts of the papers on this site.