LDP-Feat: Image Features with Local Differential Privacy
- URL: http://arxiv.org/abs/2308.11223v1
- Date: Tue, 22 Aug 2023 06:28:55 GMT
- Title: LDP-Feat: Image Features with Local Differential Privacy
- Authors: Francesco Pittaluga and Bingbing Zhuang
- Abstract summary: We propose two novel inversion attacks to show that it is possible to recover the original image features from embeddings.
We propose the first method to privatize image features via local differential privacy, which, unlike prior approaches, provides a guaranteed bound for privacy leakage.
- Score: 10.306943706927006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern computer vision services often require users to share raw feature
descriptors with an untrusted server. This presents an inherent privacy risk,
as raw descriptors may be used to recover the source images from which they
were extracted. To address this issue, researchers recently proposed
privatizing image features by embedding them within an affine subspace
containing the original feature as well as adversarial feature samples. In this
paper, we propose two novel inversion attacks to show that it is possible to
(approximately) recover the original image features from these embeddings,
allowing us to recover privacy-critical image content. In light of such
successes and the lack of theoretical privacy guarantees afforded by existing
visual privacy methods, we further propose the first method to privatize image
features via local differential privacy, which, unlike prior approaches,
provides a guaranteed bound for privacy leakage regardless of the strength of
the attacks. In addition, our method yields strong performance in visual
localization as a downstream task while enjoying the privacy guarantee.
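The abstract does not spell out the privatization mechanism, but the core idea of local differential privacy on a feature vector can be illustrated with the generic Laplace mechanism on clipped coordinates. This is a hedged sketch, not the paper's actual construction; the function name, the clipping range, and the sensitivity bound are all illustrative choices.

```python
import numpy as np

def ldp_privatize(feature, epsilon, rng=None):
    """Release a feature vector under epsilon-local differential privacy
    via the Laplace mechanism (generic sketch, not the paper's method)."""
    rng = np.random.default_rng() if rng is None else rng
    # Clip each coordinate to [-1, 1]: any two clipped d-dimensional
    # vectors then differ by at most 2*d in L1 norm, bounding sensitivity.
    x = np.clip(np.asarray(feature, dtype=float), -1.0, 1.0)
    d = x.size
    scale = 2.0 * d / epsilon  # Laplace scale = L1 sensitivity / epsilon
    return x + rng.laplace(0.0, scale, size=d)
```

Smaller epsilon means more noise and stronger privacy; the leakage bound holds regardless of the attack, which is the kind of guarantee the paper's LDP approach targets.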
Related papers
- PrivacyGAN: robust generative image privacy [0.0]
We introduce a novel approach, PrivacyGAN, to safeguard privacy while maintaining image usability.
Drawing inspiration from Fawkes, our method entails shifting the original image within the embedding space towards a decoy image.
We demonstrate that our approach is effective even in unknown embedding transfer scenarios.
arXiv Detail & Related papers (2023-10-19T08:56:09Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]

We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form, and generate the privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z) - Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images whilst, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z) - OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate the face privacy protection from a technology standpoint based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z) - Differentially Private Imaging via Latent Space Manipulation [5.446368808660037]
We present a novel approach for image obfuscation by manipulating latent spaces of an unconditionally trained generative model.
This is the first approach to image privacy that satisfies $\varepsilon$-differential privacy for the person.
arXiv Detail & Related papers (2021-03-08T17:32:08Z) - Privacy-Preserving Image Features via Adversarial Affine Subspace Embeddings [72.68801373979943]
Many computer vision systems require users to upload image features to the cloud for processing and storage.
We propose a new privacy-preserving feature representation.
Compared to the original features, our approach makes it significantly more difficult for an adversary to recover private information.
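The affine-subspace idea, which the main paper's inversion attacks target, can be sketched as follows: instead of the raw descriptor, the client releases an affine subspace that passes through both the true descriptor and adversarial decoy descriptors, so the server cannot tell which point in the subspace is the original. All names here are illustrative and the actual construction differs in its details.

```python
import numpy as np

def lift_to_subspace(descriptor, adversarial, rng=None):
    """Hide a descriptor inside an affine subspace spanned together with
    adversarial descriptors (illustrative sketch of the embedding idea)."""
    rng = np.random.default_rng() if rng is None else rng
    base = np.asarray(descriptor, dtype=float)
    dirs = np.asarray(adversarial, dtype=float) - base  # spanning directions
    # Shift the base point by a random affine combination of the directions
    # so the true descriptor is not a distinguished point of the subspace.
    t = rng.standard_normal(dirs.shape[0])
    new_base = base + t @ dirs
    # Every point new_base + s @ dirs lies in the released subspace;
    # the true descriptor is recovered at s = -t, unknown to the server.
    return new_base, dirs
```

The released pair `(new_base, dirs)` determines only the subspace; the paper's attacks show that the original point can nonetheless be approximately recovered, motivating the LDP approach above.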
arXiv Detail & Related papers (2020-06-11T17:29:48Z) - InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images while increasing uncertainty by 6.2$\times$ (up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.