DP-Image: Differential Privacy for Image Data in Feature Space
- URL: http://arxiv.org/abs/2103.07073v2
- Date: Tue, 20 Jun 2023 06:26:31 GMT
- Title: DP-Image: Differential Privacy for Image Data in Feature Space
- Authors: Hanyu Xue, Bo Liu, Ming Ding, Tianqing Zhu, Dayong Ye, Li Song, Wanlei Zhou
- Abstract summary: We introduce a novel notion of image-aware differential privacy, referred to as DP-image, that can protect user's personal information in images.
Our results show that the proposed DP-Image method provides excellent DP protection on images, with a controllable distortion to faces.
- Score: 23.593790091283225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The excessive use of images in social networks, government databases, and
industrial applications has posed great privacy risks and raised serious
concerns from the public. Even though differential privacy (DP) is a widely
accepted criterion that can provide a provable privacy guarantee, the
application of DP on unstructured data such as images is not trivial due to the
lack of a clear qualification on the meaningful difference between any two
images. In this paper, for the first time, we introduce a novel notion of
image-aware differential privacy, referred to as DP-image, that can protect
user's personal information in images, from both human and AI adversaries. The
DP-Image definition is formulated as an extended version of traditional
differential privacy, considering the distance measurements between feature
space vectors of images. Then we propose a mechanism to achieve DP-Image by
adding noise to an image feature vector. Finally, we conduct experiments with a
case study on face image privacy. Our results show that the proposed DP-Image
method provides excellent DP protection on images, with a controllable
distortion to faces.
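The mechanism described in the abstract, adding noise to an image's feature vector, can be sketched as a standard Laplace mechanism applied in feature space. This is a minimal illustration, not the paper's exact method: the function name, the 512-dimensional embedding, and the sensitivity value are assumptions, and the true sensitivity would have to be derived from the feature-space distance measure in the DP-Image definition.

```python
import numpy as np

def dp_image_features(feature_vec, epsilon, sensitivity):
    """Perturb an image feature vector with Laplace noise.

    sensitivity: an assumed bound on the feature-space distance
    between any two neighbouring images (the Delta-f of the
    extended DP definition).
    """
    scale = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=feature_vec.shape)
    return feature_vec + noise

# Illustrative use: a hypothetical 512-dimensional face embedding.
rng = np.random.default_rng(0)
z = rng.standard_normal(512)
z_noisy = dp_image_features(z, epsilon=1.0, sensitivity=2.0)
```

In the paper's pipeline the perturbed vector would then be decoded back into an image (e.g. by a generative model), which is where the "controllable distortion to faces" arises: a larger epsilon means less noise and less distortion.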
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive-temporal regions without DP application or combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Personalized DP-SGD using Sampling Mechanisms [5.50042037663784]
We extend Differentially Private Stochastic Gradient Descent (DP-SGD) to support a recent privacy notion called ($\Phi$,$\Delta$)-Personalized Differential Privacy (($\Phi$,$\Delta$)-PDP).
Our algorithm uses a multi-round personalized sampling mechanism and embeds it within the DP-SGD iteration.
Experiments on real datasets show that our algorithm outperforms DP-SGD and simple combinations of DP-SGD with existing PDP mechanisms.
arXiv Detail & Related papers (2023-05-24T13:56:57Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Contextualize differential privacy in image database: a lightweight image differential privacy approach based on principle component analysis inverse [35.06571163816982]
Differential privacy (DP) has been the de-facto standard for preserving privacy-sensitive information in databases.
The privacy-accuracy trade-off introduced by integrating DP is insufficiently demonstrated in the context of differentially-private image databases.
This work aims at contextualizing DP in images by an explicit and intuitive demonstration of integrating conceptional differential privacy with images.
arXiv Detail & Related papers (2022-02-16T19:36:49Z)
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
- Differentially Private Imaging via Latent Space Manipulation [5.446368808660037]
We present a novel approach for image obfuscation by manipulating latent spaces of an unconditionally trained generative model.
This is the first approach to image privacy that satisfies $\varepsilon$-differential privacy for the person.
arXiv Detail & Related papers (2021-03-08T17:32:08Z)
- Toward Privacy and Utility Preserving Image Representation [26.768476643200664]
We study the novel problem of creating privacy-preserving image representations with respect to a given utility task.
We propose a principled framework called the Adversarial Image Anonymizer (AIA).
AIA first creates an image representation using a generative model, then enhances the learned representation with adversarial learning to preserve privacy and utility for a given task.
arXiv Detail & Related papers (2020-09-30T01:25:00Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find that our approach generates obfuscated images faithful to the original inputs, and additionally increases uncertainty by 6.2$\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.