Contextualize differential privacy in image database: a lightweight
image differential privacy approach based on principal component analysis
inverse
- URL: http://arxiv.org/abs/2202.08309v1
- Date: Wed, 16 Feb 2022 19:36:49 GMT
- Title: Contextualize differential privacy in image database: a lightweight
image differential privacy approach based on principal component analysis
inverse
- Authors: Shiliang Zhang, Xuehui Ma, Hui Cao, Tengyuan Zhao, Yajie Yu, Zhuzhu
Wang
- Abstract summary: Differential privacy (DP) has been the de-facto standard for preserving privacy-sensitive information in databases.
The privacy-accuracy trade-off introduced by integrating DP is insufficiently demonstrated in the context of differentially-private image databases.
This work aims at contextualizing DP in images through an explicit and intuitive demonstration of integrating conceptual differential privacy with images.
- Score: 35.06571163816982
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Differential privacy (DP) has been the de-facto standard for preserving
privacy-sensitive information in databases. Nevertheless, a clear and
convincing contextualization of DP in image databases is lacking, one in which
individual images' indistinguishable contribution to a given analysis can be
achieved and observed when DP is exerted. As a result, the privacy-accuracy
trade-off introduced by integrating DP is insufficiently demonstrated in the
context of differentially-private image databases. This work aims at
contextualizing DP in image databases through an explicit and intuitive
demonstration of integrating conceptual differential privacy with images. To
this end, we design a lightweight approach dedicated to privatizing an image
database as a whole while preserving the statistical semantics of the database
to an adjustable level and making individual images' contributions to those
statistics indistinguishable. The designed approach leverages principal
component analysis (PCA) to reduce raw images, which have a large number of
attributes, to a lower-dimensional space in which DP is performed, decreasing
the DP load of calculating sensitivity attribute by attribute. The DP-exerted
image data, which is not human-interpretable in its privatized form, is
visualized through the PCA inverse so that both human and machine inspectors
can evaluate the privatization and quantify the privacy-accuracy trade-off in
an analysis of the privatized image database. Using the devised approach, we
demonstrate the contextualization of DP in images through two use cases based
on deep learning models, showing the indistinguishability of individual images
induced by DP and the privatized images' retention of statistical semantics in
deep learning tasks, elaborated by quantitative analyses of the
privacy-accuracy trade-off under different privatization settings.
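
To make the pipeline concrete, below is a minimal Python sketch of the reduce-perturb-reconstruct idea: project images onto a PCA basis, add calibrated noise in the low-dimensional space, then map back through the PCA inverse for inspection. The Laplace mechanism, the L1 clipping step, and all parameter values (n_components, epsilon, clip) are illustrative assumptions for exposition, not details taken from the paper.

```python
# Hypothetical sketch of DP via PCA inverse: reduce images with PCA, add
# Laplace noise in the low-dimensional space, and project back so the
# privatized images can be inspected by a human or a model. Mechanism and
# parameters are assumptions, not the paper's exact method.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def privatize_images(images, n_components=32, epsilon=1.0, clip=1.0):
    """images: (n_samples, n_pixels) array of flattened images in [0, 1]."""
    pca = PCA(n_components=n_components)
    coeffs = pca.fit_transform(images)  # lower-dimensional representation

    # Clip each coefficient vector's L1 norm so a single image's
    # contribution is bounded, giving a crude sensitivity of 2 * clip.
    norms = np.maximum(np.linalg.norm(coeffs, ord=1, axis=1) / clip, 1.0)
    coeffs = coeffs / norms[:, None]

    # Laplace mechanism: noise scale = sensitivity / epsilon. Working in
    # n_components dimensions instead of n_pixels is what keeps the
    # attribute-by-attribute sensitivity accounting lightweight.
    noisy = coeffs + rng.laplace(scale=2.0 * clip / epsilon, size=coeffs.shape)

    # PCA inverse makes the privatized data visible again for evaluating
    # the privacy-accuracy trade-off.
    return pca.inverse_transform(noisy).clip(0.0, 1.0)

# Usage with stand-in data (100 flattened 28x28 images):
fake_images = rng.random((100, 28 * 28))
private_images = privatize_images(fake_images, n_components=16, epsilon=0.5)
```

Sweeping epsilon (e.g., from 0.1 to 10) and measuring downstream task accuracy on the reconstructed images would reproduce the kind of privacy-accuracy trade-off analysis the abstract describes, under the assumptions above.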
Related papers
- SemDP: Semantic-level Differential Privacy Protection for Face Datasets [4.694266441149191]
We propose a semantic-level differential privacy protection scheme that applies to the entire face dataset.
We first extract semantic information from the face dataset to build an attribute database, then apply differential perturbations to obscure this attribute data, and finally use an image model to generate a protected face dataset.
arXiv Detail & Related papers (2024-12-20T06:00:59Z)
- Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence.
We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars in context.
We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion a subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Partial sensitivity analysis in differential privacy [58.730520380312676]
We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
arXiv Detail & Related papers (2021-09-22T08:29:16Z)
- Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation [54.88777449903538]
We introduce a novel hybrid automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary differentiable function compositions, such as the training of neural networks on private data.
Our approach enables principled reasoning about privacy loss in the setting of data processing.
arXiv Detail & Related papers (2021-07-09T07:19:23Z)
- DP-Image: Differential Privacy for Image Data in Feature Space [23.593790091283225]
We introduce a novel notion of image-aware differential privacy, referred to as DP-Image, that can protect users' personal information in images.
Our results show that the proposed DP-Image method provides excellent DP protection on images, with a controllable distortion to faces.
arXiv Detail & Related papers (2021-03-12T04:02:23Z)
- Differentially Private Representation for NLP: Formal Guarantee and An Empirical Study on Privacy and Fairness [38.90014773292902]
It has been demonstrated that hidden representations learned by a deep model can encode private information about the input.
We propose Differentially Private Neural Representation (DPNR) to preserve the privacy of representations extracted from text.
arXiv Detail & Related papers (2020-10-03T05:58:32Z)