Perceptual Indistinguishability-Net (PI-Net): Facial Image Obfuscation
with Manipulable Semantics
- URL: http://arxiv.org/abs/2104.01753v2
- Date: Wed, 7 Apr 2021 09:06:15 GMT
- Title: Perceptual Indistinguishability-Net (PI-Net): Facial Image Obfuscation
with Manipulable Semantics
- Authors: Jia-Wei Chen, Li-Ju Chen, Chia-Mu Yu, Chun-Shien Lu
- Abstract summary: We propose perceptual indistinguishability (PI) as a formal privacy notion particularly for images.
We also propose PI-Net, a privacy-preserving mechanism that achieves image obfuscation with PI guarantee.
- Score: 15.862524532287397
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing use of camera devices, the industry has many image datasets
that provide more opportunities for collaboration between the machine learning
community and industry. However, the sensitive information in the datasets
discourages data owners from releasing these datasets. Although recent research
has been devoted to removing sensitive information from images, existing
methods provide neither a meaningful privacy-utility trade-off nor provable
privacy guarantees. In this study, taking perceptual similarity into
consideration, we propose
perceptual indistinguishability (PI) as a formal privacy notion particularly
for images. We also propose PI-Net, a privacy-preserving mechanism that
achieves image obfuscation with a PI guarantee. Our study shows that PI-Net
achieves a significantly better privacy-utility trade-off by leveraging public
image data.
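The abstract states PI as a formal notion but does not reproduce its definition here. As a hedged sketch, assuming PI instantiates the metric-differential-privacy template with a perceptual distance d in place of a generic metric (an assumption; the paper's exact definition may differ), an obfuscation mechanism M would satisfy epsilon-PI when, for any two images x, x' and any output set S:

```latex
% Assumed \varepsilon-PI condition: metric-DP template with a
% perceptual distance d (illustrative, not the paper's verbatim definition)
\Pr[M(x) \in S] \;\le\; e^{\varepsilon \, d(x, x')} \cdot \Pr[M(x') \in S]
```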
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
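As a minimal sketch of the selective idea (not the paper's implementation; the mask, epsilon, and sensitivity below are illustrative assumptions), one can add calibrated Laplace noise only inside a sensitivity mask and leave the rest of the frame untouched:

```python
import numpy as np

def masked_laplace(image: np.ndarray, mask: np.ndarray,
                   epsilon: float = 1.0, sensitivity: float = 1.0) -> np.ndarray:
    """Perturb only the masked (sensitive) pixels with Laplace noise.

    image: float array in [0, 1]; mask: boolean array, True = sensitive.
    epsilon/sensitivity calibrate the noise scale as in the standard
    Laplace mechanism (illustrative; the paper's mechanism may differ).
    """
    noisy = image.copy()
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=image.shape)
    noisy[mask] = np.clip(image[mask] + noise[mask], 0.0, 1.0)
    return noisy

# Toy usage: protect only the central region of a random "frame".
frame = np.random.rand(64, 64)
roi = np.zeros_like(frame, dtype=bool)
roi[16:48, 16:48] = True
protected = masked_laplace(frame, roi, epsilon=0.5)
```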
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Region of Interest Loss for Anonymizing Learned Image Compression [3.0936354370614607]
We show how to achieve sufficient anonymization such that human faces become unrecognizable while persons are kept detectable.
This approach enables compression and anonymization in one step on the capture device, instead of transmitting sensitive, non-anonymized data over the network.
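As a hedged sketch of such a region-of-interest objective (the loss form and the weighting below are assumptions, not the paper's exact formulation), one can penalize reconstruction error outside detected face regions while rewarding distortion inside them:

```python
import numpy as np

def roi_anonymization_loss(original, reconstructed, face_mask, lam=0.1):
    """Illustrative ROI loss: low error off-face, high distortion on-face.

    face_mask: boolean array, True where a face was detected.
    lam weighs the anonymization reward (hypothetical parameter).
    """
    sq_err = (original - reconstructed) ** 2
    fidelity = sq_err[~face_mask].mean()   # keep the background faithful
    anonymity = sq_err[face_mask].mean()   # push faces away from the source
    return fidelity - lam * anonymity      # minimizing this trades the two

# Toy usage with a random image/reconstruction pair and a face box.
img, rec = np.random.rand(32, 32), np.random.rand(32, 32)
mask = np.zeros_like(img, dtype=bool)
mask[8:24, 8:24] = True
loss = roi_anonymization_loss(img, rec, mask)
```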
arXiv Detail & Related papers (2024-06-09T10:36:06Z)
- Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z)
- ConfounderGAN: Protecting Image Data Privacy with Causal Confounder [85.6757153033139]
We propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners.
Experiments are conducted on six image classification datasets, consisting of three natural object datasets and three medical datasets.
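The abstract leaves the mechanism at a high level; as a toy stand-in for the confounder idea (the paper trains a GAN, and the function below is only illustrative), one can stamp each image with a fixed, class-correlated pattern so that a classifier trained on the released data latches onto the spurious pattern rather than the true content:

```python
import numpy as np

def add_confounder(images, labels, num_classes, strength=0.05, seed=0):
    """Toy stand-in for the confounder idea (not the paper's GAN):
    add a fixed class-specific noise pattern to each image, creating a
    shortcut signal that makes the released data unlearnable for the
    true task.
    """
    rng = np.random.default_rng(seed)
    patterns = rng.normal(0, 1, size=(num_classes,) + images.shape[1:])
    confounded = images + strength * patterns[labels]
    return np.clip(confounded, 0.0, 1.0)

# Usage: 10 random 28x28 "images" with labels drawn from 3 classes.
x = np.random.rand(10, 28, 28)
y = np.random.randint(0, 3, size=10)
x_protected = add_confounder(x, y, num_classes=3)
```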
arXiv Detail & Related papers (2022-12-04T08:49:14Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
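A hedged illustration of this kind of attribution (not the paper's exact PLIS definition): for a toy logistic model, the per-subject gradient, which drives clipping and noise in DP-SGD, can be split across input attributes by magnitude:

```python
import numpy as np

def attribute_susceptibility(w, x, y):
    """Illustrative attribution in the spirit of PLIS (assumption, not
    the paper's metric): apportion a subject's gradient norm, which
    governs their DP-SGD privacy loss, across input attributes.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x)))   # logistic model prediction
    grad = (p - y) * x                   # per-subject gradient w.r.t. w
    contrib = np.abs(grad)               # per-attribute magnitude
    return contrib / contrib.sum()       # share of the L1 gradient norm

w = np.array([0.5, -1.2, 0.3])
x = np.array([1.0, 0.2, -0.7])   # three hypothetical input attributes
shares = attribute_susceptibility(w, x, y=1.0)
```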
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Privacy-Preserving Face Recognition with Learnable Privacy Budgets in Frequency Domain [77.8858706250075]
This paper proposes a privacy-preserving face recognition method using differential privacy in the frequency domain.
Our method performs very well on several classical face recognition test sets.
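A hedged sketch of the frequency-domain idea (not the paper's trained model; in the paper the budgets are learned, whereas here they are a hand-set assumption): transform a face to DCT coefficients, add noise scaled by a per-coefficient budget map, and invert:

```python
import numpy as np
from scipy.fft import dctn, idctn

def frequency_domain_perturb(face, budgets, sigma=0.1):
    """Perturb a face in the DCT domain with a per-coefficient budget
    map (illustrative; the paper learns these budgets end to end)."""
    coeffs = dctn(face, norm="ortho")
    noisy = coeffs + sigma * budgets * np.random.randn(*coeffs.shape)
    return idctn(noisy, norm="ortho")

face = np.random.rand(112, 112)
budget_map = np.ones_like(face)
budget_map[:8, :8] = 0.1   # perturb low frequencies less (an assumption)
obfuscated = frequency_domain_perturb(face, budget_map)
```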
arXiv Detail & Related papers (2022-07-15T07:15:36Z)
- Privacy Enhancement for Cloud-Based Few-Shot Learning [4.1579007112499315]
We study privacy enhancement for few-shot learning in an untrusted environment, e.g., the cloud.
We propose a method that learns a privacy-preserving representation through a joint loss.
The empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
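A minimal sketch of such a joint objective (alpha, the attacker model, and the penalty form are assumptions, not the paper's exact loss): keep the task loss low while making a simulated attacker's reconstruction of the transmitted features poor:

```python
import numpy as np

def cross_entropy(logits, label):
    """Numerically stable cross-entropy for a single example."""
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def joint_loss(logits, label, features, reconstruction, alpha=0.3):
    """Illustrative joint objective: task loss plus a privacy term that
    rewards a large reconstruction error for a simulated attacker."""
    task = cross_entropy(logits, label)
    privacy = -np.mean((features - reconstruction) ** 2)
    return task + alpha * privacy

logits, label = np.array([2.0, 0.5, -1.0]), 0
feats, attack_recon = np.random.randn(64), np.random.randn(64)
loss = joint_loss(logits, label, feats, attack_recon)
```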
arXiv Detail & Related papers (2022-05-10T18:48:13Z)
- DP-Image: Differential Privacy for Image Data in Feature Space [23.593790091283225]
We introduce a novel notion of image-aware differential privacy, referred to as DP-Image, that can protect users' personal information in images.
Our results show that the proposed DP-Image method provides excellent DP protection on images, with a controllable distortion to faces.
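A minimal sketch of the feature-space idea (the paper pairs this with a learned decoder to reconstruct the image; that part is omitted, and the sensitivity bound is an assumption): add Laplace noise, calibrated by epsilon, to an image's latent feature before release:

```python
import numpy as np

def dp_feature_perturbation(feature, epsilon, sensitivity=1.0):
    """Laplace mechanism applied in feature space (illustrative;
    sensitivity is assumed to be bounded by preprocessing)."""
    scale = sensitivity / epsilon
    return feature + np.random.laplace(0.0, scale, size=feature.shape)

latent = np.random.randn(128)   # hypothetical face embedding
private_latent = dp_feature_perturbation(latent, epsilon=2.0)
```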
arXiv Detail & Related papers (2021-03-12T04:02:23Z)
- Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images [13.690485523871855]
State-of-the-art approaches use privacy-preserving generative adversarial networks (PP-GANs) to enable reliable facial expression recognition without leaking users' identity.
We show that it is possible to hide the sensitive identification data in the sanitized output images of such PP-GANs for later extraction.
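The paper hides the secret via the PP-GAN itself; as a much simpler stand-in illustrating how identification bits can ride along in an innocuous-looking output image, here is generic least-significant-bit embedding:

```python
import numpy as np

def embed_bits(image_u8, bits):
    """Overwrite the LSB of the first len(bits) pixels with secret bits
    (a generic steganographic channel, not the paper's GAN-based one)."""
    flat = image_u8.flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(image_u8.shape)

def extract_bits(image_u8, n):
    """Recover the first n embedded bits from the pixel LSBs."""
    return image_u8.flatten()[:n] & 1

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
secret = np.random.randint(0, 2, 128, dtype=np.uint8)
stego = embed_bits(img, secret)
assert np.array_equal(extract_bits(stego, 128), secret)
```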
arXiv Detail & Related papers (2020-09-19T19:02:17Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
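The "0.85 bits" figure is an entropy gap; a minimal helper showing the quantity being compared (the probabilities below are made up for illustration):

```python
import numpy as np

def attribute_entropy_bits(probs):
    """Entropy (in bits) of an attribute classifier's output; the
    abstract's "up to 0.85 bits" refers to an increase of this kind of
    uncertainty on obfuscated versus original images."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log2(p)).sum())

before = np.array([0.95, 0.05])  # confident attribute prediction
after = np.array([0.60, 0.40])   # prediction on the obfuscated image
gain = attribute_entropy_bits(after) - attribute_entropy_bits(before)
```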
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
- Privacy-Preserving Image Classification in the Local Setting [17.375582978294105]
Local Differential Privacy (LDP) offers a promising solution, allowing data owners to randomly perturb their input to provide plausible deniability before release.
In this paper, we consider a two-party image classification problem, in which data owners hold the image and the untrustworthy data user would like to fit a machine learning model with these images as input.
We propose a supervised image feature extractor, DCAConv, which produces an image representation with scalable domain size.
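The "scalable domain size" suggests discrete feature values amenable to standard LDP primitives; as a hedged sketch (the paper's exact perturbation of DCAConv features may differ), generalized randomized response over a k-ary domain gives each owner plausible deniability:

```python
import numpy as np

def randomized_response(value, k, epsilon, rng=None):
    """Generalized randomized response, a standard epsilon-LDP primitive
    over a k-ary domain: keep the true value with probability
    e^eps / (e^eps + k - 1), otherwise report a uniform other value."""
    rng = rng or np.random.default_rng()
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_keep:
        return value
    others = [v for v in range(k) if v != value]
    return int(rng.choice(others))

# Usage: perturb one quantized feature value from a domain of size 16.
reported = randomized_response(value=5, k=16, epsilon=1.0)
```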
arXiv Detail & Related papers (2020-02-09T01:25:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.