Privacy-Preserving Image Classification in the Local Setting
- URL: http://arxiv.org/abs/2002.03261v1
- Date: Sun, 9 Feb 2020 01:25:52 GMT
- Title: Privacy-Preserving Image Classification in the Local Setting
- Authors: Sen Wang, J. Morris Chang
- Abstract summary: Local Differential Privacy (LDP) offers a promising solution: it allows data owners to randomly perturb their inputs, providing plausible deniability of the data before release.
In this paper, we consider a two-party image classification problem, in which data owners hold the image and the untrustworthy data user would like to fit a machine learning model with these images as input.
We propose a supervised image feature extractor, DCAConv, which produces an image representation with scalable domain size.
- Score: 17.375582978294105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image data is produced in large volumes by individuals and commercial vendors in daily life, and it is used across various domains, such as advertising, medicine, and traffic analysis. Recently, image data has also become important for social utility, such as emergency response. However, privacy concerns are the biggest obstacle to further exploitation of image data, because images can reveal sensitive information such as personal identity and location. The recently developed Local Differential Privacy (LDP) offers a promising solution: it allows data owners to randomly perturb their inputs, providing plausible deniability of the data before release. In this paper, we consider a two-party image classification problem in which data owners hold images and an untrustworthy data user would like to fit a machine learning model with these images as input. To protect image privacy, we propose to locally perturb the image representation before revealing it to the data user. We then analyze how the perturbation satisfies ε-LDP and affects data utility for count-based and distance-based machine learning algorithms, and propose a supervised image feature extractor, DCAConv, which produces an image representation with a scalable domain size. Our experiments show that DCAConv maintains high data utility while preserving privacy on multiple image benchmark datasets.
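For context on the local perturbation the abstract describes, here is a minimal sketch of k-ary randomized response, a standard ε-LDP mechanism over a finite domain. This is illustrative background only, not the paper's DCAConv pipeline; the domain size and ε below are arbitrary choices.

```python
import math
import random

def k_randomized_response(value, domain, epsilon):
    """k-ary randomized response: report the true value with probability
    p = e^eps / (e^eps + k - 1); otherwise report one of the other k - 1
    values uniformly at random. The ratio of output probabilities between
    any two inputs is at most e^eps, which is the epsilon-LDP guarantee."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    # flip to a uniformly random value other than the true one
    others = [v for v in domain if v != value]
    return random.choice(others)

# Example: perturb a quantized image-feature value drawn from a domain of size 8.
random.seed(0)
domain = list(range(8))
reports = [k_randomized_response(3, domain, epsilon=1.0) for _ in range(10_000)]
```

Because the probability of reporting the true value shrinks as the domain grows, mechanisms like this favor compact representations, which is one motivation for a feature extractor with a scalable domain size.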
Related papers
- Enhancing User-Centric Privacy Protection: An Interactive Framework through Diffusion Models and Machine Unlearning [54.30994558765057]
The study pioneers a comprehensive privacy protection framework that safeguards image data privacy concurrently during data sharing and model publication.
We propose an interactive image privacy protection framework that utilizes generative machine learning models to modify image information at the attribute level.
Within this framework, we instantiate two modules: a differential privacy diffusion model for protecting attribute information in images and a feature unlearning algorithm for efficient updates of the trained model on the revised image dataset.
arXiv Detail & Related papers (2024-09-05T07:55:55Z)
- Assessing the Impact of Image Dataset Features on Privacy-Preserving Machine Learning [1.3604778572442302]
This study identifies image dataset characteristics that affect the utility and vulnerability of private and non-private Convolutional Neural Network (CNN) models.
We find that imbalanced datasets increase vulnerability in minority classes, but DP mitigates this issue.
arXiv Detail & Related papers (2024-09-02T15:30:27Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns is subject to stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- ConfounderGAN: Protecting Image Data Privacy with Causal Confounder [85.6757153033139]
We propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners.
Experiments are conducted in six image classification datasets, consisting of three natural object datasets and three medical datasets.
arXiv Detail & Related papers (2022-12-04T08:49:14Z)
- Privacy Enhancement for Cloud-Based Few-Shot Learning [4.1579007112499315]
We study privacy enhancement for few-shot learning in an untrusted environment, e.g., the cloud.
We propose a method that learns a privacy-preserving representation through a joint loss.
The empirical results show how privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
arXiv Detail & Related papers (2022-05-10T18:48:13Z)
- Data privacy protection in microscopic image analysis for material data mining [8.266759895003279]
In this study, FedTransfer, a material microstructure image feature extraction algorithm based on data privacy protection, is proposed.
The core contributions are as follows: 1) federated learning is introduced into the polycrystalline microstructure image segmentation task to make full use of data from different users, breaking data silos and improving model generalization while ensuring the privacy and security of user data.
2) By sharing image style information that is not confidentiality-critical, the method reduces the performance penalty caused by differences in data distribution among users.
arXiv Detail & Related papers (2021-11-09T11:16:33Z)
- Personalized Image Semantic Segmentation [58.980245748434]
We generate more accurate segmentation results on unlabeled personalized images by investigating the data's personalized traits.
We propose a baseline method that incorporates the inter-image context when segmenting certain images.
The code and the PIS dataset will be made publicly available.
arXiv Detail & Related papers (2021-07-24T04:03:11Z)
- Perceptual Indistinguishability-Net (PI-Net): Facial Image Obfuscation with Manipulable Semantics [15.862524532287397]
We propose perceptual indistinguishability (PI) as a formal privacy notion particularly for images.
We also propose PI-Net, a privacy-preserving mechanism that achieves image obfuscation with PI guarantee.
arXiv Detail & Related papers (2021-04-05T03:40:07Z)
- Privacy-preserving Object Detection [52.77024349608834]
We show that for object detection on COCO, both anonymizing the dataset by blurring faces and swapping faces in a balanced manner along the gender and skin-tone dimensions can retain object detection performance while preserving privacy and partially balancing bias.
arXiv Detail & Related papers (2021-03-11T10:34:54Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find that our approach generates obfuscated images faithful to the original inputs while increasing uncertainty by 6.2× (up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.