You Can Use But Cannot Recognize: Preserving Visual Privacy in Deep Neural Networks
- URL: http://arxiv.org/abs/2404.04098v1
- Date: Fri, 5 Apr 2024 13:49:27 GMT
- Title: You Can Use But Cannot Recognize: Preserving Visual Privacy in Deep Neural Networks
- Authors: Qiushi Li, Yan Zhang, Ju Ren, Qi Li, Yaoxue Zhang
- Abstract summary: Existing privacy protection techniques are unable to efficiently protect such data.
We propose a novel privacy-preserving framework, VisualMixer.
VisualMixer shuffles pixels both in the spatial domain and in the chromatic channel space within selected regions, without injecting any noise.
Experiments on real-world datasets demonstrate that VisualMixer can effectively preserve visual privacy with negligible accuracy loss.
- Score: 29.03438707988713
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image data have been extensively used in Deep Neural Network (DNN) tasks in various scenarios, e.g., autonomous driving and medical image analysis, which incurs significant privacy concerns. Existing privacy protection techniques are unable to efficiently protect such data. For example, Differential Privacy (DP), an emerging technique that protects data with strong privacy guarantees, cannot effectively protect the visual features of an exposed image dataset. In this paper, we propose a novel privacy-preserving framework, VisualMixer, that protects the training data of visual DNN tasks by pixel shuffling, without injecting any noise. VisualMixer utilizes a new privacy metric called Visual Feature Entropy (VFE) to effectively quantify the visual features of an image from both biological and machine vision aspects. In VisualMixer, we devise a task-agnostic image obfuscation method to protect the visual privacy of data for DNN training and inference. For each image, it determines the regions for pixel shuffling and the sizes of these regions according to the desired VFE. Within these regions, it shuffles pixels both in the spatial domain and in the chromatic channel space, without injecting noise, so that visual features cannot be discerned or recognized, while incurring negligible accuracy loss. Extensive experiments on real-world datasets demonstrate that VisualMixer can effectively preserve visual privacy with negligible accuracy loss, i.e., on average 2.35 percentage points of model accuracy loss, and almost no performance degradation in model training.
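The abstract describes the core mechanism concretely enough to sketch in code: pixels are permuted within regions of the image, over both spatial positions and color channels, and no noise is added. The Python sketch below is a minimal illustration rather than the authors' implementation; in particular, the VFE-driven selection of region sizes is replaced by a fixed, hypothetical region_size parameter, and a single random permutation over each flattened region stands in for the paper's separate spatial and chromatic shuffles.

```python
# Minimal sketch of region-wise pixel shuffling in the spirit of VisualMixer.
# Assumption: region_size is a fixed illustrative parameter; the paper derives
# region sizes from the desired Visual Feature Entropy (VFE), which is not
# reproduced here. No noise is injected at any point.
import numpy as np


def shuffle_region(region: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Permute the values of an HxWxC region jointly over space and channels."""
    h, w, c = region.shape
    flat = region.reshape(-1)            # treat every (pixel, channel) value alike
    perm = rng.permutation(flat.size)    # random permutation over space and channels
    return flat[perm].reshape(h, w, c)


def visual_mix(image: np.ndarray, region_size: int = 16, seed: int = 0) -> np.ndarray:
    """Obfuscate an HxWxC image by shuffling each region independently."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    h, w, _ = image.shape
    for y in range(0, h, region_size):
        for x in range(0, w, region_size):
            block = out[y:y + region_size, x:x + region_size]
            out[y:y + region_size, x:x + region_size] = shuffle_region(block, rng)
    return out


# Example: obfuscate a random 64x64 RGB image.
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
mixed = visual_mix(img, region_size=16)
```

Because the shuffle is a permutation rather than additive noise, the per-region pixel-value statistics are preserved exactly, which is consistent with the paper's claim of negligible accuracy loss; only the spatial and chromatic layout that carries recognizable visual features is destroyed.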
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Region of Interest Loss for Anonymizing Learned Image Compression [3.0936354370614607]
We show how to achieve sufficient anonymization such that human faces become unrecognizable while persons are kept detectable.
This approach enables compression and anonymization in one step on the capture device, instead of transmitting sensitive, non-anonymized data over the network.
arXiv Detail & Related papers (2024-06-09T10:36:06Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images whilst, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z) - Privacy-Preserving Feature Coding for Machines [32.057586389777185]
Automated machine vision pipelines do not need the exact visual content to perform their tasks.
We present a novel method to create a privacy-preserving latent representation of an image.
arXiv Detail & Related papers (2022-10-03T06:13:43Z) - OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technology standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z) - Unintended memorisation of unique features in neural networks [15.174895411434026]
We show that unique features occurring only once in training data are memorised by discriminative multi-layer perceptrons and convolutional neural networks.
We develop a score estimating a model's sensitivity to a unique feature by comparing the KL divergences of the model's output distributions.
We find that typical strategies to prevent overfitting do not prevent unique feature memorisation.
arXiv Detail & Related papers (2022-05-20T10:48:18Z) - Privacy Enhancement for Cloud-Based Few-Shot Learning [4.1579007112499315]
We study privacy enhancement for few-shot learning in an untrusted environment, e.g., the cloud.
We propose a method that learns a privacy-preserved representation through a joint loss.
The empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
arXiv Detail & Related papers (2022-05-10T18:48:13Z) - Syfer: Neural Obfuscation for Private Data Release [58.490998583666276]
We develop Syfer, a neural obfuscation method to protect against re-identification attacks.
Syfer composes trained layers with random neural networks to encode the original data.
It maintains the ability to predict diagnoses from the encoded data.
arXiv Detail & Related papers (2022-01-28T20:32:04Z) - Image Transformation Network for Privacy-Preserving Deep Neural Networks and Its Security Evaluation [17.134566958534634]
We propose a transformation network for generating visually-protected images for privacy-preserving DNNs.
The proposed network enables us not only to strongly protect visual information but also to maintain the image classification accuracy achieved with plain images.
arXiv Detail & Related papers (2020-08-07T12:58:45Z) - InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)