Training privacy-preserving video analytics pipelines by suppressing
features that reveal information about private attributes
- URL: http://arxiv.org/abs/2203.02635v1
- Date: Sat, 5 Mar 2022 01:31:07 GMT
- Title: Training privacy-preserving video analytics pipelines by suppressing
features that reveal information about private attributes
- Authors: Chau Yi Li and Andrea Cavallaro
- Abstract summary: We consider an adversary with access to the features extracted by a deployed deep neural network, who uses these features to predict private attributes.
We modify the training of the network using a confusion loss that encourages the extraction of features that make it difficult for the adversary to accurately predict private attributes.
Results show that, compared to the original network, the proposed PrivateNet can reduce the leakage of private information from a state-of-the-art emotion recognition classifier by 2.88% for gender and by 13.06% for age group.
- Score: 40.31692020706419
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks are increasingly deployed for scene analytics, including
to evaluate the attention and reaction of people exposed to out-of-home
advertisements. However, the features extracted by a deep neural network that
was trained to predict a specific, consensual attribute (e.g. emotion) may also
encode and thus reveal information about private, protected attributes (e.g.
age or gender). In this work, we focus on such leakage of private information
at inference time. We consider an adversary with access to the features
extracted by the layers of a deployed neural network and use these features to
predict private attributes. To prevent the success of such an attack, we modify
the training of the network using a confusion loss that encourages the
extraction of features that make it difficult for the adversary to accurately
predict private attributes. We validate this training approach on image-based
tasks using a publicly available dataset. Results show that, compared to the
original network, the proposed PrivateNet can reduce the leakage of private
information of a state-of-the-art emotion recognition classifier by 2.88% for
gender and by 13.06% for age group, with a minimal effect on task accuracy.
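To make the approach concrete, below is a minimal PyTorch sketch of confusion-loss training; it is an illustration under stated assumptions, not the authors' released implementation. A simulated adversary head tries to predict the private attribute (e.g. gender) from the shared features, while the encoder and the emotion head are trained to keep task accuracy and to push the adversary's output distribution towards uniform. The stand-in encoder, the layer sizes, the KL-to-uniform form of the confusion loss and the weight lambda_conf are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrivateNetSketch(nn.Module):
    """Shared feature extractor with a task head and a simulated adversary head."""
    def __init__(self, feat_dim=128, n_emotions=7, n_private=2):
        super().__init__()
        # Stand-in encoder for 3x64x64 crops; a real pipeline would use a CNN backbone.
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.task_head = nn.Linear(feat_dim, n_emotions)  # consensual attribute (emotion)
        self.adv_head = nn.Linear(feat_dim, n_private)    # private attribute (e.g. gender)

    def forward(self, x):
        z = self.encoder(x)
        return self.task_head(z), self.adv_head(z), z

def confusion_loss(adv_logits):
    # Push the adversary's predicted distribution towards uniform, so the shared
    # features carry as little information as possible about the private attribute.
    log_p = F.log_softmax(adv_logits, dim=1)
    uniform = torch.full_like(log_p, 1.0 / adv_logits.size(1))
    return F.kl_div(log_p, uniform, reduction="batchmean")

def training_step(model, x, y_task, y_private, opt_main, opt_adv, lambda_conf=1.0):
    # 1) Fit the simulated adversary on detached features (only adv_head is updated).
    with torch.no_grad():
        z = model.encoder(x)
    adv_ce = F.cross_entropy(model.adv_head(z), y_private)
    opt_adv.zero_grad()
    adv_ce.backward()
    opt_adv.step()

    # 2) Update encoder + task head: keep emotion accuracy while confusing the adversary.
    task_logits, adv_logits, _ = model(x)
    loss = F.cross_entropy(task_logits, y_task) + lambda_conf * confusion_loss(adv_logits)
    opt_main.zero_grad()
    loss.backward()
    opt_main.step()
    return loss.item()

# Hypothetical usage with dummy data.
model = PrivateNetSketch()
opt_main = torch.optim.Adam(
    list(model.encoder.parameters()) + list(model.task_head.parameters()), lr=1e-4)
opt_adv = torch.optim.Adam(model.adv_head.parameters(), lr=1e-4)
x = torch.randn(8, 3, 64, 64)           # batch of face crops
y_task = torch.randint(0, 7, (8,))      # emotion labels
y_private = torch.randint(0, 2, (8,))   # private labels (e.g. gender)
training_step(model, x, y_task, y_private, opt_main, opt_adv)
```

In this sketch the adversary head is refit at every step on detached features, so the encoder only feels it through the confusion term; alternating several adversary updates per encoder update, or replacing the explicit adversary step with a gradient-reversal layer, are common variants of the same idea.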
Related papers
- Attribute-preserving Face Dataset Anonymization via Latent Code
Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representations in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images while, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Unintended memorisation of unique features in neural networks [15.174895411434026]
We show that unique features occurring only once in training data are memorised by discriminative multi-layer perceptrons and convolutional neural networks.
We develop a score estimating a model's sensitivity to a unique feature by comparing the KL divergences of the model's output distributions.
We find that typical strategies to prevent overfitting do not prevent unique feature memorisation.
arXiv Detail & Related papers (2022-05-20T10:48:18Z)
- Measuring Unintended Memorisation of Unique Private Features in Neural Networks [15.174895411434026]
We show that neural networks unintentionally memorise unique features even when they occur only once in training data.
An example of a unique feature is a person's name that is accidentally present on a training image.
arXiv Detail & Related papers (2022-02-16T14:39:05Z)
- Honest-but-Curious Nets: Sensitive Attributes of Private Inputs can be Secretly Coded into the Entropy of Classifiers' Outputs [1.0742675209112622]
Deep neural networks, trained for the classification of a non-sensitive target attribute, can reveal sensitive attributes of their input data.
We show that deep classifiers can be trained to secretly encode a sensitive attribute of users' input data, at inference time.
arXiv Detail & Related papers (2021-05-25T16:27:57Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
In decentralized learning, the local exchange of estimates allows private data to be inferred from them.
Perturbations chosen independently at every agent protect privacy but result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
arXiv Detail & Related papers (2020-10-23T10:35:35Z) - Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images [13.690485523871855]
State-of-the-art approaches use privacy-preserving generative adversarial networks (PP-GANs) to enable reliable facial expression recognition without leaking users' identity.
We show that it is possible to hide the sensitive identification data in the sanitized output images of such PP-GANs for later extraction.
arXiv Detail & Related papers (2020-09-19T19:02:17Z)
- TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations [49.20701800683092]
We present TIPRDC, a task-independent privacy-respecting data crowdsourcing framework with anonymized intermediate representation.
The goal of this framework is to learn a feature extractor that can hide the privacy information from the intermediate representations; while maximally retaining the original information embedded in the raw data for the data collector to accomplish unknown learning tasks.
arXiv Detail & Related papers (2020-05-23T06:21:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.