Privacy Adversarial Network: Representation Learning for Mobile Data Privacy
- URL: http://arxiv.org/abs/2006.06535v1
- Date: Mon, 8 Jun 2020 09:42:04 GMT
- Title: Privacy Adversarial Network: Representation Learning for Mobile Data Privacy
- Authors: Sicong Liu, Junzhao Du, Anshumali Shrivastava, Lin Zhong
- Abstract summary: A growing number of cloud-based intelligent services for mobile users require user data to be sent to the provider.
Prior works either obfuscate the data, e.g. add noise and remove identity information, or send representations extracted from the data, e.g. anonymized features.
This work departs from prior works in methodology: we leverage adversarial learning to achieve a better balance between privacy and utility.
- Score: 33.75500773909694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The remarkable success of machine learning has fostered a growing number of
cloud-based intelligent services for mobile users. Such a service requires a
user to send data, e.g. image, voice and video, to the provider, which presents
a serious challenge to user privacy. To address this, prior works either
obfuscate the data, e.g. add noise and remove identity information, or send
representations extracted from the data, e.g. anonymized features. They
struggle to balance service utility and data privacy: obfuscated data reduces
utility, and an extracted representation may still reveal sensitive
information.
This work departs from prior works in methodology: we leverage adversarial
learning to achieve a better balance between privacy and utility. We design a
representation encoder that generates the feature representations, and we
optimize it against the privacy disclosure risk of sensitive information (a
measure of privacy) estimated by privacy adversaries, while concurrently
optimizing it for task inference accuracy (a measure of utility) judged by a
utility discriminator. The result is the privacy adversarial network (PAN), a
novel deep model with a new training algorithm, that can automatically learn
representations from the raw data.
Intuitively, PAN adversarially forces the extracted representations to only
convey the information required by the target task. Surprisingly, this
constitutes an implicit regularization that actually improves task accuracy. As
a result, PAN achieves better utility and better privacy at the same time! We
report extensive experiments on six popular datasets and demonstrate the
superiority of PAN compared with alternative methods reported in prior
work.
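
The training loop implied by the abstract can be made concrete. Below is a minimal, hedged sketch in PyTorch of PAN-style adversarial training: an encoder is optimized jointly with a utility discriminator (the task classifier) while playing against a privacy adversary that tries to recover a sensitive attribute from the representation. The network sizes, the alternating update schedule, and the trade-off weight `lam` are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of PAN-style adversarial representation learning (PyTorch).
# Architecture sizes, update schedule, and `lam` are assumed for illustration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Representation encoder E: raw input -> feature representation z."""
    def __init__(self, in_dim=784, rep_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, rep_dim))
    def forward(self, x):
        return self.net(x)

def head(rep_dim, n_classes):
    """Classifier head, reused for the utility discriminator and the adversary."""
    return nn.Sequential(nn.Linear(rep_dim, 128), nn.ReLU(),
                         nn.Linear(128, n_classes))

enc = Encoder()
util = head(64, n_classes=10)   # utility discriminator (target task)
adv = head(64, n_classes=2)     # privacy adversary (sensitive attribute)
ce = nn.CrossEntropyLoss()
opt_eu = torch.optim.Adam(list(enc.parameters()) + list(util.parameters()), lr=1e-3)
opt_a = torch.optim.Adam(adv.parameters(), lr=1e-3)
lam = 0.5                       # privacy/utility trade-off weight (assumed)

def train_step(x, y_task, y_priv):
    # 1) Train the adversary to predict the sensitive attribute from a
    #    frozen copy of the representation (the privacy disclosure risk).
    opt_a.zero_grad()
    ce(adv(enc(x).detach()), y_priv).backward()
    opt_a.step()
    # 2) Train encoder + utility head: minimize the task loss while
    #    maximizing the adversary's loss on the live representation.
    opt_eu.zero_grad()
    z = enc(x)
    loss = ce(util(z), y_task) - lam * ce(adv(z), y_priv)
    loss.backward()
    opt_eu.step()
    return loss.item()

# Toy usage with random data standing in for a real mobile-sensing batch.
x = torch.randn(32, 784)
y_task = torch.randint(0, 10, (32,))
y_priv = torch.randint(0, 2, (32,))
print(train_step(x, y_task, y_priv))
```

In practice the adversary is often given several updates per encoder step so that its estimate of the disclosure risk stays accurate; the single alternation above is the simplest variant.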
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generative technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on direct, centralized training on data.
The paper proposes novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it struggles when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- Efficient and Privacy Preserving Group Signature for Federated Learning [2.121963121603413]
Federated Learning (FL) is a Machine Learning (ML) technique that aims to reduce the threats to user data privacy.
This paper proposes an efficient and privacy-preserving protocol for FL based on group signature.
arXiv Detail & Related papers (2022-07-12T04:12:10Z)
- Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z)
- Privacy-Utility Trades in Crowdsourced Signal Map Obfuscation [20.58763760239068]
Crowdsourced cellular signal strength measurements can be used to generate signal maps that improve network performance.
We consider obfuscating such data before the data leaves the mobile device.
Our evaluation results, based on multiple, diverse, real-world signal map datasets, demonstrate the feasibility of concurrently achieving adequate privacy and utility.
arXiv Detail & Related papers (2022-01-13T03:46:22Z)
- Adversarial representation learning for synthetic replacement of private attributes [0.7619404259039281]
We propose a novel two-step approach to data privatization: the first step removes the sensitive information, and the second step replaces it with an independent random sample.
Our method builds on adversarial representation learning which ensures strong privacy by training the model to fool an increasingly strong adversary.
arXiv Detail & Related papers (2020-06-14T22:07:19Z)
- TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations [49.20701800683092]
We present TIPRDC, a task-independent privacy-respecting data crowdsourcing framework with anonymized intermediate representation.
The goal of this framework is to learn a feature extractor that hides private information from the intermediate representations while maximally retaining the original information embedded in the raw data, so that the data collector can accomplish unknown learning tasks.
arXiv Detail & Related papers (2020-05-23T06:21:26Z)
- Differentially Private Deep Learning with Smooth Sensitivity [144.31324628007403]
We study privacy concerns through the lens of differential privacy.
In this framework, privacy guarantees are generally obtained by perturbing models in such a way that specifics of data used to train the model are made ambiguous.
One of the most important techniques used in previous works involves an ensemble of teacher models, which return information to a student based on a noisy voting procedure.
In this work, we propose a novel voting mechanism with smooth sensitivity, which we call Immutable Noisy ArgMax, that, under certain conditions, can bear very large random noise from the teacher without affecting the useful information transferred to the student.
arXiv Detail & Related papers (2020-03-01T15:38:00Z)
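
To make the last entry's teacher-student setting concrete, here is a small, hedged sketch of PATE-style noisy argmax voting, the procedure that the Immutable Noisy ArgMax mechanism builds on. The constant `c` added to the winning class count is only an illustrative stand-in for the paper's mechanism; the actual construction and its smooth-sensitivity analysis are in the paper.

```python
# Minimal sketch of PATE-style noisy teacher voting; the `c` margin is an
# illustrative stand-in for Immutable Noisy ArgMax, not its exact definition.
import numpy as np

def noisy_argmax_vote(teacher_preds, n_classes, scale=1.0, c=100.0):
    """Aggregate one query's teacher votes into a single noisy label."""
    counts = np.bincount(teacher_preds, minlength=n_classes).astype(float)
    # Widening the winner's margin lets the mechanism tolerate very large
    # noise without flipping the argmax, which is the key idea.
    counts[counts.argmax()] += c
    counts += np.random.laplace(0.0, scale, size=n_classes)
    return int(counts.argmax())

# Toy usage: 250 teachers vote on one 10-class query.
votes = np.random.randint(0, 10, size=250)
print(noisy_argmax_vote(votes, n_classes=10, scale=20.0))
```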