Active Privacy-Utility Trade-off Against Inference in Time-Series Data Sharing
- URL: http://arxiv.org/abs/2202.05833v1
- Date: Fri, 11 Feb 2022 18:57:31 GMT
- Title: Active Privacy-Utility Trade-off Against Inference in Time-Series Data Sharing
- Authors: Ecenaz Erdemir, Pier Luigi Dragotti, and Deniz Gunduz
- Abstract summary: We consider a user releasing her data containing personal information in return for a service from an honest-but-curious service provider (SP).
We formulate both problems as partially observable Markov decision processes (POMDPs) and numerically solve them by advantage actor-critic (A2C) deep reinforcement learning (DRL).
We evaluate the privacy-utility trade-off (PUT) of the proposed policies on both synthetic data and a smoking activity dataset, and validate them by testing the activity detection accuracy of the SP, modeled by a long short-term memory (LSTM) neural network.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Internet of things (IoT) devices, such as smart meters, smart speakers and
activity monitors, have become highly popular thanks to the services they
offer. However, in addition to their many benefits, they raise privacy concerns,
since they share fine-grained time-series user data with untrusted third
parties. In this work, we consider a user releasing her data containing
personal information in return for a service from an honest-but-curious service
provider (SP). We model the user's personal information as two correlated random
variables (r.v.'s): one of them, called the secret variable, is to be kept
private, while the other, called the useful variable, is to be disclosed for
utility. We consider active sequential data release, where at each time step
the user chooses from among a finite set of release mechanisms, each revealing
some information about the user's personal information, i.e., the true values
of the r.v.'s, albeit with different statistics. The user manages the data release
in an online fashion, such that the maximum amount of information is revealed
about the latent useful variable as quickly as possible, while the adversary's
confidence about the secret variable is kept below a predefined level. As privacy
measures, we consider both the probability of correctly detecting the true value
of the secret and the mutual information (MI) between the secret and the
released data. We formulate both problems as partially observable Markov
decision processes (POMDPs), and numerically solve them by advantage
actor-critic (A2C) deep reinforcement learning (DRL). We evaluate the
privacy-utility trade-off (PUT) of the proposed policies on both synthetic
data and a smoking activity dataset, and show their validity by testing the
activity detection accuracy of the SP, modeled by a long short-term memory
(LSTM) neural network.
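
To make the sequential setting concrete, the sketch below (illustrative, not the
authors' code) implements the belief tracking that such a POMDP policy relies on:
the latent pair (secret S, useful U) is fixed, each release mechanism has its own
observation statistics p(y | s, u, a), and a greedy one-step rule stands in for
the learned A2C policy. All sizes, distributions, and the threshold are
hypothetical assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nU, nY, nA = 2, 2, 3, 4                    # sizes of secret S, useful U, observation Y, mechanism set
obs = rng.dirichlet(np.ones(nY), size=(nA, nS, nU))  # p(y | s, u, a): statistics of each release mechanism

def update(belief, a, y):
    """Bayes update of the joint belief over (S, U) after mechanism a emits y."""
    post = belief * obs[a, :, :, y]
    return post / post.sum()

def conf_secret(b): return b.sum(axis=1).max()  # adversary's confidence in S (max marginal)
def conf_useful(b): return b.sum(axis=0).max()  # SP's confidence in U

belief = np.full((nS, nU), 1.0 / (nS * nU))     # uniform prior over the latent pair
s_true, u_true = 1, 0                           # hypothetical ground truth
eps = 0.8                                       # privacy threshold on the secret

for t in range(20):
    # Greedy one-step stand-in for the learned A2C policy: among mechanisms whose
    # expected leakage about S stays below eps, pick the most informative one for U.
    best_a, best_u = None, -1.0
    for a in range(nA):
        p_y = np.array([(belief * obs[a, :, :, y]).sum() for y in range(nY)])
        posts = [update(belief, a, y) for y in range(nY)]
        if p_y @ np.array([conf_secret(p) for p in posts]) > eps:
            continue                            # would reveal too much about the secret
        u = p_y @ np.array([conf_useful(p) for p in posts])
        if u > best_u:
            best_a, best_u = a, u
    if best_a is None:
        break                                   # no privacy-safe mechanism left: stop releasing
    y = rng.choice(nY, p=obs[best_a, s_true, u_true])
    belief = update(belief, best_a, y)
    print(f"t={t} a={best_a} conf(U)={conf_useful(belief):.2f} conf(S)={conf_secret(belief):.2f}")
```

The max-marginal confidence used here corresponds to the detection-probability
privacy measure in the abstract; the MI variant would replace it with the mutual
information between the secret and the released observations.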
Related papers
- Differentially Private Data Release on Graphs: Inefficiencies and Unfairness
  This paper characterizes the impact of differential privacy on bias and unfairness in the context of releasing information about networks.
  We consider a network release problem where the network structure is known to all, but the weights on the edges must be released privately.
  Our work provides theoretical foundations and empirical evidence on the bias and unfairness arising from privacy in these networked decision problems.
  (arXiv, 2024-08-08)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning
  Differential privacy (DP) offers a promising solution by ensuring models are "almost indistinguishable" with or without any particular privacy unit.
  We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
  (arXiv, 2024-06-20)
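
For reference, the (ε, δ)-differential-privacy guarantee behind "almost
indistinguishable" (the standard definition, not restated in the blurb above)
says that for any two datasets D and D' differing in a single privacy unit (one
user, in the user-level setting) and any set of outcomes S of the mechanism M:

```latex
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta
```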
- TeD-SPAD: Temporal Distinctiveness for Self-Supervised Privacy-Preservation for Video Anomaly Detection
  Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
  Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
  We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
  (arXiv, 2023-08-21)
- Protecting User Privacy in Online Settings via Supervised Learning
  We design an intelligent approach to online privacy protection that leverages supervised learning.
  By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
  (arXiv, 2023-04-06)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy?
  We study the connection between the per-subject gradient norm in DP neural networks and individual privacy loss.
  We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion the subject's privacy loss to their input attributes.
  (arXiv, 2022-11-18)
- How Much Privacy Does Federated Learning with Secure Aggregation Guarantee?
  Federated learning (FL) has attracted growing interest for enabling privacy-preserving machine learning on data held by multiple users.
  While data never leaves users' devices, privacy still cannot be guaranteed, since significant computations on users' training data are shared in the form of trained local models.
  Secure Aggregation (SA) has been developed as a framework to preserve privacy in FL.
  (arXiv, 2022-08-03)
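
The masking idea behind SA can be shown in a few lines. This is a toy sketch
under strong assumptions (no user dropouts, honest-but-curious server, masks
shared out of band rather than via key agreement); all names and sizes are
illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n_users, dim = 4, 6
updates = rng.normal(size=(n_users, dim))      # local model updates x_i

# Pairwise masks m_ij, shared between users i < j (in practice derived via key agreement).
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_users) for j in range(i + 1, n_users)}

def masked_update(i):
    """User i sends x_i + sum_{j > i} m_ij - sum_{j < i} m_ji; on its own this looks random."""
    y = updates[i].copy()
    for j in range(i + 1, n_users):
        y += masks[(i, j)]
    for j in range(i):
        y -= masks[(j, i)]
    return y

# The server sums the masked updates; every mask appears once with + and once with -,
# so the masks cancel and only the aggregate is revealed.
server_sum = sum(masked_update(i) for i in range(n_users))
assert np.allclose(server_sum, updates.sum(axis=0))
print("aggregate recovered:", np.round(server_sum, 3))
```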
- Production of Categorical Data Verifying Differential Privacy: Conception and Applications to Machine Learning
  Differential privacy is a formal definition that makes it possible to quantify the privacy-utility trade-off.
  With the local DP (LDP) model, users can sanitize their data locally before transmitting it to the server.
  In all cases, we concluded that differentially private ML models achieve nearly the same utility metrics as non-private ones.
  (arXiv, 2022-04-02)
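
As a concrete instance of the LDP model mentioned above, here is a sketch of
k-ary (generalized) randomized response, a standard LDP mechanism for
categorical data; it is an illustration rather than the paper's specific
mechanism, and the parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

def grr_sanitize(value, k, eps):
    """Locally sanitize a categorical value in {0, ..., k-1} under eps-LDP:
    keep it with probability p = e^eps / (e^eps + k - 1), else report a uniform other value."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p:
        return value
    other = rng.integers(k - 1)
    return other if other < value else other + 1   # uniform over the k-1 other values

def grr_estimate(reports, k, eps):
    """Unbiased frequency estimates on the server from the sanitized reports."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    q = (1 - p) / (k - 1)
    freqs = np.bincount(reports, minlength=k) / len(reports)
    return (freqs - q) / (p - q)

k, eps = 5, 1.0
true = rng.integers(k, size=10_000)
reports = np.array([grr_sanitize(v, k, eps) for v in true])
print(np.round(grr_estimate(reports, k, eps), 3))  # close to the true frequencies (~0.2 each)
```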
- Active Privacy-Utility Trade-off Against a Hypothesis Testing Adversary
  We consider a user releasing her data containing some personal information in return for a service.
  We model the user's personal information as two correlated random variables, one of which, called the secret variable, is to be kept private.
  For utility, we consider both the probability of correct detection of the useful variable and the mutual information (MI) between the useful variable and the released data.
  (arXiv, 2021-02-16)
- Robustness Threats of Differential Privacy
  We experimentally demonstrate that networks trained with differential privacy may, in some settings, be even more vulnerable than their non-private counterparts.
  We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
  (arXiv, 2020-12-14)
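
To make those two ingredients concrete, here is a minimal DP-SGD-style update
(a sketch with hypothetical shapes and hyperparameters, not the paper's code):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, sigma=1.0, rng=None):
    """One DP-SGD update: clip each example's gradient to norm <= clip_norm,
    add Gaussian noise scaled by sigma * clip_norm, then average and step."""
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)  # per-example clipping
    noisy = clipped.sum(axis=0) + rng.normal(scale=sigma * clip_norm, size=params.shape)
    return params - lr * noisy / len(per_example_grads)

params = np.zeros(4)
grads = np.random.default_rng(1).normal(size=(8, 4))  # hypothetical per-example gradients
params = dp_sgd_step(params, grads)
print(np.round(params, 3))
```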
- Deep Directed Information-Based Learning for Privacy-Preserving Smart Meter Data Release
  We study the problem in the context of time-series data and smart meter (SM) power consumption measurements.
  We introduce directed information (DI) as a more meaningful measure of privacy in the considered setting.
  Our empirical studies on real-world datasets of SM measurements show, under a worst-case scenario, the existing trade-offs between privacy and utility.
  (arXiv, 2020-11-20)
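
For reference, the directed information from X^n to Y^n (Massey's standard
definition, not restated in the blurb above) replaces MI with a causally
conditioned sum, which is what makes it suited to time-series privacy:

```latex
I(X^n \to Y^n) \;=\; \sum_{t=1}^{n} I\big(X^t;\, Y_t \,\big|\, Y^{t-1}\big)
```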
- Privacy-Aware Time-Series Data Sharing with Deep Reinforcement Learning
  We study the privacy-utility trade-off (PUT) in time-series data sharing.
  Methods that preserve privacy at the current time step may still leak a significant amount of information at the trace level.
  We consider sharing a distorted version of a user's true data sequence with an untrusted third party.
  (arXiv, 2020-03-04)