Differentially Private Multivariate Time Series Forecasting of
Aggregated Human Mobility With Deep Learning: Input or Gradient Perturbation?
- URL: http://arxiv.org/abs/2205.00436v1
- Date: Sun, 1 May 2022 10:11:04 GMT
- Title: Differentially Private Multivariate Time Series Forecasting of
Aggregated Human Mobility With Deep Learning: Input or Gradient Perturbation?
- Authors: Héber H. Arcolezi, Jean-François Couchot, Denis Renaud, Bechara Al Bouna, Xiaokui Xiao
- Abstract summary: This paper investigates the problem of forecasting multivariate aggregated human mobility while preserving the privacy of the individuals concerned.
Differential privacy, a state-of-the-art formal notion, has been used as the privacy guarantee in two different and independent steps when training deep learning models.
As shown in the results, differentially private deep learning models trained under gradient or input perturbation achieve nearly the same performance as non-private deep learning models.
- Score: 14.66445694852729
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates the problem of forecasting multivariate aggregated
human mobility while preserving the privacy of the individuals concerned.
Differential privacy, a state-of-the-art formal notion, has been used as the
privacy guarantee in two different and independent steps when training deep
learning models. On one hand, we considered \textit{gradient perturbation},
which uses the differentially private stochastic gradient descent algorithm to
guarantee the privacy of each time series sample in the learning stage. On the
other hand, we considered \textit{input perturbation}, which adds differential
privacy guarantees to each sample of the series before applying any learning.
We compared four state-of-the-art recurrent neural networks: Long Short-Term
Memory, Gated Recurrent Unit, and their Bidirectional architectures, i.e.,
Bidirectional-LSTM and Bidirectional-GRU. Extensive experiments were conducted
with a real-world multivariate mobility dataset, which we published openly
along with this paper. As shown in the results, differentially private deep
learning models trained under gradient or input perturbation achieve nearly the
same performance as non-private deep learning models, with the loss in
performance ranging from $0.57\%$ to $2.8\%$. The contribution of this paper is
significant for those involved in urban planning and decision-making, providing
a solution to the multivariate human mobility forecasting problem through
differentially private deep learning models.
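To make the two settings compared above concrete, the following is a minimal NumPy sketch of both mechanisms on synthetic data: input perturbation noises each aggregated count before any learning, while gradient perturbation (DP-SGD) clips and noises per-example gradients during training. The Laplace mechanism for inputs, the Gaussian noise scale, the sensitivity, and all names and hyperparameters below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def input_perturbation(series, epsilon, sensitivity=1.0):
    """Input perturbation: add Laplace noise to every aggregated count
    before any model sees the data (assumed mechanism; sensitivity and
    epsilon are placeholders, not the paper's settings)."""
    scale = sensitivity / epsilon
    return series + np.random.laplace(0.0, scale, size=series.shape)

def dp_sgd_step(params, per_example_grads, lr=0.01, clip_norm=1.0,
                noise_multiplier=1.1):
    """Gradient perturbation: one DP-SGD update that clips each example's
    gradient to clip_norm, averages the clipped gradients, adds Gaussian
    noise with standard deviation noise_multiplier * clip_norm (scaled by
    the batch size), and applies a plain SGD step."""
    batch = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_mean = (np.mean(clipped, axis=0)
                  + np.random.normal(0.0, noise_multiplier * clip_norm / batch,
                                     size=params.shape))
    return params - lr * noisy_mean

# Hypothetical usage: one week of hourly mobility counts for 6 regions.
counts = np.random.poisson(50.0, size=(168, 6)).astype(float)
private_counts = input_perturbation(counts, epsilon=1.0)   # option 1: noise the inputs

params = np.zeros(6)
fake_grads = [np.random.randn(6) for _ in range(32)]       # stand-in per-example gradients
params = dp_sgd_step(params, fake_grads)                    # option 2: noise the gradients
```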
Related papers
- HRNet: Differentially Private Hierarchical and Multi-Resolution Network for Human Mobility Data Synthesization [19.017342515321918]
We introduce the Hierarchical and Multi-Resolution Network (HRNet), a novel deep generative model designed to synthesize realistic human mobility data.
We first identify the key difficulties inherent in learning human mobility data under differential privacy.
HRNet integrates three components: a hierarchical location encoding mechanism, multi-task learning across multiple resolutions, and private pre-training.
arXiv Detail & Related papers (2024-05-13T12:56:24Z)
- Sparsity-Preserving Differentially Private Training of Large Embedding Models [67.29926605156788]
DP-SGD is a training algorithm that combines differential privacy with gradient descent.
Applying DP-SGD naively to embedding models can destroy gradient sparsity, leading to reduced training efficiency.
We present two new algorithms, DP-FEST and DP-AdaFEST, that preserve gradient sparsity during private training of large embedding models.
arXiv Detail & Related papers (2023-11-14T17:59:51Z)
- A Novel Cross-Perturbation for Single Domain Generalization [54.612933105967606]
Single domain generalization aims to enhance the ability of the model to generalize to unknown domains when trained on a single source domain.
The limited diversity in the training data hampers the learning of domain-invariant features, resulting in compromised generalization performance.
We propose CPerb, a simple yet effective cross-perturbation method to enhance the diversity of the training data.
arXiv Detail & Related papers (2023-08-02T03:16:12Z)
- Learning signatures of decision making from many individuals playing the same game [54.33783158658077]
We design a predictive framework that learns representations to encode an individual's 'behavioral style'.
We apply our method to a large-scale behavioral dataset from 1,000 humans playing a 3-armed bandit task.
arXiv Detail & Related papers (2023-02-21T21:41:53Z)
- Differentially private partitioned variational inference [28.96767727430277]
Learning a privacy-preserving model from sensitive data which are distributed across multiple devices is an increasingly important problem.
We present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution.
arXiv Detail & Related papers (2022-09-23T13:58:40Z)
- Practical Challenges in Differentially-Private Federated Survival Analysis of Medical Data [57.19441629270029]
In this paper, we take advantage of the inherent properties of neural networks to federate the process of training of survival analysis models.
In the realistic setting of small medical datasets and only a few data centers, the noise added for differential privacy makes it harder for the models to converge.
We propose DPFed-post which adds a post-processing stage to the private federated learning scheme.
arXiv Detail & Related papers (2022-02-08T10:03:24Z)
- Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
arXiv Detail & Related papers (2021-11-01T18:10:21Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Differentially Private Deep Learning with Direct Feedback Alignment [15.410557873153833]
We propose the first differentially private method for training deep neural networks with direct feedback alignment (DFA).
DFA achieves significant gains in accuracy (often by 10-20%) compared to backprop-based differentially private training on a variety of architectures.
arXiv Detail & Related papers (2020-10-08T00:25:22Z)
- Privately Learning Markov Random Fields [44.95321417724914]
We consider the problem of learning Markov Random Fields (including the Ising model) under the constraint of differential privacy.
We provide algorithms and lower bounds for both problems under a variety of privacy constraints.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)