Shuffled Differentially Private Federated Learning for Time Series Data Analytics
- URL: http://arxiv.org/abs/2307.16196v1
- Date: Sun, 30 Jul 2023 10:30:38 GMT
- Title: Shuffled Differentially Private Federated Learning for Time Series Data Analytics
- Authors: Chenxi Huang, Chaoyang Jiang, Zhenghua Chen
- Abstract summary: We develop a privacy-preserving federated learning algorithm for time series data.
Specifically, we employ local differential privacy to extend the privacy protection trust boundary to the clients.
We also incorporate shuffle techniques to achieve a privacy amplification, mitigating the accuracy decline caused by leveraging local differential privacy.
- Score: 10.198481976376717
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trustworthy federated learning aims to achieve optimal performance while
ensuring clients' privacy. Existing privacy-preserving federated learning
approaches are mostly tailored for image data, lacking applications for time
series data, which have many important applications, like machine health
monitoring, human activity recognition, etc. Furthermore, protective noising on
a time series data analytics model can significantly interfere with
temporal-dependent learning, leading to a greater decline in accuracy. To
address these issues, we develop a privacy-preserving federated learning
algorithm for time series data. Specifically, we employ local differential
privacy to extend the privacy protection trust boundary to the clients. We also
incorporate shuffle techniques to achieve a privacy amplification, mitigating
the accuracy decline caused by leveraging local differential privacy. Extensive
experiments were conducted on five time series datasets. The evaluation results
reveal that our algorithm experienced minimal accuracy loss compared to
non-private federated learning in both small and large client scenarios. Under
the same level of privacy protection, our algorithm demonstrated improved
accuracy compared to the centralized differentially private federated learning
in both scenarios.
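To make the pipeline the abstract describes concrete, below is a minimal sketch of one communication round of shuffled, locally differentially private federated averaging. The Gaussian noise mechanism, clipping, learning rate, and all function names are illustrative assumptions, not the authors' actual design.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): one round of
# federated averaging where each client perturbs its update under local
# DP and a shuffler anonymizes the reports before server aggregation.

def ldp_perturb(update, clip_norm, sigma, rng):
    """Clip the client's update and add Gaussian noise on-device, so the
    report already satisfies (local) differential privacy when it leaves
    the client's trust boundary."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def shuffle(reports, rng):
    """Trusted shuffler: a uniformly random permutation removes the link
    between each report and the client that produced it."""
    return [reports[i] for i in rng.permutation(len(reports))]

def server_step(global_model, reports, lr):
    """The server only sees anonymized, noised reports and averages them."""
    return global_model + lr * np.mean(reports, axis=0)

rng = np.random.default_rng(0)
global_model = np.zeros(8)
client_updates = [rng.normal(size=8) * 0.1 for _ in range(50)]  # toy updates

reports = [ldp_perturb(u, clip_norm=1.0, sigma=2.0, rng=rng) for u in client_updates]
global_model = server_step(global_model, shuffle(reports, rng), lr=1.0)
```

The shuffler is what buys back accuracy: in the shuffle-model literature, anonymizing n reports that each satisfy ε0-local DP typically yields a central guarantee roughly on the order of ε0·sqrt(log(1/δ)/n), so each client can add less noise for the same end-to-end budget than under plain local DP.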
Related papers
- Privacy Preserving Large Language Models: ChatGPT Case Study Based Vision and Framework [6.828884629694705]
This article proposes a conceptual model called PrivChatGPT, a privacy-generative model for LLMs.
PrivChatGPT consists of two main components: preserving user privacy and private context during data curation/pre-processing, and a private training process for large-scale data.
arXiv Detail & Related papers (2023-10-19T06:55:13Z)
- Locally Differentially Private Distributed Online Learning with Guaranteed Optimality [1.800614371653704]
This paper proposes an approach that ensures both differential privacy and learning accuracy in distributed online learning.
While ensuring a diminishing expected instantaneous regret, the approach can simultaneously ensure a finite cumulative privacy budget.
To the best of our knowledge, this is the first algorithm that successfully ensures both rigorous local differential privacy and learning accuracy.
arXiv Detail & Related papers (2023-06-25T02:05:34Z)
- Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms give tight privacy estimates only under implausible worst-case assumptions, such as adversarially crafted datasets.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical challenge of developing privacy-preserving machine learning algorithms that maintain good performance.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
We also find that groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
- Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- Federated $f$-Differential Privacy [19.499120576896228]
Federated learning (FL) is a training paradigm where the clients collaboratively learn models by repeatedly sharing information.
We introduce federated $f$-differential privacy, a new notion specifically tailored to the federated setting.
We then propose a generic private federated learning framework PriFedSync that accommodates a large family of state-of-the-art FL algorithms.
arXiv Detail & Related papers (2021-02-22T16:28:21Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Differentially private cross-silo federated learning [16.38610531397378]
Strict privacy is of paramount importance in distributed machine learning.
In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting; a toy sketch of this combination appears after this list.
We demonstrate that our proposed solutions give prediction accuracy that is comparable to the non-distributed setting.
arXiv Detail & Related papers (2020-07-10T18:15:10Z)
- SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy, and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z)
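For the cross-silo entry above, here is a toy sketch of what combining additively homomorphic secure summation with differential privacy can look like, using pairwise additive masking over a finite ring as a simple stand-in for that paper's actual protocol; the modulus, fixed-point scaling, and noise scale are assumptions for illustration.

```python
import numpy as np

# Toy stand-in (not the paper's protocol): secure summation via pairwise
# additive masking, with per-client Gaussian noise so that only a
# differentially private sum is ever revealed to the server.

MOD = 2**32     # finite ring for masked arithmetic
SCALE = 10**6   # fixed-point scaling for real-valued updates

def encode(x):
    """Fixed-point encode a float vector into the ring Z_MOD."""
    return np.mod(np.round(x * SCALE).astype(np.int64), MOD)

def decode(x):
    """Decode a ring element back to floats (values assumed small)."""
    centered = np.where(x > MOD // 2, x - MOD, x)
    return centered.astype(np.float64) / SCALE

def masked_reports(updates, sigma, rng):
    n, dim = len(updates), updates[0].shape[0]
    # Client i adds mask m[(i, j)] and client j subtracts it, so every
    # mask cancels in the sum and the server learns nothing but the total.
    masks = {(i, j): rng.integers(0, MOD, size=dim, dtype=np.int64)
             for i in range(n) for j in range(i + 1, n)}
    reports = []
    for i in range(n):
        noisy = updates[i] + rng.normal(0.0, sigma, size=dim)  # DP noise share
        r = encode(noisy)
        for j in range(i + 1, n):
            r = np.mod(r + masks[(i, j)], MOD)
        for j in range(i):
            r = np.mod(r - masks[(j, i)], MOD)
        reports.append(r)
    return reports

rng = np.random.default_rng(0)
updates = [rng.normal(size=4) * 0.1 for _ in range(5)]
reports = masked_reports(updates, sigma=0.01, rng=rng)
noisy_sum = decode(np.mod(np.sum(np.stack(reports), axis=0), MOD))
print(noisy_sum)                 # close to the true sum, but DP-noised
print(np.sum(updates, axis=0))   # non-private reference
```

Because additive masking is homomorphic for addition, the server can compute the noised sum without ever seeing an individual silo's update, which is the property such cross-silo schemes build on.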
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.