Secure Sum Outperforms Homomorphic Encryption in (Current) Collaborative
Deep Learning
- URL: http://arxiv.org/abs/2006.02894v2
- Date: Mon, 25 Oct 2021 21:52:05 GMT
- Title: Secure Sum Outperforms Homomorphic Encryption in (Current) Collaborative
Deep Learning
- Authors: Derian Boer and Stefan Kramer
- Abstract summary: We discuss methods for training neural networks on the joint data of different data owners that keep each party's input confidential.
We show that a less complex and computationally less expensive secure sum protocol exhibits superior properties in terms of both collusion-resistance and runtime.
- Score: 7.690774882108066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) approaches are achieving extraordinary results in a wide
range of domains, but often require a massive collection of private data.
Hence, methods for training neural networks on the joint data of different data
owners that keep each party's input confidential are called for. We address a
specific setting in federated learning, namely that of deep learning from
horizontally distributed data with a limited number of parties, where their
vulnerable intermediate results have to be processed in a privacy-preserving
manner. This setting can be found in medical and healthcare as well as
industrial applications. The predominant scheme for this is based on
homomorphic encryption (HE), and it is widely considered to be without
alternative. In contrast to this, we demonstrate that a carefully chosen, less
complex and computationally less expensive secure sum protocol in conjunction
with default secure channels exhibits superior properties in terms of both
collusion-resistance and runtime. Finally, we discuss several open research
questions in the context of collaborative DL, especially regarding privacy
risks caused by joint intermediate results.
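The secure sum idea at the heart of the abstract can be illustrated with a pairwise additive-masking protocol. The following is a minimal sketch of one common construction, not necessarily the exact protocol chosen in the paper; all function names are illustrative:

```python
import secrets

MOD = 2**62  # working modulus; all arithmetic is done mod MOD


def make_masks(n_parties):
    """For every ordered pair (i, j) with i < j, the two parties agree on a
    random mask r_ij. Party i adds it and party j subtracts it, so all masks
    cancel in the final sum while each published share looks random."""
    masks = {}
    for i in range(n_parties):
        for j in range(i + 1, n_parties):
            masks[(i, j)] = secrets.randbelow(MOD)
    return masks


def masked_share(party, value, masks, n_parties):
    """Each party publishes its private value blinded by its pairwise masks."""
    share = value % MOD
    for j in range(n_parties):
        if party < j:
            share = (share + masks[(party, j)]) % MOD
        elif j < party:
            share = (share - masks[(j, party)]) % MOD
    return share


def secure_sum(values):
    """Aggregate the parties' values: only the blinded shares are exchanged,
    and their sum mod MOD equals the sum of the plaintext inputs."""
    n = len(values)
    masks = make_masks(n)
    shares = [masked_share(i, v, masks, n) for i, v in enumerate(values)]
    return sum(shares) % MOD
```

In a real deployment each mask would be exchanged over a pairwise secure channel rather than held in one dictionary; the sketch only shows why the masks cancel and why the aggregator learns nothing beyond the total.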
Related papers
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Privacy-preserving Deep Learning based Record Linkage [14.755422488889824]
We propose the first deep learning-based multi-party privacy-preserving record linkage protocol.
In our approach, each database owner first trains a local deep learning model, which is then uploaded to a secure environment.
The global model is then used by a linkage unit to distinguish unlabelled record pairs as matches and non-matches.
arXiv Detail & Related papers (2022-11-03T22:10:12Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Homomorphic Encryption and Federated Learning based Privacy-Preserving CNN Training: COVID-19 Detection Use-Case [0.41998444721319217]
This paper proposes a privacy-preserving federated learning algorithm for medical data using homomorphic encryption.
The proposed algorithm uses a secure multi-party computation protocol to protect the deep learning model from the adversaries.
arXiv Detail & Related papers (2022-04-16T08:38:35Z)
- Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in presence of non-deterministic encryption, but performance collapses in more complex environments.
arXiv Detail & Related papers (2021-09-16T21:59:37Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
- Privacy-preserving Decentralized Aggregation for Federated Learning [3.9323226496740733]
Federated learning is a promising framework for learning over decentralized data spanning multiple regions.
We develop a privacy-preserving decentralized aggregation protocol for federated learning.
We evaluate our algorithm on image classification and next-word prediction applications over benchmark datasets with 9 and 15 distributed sites.
arXiv Detail & Related papers (2020-12-13T23:45:42Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Differentially private cross-silo federated learning [16.38610531397378]
Strict privacy is of paramount importance in distributed machine learning.
In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.
We demonstrate that our proposed solutions give prediction accuracy that is comparable to the non-distributed setting.
arXiv Detail & Related papers (2020-07-10T18:15:10Z)
- SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z)
- GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators [74.16405337436213]
We propose Gradient-sanitized Wasserstein Generative Adversarial Networks (GS-WGAN).
GS-WGAN allows releasing a sanitized form of sensitive data with rigorous privacy guarantees.
We find our approach consistently outperforms state-of-the-art approaches across multiple metrics.
arXiv Detail & Related papers (2020-06-15T10:01:01Z)
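Several of the entries above combine secure summation with differential privacy: each silo perturbs its local update before aggregation, so the server only ever sees a noisy total. A minimal sketch of that combination, with a plain sum standing in for the secure sum protocol and with the noise scale `sigma` left uncalibrated (calibrating it to sensitivity and a privacy budget is omitted here); it is illustrative, not any listed paper's exact algorithm:

```python
import random


def dp_local_update(update, sigma, rng=random):
    """Each silo perturbs its local model update with Gaussian noise of
    scale sigma before the update enters the secure summation."""
    return [u + rng.gauss(0.0, sigma) for u in update]


def aggregate(updates, sigma):
    """Noise locally, then sum across silos. The plain sum() here stands in
    for a secure sum protocol: no individual update is revealed in clear."""
    noised = [dp_local_update(u, sigma) for u in updates]
    dim = len(updates[0])
    return [sum(n[d] for n in noised) for d in range(dim)]
```

Because the per-silo noise terms add up in the total, distributed DP schemes of this kind let each silo contribute only a fraction of the noise that a single trusted aggregator would have to add.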
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.