PrivacyFL: A simulator for privacy-preserving and secure federated
learning
- URL: http://arxiv.org/abs/2002.08423v2
- Date: Wed, 8 Jul 2020 03:30:36 GMT
- Title: PrivacyFL: A simulator for privacy-preserving and secure federated
learning
- Authors: Vaikkunth Mugunthan, Anton Peraire-Bueno and Lalana Kagal
- Abstract summary: Federated learning is a technique that enables distributed clients to collaboratively learn a shared machine learning model.
PrivacyFL is a privacy-preserving and secure federated learning simulator.
- Score: 2.578242050187029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a technique that enables distributed clients to
collaboratively learn a shared machine learning model while keeping their
training data localized. This reduces data privacy risks; however, privacy
concerns remain, since information about the training data can still be
leaked from the trained model's weights or parameters. Setting up a
federated learning environment, especially with security and privacy
guarantees, is a time-consuming process with numerous configurations and
parameters that can be manipulated. To help clients determine whether
collaboration is feasible and whether it improves their model accuracy, a
real-world simulator for privacy-preserving and secure federated learning is
required. In this paper, we introduce PrivacyFL, which is an extensible, easily
configurable and scalable simulator for federated learning environments. Its
key features include latency simulation, robustness to client departure,
support for both centralized and decentralized learning, and configurable
privacy and security mechanisms based on differential privacy and secure
multiparty computation. In this paper, we motivate our research, describe the
architecture of the simulator and associated protocols, and discuss its
evaluation in numerous scenarios that highlight its wide range of functionality
and its advantages. Our paper addresses a significant real-world problem:
checking the feasibility of participating in a federated learning environment
under a variety of circumstances. It also has a strong practical impact because
organizations such as hospitals, banks, and research institutes, which have
large amounts of sensitive data and would like to collaborate, would greatly
benefit from having a system that enables them to do so in a privacy-preserving
and secure manner.
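To make the mechanisms listed above concrete, the following is a minimal sketch, assuming a hypothetical setup rather than PrivacyFL's actual API, of one simulated federated round that combines the two techniques the abstract names: per-client Gaussian noise for differential privacy and pairwise additive masks for secure aggregation.

```python
# Minimal sketch of one federated round with differential privacy and
# secure aggregation via pairwise masks. Hypothetical illustration only;
# not PrivacyFL's actual API.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 3, 4
updates = [rng.normal(size=dim) for _ in range(n_clients)]  # local model updates

sigma = 0.1  # DP noise scale; in practice derived from an (epsilon, delta) budget

# One shared mask per client pair (i, j), i < j. Client i adds it and
# client j subtracts it, so every mask cancels in the server's sum.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    u = updates[i] + rng.normal(scale=sigma, size=dim)  # add DP noise
    for j in range(n_clients):
        if i < j:
            u = u + masks[(i, j)]
        elif j < i:
            u = u - masks[(j, i)]
    return u

# The server sees only masked updates; their mean equals the mean of the
# noisy updates because the pairwise masks cancel.
aggregate = np.mean([masked_update(i) for i in range(n_clients)], axis=0)
print(aggregate)
```

Because each mask appears exactly once with each sign, individual contributions stay hidden while the aggregate is recovered up to the DP noise; this is the style of trade-off the simulator is designed to let clients explore before committing to a real deployment.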
Related papers
- Privacy in Federated Learning [0.0]
Federated Learning (FL) represents a significant advancement in distributed machine learning.
This chapter delves into the core privacy concerns within FL, including the risks of data reconstruction, model inversion attacks, and membership inference.
It examines the trade-offs between model accuracy and privacy, emphasizing the importance of balancing these factors in practical implementations.
arXiv Detail & Related papers (2024-08-12T18:41:58Z)
- On Joint Noise Scaling in Differentially Private Federated Learning with Multiple Local Steps [0.5439020425818999]
Federated learning is a distributed learning setting where the main aim is to train machine learning models without having to share raw data.
We show how a simple new analysis allows the parties to perform multiple local optimisation steps while still benefiting from secure aggregation (a noise-scaling sketch illustrating this idea appears after this list).
arXiv Detail & Related papers (2024-07-27T15:54:58Z)
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
However, FedIT encounters limitations such as the scarcity of instruction data and the risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy (a model-interpolation sketch appears after this list).
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Reliability Check via Weight Similarity in Privacy-Preserving Multi-Party Machine Learning [7.552100672006174]
We focus on addressing the concerns of data privacy, model privacy, and data quality associated with multi-party machine learning.
We present a scheme for privacy-preserving collaborative learning that checks the participants' data quality while guaranteeing data and model privacy (a similarity-check sketch appears after this list).
arXiv Detail & Related papers (2021-01-14T08:55:42Z)
- Differentially Private Secure Multi-Party Computation for Federated Learning in Financial Applications [5.50791468454604]
Federated learning enables a population of clients, working with a trusted server, to collaboratively learn a shared machine learning model.
This reduces the risk of exposing sensitive data, but it is still possible to reverse engineer information about a client's private data set from communicated model parameters.
We present a privacy-preserving federated learning protocol to a non-specialist audience, demonstrate it using logistic regression on a real-world credit card fraud data set, and evaluate it using an open-source simulation platform.
arXiv Detail & Related papers (2020-10-12T17:16:27Z)
- Secure Byzantine-Robust Machine Learning [61.03711813598128]
We propose a secure two-server protocol that offers both input privacy and Byzantine-robustness.
In addition, this protocol is communication-efficient, fault-tolerant and enjoys local differential privacy.
arXiv Detail & Related papers (2020-06-08T16:55:15Z)
- Concentrated Differentially Private and Utility Preserving Federated Learning [24.239992194656164]
Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server.
In this paper, we develop a federated learning approach that addresses the privacy challenge without much degradation in model utility.
We provide a tight end-to-end privacy guarantee of our approach and analyze its theoretical convergence rates.
arXiv Detail & Related papers (2020-03-30T19:20:42Z)
- A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service [68.84245063902908]
This paper introduces a novel distributed architecture for deep-learning-as-a-service.
It preserves users' sensitive data while providing cloud-based machine learning and deep learning services.
arXiv Detail & Related papers (2020-03-30T15:12:03Z)
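As referenced above, here is a hedged sketch of the joint noise scaling idea from "On Joint Noise Scaling in Differentially Private Federated Learning with Multiple Local Steps": when secure aggregation reveals only the sum of updates, n parties can each add Gaussian noise of scale sigma/sqrt(n) and the aggregate still carries total noise of scale sigma. The constants are illustrative, and the paper's actual analysis additionally handles multiple local optimisation steps.

```python
# Joint noise scaling under secure aggregation; illustrative constants only.
import numpy as np

rng = np.random.default_rng(1)
n, dim, sigma = 10, 5, 1.0
local_updates = [rng.normal(size=dim) for _ in range(n)]

# Each party adds scaled-down noise; the securely aggregated sum then
# carries noise of scale sigma overall (variance n * sigma^2 / n = sigma^2).
noisy = [u + rng.normal(scale=sigma / np.sqrt(n), size=dim) for u in local_updates]
aggregate = np.sum(noisy, axis=0)
print(aggregate)
```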
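The personalization entry ("Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization") can be read as interpolating between a locally trained model and a privately trained global model; the sketch below uses a hypothetical interpolation weight and is not the paper's exact method.

```python
# Interpolation between local and global models; alpha is hypothetical.
import numpy as np

global_model = np.array([0.2, -0.1, 0.4])  # e.g. trained centrally with DP
local_model = np.array([0.5, 0.0, 0.3])    # trained only on this client's data

alpha = 0.3  # 0 = fully global (more private), 1 = fully local
personalized = alpha * local_model + (1 - alpha) * global_model
print(personalized)
```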
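Finally, the reliability-check entry ("Reliability Check via Weight Similarity in Privacy-Preserving Multi-Party Machine Learning") rests on comparing client updates against a consensus direction. The sketch below shows the plain, non-private core of that idea, whereas the cited scheme performs the check under data and model privacy guarantees; the threshold is illustrative.

```python
# Non-private core of a weight-similarity reliability check.
import numpy as np

rng = np.random.default_rng(2)
dim = 8
honest = [rng.normal(loc=1.0, scale=0.1, size=dim) for _ in range(4)]
outlier = -5.0 * np.ones(dim)  # a low-quality or adversarial update
updates = honest + [outlier]

# Coordinate-wise median as a robust reference direction.
reference = np.median(updates, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(u, reference) for u in updates]
flagged = [i for i, s in enumerate(scores) if s < 0.5]  # illustrative threshold
print(scores, flagged)  # the outlier (index 4) is flagged
```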