Secure Byzantine-Robust Machine Learning
- URL: http://arxiv.org/abs/2006.04747v2
- Date: Sun, 18 Oct 2020 22:37:16 GMT
- Title: Secure Byzantine-Robust Machine Learning
- Authors: Lie He and Sai Praneeth Karimireddy and Martin Jaggi
- Abstract summary: We propose a secure two-server protocol that offers both input privacy and Byzantine-robustness.
In addition, this protocol is communication-efficient, fault-tolerant and enjoys local differential privacy.
- Score: 61.03711813598128
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Increasingly machine learning systems are being deployed to edge servers and
devices (e.g. mobile phones) and trained in a collaborative manner. Such
distributed/federated/decentralized training raises a number of concerns about
the robustness, privacy, and security of the procedure. While extensive work
has been done in tackling robustness, privacy, or security individually,
their combination has rarely been studied. In this paper, we propose a secure
two-server protocol that offers both input privacy and Byzantine-robustness. In
addition, this protocol is communication-efficient, fault-tolerant and enjoys
local differential privacy.
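The input-privacy half of such a two-server protocol typically rests on additive secret sharing: each client splits its update into two uniformly random shares, one per server, so neither non-colluding server alone learns anything about the update, yet the servers can still compute the aggregate. A minimal sketch follows; the modulus, integer encoding, and function names are illustrative assumptions, not the paper's exact construction:

```python
import secrets

P = 2**61 - 1  # prime modulus for the finite field (illustrative choice)

def share(x):
    """Split integer x into two additive shares mod P."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    """Recombine two additive shares."""
    return (s0 + s1) % P

# Each client sends one share to each server; a server only
# ever sees a uniformly random value, so a single server
# learns nothing about the client's update.
client_updates = [5, 17, 42]
server0, server1 = [], []
for x in client_updates:
    s0, s1 = share(x)
    server0.append(s0)
    server1.append(s1)

# Each server aggregates its shares locally; only the combined
# aggregates are reconstructed, never individual updates.
agg0 = sum(server0) % P
agg1 = sum(server1) % P
total = reconstruct(agg0, agg1)  # equals sum(client_updates) = 64
```

Byzantine-robust aggregation rules (e.g. distance-based filtering) can then be evaluated jointly by the two servers on the shared data, which is where the combination of robustness and privacy becomes non-trivial.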
Related papers
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- Uncovering Attacks and Defenses in Secure Aggregation for Federated Deep Learning [17.45950557331482]
Federated learning enables the collaborative learning of a global model on diverse data, preserving data locality and eliminating the need to transfer user data to a central server.
Secure aggregation protocols are designed to mask/encrypt user updates and enable a central server to aggregate the masked information.
MicroSecAgg (PoPETS 2024) proposes a single server secure aggregation protocol that aims to mitigate the high communication complexity of the existing approaches.
arXiv Detail & Related papers (2024-10-13T00:06:03Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning [63.45532264721498]
Self-supervised learning is an emerging technique to pre-train encoders using unlabeled data.
We perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms.
arXiv Detail & Related papers (2022-12-06T21:35:35Z)
- Secure Aggregation Is Not All You Need: Mitigating Privacy Attacks with Noise Tolerance in Federated Learning [0.0]
Federated learning aims to preserve data privacy while creating AI models.
Current approaches rely heavily on secure aggregation protocols to preserve data privacy.
We investigate vulnerabilities to secure aggregation that could arise if the server is fully malicious.
arXiv Detail & Related papers (2022-11-10T05:13:08Z)
- Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy [4.951247283741297]
Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model.
We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion.
We conclude with empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets.
arXiv Detail & Related papers (2022-02-20T19:52:53Z)
- Secure Distributed Training at Scale [65.7538150168154]
Training in presence of peers requires specialized distributed training algorithms with Byzantine tolerance.
We propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency.
arXiv Detail & Related papers (2021-06-21T17:00:42Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Efficient Sparse Secure Aggregation for Federated Learning [0.20052993723676896]
We adapt compression-based federated techniques to additive secret sharing, leading to an efficient secure aggregation protocol.
We prove its privacy against malicious adversaries and its correctness in the semi-honest setting.
Compared to prior work on secure aggregation, our protocol achieves lower and adaptable communication costs at similar accuracy.
arXiv Detail & Related papers (2020-07-29T14:28:30Z)
- PrivacyFL: A simulator for privacy-preserving and secure federated learning [2.578242050187029]
Federated learning is a technique that enables distributed clients to collaboratively learn a shared machine learning model.
PrivacyFL is a privacy-preserving and secure federated learning simulator.
arXiv Detail & Related papers (2020-02-19T20:16:13Z)
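Several of the related papers above concern secure aggregation via masking. As a rough illustration of why a central server can aggregate masked updates without seeing any individual one, here is a two-client pairwise-masking sketch; the client identifiers, modulus, and structure are hypothetical, not any specific protocol's design:

```python
import random

P = 2**31 - 1  # illustrative modulus

# Hypothetical updates from two clients.
updates = {"client_a": 10, "client_b": 32}

# The pair agrees on a shared random mask (in practice derived
# from a key exchange); one adds it, the other subtracts it,
# so the masks cancel exactly in the sum.
rng = random.Random(0)
mask = rng.randrange(P)

masked = {
    "client_a": (updates["client_a"] + mask) % P,
    "client_b": (updates["client_b"] - mask) % P,
}

# The server sees only uniformly masked values, yet their sum
# reveals the aggregate of the true updates:
aggregate = (masked["client_a"] + masked["client_b"]) % P  # == 42
```

Real protocols extend this to many clients with pairwise masks per client pair, plus recovery machinery for dropouts, which is where much of the communication complexity discussed in the entries above arises.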
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.