RoFL: Attestable Robustness for Secure Federated Learning
- URL: http://arxiv.org/abs/2107.03311v1
- Date: Wed, 7 Jul 2021 15:42:49 GMT
- Title: RoFL: Attestable Robustness for Secure Federated Learning
- Authors: Lukas Burkhalter, Hidde Lycklama à Nijeholt, Alexander Viand, Nicolas Küchler, Anwar Hithnawi
- Abstract summary: Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning is an emerging decentralized machine learning paradigm
that allows a large number of clients to train a joint model without the need
to share their private data. Participants instead only share ephemeral updates
necessary to train the model. To ensure the confidentiality of the client
updates, Federated Learning systems employ secure aggregation; clients encrypt
their gradient updates, and only the aggregated model is revealed to the
server. Achieving this level of data protection, however, presents new
challenges to the robustness of Federated Learning, i.e., the ability to
tolerate failures and attacks. Unfortunately, in this setting, a malicious
client can now easily exert influence on the model behavior without being
detected. As Federated Learning is being deployed in practice in a range of
sensitive applications, its robustness is growing in importance. In this paper,
we take a step towards understanding and improving the robustness of secure
Federated Learning. We begin with a systematic study that evaluates and
analyzes existing attack vectors, discusses potential defenses, and assesses
their effectiveness. We then present RoFL, a secure Federated Learning
system that improves robustness against malicious clients through input checks
on the encrypted model updates. RoFL extends Federated Learning's secure
aggregation protocol to allow expressing a variety of properties and
constraints on model updates using zero-knowledge proofs. To enable RoFL to
scale to typical Federated Learning settings, we introduce several ML and
cryptographic optimizations specific to Federated Learning. We implement and
evaluate a prototype of RoFL and show that realistic ML models can be trained
in a reasonable time while improving robustness.
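To make the secure-aggregation step above concrete, here is a minimal Python sketch of additive pairwise masking in the spirit of Bonawitz et al.'s protocol: each pair of clients shares a mask that one adds and the other subtracts, so the server sees only random-looking vectors whose sum is the true aggregate. All names are illustrative, and the central mask sampling is a toy shortcut, not RoFL's actual implementation.

```python
import numpy as np

def make_pairwise_masks(n_clients, dim, rng):
    # One shared mask per client pair (i, j), i < j. In a real protocol
    # these are derived from pairwise key agreement, not sampled centrally.
    return {(i, j): rng.normal(size=dim)
            for i in range(n_clients) for j in range(i + 1, n_clients)}

def mask_update(i, update, masks, n_clients):
    # Client i adds +mask for partners j > i and -mask for partners j < i,
    # so every mask cancels once all masked updates are summed.
    masked = update.copy()
    for j in range(n_clients):
        if i < j:
            masked += masks[(i, j)]
        elif j < i:
            masked -= masks[(j, i)]
    return masked

rng = np.random.default_rng(0)
n_clients, dim = 3, 5
updates = [rng.normal(size=dim) for _ in range(n_clients)]
masks = make_pairwise_masks(n_clients, dim, rng)
masked = [mask_update(i, updates[i], masks, n_clients) for i in range(n_clients)]

# Each masked update looks random on its own, but the sum is exact.
assert np.allclose(np.sum(masked, axis=0), np.sum(updates, axis=0))
```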
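RoFL's input checks constrain each update before it enters the aggregate, for example by bounding its norm. The sketch below evaluates, in plaintext Python, the kind of norm-bound predicate that a zero-knowledge proof would attest over a commitment to the update; a real deployment verifies a proof rather than the plaintext, and the bound values here are illustrative assumptions.

```python
import numpy as np

L2_BOUND = 10.0    # illustrative; chosen per model and task in practice
LINF_BOUND = 0.5   # illustrative

def update_within_bounds(update: np.ndarray) -> bool:
    # The predicate a client would prove in zero knowledge over a
    # commitment to `update`; the server verifies the proof without
    # ever seeing the plaintext values. Here we just evaluate it directly.
    return (np.linalg.norm(update) <= L2_BOUND
            and np.max(np.abs(update)) <= LINF_BOUND)

rng = np.random.default_rng(1)
honest = rng.normal(scale=0.01, size=1000)  # small, well-behaved update
boosted = 100.0 * honest                    # scaled "model replacement" update

assert update_within_bounds(honest)
assert not update_within_bounds(boosted)    # rejected before aggregation
```

Enforcing such bounds on every encrypted update is what removes the easy amplification lever a malicious client otherwise has under secure aggregation.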
Related papers
- Formal Logic-guided Robust Federated Learning against Poisoning Attacks (arXiv 2024-11-05)
Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML).
However, FL is vulnerable to various security threats, including poisoning attacks, in which adversarial clients manipulate the training data or model updates to degrade overall model performance.
We present a defense mechanism designed to mitigate poisoning attacks in federated learning for time-series tasks.
- Enhancing Security Using Random Binary Weights in Privacy-Preserving Federated Learning (arXiv 2024-09-30)
We propose a novel method for enhancing security in privacy-preserving federated learning using the Vision Transformer.
In federated learning, the server collects model updates from each client rather than their raw data.
The effectiveness of the proposed method is confirmed in terms of model performance and resistance to the APRIL (Attention PRIvacy Leakage) restoration attack.
- Reinforcement Learning as a Catalyst for Robust and Fair Federated Learning: Deciphering the Dynamics of Client Contributions (arXiv 2024-02-08)
Reinforcement Federated Learning (RFL) is a novel framework that leverages deep reinforcement learning to adaptively optimize client contributions during aggregation.
In terms of robustness, RFL outperforms state-of-the-art methods while maintaining comparable levels of fairness.
- Blockchain-enabled Trustworthy Federated Unlearning (arXiv 2024-01-29)
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
- Client-side Gradient Inversion Against Federated Learning from Poisoning (arXiv 2023-09-14)
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attacks (GIA), which aim to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients (a generic gradient-matching sketch appears after this list).
- Backdoor Attacks in Peer-to-Peer Federated Learning (arXiv 2023-01-23)
Peer-to-Peer Federated Learning (P2PFL) offers advantages in terms of both privacy and reliability.
We propose new backdoor attacks for P2PFL that leverage structural graph properties to select the malicious nodes and achieve a high attack success rate.
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning (arXiv 2022-10-23)
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves improvements with guaranteed robustness.
Our results against eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (arXiv 2021-06-15)
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model's smoothness, which yields a sample-wise robustness certification against backdoors of limited magnitude (a minimal clipping-and-smoothing sketch appears after this list).
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design (arXiv 2021-05-10)
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregation server by unreliable clients, leading to degradation or even collapse of training.
We model these unreliable client behaviors and propose a defensive mechanism to mitigate such security risks.
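As referenced in the CGI entry above: generic gradient inversion works by gradient matching, where the attacker optimizes dummy inputs until their gradients on the shared model match the gradients observed from a victim (in the style of Deep Leakage from Gradients). The PyTorch sketch below shows this on a toy linear model; it is not the paper's client-side CGI variant, and the model, shapes, and iteration count are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(4, 2)      # toy stand-in for the shared FL model
criterion = torch.nn.CrossEntropyLoss()

# Gradients observed from a victim client's private batch.
x_true = torch.randn(1, 4)
y_true = torch.tensor([1])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                 model.parameters())

# The attacker optimizes dummy data until its gradients match.
x_dummy = torch.randn(1, 4, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        criterion(model(x_dummy), y_dummy.softmax(dim=-1)),
        model.parameters(), create_graph=True)
    loss = sum(((dg - tg) ** 2).sum()
               for dg, tg in zip(dummy_grads, true_grads))
    loss.backward()
    return loss

for _ in range(50):
    opt.step(closure)
# x_dummy now approximates the private x_true, recovered from gradients alone.
```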
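And as referenced in the CRFL entry: the clipping-and-smoothing mechanism can be sketched as clipping the aggregated parameters to an L2 ball and then perturbing them with Gaussian noise, so a bounded-magnitude backdoor cannot move the smoothed model far. The threshold and noise scale below are illustrative, not the paper's certified values.

```python
import numpy as np

def clip_and_perturb(global_params, clip_norm=15.0, noise_std=0.01, rng=None):
    # Clip the aggregated parameter vector to an L2 ball, bounding how far
    # any (possibly backdoored) aggregate can move the model, then smooth
    # it with Gaussian noise; certification analyzes this smoothed model.
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(global_params)
    if norm > clip_norm:
        global_params = global_params * (clip_norm / norm)
    return global_params + rng.normal(scale=noise_std,
                                      size=global_params.shape)

# Example: apply to the global parameters after each aggregation round.
params = np.random.default_rng(2).normal(size=1000)
smoothed = clip_and_perturb(params)
```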