Sentinel: An Aggregation Function to Secure Decentralized Federated
Learning
- URL: http://arxiv.org/abs/2310.08097v2
- Date: Sat, 14 Oct 2023 07:27:01 GMT
- Title: Sentinel: An Aggregation Function to Secure Decentralized Federated
Learning
- Authors: Chao Feng, Alberto Huertas Celdran, Janosch Baltensperger, Enrique
Tomas Martinez Beltran, Gerome Bovet, Burkhard Stiller
- Abstract summary: This work introduces Sentinel, a defense strategy to counteract poisoning attacks in Decentralized Federated Learning (DFL).
Sentinel has been evaluated with diverse datasets and various poisoning attack types and threat levels, improving the state-of-the-art performance against both untargeted and targeted poisoning attacks.
- Score: 7.228253116465784
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rapid integration of Federated Learning (FL) into networking encompasses
various aspects such as network management, quality of service, and
cybersecurity while preserving data privacy. In this context, Decentralized
Federated Learning (DFL) emerges as an innovative paradigm to train
collaborative models, addressing the single point of failure limitation.
However, the security and trustworthiness of FL and DFL are compromised by
poisoning attacks, which degrade their performance. Existing defense
mechanisms have been designed for centralized FL and they do not adequately
exploit the particularities of DFL. Thus, this work introduces Sentinel, a
defense strategy to counteract poisoning attacks in DFL. Sentinel leverages the
accessibility of local data and defines a three-step aggregation protocol
consisting of similarity filtering, bootstrap validation, and normalization to
safeguard against malicious model updates. Sentinel has been evaluated with
diverse datasets and various poisoning attack types and threat levels,
improving the state-of-the-art performance against both untargeted and targeted
poisoning attacks.
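The abstract names Sentinel's three aggregation steps: similarity filtering, bootstrap validation on local data, and normalization. A minimal sketch of such a protocol is shown below; the cosine-similarity threshold, loss-tolerance factor, norm-clipping rule, and all function names are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def sentinel_aggregate(local_model, neighbor_models, val_loss_fn,
                       sim_threshold=0.5, loss_tolerance=1.5):
    """Hypothetical sketch of a Sentinel-style three-step aggregation.

    local_model     -- this node's own parameter vector (np.ndarray)
    neighbor_models -- list of parameter vectors received from peers
    val_loss_fn     -- callable evaluating a model on local validation data
    The thresholds and the exact weighting are assumptions for illustration.
    """
    # Step 1: similarity filtering -- discard updates whose cosine
    # similarity to the local model falls below a threshold.
    def cos_sim(a, b):
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    candidates = [m for m in neighbor_models
                  if cos_sim(local_model, m) >= sim_threshold]

    # Step 2: bootstrap validation -- evaluate each surviving update on
    # locally available data; reject updates whose loss is much worse
    # than the local model's own loss.
    local_loss = val_loss_fn(local_model)
    validated = [m for m in candidates
                 if val_loss_fn(m) <= loss_tolerance * local_loss]

    # Step 3: normalization -- clip each accepted update to the local
    # model's norm so no single peer dominates, then average.
    ref_norm = np.linalg.norm(local_model)
    normalized = [m * min(1.0, ref_norm / (np.linalg.norm(m) + 1e-12))
                  for m in validated]

    # Fall back to the local model if every neighbor was rejected.
    if not normalized:
        return local_model
    return np.mean([local_model] + normalized, axis=0)
```

The fallback in the last step reflects a natural DFL design choice: a node can always keep training on its own model when all received updates look malicious.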
Related papers
- Byzantine-Robust Decentralized Federated Learning [30.33876141358171]
Federated learning (FL) enables multiple clients to collaboratively train machine learning models without revealing their private data.
A decentralized federated learning (DFL) architecture has been proposed to allow clients to train models collaboratively in a serverless, peer-to-peer manner.
DFL is highly vulnerable to poisoning attacks, where malicious clients could manipulate the system by sending carefully-crafted local models to their neighboring clients.
We propose a new algorithm called BALANCE (Byzantine-robust averaging through local similarity in decentralization) to defend against poisoning attacks in DFL.
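The core of BALANCE, as summarized above, is that a node accepts a neighbor's model only if it is sufficiently close to the node's own local model. A sketch of one plausible acceptance test follows; the parameter `gamma` is an assumption, and the paper's acceptance radius also decays over training rounds, which is omitted here.

```python
import numpy as np

def balance_accept(local_model, neighbor_model, gamma=0.5):
    """Hypothetical BALANCE-style acceptance test: keep a neighbor's
    model only if it lies within a gamma-scaled ball around the node's
    own local model (the time-decaying radius from the paper is omitted).
    """
    dist = np.linalg.norm(neighbor_model - local_model)
    return dist <= gamma * np.linalg.norm(local_model)
```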
arXiv Detail & Related papers (2024-06-14T21:28:37Z)
- Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
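The idea of comparing model updates in the frequency domain can be sketched as below. This is a loose, assumption-laden stand-in: FreqFed itself uses a DCT of the weights and automated clustering, whereas this sketch substitutes a real FFT fingerprint and a simple median-distance filter; the parameter `k` and the cutoff rule are invented for illustration.

```python
import numpy as np

def freqfed_aggregate(updates, k=8):
    """Illustrative frequency-domain filtering inspired by FreqFed:
    fingerprint each update by its low-frequency magnitudes, keep the
    updates whose fingerprints agree with the majority, and average
    the survivors in parameter space.
    """
    # Low-frequency fingerprint of each (flattened, equal-length) update.
    fingerprints = np.array([np.abs(np.fft.rfft(u))[:k] for u in updates])

    # Keep updates whose fingerprint is close to the element-wise median.
    median_fp = np.median(fingerprints, axis=0)
    dists = np.linalg.norm(fingerprints - median_fp, axis=1)
    cutoff = np.median(dists) + 1e-9
    kept = [u for u, d in zip(updates, dists) if d <= cutoff]

    # Average the surviving updates in parameter space.
    return np.mean(kept, axis=0)
```

The intuition, per the abstract, is that poisoned updates leave a distinctive footprint in the frequency spectrum that a majority-based filter can separate from benign ones.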
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks [12.580891810557482]
Federated learning (FL) is attractive because it enables training on distributed data while preserving privacy.
We propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of locally purified model.
We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against various poisoning attacks.
arXiv Detail & Related papers (2023-09-19T13:31:33Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
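The observation above — benign models share top-k and bottom-k critical parameters, poisoned ones do not — suggests a simple similarity score, sketched below. This keeps only the set-overlap idea; FedCPA's actual score is richer (it also weighs rank agreement), and the choice of `k` and the Jaccard-style averaging are assumptions.

```python
import numpy as np

def critical_overlap(update_a, update_b, k=10):
    """Hypothetical FedCPA-style similarity: fraction of shared indices
    among the top-k (largest) and bottom-k (smallest) parameters of two
    updates, averaged. Benign pairs should score high; a poisoned update
    paired with a benign one should score low.
    """
    def topk_idx(u):
        return set(np.argsort(u)[-k:])   # indices of the k largest values

    def botk_idx(u):
        return set(np.argsort(u)[:k])    # indices of the k smallest values

    top = len(topk_idx(update_a) & topk_idx(update_b)) / k
    bot = len(botk_idx(update_a) & botk_idx(update_b)) / k
    return (top + bot) / 2.0
```

An aggregator could then down-weight or drop any model whose average overlap with its peers falls below a threshold.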
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
- Mitigating Communications Threats in Decentralized Federated Learning through Moving Target Defense [0.0]
Decentralized Federated Learning (DFL) has enabled the training of machine learning models across federated participants.
This paper introduces a security module to counter communication-based attacks for DFL platforms.
The effectiveness of the security module is validated through experiments with the MNIST dataset and eclipse attacks.
arXiv Detail & Related papers (2023-07-21T17:43:50Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
We propose MESAS, the first defense robust against strong adaptive adversaries; it is effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method achieves improved performance with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z)
- Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation [7.979659145328856]
Federated Learning (FL) is a paradigm in Machine Learning (ML) that addresses issues of data privacy, security, access rights, and access to heterogeneous information.
Despite its advantages, there is an increased potential for cyberattacks on FL-based ML techniques that can undermine the benefits.
We propose attestedFL, a defense mechanism that monitors the training of individual nodes through state persistence in order to detect a malicious worker.
arXiv Detail & Related papers (2021-01-24T20:52:55Z)
- A Secure Federated Learning Framework for 5G Networks [44.40119258491145]
Federated Learning (FL) has been proposed as an emerging paradigm to build machine learning models using distributed training datasets.
There are two critical security threats: poisoning and membership inference attacks.
We propose a blockchain-based secure FL framework that uses smart contracts to prevent malicious or unreliable participants from taking part in FL.
arXiv Detail & Related papers (2020-05-12T13:27:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.