SMTFL: Secure Model Training to Untrusted Participants in Federated Learning
- URL: http://arxiv.org/abs/2502.02038v2
- Date: Fri, 21 Feb 2025 02:55:12 GMT
- Title: SMTFL: Secure Model Training to Untrusted Participants in Federated Learning
- Authors: Zhihui Zhao, Xiaorong Dong, Yimo Ren, Jianhua Wang, Dan Yu, Hongsong Zhu, Yongle Chen
- Abstract summary: Federated learning is an essential distributed model training technique. However, gradient inversion attacks and poisoning attacks pose significant risks to the privacy of training data and to model correctness. We propose a novel approach called SMTFL to achieve secure model training in federated learning without relying on trusted participants.
- Score: 8.225656436115509
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is an essential distributed model training technique. However, threats such as gradient inversion attacks and poisoning attacks pose significant risks to the privacy of training data and to model correctness. We propose a novel approach called SMTFL to achieve secure model training in federated learning without relying on trusted participants. To safeguard gradient privacy against gradient inversion attacks, clients are dynamically grouped, and one client's gradient is divided to obfuscate the gradients of the other clients within the group. This method incorporates checks and balances that reduce the risk of collusion to infer a specific client's data. To detect poisoning attacks from malicious clients, we assess the impact of aggregated gradients on the global model's performance, enabling effective identification and exclusion of malicious clients. Each client's gradients are encrypted and stored, with decryption collectively managed by all clients. Detected poisoning gradients are removed from the global model through an unlearning method. We present a practical secure aggregation scheme that does not require trusted participants, avoids the performance degradation associated with traditional noise injection, and avoids complex cryptographic operations during gradient aggregation. Evaluation results on four datasets and two models are encouraging: SMTFL is effective against poisoning attacks and gradient inversion attacks, locating malicious clients with an accuracy above 95% while keeping the false-positive rate for honest clients below 5%. Model accuracy is also nearly restored to its pre-attack state once SMTFL is deployed.
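The grouped gradient-obfuscation idea in the abstract can be illustrated with additive secret sharing. This is a minimal sketch under assumptions, not the paper's actual protocol: the function names (`split_gradient`, `grouped_aggregate`) are hypothetical, and the real SMTFL scheme additionally encrypts stored gradients and performs poisoning checks.

```python
import random

def split_gradient(grad, n_shares, rng):
    """Split a gradient (list of floats) into n_shares additive shares
    whose elementwise sum reconstructs the original gradient."""
    shares = [[rng.gauss(0.0, 1.0) for _ in grad] for _ in range(n_shares - 1)]
    last = [g - sum(s[i] for s in shares) for i, g in enumerate(grad)]
    return shares + [last]

def grouped_aggregate(gradients, rng):
    """Hypothetical sketch of in-group obfuscation: each client splits its
    gradient and hands one share to every groupmate (keeping one itself);
    each client then submits only the sum of the shares it holds, so no
    single submission reveals any one client's gradient, yet the
    server-side sum equals the true aggregate."""
    n = len(gradients)
    held = [[0.0] * len(gradients[0]) for _ in range(n)]
    for g in gradients:
        for j, share in enumerate(split_gradient(g, n, rng)):
            held[j] = [h + s for h, s in zip(held[j], share)]
    # The server only ever sees the n obfuscated submissions in `held`.
    return [sum(vals) for vals in zip(*held)]

rng = random.Random(0)
grads = [[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]]
agg = grouped_aggregate(grads, rng)  # elementwise sum, approx [4.5, 1.5]
```

Because the random shares cancel in the sum, this obfuscation adds no noise to the aggregate, which is why the abstract can claim to avoid the accuracy loss of noise-injection schemes.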
Related papers
- ProtegoFed: Backdoor-Free Federated Instruction Tuning with Interspersed Poisoned Data [50.142067708131826]
Federated Instruction Tuning (FIT) enables collaborative instruction tuning of large language models across multiple organizations (clients) in a cross-silo setting without requiring the sharing of private instructions. Recent findings suggest that poisoned samples may be pervasive and inadvertently embedded in real-world datasets, potentially distributed across all clients, even when the clients themselves are benign. This paper introduces ProtegoFed, the first backdoor-free FIT framework that accurately detects and purifies even interspersed poisoned data across clients during training.
arXiv Detail & Related papers (2026-02-28T07:25:32Z)
- Robust Federated Learning for Malicious Clients using Loss Trend Deviation Detection [0.0]
Federated Learning (FL) facilitates collaborative model training among distributed clients while ensuring that raw data remains on local devices. However, malicious clients can interfere with the training process by sending misleading updates, which can negatively affect the performance and reliability of the global model. We propose Federated Learning with Loss Trend Detection (FL-LTD), a lightweight and privacy-preserving defense framework that detects and mitigates malicious behavior by monitoring temporal loss dynamics rather than model gradients.
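A loss-trend check of this flavor can be sketched in a few lines. This is a hypothetical illustration, not FL-LTD's published algorithm: it flags clients whose per-round loss trend deviates from the group median by a robust, MAD-based margin.

```python
import statistics

def flag_by_loss_trend(loss_history, threshold=3.0):
    """loss_history maps client id -> [loss at round 0, round 1, ...].
    Compute each client's loss trend (last minus first loss), then flag
    clients whose trend deviates from the median trend by more than
    `threshold` times the median absolute deviation (MAD)."""
    trends = {c: h[-1] - h[0] for c, h in loss_history.items()}
    med = statistics.median(trends.values())
    mad = statistics.median(abs(t - med) for t in trends.values()) or 1e-9
    return {c for c, t in trends.items() if abs(t - med) > threshold * mad}

history = {
    "a": [1.0, 0.8, 0.6],   # honest: loss decreasing
    "b": [1.1, 0.9, 0.7],
    "c": [1.0, 0.85, 0.65],
    "m": [1.0, 1.5, 2.0],   # suspicious: loss rising against the trend
}
print(flag_by_loss_trend(history))  # {'m'}
```

Using the median and MAD rather than the mean keeps the detector itself robust: a single malicious client cannot shift the baseline it is judged against.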
arXiv Detail & Related papers (2026-01-28T18:09:53Z)
- FLARE: Adaptive Multi-Dimensional Reputation for Robust Client Reliability in Federated Learning [0.6524460254566904]
Federated learning (FL) enables collaborative model training while preserving data privacy. It remains vulnerable to malicious clients who compromise model integrity through Byzantine attacks, data poisoning, or adaptive adversarial behaviors. We propose FLARE, an adaptive reputation-based framework that transforms client reliability assessment from binary decisions into a continuous, multi-dimensional trust evaluation.
arXiv Detail & Related papers (2025-11-18T17:57:40Z)
- FLAegis: A Two-Layer Defense Framework for Federated Learning Against Poisoning Attacks [2.6599014990168843]
Federated Learning (FL) has become a powerful technique for training Machine Learning (ML) models in a decentralized manner. Third parties, known as Byzantine clients, can poison the training process by submitting false model updates. This study introduces FLAegis, a two-stage defensive framework designed to identify Byzantine clients and improve the robustness of FL systems.
arXiv Detail & Related papers (2025-08-26T07:09:15Z)
- Pigeon-SL: Robust Split Learning Framework for Edge Intelligence under Malicious Clients [53.496957000114875]
We introduce Pigeon-SL, a novel scheme that guarantees at least one entirely honest cluster among M clients, even when up to N of them are adversarial. In each global round, the access point partitions the clients into N+1 clusters, trains each cluster independently via vanilla SL, and evaluates their validation losses on a shared dataset. Only the cluster with the lowest loss advances, thereby isolating and discarding malicious updates.
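The pigeonhole-based selection step described above can be sketched as follows. The names (`pigeonhole_select`, the `eval_loss` callback) are illustrative, not from the paper; in Pigeon-SL each cluster is actually trained via vanilla split learning before its validation loss is measured, which `eval_loss` stands in for here.

```python
import random

def pigeonhole_select(clients, n_adversaries, eval_loss, rng):
    """Randomly partition clients into n_adversaries + 1 clusters.
    By the pigeonhole principle, with at most n_adversaries adversarial
    clients, at least one cluster is entirely honest.  The cluster with
    the lowest validation loss advances; the rest are discarded."""
    pool = list(clients)
    rng.shuffle(pool)
    k = n_adversaries + 1
    clusters = [pool[i::k] for i in range(k)]
    return min(clusters, key=eval_loss)

# Toy run: clients 2 and 4 are adversarial; any cluster containing one
# is assumed to show a higher validation loss.
adversaries = {2, 4}
loss = lambda cluster: float(sum(1 for c in cluster if c in adversaries))
chosen = pigeonhole_select(range(6), 2, loss, random.Random(1))
assert all(c not in adversaries for c in chosen)
```

With 2 adversaries spread over 3 clusters, at least one cluster is always clean, so the minimum-loss cluster is guaranteed to be adversary-free in this toy setting.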
arXiv Detail & Related papers (2025-08-04T09:34:50Z)
- SecureFed: A Two-Phase Framework for Detecting Malicious Clients in Federated Learning [0.0]
Federated Learning (FL) protects data privacy while providing a decentralized method for training models. Because of its distributed schema, it is susceptible to adversarial clients that could alter results or sabotage model performance. This study presents SecureFed, a two-phase FL framework for identifying and reducing the impact of such attackers.
arXiv Detail & Related papers (2025-06-19T16:52:48Z)
- Toward Malicious Clients Detection in Federated Learning [24.72033419379761]
Federated learning (FL) enables multiple clients to collaboratively train a global machine learning model without sharing their raw data. In this paper, we propose a novel algorithm, SafeFL, specifically designed to accurately identify malicious clients in FL.
arXiv Detail & Related papers (2025-05-14T03:36:36Z)
- Gradient Purification: Defense Against Poisoning Attack in Decentralized Federated Learning [21.892850886276317]
The gradient purification defense, named GPD, integrates seamlessly with existing DFL aggregation to defend against poisoning attacks. It aims to mitigate the harm in model gradients while retaining the benefit in model weights to enhance accuracy. GPD significantly outperforms state-of-the-art defenses in terms of accuracy under various poisoning attacks.
arXiv Detail & Related papers (2025-01-08T12:14:00Z)
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- Federated Learning with Extremely Noisy Clients via Negative Distillation [70.13920804879312]
Federated learning (FL) has shown remarkable success in cooperatively training deep models, while struggling with noisy labels.
We propose a novel approach, called negative distillation (FedNed), to leverage models trained on noisy clients.
FedNed first identifies noisy clients and then employs, rather than discards, them in a knowledge-distillation manner.
arXiv Detail & Related papers (2023-12-20T01:59:48Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attacks (GIA), which aim to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from the client side.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model.
However, FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose the Fed-EDKD technique to improve current popular FL schemes to resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- FedDefender: Backdoor Attack Defense in Federated Learning [0.0]
Federated Learning (FL) is a privacy-preserving distributed machine learning technique.
We propose FedDefender, a defense mechanism against targeted poisoning attacks in FL.
arXiv Detail & Related papers (2023-07-02T03:40:04Z)
- G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering [116.4277292854053]
Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
arXiv Detail & Related papers (2023-06-08T07:15:04Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
Our experiments show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients.
arXiv Detail & Related papers (2022-10-02T17:50:04Z)
- BaFFLe: Backdoor detection via Feedback-based Federated Learning [3.6895394817068357]
We propose Backdoor detection via Feedback-based Federated Learning (BAFFLE).
We show that BAFFLE reliably detects state-of-the-art backdoor attacks with a detection accuracy of 100% and a false-positive rate below 5%.
arXiv Detail & Related papers (2020-11-04T07:44:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.