TrustFed: A Reliable Federated Learning Framework with Malicious-Attack
Resistance
- URL: http://arxiv.org/abs/2312.04597v1
- Date: Wed, 6 Dec 2023 13:56:45 GMT
- Title: TrustFed: A Reliable Federated Learning Framework with Malicious-Attack
Resistance
- Authors: Hangn Su, Jianhong Zhou, Xianhua Niu, Gang Feng
- Abstract summary: Federated learning (FL) enables collaborative learning among multiple clients while ensuring individual data privacy.
In this paper, we propose a hierarchical audit-based FL (HiAudit-FL) framework to enhance the reliability and security of the learning process.
Our simulation results demonstrate that HiAudit-FL can accurately identify and handle potential malicious users with low system overhead.
- Score: 8.924352407824566
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a key technology in 6G research, federated learning (FL) enables
collaborative learning among multiple clients while ensuring individual data
privacy. However, malicious attackers among the participating clients can
intentionally tamper with the training data or the trained model, compromising
the accuracy and trustworthiness of the system. To address this issue, in this
paper, we propose a hierarchical audit-based FL (HiAudit-FL) framework to
enhance the reliability and security of the learning process. The
hierarchical audit process consists of two stages: model-audit and
parameter-audit. In the model-audit stage, a low-overhead audit method is
employed to identify suspicious clients. Subsequently, in the parameter-audit
stage, a more resource-intensive method is used to detect, with higher
accuracy, the malicious clients among the suspicious ones. Specifically, we
execute the model-audit method on a subset of clients over multiple rounds,
which we model as a partially observable Markov decision process (POMDP) to
enhance the robustness and accountability of decision-making in complex and
uncertain environments. Meanwhile, we formulate the problem of identifying
malicious attackers through multi-round audits as an active sequential
hypothesis testing problem and leverage a diffusion-model-based AI-enabled
audit selection strategy (ASS) to decide which clients should be audited in
each round. To accomplish efficient and effective audit selection, we design
a DRL-ASS algorithm that incorporates the ASS into a deep reinforcement
learning (DRL) framework. Our simulation results demonstrate that HiAudit-FL
can accurately identify and handle potential malicious users with low system
overhead.
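
As a rough sketch of the two-stage idea (not the authors' actual algorithm:
the audit tests, thresholds, and client-sampling rule below are all
placeholder assumptions), the hierarchical audit loop can be pictured as
follows:

```python
import random

def cheap_model_audit(update, reference, threshold=1.0):
    """Stage 1 placeholder: a low-overhead screen that flags an update
    whose distance from a reference aggregate exceeds a threshold."""
    distance = sum((u - r) ** 2 for u, r in zip(update, reference)) ** 0.5
    return distance > threshold

def expensive_parameter_audit(update, bound=10.0):
    """Stage 2 placeholder: a costlier per-parameter inspection, here
    reduced to a crude coordinate-magnitude rule."""
    return any(abs(p) > bound for p in update)

def hierarchical_audit(updates, reference, sample_frac=0.5):
    """Cheaply audit a subset of clients, then run the expensive audit
    only on those the first stage flagged as suspicious. (HiAudit-FL
    chooses the subset with its DRL-based ASS; uniform random sampling
    here is a stand-in for that policy.)"""
    k = max(1, int(sample_frac * len(updates)))
    audited = random.sample(sorted(updates), k)
    suspicious = [c for c in audited if cheap_model_audit(updates[c], reference)]
    malicious = [c for c in suspicious if expensive_parameter_audit(updates[c])]
    return suspicious, malicious

# Toy usage: eight clients, one of which submits an obviously tampered update.
updates = {i: [random.gauss(0, 0.1) for _ in range(4)] for i in range(8)}
updates[3] = [20.0, 0.0, 0.0, 0.0]
print(hierarchical_audit(updates, reference=[0.0] * 4, sample_frac=1.0))
```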
Related papers
- Submodular Maximization Approaches for Equitable Client Selection in Federated Learning [4.167345675621377]
In a conventional federated learning framework, client selection for training typically involves random sampling of a subset of clients in each iteration.
This paper introduces two novel methods, namely SUBTRUNC and UNIONFL, designed to address the limitations of random client selection.
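Both methods replace uniform random sampling with the maximization of a set
objective. As a generic illustration (the concrete SUBTRUNC and UNIONFL
objectives are not reproduced here, so `score` is a placeholder), greedy
selection for a monotone submodular objective looks like:

```python
def greedy_client_selection(clients, score, k):
    """Generic greedy maximization of a monotone submodular set function.
    `score` maps a list of clients to a real value; for such objectives
    the greedy rule enjoys the classic (1 - 1/e) approximation bound."""
    selected = []
    for _ in range(k):
        candidates = [c for c in clients if c not in selected]
        marginal_gain = lambda c: score(selected + [c]) - score(selected)
        selected.append(max(candidates, key=marginal_gain))
    return selected
```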
arXiv Detail & Related papers (2024-08-24T22:40:31Z)
- Trust Driven On-Demand Scheme for Client Deployment in Federated Learning [39.9947471801304]
"Trusted-On-Demand-FL" establishes a relationship of trust between the server and the pool of eligible clients.
Our simulations use a continuous user-behavior dataset and deploy an optimization model powered by a genetic algorithm.
arXiv Detail & Related papers (2024-05-01T08:50:08Z)
- Reinforcement Learning as a Catalyst for Robust and Fair Federated Learning: Deciphering the Dynamics of Client Contributions [6.318638597489423]
Reinforcement Federated Learning (RFL) is a novel framework that leverages deep reinforcement learning to adaptively optimize client contribution during aggregation.
In terms of robustness, RFL outperforms state-of-the-art methods, while maintaining comparable levels of fairness.
arXiv Detail & Related papers (2024-02-08T10:22:12Z)
- FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
Our experiments show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients.
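Per the FLCert paper, the guarantee comes from dividing clients into groups,
learning one global model per group, and taking a majority vote at test
time, so a bounded number of malicious clients can only corrupt the few
groups they fall into. A minimal sketch, with training left as the
placeholder `train_group_model`:

```python
import random
from collections import Counter

def ensemble_predict(clients, train_group_model, x, num_groups=5, seed=0):
    """Partition clients into disjoint groups, train one model per group
    (train_group_model returns a classifier callable), and majority-vote
    the groups' predictions on the test input x."""
    rng = random.Random(seed)
    shuffled = list(clients)
    rng.shuffle(shuffled)
    groups = [shuffled[i::num_groups] for i in range(num_groups)]
    votes = Counter(train_group_model(group)(x) for group in groups)
    return votes.most_common(1)[0][0]
```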
arXiv Detail & Related papers (2022-10-02T17:50:04Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
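The summary does not spell the measure out; one natural instantiation (an
assumption for illustration, not Fed-CBS's actual definition, with the
homomorphic-encryption step omitted) scores a candidate client group by how
far its pooled label distribution sits from uniform:

```python
from collections import Counter

def class_imbalance(client_labels, num_classes):
    """L2 distance between the pooled label distribution of the selected
    clients and the uniform distribution; 0 means perfectly balanced."""
    counts = Counter()
    for labels in client_labels:
        counts.update(labels)
    total = sum(counts.values())
    probs = [counts.get(c, 0) / total for c in range(num_classes)]
    uniform = 1.0 / num_classes
    return sum((p - uniform) ** 2 for p in probs) ** 0.5

# Two clients whose pooled data covers only 2 of 4 classes score 0.5.
print(class_imbalance([[0, 0, 1], [1, 1, 0]], num_classes=4))
```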
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Anomaly Detection through Unsupervised Federated Learning [0.0]
Federated learning is proving to be one of the most promising paradigms for leveraging distributed resources.
We propose a novel method in which, through a preprocessing phase, clients are grouped into communities.
The resulting anomaly detection model is then shared and used to detect anomalies within the clients of the same community.
arXiv Detail & Related papers (2022-09-09T08:45:47Z)
- On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g., mobile devices and organizational participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that jointly leverages client groups and individual clients within a unified optimization framework.
arXiv Detail & Related papers (2022-02-13T02:39:19Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
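The per-round flow described above translates almost line for line into
pseudocode; in the sketch below every operation (training, broadcasting,
block generation, aggregation) is a placeholder callable rather than the
paper's actual interface:

```python
def blade_fl_round(clients, train, broadcast, mine_block, aggregate):
    """One schematic BLADE-FL round: local training, model broadcast,
    competitive block generation, then per-client aggregation from the
    winning block to seed the next round's local training."""
    models = {c: train(c) for c in clients}        # local training
    for c in clients:
        broadcast(c, models[c])                    # share with peers
    block = mine_block(models)                     # competition winner
    return {c: aggregate(block) for c in clients}  # next-round init
```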
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Auto-weighted Robust Federated Learning with Corrupted Data Sources [7.475348174281237]
Federated learning provides a communication-efficient and privacy-preserving training process.
Standard federated learning techniques that naively minimize an average loss function are vulnerable to data corruptions.
We propose Auto-weighted Robust Federated Learning (arfl) to provide robustness against corrupted data sources.
arXiv Detail & Related papers (2021-01-14T21:54:55Z)
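arfl's actual weights come from minimizing its joint objective; as a loose
illustration of the intuition only (this exponential rule is an assumption,
not the paper's update), clients reporting unusually large losses, as
corrupted sources tend to, can be down-weighted during aggregation:

```python
import math

def auto_weights(losses, temperature=1.0):
    """Illustrative loss-based weighting: a larger empirical loss yields
    an exponentially smaller aggregation weight (softmax over -loss)."""
    scores = [math.exp(-loss / temperature) for loss in losses]
    total = sum(scores)
    return [s / total for s in scores]

def weighted_average(updates, weights):
    """Aggregate equal-length client update vectors with the weights."""
    dim = len(updates[0])
    return [sum(w * u[i] for w, u in zip(weights, updates)) for i in range(dim)]

losses = [0.4, 0.5, 3.0]      # the third client looks corrupted
print(auto_weights(losses))   # its weight is pushed toward zero
```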