AdRo-FL: Informed and Secure Client Selection for Federated Learning in the Presence of Adversarial Aggregator
- URL: http://arxiv.org/abs/2506.17805v2
- Date: Sat, 12 Jul 2025 06:28:00 GMT
- Title: AdRo-FL: Informed and Secure Client Selection for Federated Learning in the Presence of Adversarial Aggregator
- Authors: Md. Kamrul Hossain, Walid Aljoby, Anis Elgabli, Ahmed M. Abdelmoniem, Khaled A. Harras
- Abstract summary: Federated Learning (FL) enables collaborative learning without exposing clients' data. Recent work demonstrates a critical vulnerability where adversarial aggregators manipulate client selection to bypass protections. We propose Adversarial Robust Federated Learning (AdRo-FL) to enable informed client selection based on client utility. AdRo-FL achieves up to $1.85\times$ faster time-to-accuracy and up to $1.06\times$ higher final accuracy compared to insecure baselines.
- Score: 8.185201693854367
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning (FL) enables collaborative learning without exposing clients' data. While clients only share model updates with the aggregator, studies reveal that aggregators can infer sensitive information from these updates. Secure Aggregation (SA) protects individual updates during transmission; however, recent work demonstrates a critical vulnerability where adversarial aggregators manipulate client selection to bypass SA protections, constituting a Biased Selection Attack (BSA). Although verifiable random selection prevents BSA, it precludes informed client selection essential for FL performance. We propose Adversarial Robust Federated Learning (AdRo-FL), which simultaneously enables: informed client selection based on client utility, and robust defense against BSA maintaining privacy-preserving aggregation. AdRo-FL implements two client selection frameworks tailored for distinct settings. The first framework assumes clients are grouped into clusters based on mutual trust, such as different branches of an organization. The second framework handles distributed clients where no trust relationships exist between them. For the cluster-oriented setting, we propose a novel defense against BSA by (1) enforcing a minimum client selection quota from each cluster, supervised by a cluster-head in every round, and (2) introducing a client utility function to prioritize efficient clients. For the distributed setting, we design a two-phase selection protocol: first, the aggregator selects the top clients based on our utility-driven ranking; then, a verifiable random function (VRF) ensures a BSA-resistant final selection. AdRo-FL also applies quantization to reduce communication overhead and sets strict transmission deadlines to improve energy efficiency. AdRo-FL achieves up to $1.85\times$ faster time-to-accuracy and up to $1.06\times$ higher final accuracy compared to insecure baselines.
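The two-phase selection protocol described for the distributed setting can be sketched in Python. This is an illustrative sketch, not the paper's implementation: `pseudo_vrf` is a hash-based stand-in for a real verifiable random function (e.g. ECVRF), and the utility scores, shortlist size `m`, and cohort size `k` are hypothetical.

```python
import hashlib

def pseudo_vrf(secret: bytes, round_seed: bytes) -> int:
    # Stand-in for a real VRF (e.g. ECVRF): deterministic per (secret, seed),
    # but unlike a true VRF, not publicly verifiable without revealing the secret.
    return int.from_bytes(hashlib.sha256(secret + round_seed).digest(), "big")

def two_phase_select(clients, utilities, k, m, round_seed: bytes):
    """Phase 1: the aggregator shortlists the top-m clients by utility.
    Phase 2: a (pseudo-)VRF value per client fixes the final k choices,
    so the aggregator cannot bias the pick within the shortlist."""
    shortlist = sorted(clients, key=lambda c: utilities[c["id"]], reverse=True)[:m]
    scored = sorted(shortlist, key=lambda c: pseudo_vrf(c["secret"], round_seed))
    return [c["id"] for c in scored[:k]]

# Hypothetical clients and utility scores for illustration.
clients = [{"id": i, "secret": bytes([i])} for i in range(10)]
utilities = {i: float(i) for i in range(10)}
selected = two_phase_select(clients, utilities, k=3, m=6, round_seed=b"round-7")
print(selected)  # 3 ids drawn from the 6 highest-utility clients
```

The split mirrors the abstract's design intent: utility-driven ranking preserves the performance benefit of informed selection, while the VRF step removes the aggregator's freedom to bias the final cohort, countering the Biased Selection Attack.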
Related papers
- FedRand: Enhancing Privacy in Federated Learning with Randomized LoRA Subparameter Updates [58.18162789618869]
Federated Learning (FL) is a widely used framework for training models in a decentralized manner. We propose the FedRand framework, which avoids disclosing the full set of client parameters. We empirically validate that FedRand improves robustness against MIAs compared to relevant baselines.
arXiv Detail & Related papers (2025-03-10T11:55:50Z) - Federated Instruction Tuning of LLMs with Domain Coverage Augmentation [87.49293964617128]
Federated Domain-specific Instruction Tuning (FedDIT) utilizes limited cross-client private data together with various strategies of instruction augmentation. We propose FedDCA, which optimizes domain coverage through greedy client center selection and retrieval-based augmentation. For client-side computational efficiency and system scalability, FedDCA$*$, a variant of FedDCA, utilizes heterogeneous encoders with server-side feature alignment.
arXiv Detail & Related papers (2024-09-30T09:34:31Z) - Utilizing Free Clients in Federated Learning for Focused Model Enhancement [9.370655190768163]
Federated Learning (FL) is a distributed machine learning approach to learn models on decentralized heterogeneous data.
We present FedALIGN (Federated Adaptive Learning with Inclusion of Global Needs) to address this challenge.
arXiv Detail & Related papers (2023-10-06T18:23:40Z) - Blockchain-based Optimized Client Selection and Privacy Preserved Framework for Federated Learning [2.4201849657206496]
Federated learning is a distributed mechanism that trains large-scale neural network models with the participation of multiple clients.
With this feature, federated learning is considered a secure solution for data privacy issues.
We propose a blockchain-based optimized client selection and privacy-preserving framework.
arXiv Detail & Related papers (2023-07-25T01:35:51Z) - FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation [69.75513501757628]
FedGT is a novel framework for identifying malicious clients in federated learning with secure aggregation.
We show that FedGT significantly outperforms the private robust aggregation approach based on the geometric median recently proposed by Pillutla et al.
arXiv Detail & Related papers (2023-05-09T14:54:59Z) - FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z) - Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
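To illustrate the idea of a class-imbalance measure (omitting the homomorphic-encryption step, and using a toy formula rather than the paper's exact measure), one can score a client by the squared distance of its label distribution from uniform:

```python
from collections import Counter

def class_imbalance(label_counts: Counter, num_classes: int) -> float:
    """Toy imbalance score (not the paper's exact measure): squared
    distance of the client's class distribution from the uniform one."""
    total = sum(label_counts.values())
    probs = [label_counts.get(c, 0) / total for c in range(num_classes)]
    uniform = 1.0 / num_classes
    return sum((p - uniform) ** 2 for p in probs)

balanced = class_imbalance(Counter({0: 50, 1: 50}), num_classes=2)
skewed = class_imbalance(Counter({0: 95, 1: 5}), num_classes=2)
print(balanced, skewed)  # 0.0 for the balanced client; larger for the skewed one
```

In a privacy-preserving deployment such as Fed-CBS, the per-class counts would never be revealed in the clear; the server would derive the measure over homomorphically encrypted values instead.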
arXiv Detail & Related papers (2022-09-30T05:42:56Z) - Efficient Distribution Similarity Identification in Clustered Federated Learning via Principal Angles Between Client Data Subspaces [59.33965805898736]
Clustered learning has been shown to produce promising results by grouping clients into clusters.
Existing FL algorithms essentially try to group together clients with similar data distributions.
Prior FL algorithms estimate these similarities only indirectly during training.
arXiv Detail & Related papers (2022-09-21T17:37:54Z) - Game of Gradients: Mitigating Irrelevant Clients in Federated Learning [3.2095659532757916]
Federated learning (FL) deals with multiple clients participating in collaborative training of a machine learning model under the orchestration of a central server.
In this setup, each client's data is private to itself and is not transferable to other clients or the server.
We refer to these problems as Federated Relevant Client Selection (FRCS).
arXiv Detail & Related papers (2021-10-23T16:34:42Z) - Towards Bidirectional Protection in Federated Learning [70.36925233356335]
F2ED-LEARNING offers a bidirectional defense against a malicious centralized server and Byzantine malicious clients.
F2ED-LEARNING securely aggregates each shard's update and launches FilterL2 on updates from different shards.
Evaluation shows that F2ED-LEARNING consistently achieves optimal or close-to-optimal performance.
arXiv Detail & Related papers (2020-10-02T19:37:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.