Byzantine-Resilient Over-the-Air Federated Learning under Zero-Trust Architecture
- URL: http://arxiv.org/abs/2503.18284v1
- Date: Mon, 24 Mar 2025 01:56:30 GMT
- Title: Byzantine-Resilient Over-the-Air Federated Learning under Zero-Trust Architecture
- Authors: Jiacheng Yao, Wei Shi, Wei Xu, Zhaohui Yang, A. Lee Swindlehurst, Dusit Niyato
- Abstract summary: We propose a novel Byzantine-robust FL paradigm for over-the-air transmissions, referred to as federated learning with secure adaptive clustering (FedSAC). FedSAC aims to protect a portion of the devices from attacks through zero trust architecture (ZTA) based Byzantine identification and adaptive device clustering. Numerical results substantiate the superiority of the proposed FedSAC over existing methods in terms of both test accuracy and convergence rate.
- Score: 68.83934802584899
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over-the-air computation (AirComp) has emerged as an essential approach for enabling communication-efficient federated learning (FL) over wireless networks. Nonetheless, the inherent analog transmission mechanism in AirComp-based FL (AirFL) intensifies challenges posed by potential Byzantine attacks. In this paper, we propose a novel Byzantine-robust FL paradigm for over-the-air transmissions, referred to as federated learning with secure adaptive clustering (FedSAC). FedSAC aims to protect a portion of the devices from attacks through zero trust architecture (ZTA) based Byzantine identification and adaptive device clustering. By conducting a one-step convergence analysis, we theoretically characterize the convergence behavior with different device clustering mechanisms and uneven aggregation weighting factors for each device. Building upon our analytical results, we formulate a joint optimization problem for the clustering and weighting factors in each communication round. To facilitate the targeted optimization, we propose a dynamic Byzantine identification method using historical reputation based on ZTA. Furthermore, we introduce a sequential clustering method, transforming the joint optimization into a weighting optimization problem without sacrificing the optimality. To optimize the weighting, we capitalize on the penalty convex-concave procedure (P-CCP) to obtain a stationary solution. Numerical results substantiate the superiority of the proposed FedSAC over existing methods in terms of both test accuracy and convergence rate.
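The abstract outlines a per-round loop: score each device from its history, retain a trusted cluster, and aggregate with optimized weights. Below is a minimal sketch of that loop; the reputation rule, cluster selection, and weights here are illustrative assumptions, not the paper's actual ZTA metric, sequential clustering, or P-CCP solution.

```python
import numpy as np

def update_reputation(reputation, updates, decay=0.9):
    """Hypothetical reputation rule: devices whose updates sit closer to
    the coordinate-wise median earn more trust this round."""
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    trust = 1.0 / (1.0 + dists)               # instantaneous trust in (0, 1]
    return decay * reputation + (1.0 - decay) * trust

def aggregate(updates, reputation, n_trusted):
    """Keep the n_trusted highest-reputation devices (the "secure cluster")
    and average them with reputation-proportional weights."""
    trusted = np.argsort(reputation)[-n_trusted:]
    w = reputation[trusted] / reputation[trusted].sum()
    return (w[:, None] * updates[trusted]).sum(axis=0)

# Toy round: 10 devices, 5-dim updates, devices 0 and 1 are Byzantine.
rng = np.random.default_rng(0)
updates = rng.normal(0.0, 0.1, size=(10, 5))
updates[:2] += 5.0                            # large adversarial perturbation
reputation = np.full(10, 0.5)
reputation = update_reputation(reputation, updates)
print(aggregate(updates, reputation, n_trusted=8))
```

In FedSAC itself, the abstract states that the clustering and weighting factors are chosen each round by solving an optimization problem derived from a one-step convergence analysis, rather than by a fixed heuristic like the one above.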
Related papers
- Byzantine-Resilient Federated Learning via Distributed Optimization [3.2075234058213757]
Byzantine attacks present a critical challenge to Federated Learning (FL).
Traditional FL frameworks rely on aggregation-based protocols for model updates, leaving them vulnerable to sophisticated adversarial strategies.
We show that the Primal-Dual Method of Multipliers (PDMM) inherently mitigates Byzantine impacts by leveraging its fault-tolerant consensus mechanism.
arXiv Detail & Related papers (2025-03-13T18:34:42Z)
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose client-centric federated adaptive optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z)
- Embedding Byzantine Fault Tolerance into Federated Learning via Consistency Scoring [3.536605202672355]
In this paper, we propose an intuitive plugin that can be integrated into existing FL techniques to achieve Byzantine resilience.
Numerical results on a medical image classification task validate that plugging the proposed approach into representative FL algorithms effectively achieves Byzantine resilience.
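The summary names a consistency score but not its formula. A plausible minimal sketch, assuming cosine similarity to the coordinate-wise median update as the score:

```python
import numpy as np

def consistency_scores(updates):
    """Assumed score: cosine similarity of each client update to the
    coordinate-wise median update (the paper's rule is not given here)."""
    ref = np.median(updates, axis=0)
    den = np.linalg.norm(updates, axis=1) * np.linalg.norm(ref) + 1e-12
    return (updates @ ref) / den

def filtered_mean(updates, threshold=0.0):
    """Plugin-style aggregation: drop low-scoring clients, average the rest."""
    keep = consistency_scores(updates) > threshold
    if not keep.any():                  # degenerate round: fall back to all
        keep[:] = True
    return updates[keep].mean(axis=0)

rng = np.random.default_rng(0)
u = rng.normal(1.0, 0.1, size=(6, 3))   # benign clients agree on direction
u[0] = -10.0                            # one poisoned update
print(filtered_mean(u))                 # the poisoned row scores ~ -1, dropped
```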
arXiv Detail & Related papers (2024-11-15T14:17:19Z)
- Byzantine-Robust Aggregation for Securing Decentralized Federated Learning [0.32985979395737774]
Federated Learning (FL) emerges as a distributed machine learning approach that addresses privacy concerns by training AI models locally on devices.
Decentralized Federated Learning (DFL) extends the FL paradigm by eliminating the central server, thereby enhancing scalability and robustness through the avoidance of a single point of failure.
We present a novel Byzantine-robust aggregation algorithm to enhance the security of DFL environments, coined WFAgg.
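WFAgg's filtering rules are not detailed in this summary. As a stand-in, here is a coordinate-wise trimmed mean, a classic Byzantine-robust aggregator in the family such schemes build on:

```python
import numpy as np

def trimmed_mean(updates, trim_ratio=0.2):
    """Sort each coordinate across clients and discard the largest and
    smallest trim_ratio fraction before averaging what remains."""
    n = updates.shape[0]
    k = int(trim_ratio * n)
    s = np.sort(updates, axis=0)
    return s[k:n - k].mean(axis=0)

rng = np.random.default_rng(0)
u = rng.normal(size=(10, 4))
u[:2] = 100.0                        # two Byzantine clients
print(trimmed_mean(u))               # the extreme rows are trimmed away
```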
arXiv Detail & Related papers (2024-09-26T11:36:08Z)
- FADAS: Towards Federated Adaptive Asynchronous Optimization [56.09666452175333]
Federated learning (FL) has emerged as a widely adopted training paradigm for privacy-preserving machine learning.
This paper introduces federated adaptive asynchronous optimization, named FADAS, a novel method that incorporates asynchronous updates into adaptive federated optimization with provable guarantees.
We rigorously establish the convergence rate of the proposed algorithms, and empirical results demonstrate the superior performance of FADAS over other asynchronous FL baselines.
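A minimal sketch of the pattern FADAS names: an adaptive (Adam-style) server step applied to asynchronously arriving client deltas, with a simple staleness down-weighting heuristic. The paper's exact update rule and its guarantees are not reproduced here.

```python
import numpy as np

class AdaptiveServer:
    """Server keeps Adam-style moments and applies each client delta as it
    arrives, scaled down by how stale it is (an assumption; FADAS's actual
    staleness handling may differ)."""
    def __init__(self, dim, lr=0.01, b1=0.9, b2=0.99, eps=1e-8):
        self.model = np.zeros(dim)
        self.m = np.zeros(dim)
        self.v = np.zeros(dim)
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps

    def apply(self, delta, staleness):
        scaled = delta / (1.0 + staleness)    # older updates count for less
        self.m = self.b1 * self.m + (1 - self.b1) * scaled
        self.v = self.b2 * self.v + (1 - self.b2) * scaled ** 2
        self.model += self.lr * self.m / (np.sqrt(self.v) + self.eps)
        return self.model

server = AdaptiveServer(dim=3)
print(server.apply(np.ones(3), staleness=4))  # a stale delta, applied gently
```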
arXiv Detail & Related papers (2024-07-25T20:02:57Z)
- Enhancing Security in Federated Learning through Adaptive Consensus-Based Model Update Validation [2.28438857884398]
This paper introduces an advanced approach for fortifying Federated Learning (FL) systems against label-flipping attacks.
We propose a consensus-based verification process integrated with an adaptive thresholding mechanism.
Our results indicate a significant mitigation of label-flipping attacks, bolstering the FL system's resilience.
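A minimal sketch of a consensus check with an adaptive threshold, assuming a mean-plus-k-standard-deviations rule; the paper's actual verification and thresholding mechanism is not specified in this summary.

```python
import numpy as np

def validate_updates(updates, k=2.0):
    """Accept an update only if its distance to the round's consensus
    (here, the mean update) is within k standard deviations; the
    threshold adapts to how spread out each round's updates are."""
    center = updates.mean(axis=0)
    d = np.linalg.norm(updates - center, axis=1)
    threshold = d.mean() + k * d.std()
    return d <= threshold               # boolean mask of accepted clients

rng = np.random.default_rng(0)
u = rng.normal(size=(8, 5))
u[0] *= 50.0                            # a label-flipping-style outlier
print(validate_updates(u))              # first entry flagged False
```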
arXiv Detail & Related papers (2024-03-05T20:54:56Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, or model difference), we reveal that their transmission in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
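The aggregation error discussed in this entry comes from the analog superposition itself: fading and receiver noise distort the sum the server decodes. A toy single-round AirComp simulation, with illustrative channel and noise models:

```python
import numpy as np

rng = np.random.default_rng(1)
K, dim = 8, 4
x = rng.normal(size=(K, dim))               # local updates, one per device
h = rng.rayleigh(scale=1.0, size=(K, 1))    # per-device fading gains
noise = rng.normal(scale=0.1, size=dim)     # receiver noise

y = (h * x).sum(axis=0) + noise             # what the server actually receives
ideal = x.sum(axis=0)                       # the error-free aggregate
print("aggregation error:", np.linalg.norm(y - ideal))
```

Power control or channel inversion at the devices is the standard way to shrink the fading-induced mismatch, which is the kind of practical signal processing this entry alludes to.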
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning [28.65635956111857]
Federated learning (FL) enables edge devices to collaboratively train a global model without sharing their private data.
To enhance the training efficiency of FL, various algorithms have been proposed, ranging from first-order to zeroth-order methods.
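Zeroth-order methods replace gradients with function-value differences, which is what makes them attractive when devices cannot compute gradients at all. A minimal two-point estimator sketch; the paper's communication-efficient variant further reduces what devices transmit, which is not shown here.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_dirs=20, rng=None):
    """Two-point zeroth-order estimate: probe f along random directions
    and difference the function values; in expectation this recovers
    the gradient as mu -> 0."""
    if rng is None:
        rng = np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_dirs

# Sanity check on a quadratic whose true gradient at x is 2x.
x = np.array([1.0, -2.0])
print(zo_gradient(lambda z: (z ** 2).sum(), x, n_dirs=500))  # close to [2, -4]
```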
arXiv Detail & Related papers (2022-01-24T08:56:06Z)
- Robust Federated Learning via Over-The-Air Computation [48.47690125123958]
Simple averaging of model updates via over-the-air computation makes the learning task vulnerable to random or intentional modifications of the local model updates by malicious clients.
We propose a transmission and aggregation framework that is robust to such attacks while preserving the benefits of over-the-air computation for federated learning.
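One standard way to realize such robustness is geometric-median aggregation, computed with Weiszfeld iterations. A sketch is below; this is a classic robust aggregator and not necessarily the paper's exact scheme.

```python
import numpy as np

def geometric_median(updates, iters=50, eps=1e-8):
    """Weiszfeld iterations: repeatedly re-average the updates with
    weights inversely proportional to their distance from the current
    estimate, so far-away (malicious) updates lose influence."""
    z = updates.mean(axis=0)
    for _ in range(iters):
        w = 1.0 / (np.linalg.norm(updates - z, axis=1) + eps)
        z = (w[:, None] * updates).sum(axis=0) / w.sum()
    return z

rng = np.random.default_rng(0)
u = rng.normal(size=(10, 3))
u[:3] += 50.0                        # three malicious clients
print(geometric_median(u))           # stays near the benign cluster
```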
arXiv Detail & Related papers (2021-11-01T19:21:21Z)