Federated Learning with Unreliable Clients: Performance Analysis and
Mechanism Design
- URL: http://arxiv.org/abs/2105.06256v1
- Date: Mon, 10 May 2021 08:02:27 GMT
- Title: Federated Learning with Unreliable Clients: Performance Analysis and
Mechanism Design
- Authors: Chuan Ma, Jun Li, Ming Ding, Kang Wei, Wen Chen and H. Vincent Poor
- Abstract summary: Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable client behaviors and propose a defensive mechanism to mitigate such a security risk.
- Score: 76.29738151117583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Owing to its low communication costs and privacy-promoting
capabilities, Federated Learning (FL) has become a promising tool for
training effective machine learning models among distributed clients.
However, with the distributed architecture, low-quality models could be
uploaded to the aggregator server by unreliable clients, leading to a
degradation or even a collapse of training. In this paper, we model these
unreliable client behaviors and propose a defensive mechanism to mitigate
such a security risk. Specifically, we first investigate the impact of
unreliable clients on the trained models by deriving a convergence upper
bound on the loss function based on the gradient descent updates. Our
theoretical bounds reveal that, with a fixed amount of total computational
resources, there exists an optimal number of local training iterations in
terms of convergence performance. We further design a novel defensive
mechanism, named deep neural network based secure aggregation (DeepSA).
Our experimental results validate our theoretical analysis. In addition,
the effectiveness of DeepSA is verified by comparison with other
state-of-the-art defensive mechanisms.
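To make the setting concrete, here is a minimal sketch of federated
averaging with unreliable clients. The assumptions are ours, not the
paper's: the task is toy linear regression, unreliable clients upload
random noise instead of trained models, and a simple distance-to-median
filter stands in for DeepSA (which actually trains a deep neural network
to screen uploaded models). The local_iters knob is the per-round local
iteration count whose optimum the convergence bound characterizes.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, n_unreliable = 5, 10, 3
w_true = rng.normal(size=d)

# Each client holds a private linear-regression dataset.
data = []
for _ in range(n_clients):
    X = rng.normal(size=(100, d))
    data.append((X, X @ w_true + 0.1 * rng.normal(size=100)))

def local_update(w, X, y, lr=0.05, local_iters=5):
    """Run local_iters gradient-descent steps on one client's data."""
    for _ in range(local_iters):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

w = np.zeros(d)
for _ in range(20):  # communication rounds
    updates = [local_update(w, X, y) for X, y in data]
    for i in range(n_unreliable):
        updates[i] = rng.normal(scale=5.0, size=d)  # garbage uploads
    # Stand-in defense: keep the uploads closest to the coordinate-wise median.
    med = np.median(updates, axis=0)
    dists = [np.linalg.norm(u - med) for u in updates]
    keep = np.argsort(dists)[: n_clients - n_unreliable]
    w = np.mean([updates[i] for i in keep], axis=0)

print("distance to w_true:", np.linalg.norm(w - w_true))
```

Raising local_iters under a fixed total budget of rounds times iterations
reproduces the tension the bound formalizes: more local progress per
round, but fewer aggregations over which unreliable uploads get filtered.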
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing shifts data analysis from the cloud to the edge, yet existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Silver Linings in the Shadows: Harnessing Membership Inference for Machine Unlearning [7.557226714828334]
We present a novel unlearning mechanism designed to remove the impact of specific data samples from a neural network.
To achieve this, we craft a loss function tailored to eliminate privacy-sensitive information from the weights and activation values of the target model (a toy version of such an objective is sketched below).
Our results showcase the superior performance of our approach in terms of unlearning efficacy and latency, as well as fidelity on the primary task.
arXiv Detail & Related papers (2024-07-01T00:20:26Z)
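The summary does not give the loss itself, so the following is only a
plausible shape for such an objective, not the paper's: stay faithful to
retained data while driving outputs on the forget set toward an
uninformative uniform distribution. The name unlearning_loss and the
weight alpha are our inventions.

```python
import torch
import torch.nn.functional as F

def unlearning_loss(model, retain_batch, forget_batch, alpha=1.0):
    """Hypothetical unlearning objective: preserve the primary task on
    retained data, erase signal on the forget set."""
    xr, yr = retain_batch
    xf, _ = forget_batch
    retain_loss = F.cross_entropy(model(xr), yr)
    logits_f = model(xf)
    uniform = torch.full_like(logits_f, 1.0 / logits_f.size(1))
    forget_loss = F.kl_div(F.log_softmax(logits_f, dim=1), uniform,
                           reduction="batchmean")
    return retain_loss + alpha * forget_loss

model = torch.nn.Linear(10, 3)
retain = (torch.randn(8, 10), torch.randint(0, 3, (8,)))
forget = (torch.randn(4, 10), torch.randint(0, 3, (4,)))
unlearning_loss(model, retain, forget).backward()
```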
- Fed-Credit: Robust Federated Learning with Credibility Management [18.349127735378048]
Federated Learning (FL) is an emerging machine learning approach enabling model training on decentralized devices or data sources.
We propose a robust FL approach, called Fed-Credit, based on a credibility management scheme (one plausible weighting rule is sketched below).
The results exhibit superior accuracy and resilience against adversarial attacks, all while maintaining comparatively low computational complexity.
arXiv Detail & Related papers (2024-05-20T03:35:13Z)
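The credibility rule is not spelled out above, so this sketch shows one
plausible instantiation rather than Fed-Credit itself: uploads far from
the consensus erode a client's credit, and aggregation weights are
credit-normalized. The exponential penalty and update rate are
illustrative.

```python
import numpy as np

def credit_weighted_aggregate(updates, credit, rate=0.1):
    """Aggregate client updates with credibility-managed weights."""
    updates = np.stack(updates)
    consensus = np.median(updates, axis=0)
    dist = np.linalg.norm(updates - consensus, axis=1)
    credit = (1 - rate) * credit + rate * np.exp(-dist)  # penalize outliers
    weights = credit / credit.sum()
    return weights @ updates, credit

rng = np.random.default_rng(1)
updates = [rng.normal(size=4) for _ in range(5)]
updates[0] += 10.0                     # one misbehaving client
new_model, credit = credit_weighted_aggregate(updates, np.ones(5))
```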
- Exploring the Vulnerabilities of Machine Learning and Quantum Machine Learning to Adversarial Attacks using a Malware Dataset: A Comparative Analysis [0.0]
Machine learning (ML) and quantum machine learning (QML) have shown remarkable potential in tackling complex problems.
However, their susceptibility to adversarial attacks raises concerns when deploying these systems in security-sensitive applications.
We present a comparative analysis of the vulnerability of ML and quantum neural network (QNN) models to adversarial attacks using a malware dataset (a generic evaluation loop is sketched below).
arXiv Detail & Related papers (2023-05-31T06:31:42Z)
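The summary does not name the attack suite, so the sketch below uses
one-step FGSM as a common yardstick and compares the robust accuracy of
two untrained stand-in classifiers; an actual QNN would need a
quantum-circuit simulator, for which the small Tanh network is only a
placeholder.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: a single gradient-sign perturbation."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def robust_accuracy(model, x, y, eps=0.1):
    x_adv = fgsm(model, x, y, eps)
    return (model(x_adv).argmax(dim=1) == y).float().mean().item()

# Synthetic malware-style feature vectors; both models are placeholders.
x, y = torch.randn(64, 16), torch.randint(0, 2, (64,))
ml = torch.nn.Linear(16, 2)
qnn_stub = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.Tanh(),
                               torch.nn.Linear(8, 2))
print("ML:", robust_accuracy(ml, x, y),
      "QNN-stub:", robust_accuracy(qnn_stub, x, y))
```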
- A Generative Framework for Low-Cost Result Validation of Machine Learning-as-a-Service Inference [4.478182379059458]
Fides is a novel framework for real-time integrity validation of ML-as-a-Service (MLaaS) inference.
Fides features a client-side attack detection model that uses statistical analysis and divergence measurements to identify, with high likelihood, whether the service model is under attack (a minimal divergence check is sketched below).
We devised a generative adversarial network framework for training the attack detection and re-classification models.
arXiv Detail & Related papers (2023-03-31T19:17:30Z)
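As a minimal illustration of the statistical checks described above, the
snippet flags a service response whose class-probability vector diverges
sharply from what a small local reference model expects. The KL measure
and threshold are our illustrative choices, not the paper's.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete probability vectors."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def looks_attacked(service_probs, reference_probs, threshold=0.5):
    """Flag a response that diverges from the local reference belief."""
    return kl(reference_probs, service_probs) > threshold

ref = np.array([0.70, 0.20, 0.10])   # local reference model's belief
ok  = np.array([0.65, 0.25, 0.10])   # plausible service output
bad = np.array([0.05, 0.05, 0.90])   # suspicious, possibly tampered
print(looks_attacked(ok, ref), looks_attacked(bad, ref))
```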
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a defense based on trigger reverse engineering and show that our method improves robustness with a provable guarantee (a classic trigger-synthesis loop is sketched below).
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
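FLIP's full procedure, including how benign clients are hardened, is more
involved than the summary conveys, but the core trigger
reverse-engineering step can be sketched in the classic Neural-Cleanse
style: optimize a small additive pattern that flips clean inputs to a
suspected target label; an unusually small-norm pattern is evidence of a
backdoor. Everything below is a generic sketch, not FLIP's code.

```python
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, x_clean, target, steps=200, lam=0.01):
    """Search for a low-norm additive pattern forcing the target label."""
    pattern = torch.zeros_like(x_clean[0], requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=0.05)
    y_t = torch.full((len(x_clean),), target, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        loss = (F.cross_entropy(model(x_clean + pattern), y_t)
                + lam * pattern.abs().sum())  # prefer small triggers
        loss.backward()
        opt.step()
    return pattern.detach()

model = torch.nn.Linear(16, 3)           # stand-in suspect model
trigger = reverse_engineer_trigger(model, torch.randn(32, 16), target=1)
```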
- A Framework for Understanding Model Extraction Attack and Defense [48.421636548746704]
We study tradeoffs between model utility from a benign user's view and privacy from an adversary's view.
We develop new metrics to quantify such tradeoffs, analyze their theoretical properties, and develop an optimization problem to understand the optimal adversarial attack and defense strategies.
arXiv Detail & Related papers (2022-06-23T05:24:52Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients (a plaintext stand-in for its update checks is sketched below).
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
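RoFL's robustness comes from enforcing well-formedness constraints, such
as norm bounds, on client updates that remain hidden by secure
aggregation; in the real system the check is established
cryptographically, so the sketch below is only the plaintext analogue of
that constraint.

```python
import numpy as np

def accept_update(update, l2_bound=1.0):
    """Plaintext stand-in for a norm-bound check on a client update."""
    return np.linalg.norm(update) <= l2_bound

rng = np.random.default_rng(2)
updates = [rng.normal(scale=0.1, size=8) for _ in range(4)]
updates.append(rng.normal(scale=3.0, size=8))   # oversized, likely malicious
accepted = [u for u in updates if accept_update(u)]
global_update = np.mean(accepted, axis=0)       # aggregate surviving updates
```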
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before the next round of local training (a heavily simplified round is sketched below).
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
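The round structure described above maps naturally onto code. The sketch
compresses it heavily: models are toy parameter vectors and a random
lottery stands in for the block-generation competition.

```python
import numpy as np

def blade_fl_round(local_models, rng):
    """One simplified BLADE-FL round: broadcast, mine, aggregate."""
    winner = int(rng.integers(len(local_models)))  # stand-in for mining
    block = {"miner": winner, "models": list(local_models)}
    return np.mean(block["models"], axis=0), block

rng = np.random.default_rng(0)
local_models = [rng.normal(size=6) for _ in range(5)]  # toy model vectors
global_model, block = blade_fl_round(local_models, rng)
```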
- Adversarial Robustness through Bias Variance Decomposition: A New Perspective for Federated Learning [41.525434598682764]
Federated learning learns a neural network model by aggregating the knowledge from a group of distributed clients under the privacy-preserving constraint.
We show that this paradigm might inherit the adversarial vulnerability of the centralized neural network.
We propose an adversarially robust federated learning framework, named Fed_BVA, with improved server and client update mechanisms (the underlying decomposition is sketched below).
arXiv Detail & Related papers (2020-09-18T18:58:25Z)
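The summary invokes a bias-variance view of aggregated client models; the
sketch below computes just that decomposition for squared error,
E_i[(f_i - y)^2] = (mean_i f_i - y)^2 + Var_i(f_i). How Fed_BVA turns
these terms into improved server and client updates is beyond this
sketch.

```python
import numpy as np

def bias_variance(client_preds, y_true):
    """Split averaged squared error into bias^2 and variance terms."""
    preds = np.stack(client_preds)        # (n_clients, n_samples)
    bias_sq = np.mean((preds.mean(axis=0) - y_true) ** 2)
    variance = np.mean(preds.var(axis=0))
    return bias_sq, variance

rng = np.random.default_rng(3)
y = rng.normal(size=50)
client_preds = [y + 0.1 + rng.normal(scale=0.3, size=50) for _ in range(8)]
print(bias_variance(client_preds, y))
```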