Blockchain-based Optimized Client Selection and Privacy Preserved Framework for Federated Learning
- URL: http://arxiv.org/abs/2308.04442v1
- Date: Tue, 25 Jul 2023 01:35:51 GMT
- Title: Blockchain-based Optimized Client Selection and Privacy Preserved Framework for Federated Learning
- Authors: Attia Qammar, Abdenacer Naouri, Jianguo Ding, Huansheng Ning
- Abstract summary: Federated learning is a distributed mechanism that trains large-scale neural network models with the participation of multiple clients.
With this feature, federated learning is considered a secure solution for data privacy issues.
We propose a blockchain-based optimized client selection and privacy-preserved framework.
- Score: 2.4201849657206496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a distributed mechanism that trains large-scale
neural network models with the participation of multiple clients while the data
remains on their devices; only the local model updates are shared. With this
feature, federated learning is considered a secure solution for data privacy
issues. However, the typical FL structure relies on a client-server
architecture, which introduces a single point of failure (SPoF), and the random
selection of clients for model training compromises model accuracy.
Furthermore, adversaries attempt inference attacks, i.e., attacks on privacy
such as gradient leakage attacks. In this context, we propose a
blockchain-based optimized client selection and privacy-preserved framework. We
design three kinds of smart contracts: 1) registration of clients, 2) forward
bidding to select optimized clients for FL model training, and 3) payment
settlement and reward smart contracts. Moreover, fully homomorphic encryption
with the Cheon-Kim-Kim-Song (CKKS) scheme is applied to the local model updates
before they are transmitted to the server. Finally, we evaluate the proposed
method on a benchmark dataset and compare it with state-of-the-art studies.
Consequently, we achieve higher accuracy and a privacy-preserved FL framework
with a decentralized nature.
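The two mechanisms named in the abstract can be illustrated with short sketches. First, the forward-bidding client selection: the abstract does not give the scoring rule, so ranking bids by claimed accuracy per unit of asked reward is purely an illustrative assumption, as are the hypothetical `Bid` fields.

```python
# Hypothetical sketch of bid-based client selection; the actual smart-contract
# logic and scoring rule are not specified in the abstract.
from dataclasses import dataclass

@dataclass
class Bid:
    client_id: str
    claimed_accuracy: float  # quality the client advertises for its local model
    price: float             # reward the client asks for participating

def select_clients(bids: list[Bid], k: int) -> list[Bid]:
    """Pick the k most cost-effective bids (assumed criterion)."""
    return sorted(bids, key=lambda b: b.claimed_accuracy / b.price,
                  reverse=True)[:k]

bids = [Bid("c1", 0.91, 5.0), Bid("c2", 0.88, 2.0), Bid("c3", 0.95, 9.0)]
print([b.client_id for b in select_clients(bids, k=2)])  # ['c2', 'c1']
```

Second, CKKS encryption of local model updates. The sketch below uses the open-source TenSEAL library as one possible CKKS implementation (the paper may use a different one); the context parameters are common illustrative defaults, not values from the paper.

```python
# Minimal CKKS sketch with TenSEAL (pip install tenseal): clients encrypt
# flattened updates, the server aggregates ciphertexts without decrypting.
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40

update_a = ts.ckks_vector(context, [0.10, -0.25, 0.03])  # client A's update
update_b = ts.ckks_vector(context, [0.08, -0.20, 0.01])  # client B's update

encrypted_sum = update_a + update_b  # homomorphic addition on ciphertexts

# Only a secret-key holder can decrypt; CKKS is approximate, so expect tiny
# numerical noise around [0.18, -0.45, 0.04].
print([round(v, 3) for v in encrypted_sum.decrypt()])
```

Because addition works directly on ciphertexts, the aggregator never sees an individual client's plaintext update.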
Related papers
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
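SecAgg's masking idea is simple enough to sketch: each pair of clients derives a shared random mask that one adds and the other subtracts, so all pairwise masks cancel in the server's sum. The version below is a deliberate simplification (real SecAgg adds per-client self-masks and secret sharing to tolerate dropouts), with numpy's seeded generator standing in for the protocol's PRG.

```python
# Simplified SecAgg-style pairwise masking (no dropout handling, no
# self-masks); a seeded numpy generator stands in for the protocol's PRG.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 6
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Pairwise seeds, assumed to have been agreed upon (e.g., via key agreement).
seeds = {(i, j): int(rng.integers(1 << 31))
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def pairwise_mask(i: int) -> np.ndarray:
    """Combine masks shared with every peer; opposite signs cancel in sums."""
    mask = np.zeros(dim)
    for j in range(n_clients):
        if j == i:
            continue
        pair = (min(i, j), max(i, j))
        noise = np.random.default_rng(seeds[pair]).normal(size=dim)
        mask += noise if i < j else -noise
    return mask

masked = [updates[i] + pairwise_mask(i) for i in range(n_clients)]

# The server sees only masked vectors, yet their sum equals the true sum.
assert np.allclose(sum(masked), sum(updates))
```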
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- Addressing Membership Inference Attack in Federated Learning with Model Compression [8.842172558292027]
Federated Learning (FL) has been proposed as a privacy-preserving solution for machine learning.
Recent works have reported that FL can leak private client data through membership inference attacks.
We show that the effectiveness of these attacks correlates negatively with the size of the clients' datasets and with model complexity.
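As an illustration of what membership inference exploits (not necessarily the specific attack evaluated in this paper), a classic baseline simply thresholds the model's confidence on a sample, since models tend to be more confident on training members than on unseen data. The synthetic confidence distributions and threshold below are assumptions for demonstration only.

```python
# Toy confidence-threshold membership inference baseline on synthetic scores.
import numpy as np

rng = np.random.default_rng(1)
member_conf = rng.beta(8, 2, size=1000)     # members: confidence skewed high
nonmember_conf = rng.beta(4, 3, size=1000)  # non-members: lower on average

threshold = 0.8  # attacker guesses "member" above this confidence

def guess_member(conf: np.ndarray) -> np.ndarray:
    return conf > threshold

tpr = guess_member(member_conf).mean()     # attack recall on true members
fpr = guess_member(nonmember_conf).mean()  # false alarms on non-members
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")     # a large gap indicates leakage
```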
arXiv Detail & Related papers (2023-11-29T15:54:15Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients.
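CGI itself runs from the client side and is more elaborate, but the gradient-matching loop that gradient inversion attacks build on (in the style of Deep Leakage from Gradients) can be sketched directly: optimize dummy data until its gradients match the observed ones. The toy model and tensor shapes below are assumptions.

```python
# DLG-style gradient inversion sketch on a toy linear model (PyTorch).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(8, 3)   # toy shared model
x_true = torch.randn(1, 8)      # victim's private sample
y_true = torch.tensor([1])

# Gradients the attacker observes (e.g., from a shared update).
loss = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# Optimize dummy data and a soft dummy label to reproduce those gradients.
x_dummy = torch.randn(1, 8, requires_grad=True)
y_dummy = torch.randn(1, 3, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    dummy_loss = F.cross_entropy(model(x_dummy), F.softmax(y_dummy, dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(10):
    opt.step(closure)

# A small value indicates the private sample was reconstructed.
print((x_dummy - x_true).abs().mean().item())
```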
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
FL suffers from the cross-client generative adversarial network (GAN)-based attack (C-GANs).
We propose the Fed-EDKD technique, which improves current popular FL schemes to resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- A New Implementation of Federated Learning for Privacy and Security Enhancement [27.612480082254486]
Federated learning (FL) has emerged as a new machine learning setting.
No local data needs to be shared, and privacy can be well protected.
We propose a model-update-based federated averaging algorithm to defend against Byzantine attacks.
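The summary does not spell out the paper's aggregation rule, so the sketch below substitutes a coordinate-wise median, a standard Byzantine-robust baseline, purely to illustrate why plain federated averaging fails when some clients send poisoned updates.

```python
# Coordinate-wise median as an illustrative Byzantine-robust aggregator.
import numpy as np

rng = np.random.default_rng(2)
honest = [rng.normal(0.5, 0.05, size=4) for _ in range(8)]  # similar updates
byzantine = [np.full(4, 100.0) for _ in range(2)]           # poisoned updates
updates = np.stack(honest + byzantine)

mean_agg = updates.mean(axis=0)          # plain FedAvg: dragged toward 100
median_agg = np.median(updates, axis=0)  # robust: ignores the extremes

print(mean_agg.round(2))    # far from the honest value in every coordinate
print(median_agg.round(2))  # stays close to ~0.5
```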
arXiv Detail & Related papers (2022-08-03T03:13:19Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy [14.678119872268198]
Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging model updates.
Existing FL protocol designs have been shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness.
We develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography.
arXiv Detail & Related papers (2021-10-22T04:08:42Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
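The clip-then-noise step behind client-level DP FedAvg is concrete enough to sketch: each client update is scaled down to an L2 norm bound C (the source of the clipping bias discussed above), and Gaussian noise calibrated to C is added to the average. The bound and noise multiplier below are illustrative assumptions, not values from the paper.

```python
# Sketch of clipped, differentially private federated averaging.
import numpy as np

rng = np.random.default_rng(3)
updates = [rng.normal(size=10) * rng.uniform(0.5, 5.0) for _ in range(20)]

C = 1.0      # clipping bound on each client's update norm
sigma = 0.8  # noise multiplier, set by the target privacy budget

def clip(u: np.ndarray, c: float) -> np.ndarray:
    """Scale u so its L2 norm is at most c."""
    return u * min(1.0, c / np.linalg.norm(u))

clipped = [clip(u, C) for u in updates]
avg = np.mean(clipped, axis=0)

# Gaussian mechanism: the mean's sensitivity to one client is C / n.
noisy_avg = avg + rng.normal(scale=sigma * C / len(updates), size=avg.shape)
```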
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
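The round structure described above can be mimicked in a few lines. Block generation is abstracted to picking one winner at random, which is a strong simplification of the actual mining competition and consensus; models are flattened vectors for brevity.

```python
# Toy one-round BLADE-FL simulation: broadcast, block generation (abstracted
# to a random winner), then aggregation of the block's models by every client.
import numpy as np

rng = np.random.default_rng(4)
n_clients, dim = 5, 8
local_models = [rng.normal(size=dim) for _ in range(n_clients)]

# 1) Every client broadcasts its trained model to all peers.
received = {i: list(local_models) for i in range(n_clients)}

# 2) Clients compete to generate a block; the winner packages the models.
winner = int(rng.integers(n_clients))
block = received[winner]

# 3) Each client aggregates the block's models before its next local round.
aggregated = np.mean(block, axis=0)
local_models = [aggregated.copy() for _ in range(n_clients)]
```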
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL has a good performance in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
It gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noises to conceal their cheating behaviors.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)