Robust softmax aggregation on blockchain based federated learning with convergence guarantee
- URL: http://arxiv.org/abs/2311.07027v2
- Date: Fri, 29 Dec 2023 02:19:34 GMT
- Title: Robust softmax aggregation on blockchain based federated learning with convergence guarantee
- Authors: Huiyu Wu, Diego Klabjan
- Abstract summary: We propose a softmax aggregation blockchain based federated learning framework.
First, we propose a new blockchain based federated learning architecture that utilizes the well-tested proof-of-stake consensus mechanism.
Second, to ensure the robustness of the aggregation process, we design a novel softmax aggregation method.
- Score: 11.955062839855334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blockchain based federated learning is a distributed learning scheme that allows model training without participants sharing their local data sets, where the blockchain components eliminate the need for a trusted central server compared to traditional Federated Learning algorithms. In this paper we propose a softmax aggregation blockchain based federated learning framework. First, we propose a new blockchain based federated learning architecture that utilizes the well-tested proof-of-stake consensus mechanism on an existing blockchain network to select validators and miners to aggregate the participants' updates and compute the blocks. Second, to ensure the robustness of the aggregation process, we design a novel softmax aggregation method based on approximated population loss values that relies on our specific blockchain architecture. Additionally, we show our softmax aggregation technique converges to the global minimum in the convex setting with non-restricting assumptions. Our comprehensive experiments show that our framework outperforms existing robust aggregation algorithms in various settings by large margins.
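The robust aggregation idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function name, the temperature parameter, and the toy loss values are assumptions. The key point is that updates whose approximated population loss is high receive near-zero softmax weight, which suppresses poisoned or outlier contributions.

```python
import numpy as np

def softmax_aggregate(updates, losses, temperature=1.0):
    """Aggregate participant updates, down-weighting those with high
    approximated population loss via a softmax over negative losses.

    updates: list of 1-D numpy arrays (one model update per participant)
    losses:  approximated population loss per participant (lower is better)
    """
    losses = np.asarray(losses, dtype=float)
    # Softmax over negative losses: low-loss (trustworthy) updates get
    # larger weights; high-loss outliers are suppressed.
    logits = -losses / temperature
    logits -= logits.max()            # shift for numerical stability
    weights = np.exp(logits)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Example: the third update is an outlier with high loss.
updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([10.0, -10.0])]
losses = [0.2, 0.25, 5.0]
agg = softmax_aggregate(updates, losses, temperature=0.1)
```

With temperature 0.1 the outlier's weight is negligible, so the aggregate stays close to the two honest updates.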
Related papers
- BlockFound: Customized blockchain foundation model for anomaly detection [47.04595143348698]
BlockFound is a customized foundation model for anomaly blockchain transaction detection.
We introduce a series of customized designs to model the unique data structure of blockchain transactions.
BlockFound is the only method that successfully detects anomalous transactions on Solana with high accuracy.
arXiv Detail & Related papers (2024-10-05T05:11:34Z)
- Voltran: Unlocking Trust and Confidentiality in Decentralized Federated Learning Aggregation [12.446757264387564]
We present Voltran, an innovative hybrid platform designed to achieve trust, confidentiality, and robustness for Federated Learning (FL).
We offload the FL aggregation into TEE to provide an isolated, trusted and customizable off-chain execution.
We provide strong scalability on multiple FL scenarios by introducing a multi-SGX parallel execution strategy.
arXiv Detail & Related papers (2024-08-13T13:33:35Z)
- Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- A Blockchain-empowered Multi-Aggregator Federated Learning Architecture in Edge Computing with Deep Reinforcement Learning Optimization [8.082460100928358]
Federated learning (FL) is emerging as a sought-after distributed machine learning architecture.
With advancements in network infrastructure, FL has been seamlessly integrated into edge computing.
While blockchain technology promises to bolster security, practical deployment on resource-constrained edge devices remains a challenge.
arXiv Detail & Related papers (2023-10-14T20:47:30Z)
- Multi-dimensional Data Quick Query for Blockchain-based Federated Learning [6.499393722730449]
We propose a novel data structure, named MerkleRB-Tree, to improve the query efficiency within each block.
In detail, we leverage Minimal Bounding Rectangles (MBRs) and Bloom filters for the query process of multi-dimensional continuous-valued attributes and discrete-valued attributes, respectively.
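The two pruning primitives mentioned here can be sketched as follows: a minimal Bloom filter for discrete-valued attributes and an MBR containment test for continuous-valued ones. The class and function names, filter size, and hash scheme are illustrative assumptions, not the paper's MerkleRB-Tree implementation.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: set membership with no false negatives
    and a small false-positive rate (illustrative sketch)."""
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, bytearray(size)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item):
        return all(self.bits[p] for p in self._positions(item))

def mbr_contains(mbr, point):
    """Check whether a 2-D point lies inside a minimal bounding rectangle
    ((min_x, min_y), (max_x, max_y)) -- used to prune blocks whose
    continuous-valued attributes cannot match the query."""
    (lo_x, lo_y), (hi_x, hi_y) = mbr
    x, y = point
    return lo_x <= x <= hi_x and lo_y <= y <= hi_y

bf = BloomFilter()
bf.add("device-42")
hit = bf.might_contain("device-42")          # True: no false negatives
in_box = mbr_contains(((0, 0), (10, 10)), (5, 5))
```

A query descends the tree only into blocks whose Bloom filter and MBR both admit the queried attribute values; all other blocks are skipped.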
arXiv Detail & Related papers (2023-09-27T01:35:11Z)
- FedChain: An Efficient and Secure Consensus Protocol based on Proof of Useful Federated Learning for Blockchain [0.3480973072524161]
The core of the blockchain is the consensus protocol, which establishes consensus among all the participants.
We propose an efficient and secure consensus protocol based on proof of useful federated learning for blockchain (called FedChain).
Our approach has been tested and validated through extensive experiments, demonstrating its performance.
arXiv Detail & Related papers (2023-08-29T08:04:07Z)
- Blockchain-based Federated Learning with Secure Aggregation in Trusted Execution Environment for Internet-of-Things [20.797220195954065]
This paper proposes a blockchain-based Federated Learning (FL) framework with an Intel Software Guard Extensions (SGX)-based Trusted Execution Environment (TEE) to securely aggregate local models in the Industrial Internet-of-Things (IIoT).
In FL, local models can be tampered with by attackers, and a global model generated from tampered local models can be erroneous. Therefore, the proposed framework leverages a blockchain network for secure model aggregation: nodes can verify the authenticity of the aggregated model, run a blockchain consensus mechanism to ensure its integrity, and add it to the distributed ledger for tamper-proof storage.
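The verification step described here can be sketched with a deterministic digest that is recorded on the ledger and re-derived by every node. This is a minimal illustration under assumed names; the paper's framework uses SGX attestation and a full consensus protocol, not just a hash comparison.

```python
import hashlib
import json

def model_digest(model_params):
    """Deterministic digest of model parameters (dict of float lists),
    suitable for recording on a distributed ledger."""
    payload = json.dumps(model_params, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# The aggregator records the digest on-chain; any node re-derives it from
# the aggregated model it received and compares against the ledger entry.
published = model_digest({"layer1": [0.1, -0.2], "layer2": [0.3]})
received  = model_digest({"layer1": [0.1, -0.2], "layer2": [0.3]})
tampered  = model_digest({"layer1": [0.1, -0.2], "layer2": [0.31]})
```

An honest copy reproduces the published digest exactly, while any modification to the parameters yields a different digest and is rejected.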
arXiv Detail & Related papers (2023-04-25T15:00:39Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
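The correspondence to FedAvg can be made concrete with the server-side aggregation rule: under the hard-EM view with Gaussian priors, the server's update of the prior mean reduces to a data-size-weighted average of the client parameters. The sketch below is a minimal illustration of that aggregation step, with assumed function and variable names.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Server-side FedAvg aggregation: the data-size-weighted average of
    client-specific parameters. In the hard-EM view, this is the server's
    M-step update of the Gaussian prior's mean."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()
    return sum(w * np.asarray(p) for w, p in zip(weights, client_params))

# Two clients holding 1 and 3 samples, so weights are 0.25 and 0.75.
theta = fedavg([[0.0, 2.0], [4.0, 2.0]], [1, 3])  # -> [3.0, 2.0]
```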
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL has a good performance in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, the integration also gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.