Reward-Based 1-bit Compressed Federated Distillation on Blockchain
- URL: http://arxiv.org/abs/2106.14265v1
- Date: Sun, 27 Jun 2021 15:51:04 GMT
- Authors: Leon Witt, Usama Zafar, KuoYeh Shen, Felix Sattler, Dan Li, Wojciech
Samek
- Abstract summary: The recent advent of various forms of Federated Knowledge Distillation (FD) paves the way for a new generation of robust and communication-efficient Federated Learning (FL).
We introduce a novel decentralized federated learning framework where heavily compressed 1-bit soft-labels are aggregated on a smart contract.
In a context where workers' contributions are now easily comparable, we modify the Peer Truth Serum for Crowdsourcing mechanism (PTSC) for FD to reward honest participation.
- Score: 14.365210947456209
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent advent of various forms of Federated Knowledge Distillation (FD)
paves the way for a new generation of robust and communication-efficient
Federated Learning (FL), where mere soft-labels are aggregated, rather than
whole gradients of Deep Neural Networks (DNN) as done in previous FL schemes.
This security-by-design approach in combination with increasingly performant
Internet of Things (IoT) and mobile devices opens up a new realm of
possibilities to utilize private data from industries as well as from
individuals as input for artificial intelligence model training. Yet in
previous FL systems, lack of trust due to the imbalance of power between
workers and a central authority, the assumption of altruistic worker
participation and the inability to correctly measure and compare contributions
of workers hinder this technology from scaling beyond small groups of already
entrusted entities towards mass adoption. This work aims to mitigate the
aforementioned issues by introducing a novel decentralized federated learning
framework where heavily compressed 1-bit soft-labels, resembling 1-hot label
predictions, are aggregated on a smart contract. In a context where workers'
contributions are now easily comparable, we modify the Peer Truth Serum for
Crowdsourcing mechanism (PTSC) for FD to reward honest participation based on
peer consistency in an incentive compatible fashion. Due to heavy reductions of
both computational complexity and storage, our framework is a fully
on-blockchain FL system that is feasible on simple smart contracts and
therefore blockchain agnostic. We experimentally test our new framework and
validate its theoretical properties.
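The mechanism described in the abstract can be sketched in a few lines: workers compress soft-labels down to one-hot predictions (roughly 1 bit per class), a smart contract tallies a majority-vote consensus, and honest participation is rewarded by agreement with a peer, scaled by class priors. The following is an illustrative Python sketch, not the authors' implementation; the function names, the majority-vote tally, and the simplified peer-consistency score are assumptions based on the abstract and on the general PTSC idea.

```python
def compress_1bit(soft_labels):
    """Reduce each soft-label probability vector to a one-hot prediction
    (roughly 1 bit per class), as uploaded by each worker."""
    out = []
    for probs in soft_labels:
        top = max(range(len(probs)), key=probs.__getitem__)
        out.append([1 if i == top else 0 for i in range(len(probs))])
    return out

def aggregate(votes):
    """Majority vote over all workers' one-hot labels, per sample --
    the kind of cheap tally a simple smart contract could perform."""
    consensus = []
    n_classes = len(votes[0][0])
    for s in range(len(votes[0])):
        tally = [sum(v[s][c] for v in votes) for c in range(n_classes)]
        top = max(range(n_classes), key=tally.__getitem__)
        consensus.append([1 if c == top else 0 for c in range(n_classes)])
    return consensus

def ptsc_reward(worker, peer, prior):
    """Simplified PTSC-style peer-consistency score (an assumption, not the
    paper's exact payment rule): reward agreement with a randomly chosen
    peer, scaled inversely by the prior frequency of the agreed class, so
    that blindly reporting the most common class is not profitable."""
    score = 0.0
    for w, p in zip(worker, peer):
        cls = w.index(1)
        if p.index(1) == cls:
            score += 1.0 / prior[cls]
    return score / len(worker)
```

For example, `compress_1bit([[0.1, 0.9], [0.8, 0.2]])` yields `[[0, 1], [1, 0]]`; aggregating three such vote matrices picks the per-sample majority class. Because only one-hot vectors and per-class counters ever touch the chain, storage and computation stay small enough for simple, blockchain-agnostic contracts, which is the feasibility argument the abstract makes.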
Related papers
- Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- Defending Against Poisoning Attacks in Federated Learning with Blockchain [12.840821573271999]
We propose a secure and reliable federated learning system based on blockchain and distributed ledger technology.
Our system incorporates a peer-to-peer voting mechanism and a reward-and-slash mechanism, which are powered by on-chain smart contracts, to detect and deter malicious behaviors.
arXiv Detail & Related papers (2023-07-02T11:23:33Z)
- Federated Nearest Neighbor Machine Translation [66.8765098651988]
In this paper, we propose a novel federated nearest neighbor (FedNN) machine translation framework.
FedNN leverages one-round memorization-based interaction to share knowledge across different clients.
Experiments show that FedNN significantly reduces computational and communication costs compared with FedAvg.
arXiv Detail & Related papers (2023-02-23T18:04:07Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and biasness of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate superiority of FedVRA over the existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- APPFLChain: A Privacy Protection Distributed Artificial-Intelligence Architecture Based on Federated Learning and Consortium Blockchain [6.054775780656853]
We propose a new system architecture called APPFLChain.
It is an integrated architecture of a Hyperledger Fabric-based blockchain and a federated-learning paradigm.
Our new system can maintain a high degree of security and privacy, as users do not need to share sensitive personal information with the server.
arXiv Detail & Related papers (2022-06-26T05:30:07Z)
- Fairness, Integrity, and Privacy in a Scalable Blockchain-based Federated Learning System [0.0]
Federated machine learning (FL) allows models to be trained collectively on sensitive data, as only the clients' models, and not their training data, need to be shared.
Despite the attention that research on FL has drawn, the concept still lacks broad adoption in practice.
This paper suggests a FL system that incorporates blockchain technology, local differential privacy, and zero-knowledge proofs.
arXiv Detail & Related papers (2021-11-11T16:08:44Z)
- Federated Learning using Smart Contracts on Blockchains, based on Reward Driven Approach [0.0]
We show how smart contract based blockchain can be a very natural communication channel for federated learning.
We show how intuitive a measure of each agents' contribution can be built and integrated with the life cycle of the training and reward process.
arXiv Detail & Related papers (2021-07-19T12:51:22Z)
- Faithful Edge Federated Learning: Scalability and Privacy [4.8534377897519105]
Federated learning enables machine learning algorithms to be trained over a network of decentralized edge devices without requiring the exchange of local datasets.
We analyze how the key feature of federated learning, unbalanced and non-i.i.d. data, affects agents' incentives to voluntarily participate.
We design two faithful federated learning mechanisms which satisfy economic properties, scalability, and privacy.
arXiv Detail & Related papers (2021-06-30T08:46:40Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL has a good performance in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
It gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noises to conceal their cheating behaviors.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.