Blockchain-based Trustworthy Federated Learning Architecture
- URL: http://arxiv.org/abs/2108.06912v1
- Date: Mon, 16 Aug 2021 06:13:58 GMT
- Title: Blockchain-based Trustworthy Federated Learning Architecture
- Authors: Sin Kit Lo, Yue Liu, Qinghua Lu, Chen Wang, Xiwei Xu, Hye-Young Paik,
Liming Zhu
- Abstract summary: We present a blockchain-based trustworthy federated learning architecture.
We first design a smart contract-based data-model provenance registry to enable accountability.
We also propose a weighted fair data sampler algorithm to enhance fairness in training data.
- Score: 16.062545221270337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is an emerging privacy-preserving AI technique where
clients (i.e., organisations or devices) train models locally and formulate a
global model based on the local model updates without transferring local data
externally. However, federated learning systems struggle to achieve
trustworthiness and embody responsible AI principles. In particular, federated
learning systems face accountability and fairness challenges due to
multi-stakeholder involvement and heterogeneity in client data distribution. To
enhance the accountability and fairness of federated learning systems, we
present a blockchain-based trustworthy federated learning architecture. We
first design a smart contract-based data-model provenance registry to enable
accountability. Additionally, we propose a weighted fair data sampler algorithm
to enhance fairness in training data. We evaluate the proposed approach using a
COVID-19 X-ray detection use case. The evaluation results show that the
approach is feasible for enabling accountability and improving fairness. The
proposed algorithm achieves better performance than the default federated
learning setting in terms of the model's generalisation and accuracy.
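The abstract does not spell out the weighted fair data sampler, but the general idea of drawing training samples with weights inversely proportional to class frequency can be sketched as follows (the function names and the inverse-frequency weighting scheme are illustrative assumptions, not the paper's exact algorithm):

```python
import random
from collections import Counter

def fair_sample_weights(labels):
    """Assign each example a weight inversely proportional to its class
    frequency, so every class receives the same total sampling mass."""
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

def weighted_fair_sample(labels, k, seed=0):
    """Draw k example indices with replacement, biased toward rare classes."""
    rng = random.Random(seed)
    weights = fair_sample_weights(labels)
    return rng.choices(range(len(labels)), weights=weights, k=k)
```

With, say, two COVID-positive and eight normal X-rays, both classes get equal total weight, so the under-represented class is drawn far more often than under uniform sampling.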
Related papers
- Fed-QSSL: A Framework for Personalized Federated Learning under Bitwidth and Data Heterogeneity [14.313847382199059]
Federated quantization-based self-supervised learning scheme (Fed-QSSL) designed to address heterogeneity in FL systems.
Fed-QSSL deploys de-quantization, weighted aggregation and re-quantization, ultimately creating models personalized to both data distribution and specific infrastructure of each client's device.
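The de-quantize / weighted-aggregate / re-quantize pipeline described above can be sketched roughly as follows (uniform quantization and the function names are assumptions for illustration; Fed-QSSL's actual scheme may differ):

```python
def quantize(w, bits):
    """Uniformly quantize a weight vector to 2**bits levels;
    return integer codes plus the (offset, scale) needed to invert."""
    levels = (1 << bits) - 1
    lo, hi = min(w), max(w)
    scale = (hi - lo) / levels if hi > lo else 1.0
    return [round((x - lo) / scale) for x in w], lo, scale

def dequantize(codes, lo, scale):
    """Map integer codes back to approximate float weights."""
    return [lo + c * scale for c in codes]

def aggregate_requantize(updates, sizes, bits_per_client):
    """De-quantize each client's update, average weighted by dataset size,
    then re-quantize the result to each client's own bitwidth."""
    total = float(sum(sizes))
    deq = [dequantize(q, lo, s) for (q, lo, s) in updates]
    avg = [sum(n * v[i] for n, v in zip(sizes, deq)) / total
           for i in range(len(deq[0]))]
    return [quantize(avg, b) for b in bits_per_client]
```

Re-quantizing per client is what lets each device keep its own bitwidth, which is the "specific infrastructure" personalization the summary mentions.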
arXiv Detail & Related papers (2023-12-20T19:11:19Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- FAIR-FATE: Fair Federated Learning with Momentum [0.41998444721319217]
We propose a novel FAIR FederATEd Learning algorithm that aims to achieve group fairness while maintaining high utility.
To the best of our knowledge, this is the first approach in machine learning that aims to achieve fairness using a fair Momentum estimate.
Experimental results on real-world datasets demonstrate that FAIR-FATE outperforms state-of-the-art fair Federated Learning algorithms.
arXiv Detail & Related papers (2022-09-27T20:33:38Z)
- Federated Self-supervised Learning for Heterogeneous Clients [20.33482170846688]
We propose a unified and systematic framework, Heterogeneous Self-supervised Federated Learning (Hetero-SSFL), for enabling self-supervised learning with federation on heterogeneous clients.
The proposed framework allows representation learning across all clients without imposing architectural constraints or requiring the presence of labeled data.
We empirically demonstrate that our proposed approach outperforms state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2022-05-25T05:07:44Z)
- Improving Fairness via Federated Learning [14.231231094281362]
We propose a new theoretical framework, with which we analyze the value of federated learning in improving fairness.
We then theoretically and empirically show that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data.
To resolve this, we propose FedFB, a private fair learning algorithm on decentralized data with a modified FedAvg protocol.
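For context, the vanilla FedAvg aggregation step that FedFB modifies is the standard dataset-size-weighted average of client parameters; a minimal sketch (plain Python lists stand in for model parameter vectors):

```python
def fedavg(client_params, client_sizes):
    """Vanilla FedAvg aggregation: average the clients' parameter vectors,
    weighting each client by the size of its local dataset."""
    total = float(sum(client_sizes))
    dim = len(client_params[0])
    return [sum(n * p[i] for n, p in zip(client_sizes, client_params)) / total
            for i in range(dim)]
```

FedFB's contribution lies in changing how these per-client contributions are weighted and constrained so that the aggregated model satisfies fairness criteria, which plain size-weighting does not address.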
arXiv Detail & Related papers (2021-10-29T05:25:44Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
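Secure aggregation, which RoFL builds on, is commonly implemented with pairwise additive masks that cancel in the sum, so the server sees only masked individual updates but recovers the exact total. A toy sketch of that idea (not RoFL's actual protocol, which additionally provides attestable robustness against malicious clients):

```python
import random

def make_masked_updates(updates, seed=0):
    """Toy additive-mask secure aggregation: each pair of clients (i, j)
    derives a shared random mask that client i adds and client j subtracts,
    hiding individual updates while the masks cancel in the sum."""
    n, dim = len(updates), len(updates[0])
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            # Deterministic shared seed per pair stands in for a key exchange.
            rng = random.Random(seed * 1000003 + i * 1009 + j)
            mask = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
            for d in range(dim):
                masked[i][d] += mask[d]
                masked[j][d] -= mask[d]
    return masked

def secure_sum(masked):
    """Server-side aggregation over masked updates; pairwise masks cancel."""
    return [sum(m[d] for m in masked) for d in range(len(masked[0]))]
```

The robustness problem RoFL targets follows directly from this construction: because each individual update is hidden, a malicious client can smuggle in an arbitrary contribution unless additional checks are attached.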
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Fairness-aware Agnostic Federated Learning [47.26747955026486]
We develop a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of unknown testing distribution.
We use kernel reweighing functions to assign a reweighing value to each training sample in both the loss function and the fairness constraint.
The built model can be applied directly to local sites, as it guarantees fairness on local data distributions.
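A minimal illustration of per-sample kernel reweighing in the loss (the Gaussian kernel, scalar features, and all function names here are assumptions for illustration, not AgnosticFair's exact formulation):

```python
import math

def gaussian_kernel(x, center, bandwidth=1.0):
    """Gaussian kernel value of scalar feature x around a given center."""
    return math.exp(-((x - center) ** 2) / (2.0 * bandwidth ** 2))

def reweighted_loss(features, losses, centers, coeffs, bandwidth=1.0):
    """Per-sample reweighing: each sample's weight is a linear combination
    of kernel functions of its feature value; the learnable coefficients
    would be optimized jointly with the model. Returns the weighted mean loss."""
    weights = [sum(c * gaussian_kernel(x, m, bandwidth)
                   for c, m in zip(coeffs, centers))
               for x in features]
    total = sum(weights)
    return sum(w * l for w, l in zip(weights, losses)) / total
```

Because the weights are smooth functions of the features rather than fixed per-sample constants, the same reweighing transfers to an unknown testing distribution, which is the "agnostic" aspect of the framework.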
arXiv Detail & Related papers (2020-10-10T17:58:20Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.