FederatedTrust: A Solution for Trustworthy Federated Learning
- URL: http://arxiv.org/abs/2302.09844v2
- Date: Thu, 6 Jul 2023 11:35:31 GMT
- Title: FederatedTrust: A Solution for Trustworthy Federated Learning
- Authors: Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdrán, Ning Xie,
Gérôme Bovet, Gregorio Martínez Pérez, Burkhard Stiller
- Abstract summary: The rapid expansion of the Internet of Things (IoT) has presented challenges for centralized Machine and Deep Learning (ML/DL) methods.
To address concerns regarding data privacy, collaborative and privacy-preserving ML/DL techniques like Federated Learning (FL) have emerged.
- Score: 3.202927443898192
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid expansion of the Internet of Things (IoT) and Edge Computing has
presented challenges for centralized Machine and Deep Learning (ML/DL) methods
due to the presence of distributed data silos that hold sensitive information.
To address concerns regarding data privacy, collaborative and
privacy-preserving ML/DL techniques like Federated Learning (FL) have emerged.
However, ensuring data privacy and performance alone is insufficient since
there is a growing need to establish trust in model predictions. Existing
literature has proposed various approaches on trustworthy ML/DL (excluding data
privacy), identifying robustness, fairness, explainability, and accountability
as important pillars. Nevertheless, further research is required to identify
trustworthiness pillars and evaluation metrics specifically relevant to FL
models, as well as to develop solutions that can compute the trustworthiness
level of FL models. This work examines the existing requirements for evaluating
trustworthiness in FL and introduces a comprehensive taxonomy consisting of six
pillars (privacy, robustness, fairness, explainability, accountability, and
federation), along with over 30 metrics for computing the trustworthiness of FL
models. Subsequently, an algorithm named FederatedTrust is designed based on
the pillars and metrics identified in the taxonomy to compute the
trustworthiness score of FL models. A prototype of FederatedTrust is
implemented and integrated into the learning process of FederatedScope, a
well-established FL framework. Finally, five experiments are conducted using
different configurations of FederatedScope to demonstrate the utility of
FederatedTrust in computing the trustworthiness of FL models. Three experiments
employ the FEMNIST dataset, and two utilize the N-BaIoT dataset considering a
real-world IoT security use case.
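The scoring scheme the abstract describes (per-pillar metrics aggregated into a single trustworthiness score) can be sketched as follows. The six pillar names come from the paper's taxonomy, but the metric names, values, equal weights, and weighted-mean aggregation below are illustrative assumptions, not FederatedTrust's actual implementation.

```python
# Illustrative sketch: aggregate normalized per-pillar metrics into one
# trustworthiness score, following the paper's six-pillar taxonomy.
# Metric names/values and the equal weighting are hypothetical.

def pillar_score(metrics):
    """Average the normalized metrics (each in [0, 1]) of one pillar."""
    return sum(metrics.values()) / len(metrics)

def trust_score(pillars, weights):
    """Weighted mean of pillar scores; weights should sum to 1."""
    return sum(weights[name] * pillar_score(m) for name, m in pillars.items())

# Hypothetical normalized metric values for each of the six pillars.
pillars = {
    "privacy":        {"dp_budget_norm": 0.8, "membership_risk": 0.7},
    "robustness":     {"attack_resilience": 0.6, "perf_stability": 0.9},
    "fairness":       {"client_variance": 0.75},
    "explainability": {"model_interpretability": 0.5},
    "accountability": {"audit_logging": 1.0},
    "federation":     {"client_participation": 0.85},
}
weights = {name: 1 / len(pillars) for name in pillars}  # equal weighting

score = trust_score(pillars, weights)
print(f"trustworthiness score: {score:.3f}")
```

A real deployment would normalize each raw metric (e.g. differential-privacy epsilon, accuracy variance across clients) into [0, 1] before aggregation, so all pillars are comparable.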
Related papers
- TPFL: A Trustworthy Personalized Federated Learning Framework via Subjective Logic [13.079535924498977]
Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy.
Most FL approaches that focus solely on privacy protection fall short in scenarios where trustworthiness is crucial.
We introduce a Trustworthy Personalized Federated Learning (TPFL) framework designed for classification tasks via subjective logic.
arXiv Detail & Related papers (2024-10-16T07:33:29Z)
- Enabling Trustworthy Federated Learning in Industrial IoT: Bridging the Gap Between Interpretability and Robustness [4.200214709723945]
Federated Learning (FL) is a paradigm shift in machine learning, allowing collaborative model training while keeping data localized.
The essence of FL in IIoT lies in its ability to learn from diverse, distributed data sources without requiring central data storage.
This article focuses on enabling trustworthy FL in IIoT by bridging the gap between interpretability and robustness.
arXiv Detail & Related papers (2024-09-01T15:13:39Z)
- Accuracy-Privacy Trade-off in the Mitigation of Membership Inference Attack in Federated Learning [4.152322723065285]
Federated learning (FL) has emerged as a prominent method in machine learning, emphasizing privacy preservation by allowing multiple clients to collaboratively build a model while keeping their training data private.
Despite this focus on privacy, FL models are susceptible to various attacks, including membership inference attacks (MIAs).
arXiv Detail & Related papers (2024-07-26T22:44:41Z)
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its distinctive properties and analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- Trustworthy Federated Learning: A Survey [0.5089078998562185]
Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI).
We provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and well-defined pillars relevant to Trustworthy FL.
We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy.
arXiv Detail & Related papers (2023-05-19T09:11:26Z)
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named as PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z)
- Reliable Federated Disentangling Network for Non-IID Domain Feature [62.73267904147804]
In this paper, we propose a novel reliable federated disentangling network, termed RFedDis.
To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling.
Our proposed RFedDis provides outstanding performance with a high degree of reliability as compared to other state-of-the-art FL approaches.
arXiv Detail & Related papers (2023-01-30T11:46:34Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Data Valuation for Vertical Federated Learning: A Model-free and Privacy-preserving Method [14.451118953357605]
FedValue is a privacy-preserving, task-specific but model-free data valuation method for Vertical Federated Learning (VFL).
We first introduce a novel data valuation metric, namely MShapley-CMI. The metric evaluates a data party's contribution to a predictive analytics task without the need to execute a machine learning model.
Next, we develop an innovative federated method that calculates the MShapley-CMI value for each data party in a privacy-preserving manner.
arXiv Detail & Related papers (2021-12-15T02:42:28Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - A Principled Approach to Data Valuation for Federated Learning [73.19984041333599]
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
arXiv Detail & Related papers (2020-09-14T04:37:54Z)
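The classical Shapley value that the entry above adapts averages each client's marginal contribution to a utility function over all orderings of the client set. A minimal sketch follows; the client names and coalition utilities (standing in for model accuracy) are made up, and the exact enumeration shown is exponential, unlike the round-based federated variant the paper proposes.

```python
from itertools import permutations

def shapley_values(clients, utility):
    """Exact Shapley value: each client's marginal contribution to the
    utility, averaged over all orderings (exponential; fine for tiny n)."""
    values = {c: 0.0 for c in clients}
    perms = list(permutations(clients))
    for order in perms:
        coalition = []
        for c in order:
            before = utility(frozenset(coalition))
            coalition.append(c)
            values[c] += utility(frozenset(coalition)) - before
    return {c: v / len(perms) for c, v in values.items()}

# Hypothetical utility: accuracy achieved by each coalition of clients.
accuracy = {
    frozenset(): 0.0,
    frozenset({"A"}): 0.6, frozenset({"B"}): 0.5, frozenset({"C"}): 0.4,
    frozenset({"A", "B"}): 0.8, frozenset({"A", "C"}): 0.7,
    frozenset({"B", "C"}): 0.65, frozenset({"A", "B", "C"}): 0.9,
}
sv = shapley_values(["A", "B", "C"], accuracy.__getitem__)
print(sv)  # payoffs sum to the grand-coalition utility (0.9)
```

The efficiency property (payoffs summing to the full coalition's utility) is one of the desiderata the entry mentions; the federated variant trades exactness for computability by evaluating utilities on per-round aggregated models.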
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.