Towards an Accountable and Reproducible Federated Learning: A FactSheets
Approach
- URL: http://arxiv.org/abs/2202.12443v1
- Date: Fri, 25 Feb 2022 00:34:14 GMT
- Title: Towards an Accountable and Reproducible Federated Learning: A FactSheets
Approach
- Authors: Nathalie Baracaldo, Ali Anwar, Mark Purcell, Ambrish Rawat, Mathieu
Sinn, Bashar Altakrouri, Dian Balta, Mahdi Sellami, Peter Kuhn, Ulrich
Schopp, Matthias Buchinger
- Abstract summary: Federated Learning (FL) is a novel paradigm for the shared training of models based on decentralized and private data.
We introduce the AF^2 Framework, where we instrument FL with accountability by fusing verifiable claims with tamper-evident facts.
We build on AI FactSheets for instilling transparency and trustworthiness into the AI lifecycle.
- Score: 6.488712018186561
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a novel paradigm for the shared training of models
based on decentralized and private data. With respect to ethical guidelines, FL
is promising regarding privacy, but needs to excel vis-à-vis transparency and
trustworthiness. In particular, FL has to address the accountability of the
parties involved and their adherence to rules, law and principles. We introduce
the AF^2 Framework, where we instrument FL with accountability by fusing verifiable
claims with tamper-evident facts, into reproducible arguments. We build on AI
FactSheets for instilling transparency and trustworthiness into the AI
lifecycle and expand it to incorporate dynamic and nested facts, as well as
complex model compositions in FL. Based on our approach, an auditor can
validate, reproduce and certify a FL process. This can be directly applied in
practice to address the challenges of AI engineering and ethics.
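The "tamper-evident facts" idea can be illustrated with a minimal hash-chained fact log: each recorded fact stores the hash of its predecessor, so any retroactive edit invalidates every later hash. This is a generic sketch of the technique, not the AF^2 implementation; the class and field names are hypothetical.

```python
import hashlib
import json


class FactLog:
    """Append-only, hash-chained log of FactSheet-style facts.

    Each entry stores the hash of its predecessor, so modifying or
    deleting any past fact invalidates every later hash (tamper evidence).
    """

    def __init__(self):
        self.entries = []

    def record(self, fact: dict) -> str:
        # Link this entry to the previous one via its hash.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps({"fact": fact, "prev": prev_hash}, sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"fact": fact, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        # Recompute the chain from the start; any tampering breaks it.
        prev_hash = "0" * 64
        for e in self.entries:
            payload = json.dumps({"fact": e["fact"], "prev": prev_hash}, sort_keys=True)
            if e["prev"] != prev_hash:
                return False
            if hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
                return False
            prev_hash = e["hash"]
        return True


log = FactLog()
log.record({"event": "round_1_aggregation", "model_hash": "abc123"})
log.record({"event": "round_2_aggregation", "model_hash": "def456"})
assert log.verify()
log.entries[0]["fact"]["model_hash"] = "tampered"  # retroactive edit
assert not log.verify()                            # chain detects it
```

An auditor holding only the latest hash can detect whether any earlier fact was altered, which is the property that makes the recorded facts usable as evidence in a reproducible argument.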
Related papers
- Privacy-Preserving Federated Learning via Dataset Distillation [9.60829979241686]
Federated Learning (FL) allows users to share knowledge instead of raw data to train a model with high accuracy.
During the training, users lose control over the knowledge shared, which causes serious data privacy issues.
This work proposes FLiP, which aims to bring the principle of least privilege (PoLP) to FL training.
arXiv Detail & Related papers (2024-10-25T13:20:40Z)
- Federated Learning Priorities Under the European Union Artificial Intelligence Act [68.44894319552114]
We perform a first-of-its-kind interdisciplinary analysis (legal and ML) of the impact the AI Act may have on Federated Learning.
We explore data governance issues and the concern for privacy.
Most noteworthy are the opportunities to defend against data bias and enhance private and secure computation.
arXiv Detail & Related papers (2024-02-05T19:52:19Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Fair Differentially Private Federated Learning Framework [0.0]
Federated learning (FL) is a distributed machine learning strategy that enables participants to collaborate and train a shared model without sharing their individual datasets.
Privacy and fairness are crucial considerations in FL.
This paper presents a framework that addresses the challenges of generating a fair global model without validation data and creating a globally differentially private model.
arXiv Detail & Related papers (2023-05-23T09:58:48Z)
- FederatedTrust: A Solution for Trustworthy Federated Learning [3.202927443898192]
The rapid expansion of the Internet of Things (IoT) has presented challenges for centralized Machine and Deep Learning (ML/DL) methods.
To address concerns regarding data privacy, collaborative and privacy-preserving ML/DL techniques like Federated Learning (FL) have emerged.
arXiv Detail & Related papers (2023-02-20T09:02:24Z)
- VeriFi: Towards Verifiable Federated Unlearning [59.169431326438676]
Federated learning (FL) is a collaborative learning paradigm where participants jointly train a powerful model without sharing their private data.
A leaving participant has the right to request that its private data be deleted from the global model.
We propose VeriFi, a unified framework integrating federated unlearning and verification.
arXiv Detail & Related papers (2022-05-25T12:20:02Z)
- Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients [98.22390453672499]
Supervised federated learning (FL) enables multiple clients to share the trained model without sharing their labeled data.
We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients.
arXiv Detail & Related papers (2022-04-07T09:12:00Z)
- Towards Verifiable Federated Learning [15.758657927386263]
Federated learning (FL) is an emerging paradigm of collaborative machine learning that preserves user privacy while building powerful models.
Due to the nature of open participation by self-interested entities, FL needs to guard against potential misbehaviours by legitimate FL participants.
Verifiable federated learning has become an emerging topic of research that has attracted significant interest from academia and industry alike.
arXiv Detail & Related papers (2022-02-15T09:52:25Z)
- Fairness, Integrity, and Privacy in a Scalable Blockchain-based Federated Learning System [0.0]
Federated machine learning (FL) allows models to be trained collectively on sensitive data, as only the clients' models, and not their training data, need to be shared.
Despite the attention that research on FL has drawn, the concept still lacks broad adoption in practice.
This paper suggests a FL system that incorporates blockchain technology, local differential privacy, and zero-knowledge proofs.
arXiv Detail & Related papers (2021-11-11T16:08:44Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- A Principled Approach to Data Valuation for Federated Learning [73.19984041333599]
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
arXiv Detail & Related papers (2020-09-14T04:37:54Z)
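The Shapley value mentioned in the last entry can be sketched exactly for a handful of clients. The coalition utilities below are made-up numbers standing in for validation accuracy, and the paper's federated variant additionally decomposes the computation over training rounds; this is only the classical definition applied to client coalitions.

```python
from itertools import combinations
from math import factorial


def shapley_values(clients, utility):
    """Exact Shapley value of each client for a coalition utility function.

    `utility` maps a frozenset of clients to a real-valued payoff,
    e.g. the accuracy of a model trained on those clients' updates.
    """
    n = len(clients)
    values = {}
    for c in clients:
        others = [x for x in clients if x != c]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Weight of a coalition of size k in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of c to this coalition.
                total += weight * (utility(s | {c}) - utility(s))
        values[c] = total
    return values


# Toy utility: accuracy proxy over coalitions of three clients.
acc = {
    frozenset(): 0.0,
    frozenset({"A"}): 0.6, frozenset({"B"}): 0.5, frozenset({"C"}): 0.4,
    frozenset({"A", "B"}): 0.8, frozenset({"A", "C"}): 0.7,
    frozenset({"B", "C"}): 0.6, frozenset({"A", "B", "C"}): 0.9,
}
sv = shapley_values(["A", "B", "C"], acc.__getitem__)
# Efficiency property: the values sum to the grand-coalition utility.
assert abs(sum(sv.values()) - 0.9) < 1e-9
```

The exact computation enumerates all 2^(n-1) coalitions per client, which is why the paper proposes a variant amenable to the round-based structure of FL rather than applying the definition directly.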
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.