OpenFed: A Comprehensive and Versatile Open-Source Federated Learning
Framework
- URL: http://arxiv.org/abs/2109.07852v3
- Date: Mon, 3 Apr 2023 06:17:16 GMT
- Authors: Dengsheng Chen, Vince Tan, Zhilin Lu and Jie Hu
- Abstract summary: We propose OpenFed, an open-source software framework for end-to-end Federated Learning.
For researchers, OpenFed provides a framework wherein new methods can be easily implemented and fairly evaluated.
For downstream users, OpenFed allows Federated Learning to be deployed plug-and-play within different subject-matter contexts.
- Score: 5.893286029670115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments in Artificial Intelligence techniques have enabled their
successful application across a spectrum of commercial and industrial settings.
However, these techniques require large volumes of data to be aggregated in a
centralized manner, precluding their applicability to scenarios wherein the
data is sensitive or the cost of data transmission is prohibitive. Federated
Learning alleviates these problems by decentralizing model training, thereby
removing the need for data transfer and aggregation. To advance the adoption of
Federated Learning, more research and development needs to be conducted to
address some important open questions. In this work, we propose OpenFed, an
open-source software framework for end-to-end Federated Learning. OpenFed
reduces the barrier to entry for both researchers and downstream users of
Federated Learning by the targeted removal of existing pain points. For
researchers, OpenFed provides a framework wherein new methods can be easily
implemented and fairly evaluated against an extensive suite of benchmarks. For
downstream users, OpenFed allows Federated Learning to be deployed plug-and-play
within different subject-matter contexts, removing the need for deep expertise
in Federated Learning.
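The training loop the abstract describes, in which each client trains on its private data and only model updates are aggregated into a global model, can be sketched as a minimal FedAvg-style round. The linear model and function names below are illustrative assumptions, not OpenFed's actual API:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training pass on its private data (linear model, squared loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Average client updates weighted by local dataset size (FedAvg)."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients, each holding private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):  # communication rounds; raw data never leaves a client
    w = federated_round(w, clients)
```

Only the weight vectors cross the client-server boundary here, which is the property that lets Federated Learning sidestep centralized data aggregation.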
Related papers
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models [61.14336781917986]
We introduce OpenR, an open-source framework for enhancing the reasoning capabilities of large language models (LLMs).
OpenR unifies data acquisition, reinforcement learning training, and non-autoregressive decoding into a cohesive software platform.
Our work is the first to provide an open-source framework that explores the core techniques of OpenAI's o1 model with reinforcement learning.
arXiv Detail & Related papers (2024-10-12T23:42:16Z)
- SoK: Challenges and Opportunities in Federated Unlearning [32.0365189539138]
This SoK paper aims to take a deep look at the emphfederated unlearning literature, with the goal of identifying research trends and challenges in this emerging field.
arXiv Detail & Related papers (2024-03-04T19:35:08Z)
- Privacy-Enhancing Collaborative Information Sharing through Federated Learning -- A Case of the Insurance Industry [1.8092553911119764]
The report demonstrates the benefits of harnessing the value of Federated Learning (FL) to learn a single model across multiple insurance industry datasets.
FL addresses two of the most pressing concerns: limited data volume and data variety, which are caused by privacy concerns.
During each round of FL, collaborators compute improvements on the model using their local private data, and these insights are combined to update a global model.
arXiv Detail & Related papers (2024-02-22T21:46:24Z)
- Exploring Machine Learning Models for Federated Learning: A Review of Approaches, Performance, and Limitations [1.1060425537315088]
Federated learning is a distributed learning framework designed to preserve the privacy of individuals' data.
In times of crisis, when real-time decision-making is critical, federated learning allows multiple entities to work collectively without sharing sensitive data.
This paper is a systematic review of the literature on privacy-preserving machine learning in the last few years.
arXiv Detail & Related papers (2023-11-17T19:23:21Z)
- A Survey of Federated Unlearning: A Taxonomy, Challenges and Future Directions [71.16718184611673]
The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten.
The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature.
Federated Unlearning (FU) emerges as a strategic solution to address the increasing need for data privacy.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- When Decentralized Optimization Meets Federated Learning [41.58479981773202]
Federated learning is a new learning paradigm for extracting knowledge from distributed data.
Most existing federated learning approaches concentrate on the centralized setting, which is vulnerable to a single point of failure.
An alternative strategy for addressing this issue is the decentralized communication topology.
arXiv Detail & Related papers (2023-06-05T03:51:14Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- OpenGAN: Open-Set Recognition via Open Data Generation [76.00714592984552]
Real-world machine learning systems need to analyze novel testing data that differs from the training data.
Two conceptually elegant ideas for open-set discrimination are: 1) discriminatively learning an open-vs-closed binary discriminator, and 2) learning the closed-set data distribution with a GAN in an unsupervised manner.
We propose OpenGAN, which addresses the limitation of each approach by combining them with several technical insights.
arXiv Detail & Related papers (2021-04-07T06:19:24Z)
- Federated Learning: A Signal Processing Perspective [144.63726413692876]
Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data.
This article provides a unified systematic framework for federated learning in a manner that encapsulates and highlights the main challenges that are natural to treat using signal processing tools.
arXiv Detail & Related papers (2021-03-31T15:14:39Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- IBM Federated Learning: an Enterprise Framework White Paper V0.1 [28.21579297214125]
Federated Learning (FL) is an approach to conduct machine learning without centralizing training data in a single place.
The framework applies to both Deep Neural Networks and "traditional" approaches for the most common machine learning libraries.
arXiv Detail & Related papers (2020-07-22T05:32:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.