OpenFL: An open-source framework for Federated Learning
- URL: http://arxiv.org/abs/2105.06413v1
- Date: Thu, 13 May 2021 16:40:19 GMT
- Title: OpenFL: An open-source framework for Federated Learning
- Authors: G Anthony Reina, Alexey Gruzdev, Patrick Foley, Olga Perepelkina,
Mansi Sharma, Igor Davidyuk, Ilya Trushkin, Maksim Radionov, Aleksandr
Mokrov, Dmitry Agapov, Jason Martin, Brandon Edwards, Micah J. Sheller,
Sarthak Pati, Prakash Narayana Moorthy, Shih-han Wang, Prashant Shah,
Spyridon Bakas
- Abstract summary: Federated learning (FL) is a computational paradigm that enables organizations to collaborate on machine learning (ML) projects without sharing sensitive data.
OpenFL is an open-source framework for training ML algorithms using the data-private collaborative learning paradigm of FL.
- Score: 41.03632020180591
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a computational paradigm that enables
organizations to collaborate on machine learning (ML) projects without sharing
sensitive data, such as patient records, financial data, or classified
secrets. Open Federated Learning (OpenFL; https://github.com/intel/openfl) is an
open-source framework for training ML algorithms using the data-private
collaborative learning paradigm of FL. OpenFL works with training pipelines
built with both TensorFlow and PyTorch, and can be easily extended to other ML
and deep learning frameworks. Here, we summarize the motivation and development
characteristics of OpenFL, with the intention of facilitating its application
to existing ML model training in a production environment. Finally, we describe
the first use of the OpenFL framework to train consensus ML models in a
consortium of international healthcare organizations, as well as how it
facilitates the first computational competition on FL.
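The data-private collaborative paradigm described above is typically realized through federated averaging: each collaborator trains on its own data and only model updates, never raw records, are sent to an aggregator. The following is a minimal illustrative sketch of one such round; it is not OpenFL's actual API, and all function and variable names here are hypothetical.

```python
# Illustrative sketch of a federated-averaging (FedAvg) round, the kind of
# workflow that frameworks such as OpenFL orchestrate between an aggregator
# and collaborators. NOT OpenFL's API; all names are hypothetical.

def local_update(weights, data, lr=0.1):
    """One collaborator's local training step on its private data.
    Toy model: scalar linear regression y = w * x with squared loss."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def fed_avg(updates, sizes):
    """Aggregator: average collaborator updates, weighted by dataset size."""
    total = sum(sizes)
    return [
        sum(u[i] * n for u, n in zip(updates, sizes)) / total
        for i in range(len(updates[0]))
    ]

# Two collaborators ("silos"), each holding private data drawn from y = 2x.
silo_a = [(1.0, 2.0), (2.0, 4.0)]
silo_b = [(3.0, 6.0)]

global_w = [0.0]
for _ in range(50):  # federated rounds: only weights leave each silo
    upd_a = local_update(global_w, silo_a)
    upd_b = local_update(global_w, silo_b)
    global_w = fed_avg([upd_a, upd_b], [len(silo_a), len(silo_b)])

print(round(global_w[0], 2))  # converges toward the true slope w = 2.0
```

The key property, and the reason consortia such as the healthcare collaboration described above can use this setup, is that `silo_a` and `silo_b` never leave their owners; only the update vectors cross organizational boundaries.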
Related papers
- LanFL: Differentially Private Federated Learning with Large Language Models using Synthetic Samples [11.955062839855334]
Federated Learning (FL) is a collaborative, privacy-preserving machine learning framework.
The recent advent of powerful Large Language Models (LLMs) with tens to hundreds of billions of parameters makes the naive application of traditional FL methods impractical.
This paper introduces a novel FL scheme for LLMs, named LanFL, which is purely prompt-based and treats the underlying LLMs as black boxes.
arXiv Detail & Related papers (2024-10-24T19:28:33Z)
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models [61.14336781917986]
We introduce OpenR, an open-source framework for enhancing the reasoning capabilities of large language models (LLMs)
OpenR unifies data acquisition, reinforcement learning training, and non-autoregressive decoding into a cohesive software platform.
Our work is the first to provide an open-source framework that explores the core techniques of OpenAI's o1 model with reinforcement learning.
arXiv Detail & Related papers (2024-10-12T23:42:16Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly [62.473245910234304]
This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
We provide a micro-level hardware benchmark, compare the model FLOP utilization to a state-of-the-art data center GPU, and study the network utilization in realistic conditions.
arXiv Detail & Related papers (2023-10-04T20:27:20Z)
- APPFLx: Providing Privacy-Preserving Cross-Silo Federated Learning as a Service [1.5070429249282935]
Cross-silo privacy-preserving federated learning (PPFL) is a powerful tool to collaboratively train robust and generalized machine learning (ML) models without sharing sensitive local data.
APPFLx is a ready-to-use platform that provides privacy-preserving cross-silo federated learning as a service.
arXiv Detail & Related papers (2023-08-17T05:15:47Z)
- Towards Cooperative Federated Learning over Heterogeneous Edge/Fog Networks [49.19502459827366]
Federated learning (FL) has been promoted as a popular technique for training machine learning (ML) models over edge/fog networks.
Traditional implementations of FL have largely neglected the potential for inter-network cooperation.
We advocate for cooperative federated learning (CFL), a cooperative edge/fog ML paradigm built on device-to-device (D2D) and device-to-server (D2S) interactions.
arXiv Detail & Related papers (2023-03-15T04:41:36Z)
- Vertical Federated Learning: A Structured Literature Review [0.0]
Federated learning (FL) has emerged as a promising distributed learning paradigm with an added advantage of data privacy.
In this paper, we present a structured literature review discussing the state-of-the-art approaches in VFL.
arXiv Detail & Related papers (2022-12-01T16:16:41Z)
- NVIDIA FLARE: Federated Learning from Simulation to Real-World [11.490933081543787]
We created NVIDIA FLARE as an open-source software development kit (SDK) to make it easier for data scientists to use FL in their research and real-world applications.
The SDK includes solutions for state-of-the-art FL algorithms and federated machine learning approaches.
arXiv Detail & Related papers (2022-10-24T14:30:50Z)
- FedML: A Research Library and Benchmark for Federated Machine Learning [55.09054608875831]
Federated learning (FL) is a rapidly growing research field in machine learning.
Existing FL libraries cannot adequately support diverse algorithmic development.
We introduce FedML, an open research library and benchmark to facilitate FL algorithm development and fair performance comparison.
arXiv Detail & Related papers (2020-07-27T13:02:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.