APPFL: Open-Source Software Framework for Privacy-Preserving Federated
Learning
- URL: http://arxiv.org/abs/2202.03672v1
- Date: Tue, 8 Feb 2022 06:23:05 GMT
- Title: APPFL: Open-Source Software Framework for Privacy-Preserving Federated
Learning
- Authors: Minseok Ryu, Youngdae Kim, Kibaek Kim, and Ravi K. Madduri
- Abstract summary: Federated learning (FL) enables training models at different sites and updating the weights from the training instead of transferring data to a central location and training as in classical machine learning.
We introduce APPFL, the Argonne Privacy-Preserving Federated Learning framework.
APPFL allows users to leverage implemented privacy-preserving algorithms, implement new algorithms, and simulate and deploy various FL algorithms with privacy-preserving techniques.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) enables training models at different sites and
updating the weights from the training instead of transferring data to a
central location and training as in classical machine learning. The FL
capability is especially important to domains such as biomedicine and smart
grid, where data may not be shared freely or stored at a central location
because of policy challenges. Thanks to the capability of learning from
decentralized datasets, FL is now a rapidly growing research field, and
numerous FL frameworks have been developed. In this work, we introduce APPFL,
the Argonne Privacy-Preserving Federated Learning framework. APPFL allows users
to leverage implemented privacy-preserving algorithms, implement new
algorithms, and simulate and deploy various FL algorithms with
privacy-preserving techniques. The modular framework enables users to customize
the components for algorithms, privacy, communication protocols, neural network
models, and user data. We also present a new communication-efficient algorithm
based on an inexact alternating direction method of multipliers. The algorithm
requires significantly less communication between the server and the clients
than does the current state of the art. We demonstrate the computational
capabilities of APPFL, including differentially private FL on various test
datasets and its scalability, by using multiple algorithms and datasets on
different computing environments.
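As an illustration of the federated training loop the abstract describes (each site trains locally and only model-weight updates, optionally perturbed for differential privacy, are communicated to the server for aggregation), the following minimal NumPy sketch shows the general pattern. It is not APPFL's actual API; the linear model, synthetic data, function names, and noise parameters are hypothetical choices made purely for illustration.

```python
# Illustrative sketch only: generic differentially private federated averaging.
# Names and parameters are hypothetical and do NOT reflect APPFL's interfaces.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on a least-squares model."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def privatize(delta, clip=1.0, noise_std=0.05):
    """Clip a client's weight update and add Gaussian noise (DP-style perturbation)."""
    scale = min(1.0, clip / (np.linalg.norm(delta) + 1e-12))
    return delta * scale + rng.normal(0.0, noise_std, size=delta.shape)

# Synthetic data split across four "sites"; raw data never leaves a site,
# only (noisy) weight updates are exchanged with the server.
w_true = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(3)
for rnd in range(25):  # communication rounds
    updates = [privatize(local_update(w_global, X, y) - w_global) for X, y in clients]
    w_global = w_global + np.mean(updates, axis=0)  # FedAvg-style aggregation

print("true weights:     ", w_true)
print("recovered weights:", np.round(w_global, 2))
```

The communication-efficient algorithm mentioned above is based on an inexact alternating direction method of multipliers (ADMM). As a hedged sketch of the standard consensus formulation on which ADMM-based FL methods build (not necessarily the paper's exact update rules): each of the P clients holds data D_p and a local model copy w_p, the server keeps a consensus variable z, and one round consists of a client step, a server aggregation step, and a dual update. In an inexact variant, the client subproblem is solved only approximately.

```latex
% Consensus formulation with P clients, local losses f_p and data D_p:
%   min_{w_1..w_P, z}  sum_p f_p(w_p; D_p)   s.t.  w_p = z  for all p.
% One ADMM round (rho > 0 is the penalty parameter, lambda_p the dual variables);
% in an inexact variant the client step is solved only approximately.
\begin{align*}
  w_p^{k+1} &\approx \operatorname*{arg\,min}_{w_p}\;
      f_p(w_p; D_p) + \langle \lambda_p^k,\, w_p - z^k \rangle
      + \tfrac{\rho}{2}\,\lVert w_p - z^k \rVert_2^2
      && \text{(client $p$, local step)}\\
  z^{k+1} &= \frac{1}{P}\sum_{p=1}^{P}\Bigl(w_p^{k+1} + \tfrac{1}{\rho}\,\lambda_p^k\Bigr)
      && \text{(server aggregation)}\\
  \lambda_p^{k+1} &= \lambda_p^k + \rho\,\bigl(w_p^{k+1} - z^{k+1}\bigr)
      && \text{(dual update)}
\end{align*}
```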
Related papers
- Advances in APPFL: A Comprehensive and Extensible Federated Learning Framework [1.4206132527980742]
Federated learning (FL) is a distributed machine learning paradigm enabling collaborative model training while preserving data privacy.
We present the recent advances in developing APPFL, a framework and benchmarking suite for federated learning.
We demonstrate the capabilities of APPFL through extensive experiments evaluating various aspects of FL, including communication efficiency, privacy preservation, computational performance, and resource utilization.
arXiv Detail & Related papers (2024-09-17T22:20:26Z)
- Privacy-aware Berrut Approximated Coded Computing for Federated Learning [1.2084539012992408]
We propose a solution to guarantee privacy in Federated Learning schemes.
Our proposal is based on the Berrut Approximated Coded Computing, adapted to a Secret Sharing configuration.
arXiv Detail & Related papers (2024-05-02T20:03:13Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as ConFederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), however, the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly [62.473245910234304]
This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
We provide a micro-level hardware benchmark, compare the model FLOP utilization to a state-of-the-art data center GPU, and study the network utilization in realistic conditions.
arXiv Detail & Related papers (2023-10-04T20:27:20Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Enhancing Efficiency in Multidevice Federated Learning through Data Selection [11.67484476827617]
Federated learning (FL) in multidevice environments creates new opportunities to learn from a vast and diverse amount of private data.
In this paper, we develop an FL framework to incorporate on-device data selection on such constrained devices.
We show that our framework achieves 19% higher accuracy and 58% lower latency compared to the baseline FL without our implemented strategies.
arXiv Detail & Related papers (2022-11-08T11:39:17Z)
- Evaluation and comparison of federated learning algorithms for Human Activity Recognition on smartphones [0.5039813366558306]
Federated Learning (FL) has been introduced as a new machine learning paradigm enhancing the use of local devices.
In this paper, we propose a new FL algorithm, termed FedDist, which can modify models during training by identifying dissimilarities between neurons among the clients.
Results have shown the ability of FedDist to adapt to heterogeneous data and the capability of FL to deal with asynchronous situations.
arXiv Detail & Related papers (2022-10-30T18:47:23Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- FedML: A Research Library and Benchmark for Federated Machine Learning [55.09054608875831]
Federated learning (FL) is a rapidly growing research field in machine learning.
Existing FL libraries cannot adequately support diverse algorithmic development.
We introduce FedML, an open research library and benchmark to facilitate FL algorithm development and fair performance comparison.
arXiv Detail & Related papers (2020-07-27T13:02:08Z)
- Evaluating the Communication Efficiency in Federated Learning Algorithms [3.713348568329249]
Recently, in light of new privacy legislation in many countries, the concept of Federated Learning (FL) has been introduced.
In FL, mobile users are empowered to learn a global model by aggregating their local models, without sharing the privacy-sensitive data.
This raises the challenge of communication cost when implementing FL at large scale.
arXiv Detail & Related papers (2020-04-06T15:31:54Z)