FedLess: Secure and Scalable Federated Learning Using Serverless Computing
- URL: http://arxiv.org/abs/2111.03396v1
- Date: Fri, 5 Nov 2021 11:14:07 GMT
- Title: FedLess: Secure and Scalable Federated Learning Using Serverless Computing
- Authors: Andreas Grafberger, Mohak Chadha, Anshul Jindal, Jianfeng Gu, Michael Gerndt
- Abstract summary: Federated Learning (FL) enables remote clients to learn a shared ML model while keeping the data local.
We present a novel system and framework for serverless FL, called FedLess.
Our system supports multiple commercial and self-hosted FaaS providers and can be deployed in the cloud, on-premise in institutional data centers, and on edge devices.
- Score: 1.141832715860866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The traditional cloud-centric approach for Deep Learning (DL) requires
training data to be collected and processed at a central server which is often
challenging in privacy-sensitive domains like healthcare. Towards this, a new
learning paradigm called Federated Learning (FL) has been proposed that brings
the potential of DL to these domains while addressing privacy and data
ownership issues. FL enables remote clients to learn a shared ML model while
keeping the data local. However, conventional FL systems face several
challenges such as scalability, complex infrastructure management, and wasted
compute and incurred costs due to idle clients. These challenges of FL systems
closely align with the core problems that serverless computing and
Function-as-a-Service (FaaS) platforms aim to solve. These include rapid
scalability, no infrastructure management, automatic scaling to zero for idle
clients, and a pay-per-use billing model. To this end, we present a novel
system and framework for serverless FL, called FedLess. Our system supports
multiple commercial and self-hosted FaaS providers and can be deployed in the
cloud, on-premise in institutional data centers, and on edge devices. To the
best of our knowledge, we are the first to enable FL across a large fabric of
heterogeneous FaaS providers while providing important features like security
and Differential Privacy. With comprehensive experiments, we demonstrate that
our system can easily train DNNs for different tasks across up to 200 and more
client functions. Furthermore, we
demonstrate the practical viability of our methodology by comparing it against
a traditional FL system and show that it can be cheaper and more
resource-efficient.
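The abstract does not reproduce the system's API, but the core pattern it builds on, an aggregator that invokes stateless client functions and federated-averages the returned weights, can be sketched as follows. The endpoint URLs and JSON payload schema below are hypothetical illustrations, not the actual FedLess interface.

```python
# Minimal FedAvg round over serverless client functions (sketch).
# The function URLs and payload schema are hypothetical, not the
# actual FedLess API.
import numpy as np
import requests

CLIENT_FUNCTION_URLS = [          # hypothetical FaaS endpoints
    "https://faas.example.com/client-0",
    "https://faas.example.com/client-1",
]

def run_round(global_weights: list[np.ndarray]) -> list[np.ndarray]:
    """Invoke each client function with the global model and average results."""
    updates, sizes = [], []
    for url in CLIENT_FUNCTION_URLS:
        resp = requests.post(url, json={
            "weights": [w.tolist() for w in global_weights],
            "epochs": 1,
        }, timeout=300)
        resp.raise_for_status()
        body = resp.json()
        updates.append([np.asarray(w) for w in body["weights"]])
        sizes.append(body["num_samples"])
    total = sum(sizes)
    # Weighted average of each layer, proportional to local dataset size.
    return [
        sum(n / total * client[layer] for client, n in zip(updates, sizes))
        for layer in range(len(global_weights))
    ]
```

Because each client is a stateless function, idle clients scale to zero between rounds, which is exactly the pay-per-use property the paper targets.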
Related papers
- Swarm Learning: A Survey of Concepts, Applications, and Trends [3.55026004901472]
Deep learning models have raised privacy and security concerns due to their reliance on large datasets on central servers.
Federated learning (FL) has introduced a novel approach to building a versatile, large-scale machine learning framework.
Swarm learning (SL) has been proposed in collaboration with Hewlett Packard Enterprise (HPE).
SL represents a decentralized machine learning framework that leverages blockchain technology for secure, scalable, and private data management.
arXiv Detail & Related papers (2024-05-01T14:59:24Z)
- Training Heterogeneous Client Models using Knowledge Distillation in Serverless Federated Learning [0.5510212613486574]
Federated Learning (FL) is an emerging machine learning paradigm that enables the collaborative training of a shared global model across distributed clients.
Recent works on designing systems for efficient FL have shown that utilizing serverless computing technologies can enhance resource efficiency, reduce training costs, and alleviate the complex infrastructure management burden on data holders.
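Since architecturally heterogeneous client models cannot be combined by plain weight averaging, knowledge distillation lets them learn from aggregated predictions instead. A minimal sketch of such a distillation loss in PyTorch, assuming logits computed on a shared public dataset (not this paper's exact implementation):

```python
# Sketch of a distillation step for heterogeneous clients: instead of
# averaging weights, each client matches aggregated (teacher) logits.
# Plain PyTorch; an illustration, not the paper's implementation.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened student and teacher distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * temperature ** 2
```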
arXiv Detail & Related papers (2024-02-11T20:15:52Z)
- A Quality-of-Service Compliance System using Federated Learning and Optimistic Rollups [0.0]
A parallel trend is the rise of phones and tablets as primary computing devices for many people.
The powerful sensors present on these devices, combined with the fact that they are mobile, mean they have access to data of an unprecedentedly diverse and private nature.
Models learned on such data hold the promise of greatly improving usability by powering more intelligent applications, but the sensitive nature of the data means there are risks and responsibilities to storing it in a centralized location.
We propose the use of Federated Learning (FL) so that specific data about services performed by clients do not leave the source machines.
arXiv Detail & Related papers (2023-11-14T20:02:37Z)
- Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly [62.473245910234304]
This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
We provide a micro-level hardware benchmark, compare the model FLOP utilization to a state-of-the-art data center GPU, and study the network utilization in realistic conditions.
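Model FLOP utilization (MFU) is the ratio of the FLOPs a training run actually achieves to the hardware's peak. A back-of-the-envelope version with placeholder numbers (not the paper's measurements):

```python
# Back-of-the-envelope model FLOP utilization (MFU). All numbers are
# illustrative placeholders, not measurements from the paper.
params = 1.1e9            # model size (1.1B parameters)
tokens_per_second = 350   # observed training throughput
peak_flops = 30e12        # device peak, e.g. ~30 TFLOP/s for an edge GPU

# Standard approximation: ~6 FLOPs per parameter per token for training
# (forward + backward pass).
achieved_flops = 6 * params * tokens_per_second
mfu = achieved_flops / peak_flops
print(f"MFU = {mfu:.1%}")   # ~7.7% with these placeholder numbers
```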
arXiv Detail & Related papers (2023-10-04T20:27:20Z)
- A Survey on Decentralized Federated Learning [0.709016563801433]
In recent years, federated learning has become a popular paradigm for training distributed, large-scale, and privacy-preserving machine learning (ML) systems.
In a typical FL system, the central server acts only as an orchestrator; it iteratively gathers and aggregates all the local models trained by each client on its private data until convergence.
One of the most critical challenges is to overcome the centralized orchestration of the classical FL client-server architecture.
Decentralized FL solutions have emerged where all FL clients cooperate and communicate without a central server.
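The basic primitive behind such server-free designs is peer-to-peer (gossip) averaging, sketched below with an illustrative topology:

```python
# Sketch of peer-to-peer model averaging (gossip), the basic primitive
# behind decentralized FL. The topology is illustrative.
import numpy as np

def gossip_step(models: list[np.ndarray],
                neighbors: dict[int, list[int]]) -> list[np.ndarray]:
    """One synchronous gossip round: each client averages with its neighbors."""
    new_models = []
    for i, model in enumerate(models):
        group = [model] + [models[j] for j in neighbors[i]]
        new_models.append(np.mean(group, axis=0))
    return new_models

# Example: 3 clients on a fully connected topology.
models = [np.random.randn(4) for _ in range(3)]
topology = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
models = gossip_step(models, topology)
```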
arXiv Detail & Related papers (2023-08-08T22:07:15Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
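The cut-layer exchange that SL relies on can be illustrated in a few lines of PyTorch; this is a toy sketch of the general SL pattern, not this paper's contrastive-distillation method:

```python
# Toy sketch of split learning (SL): the client computes activations up
# to a cut layer ("smashed data"), the server finishes the forward and
# backward pass and returns the gradient at the cut.
import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Linear(16, 8), nn.ReLU())   # client-side layers
server_net = nn.Sequential(nn.Linear(8, 2))               # server-side layers
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(4, 16), torch.randint(0, 2, (4,))

# Client: forward to the cut layer, send activations (smashed data).
smashed = client_net(x)
sent = smashed.detach().requires_grad_(True)   # what crosses the network

# Server: finish the forward pass, backprop to the cut layer.
loss = loss_fn(server_net(sent), y)
loss.backward()

# Client: continue backprop locally using the gradient returned at the cut.
smashed.backward(sent.grad)
```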
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account.
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-20T23:14:33Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
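The additive-masking idea underlying secure aggregation is that clients add pairwise random masks that cancel in the server's sum, so individual updates stay hidden. A toy sketch, omitting the key agreement and dropout handling a real system such as RoFL must address:

```python
# Toy sketch of masking-based secure aggregation: pairwise masks cancel
# when the server sums all updates, hiding individual contributions.
import numpy as np

rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]   # true client updates

# Pairwise masks: client i adds m_ij, client j subtracts it (for i < j).
n, dim = len(updates), 4
masks = {(i, j): rng.normal(size=dim)
         for i in range(n) for j in range(i + 1, n)}

masked = []
for i, u in enumerate(updates):
    m = u.copy()
    for (a, b), mask in masks.items():
        if a == i:
            m += mask
        elif b == i:
            m -= mask
    masked.append(m)

# The server only ever sees masked updates, yet the sum is exact.
assert np.allclose(sum(masked), sum(updates))
```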
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
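One simple way batch-normalization statistics can be shared across clients is to average their running means and variances; the sketch below illustrates that mechanism and is not the paper's exact scheme:

```python
# Sketch of sharing batch-normalization statistics across clients by
# averaging running means/variances. An illustration of the mechanism,
# not the paper's exact propagation scheme.
import torch
import torch.nn as nn

def average_bn_stats(client_models: list[nn.Module], target: nn.Module) -> None:
    """Overwrite target's BN running stats with the clients' average."""
    for name, module in target.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            peers = [dict(m.named_modules())[name] for m in client_models]
            module.running_mean = torch.stack(
                [p.running_mean for p in peers]).mean(0)
            module.running_var = torch.stack(
                [p.running_var for p in peers]).mean(0)

# Usage with toy models sharing the same architecture of BN layers.
clients = [nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8)) for _ in range(3)]
global_model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
average_bn_stats(clients, global_model)
```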
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- Wireless Communications for Collaborative Federated Learning [160.82696473996566]
Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
Google's seminal FL algorithm requires all devices to be directly connected with a central controller.
This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
arXiv Detail & Related papers (2020-06-03T20:00:02Z)
- Federated Learning for Resource-Constrained IoT Devices: Panoramas and State-of-the-art [12.129978716326676]
We introduce some recently implemented real-life applications of Federated Learning.
In large-scale networks, there may be clients with varying computational resource capabilities.
We highlight future directions in the FL area concerning resource-constrained devices.
arXiv Detail & Related papers (2020-02-25T01:03:29Z)