Smart Multi-tenant Federated Learning
- URL: http://arxiv.org/abs/2207.04202v1
- Date: Sat, 9 Jul 2022 06:22:39 GMT
- Title: Smart Multi-tenant Federated Learning
- Authors: Weiming Zhuang, Yonggang Wen, Shuai Zhang
- Abstract summary: We propose a smart multi-tenant FL system, MuFL, to effectively coordinate and execute simultaneous training activities.
We first formalize the problem of multi-tenant FL, define multi-tenant FL scenarios, and introduce a vanilla multi-tenant FL system that trains activities sequentially to form baselines.
Experiments demonstrate that MuFL outperforms other methods while consuming 40% less energy.
- Score: 16.025681567222477
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an emerging distributed machine learning method
that empowers in-situ model training on decentralized edge devices. However,
multiple simultaneous training activities could overload resource-constrained
devices. In this work, we propose a smart multi-tenant FL system, MuFL, to
effectively coordinate and execute simultaneous training activities. We first
formalize the problem of multi-tenant FL, define multi-tenant FL scenarios, and
introduce a vanilla multi-tenant FL system that trains activities sequentially
to form baselines. Then, we propose two approaches to optimize multi-tenant FL:
1) activity consolidation merges training activities into one activity with a
multi-task architecture; 2) after training it for a number of rounds, activity splitting
divides it into groups by employing affinities among activities such that
activities within a group have better synergy. Extensive experiments
demonstrate that MuFL outperforms other methods while consuming 40% less
energy. We hope this work will inspire the community to further study and
optimize multi-tenant FL.
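To make the two optimizations concrete, here is a minimal sketch in Python. It is not the authors' implementation: the affinity values are hypothetical stand-ins for the affinities MuFL measures during consolidated training, and the greedy two-group split is only one plausible way to act on them.

```python
# Minimal sketch of MuFL's two optimizations (illustrative only).
# Phase 1, activity consolidation: all activities are trained as one
# multi-task model with a shared backbone and one head per activity
# (omitted here). Phase 2, activity splitting: after some rounds,
# activities are divided into groups with high mutual affinity.
# The affinity scores below are hard-coded purely for illustration.

from itertools import combinations

activities = ["A", "B", "C", "D"]
affinity = {
    ("A", "B"): 0.9, ("A", "C"): 0.1, ("A", "D"): 0.2,
    ("B", "C"): 0.2, ("B", "D"): 0.1, ("C", "D"): 0.8,
}

def pair_affinity(i, j):
    return affinity.get((i, j), affinity.get((j, i), 0.0))

def group_synergy(group):
    """Average pairwise affinity within a group (a synergy proxy)."""
    pairs = list(combinations(group, 2))
    return sum(pair_affinity(*p) for p in pairs) / len(pairs) if pairs else 0.0

def split_activities(acts):
    """Greedy split into two groups: seed with the least-affine pair,
    then place each remaining activity where synergy is highest."""
    seed_a, seed_b = min(combinations(acts, 2), key=lambda p: pair_affinity(*p))
    groups = [[seed_a], [seed_b]]
    for act in acts:
        if act in (seed_a, seed_b):
            continue
        best = max(groups, key=lambda g: group_synergy(g + [act]))
        best.append(act)
    return groups

print(split_activities(activities))  # -> [['A', 'B'], ['C', 'D']]
```

Grouping by affinity means activities that help each other keep sharing a model after the split, while activities that interfere train separately.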
Related papers
- NVLM: Open Frontier-Class Multimodal LLMs [64.00053046838225]
We introduce NVLM 1.0, a family of frontier-class multimodal large language models (LLMs) that achieve state-of-the-art results on vision-language tasks.
We propose a novel architecture that enhances both training efficiency and multimodal reasoning capabilities.
We develop production-grade multimodality for the NVLM-1.0 models, enabling them to excel in vision-language tasks.
arXiv Detail & Related papers (2024-09-17T17:59:06Z)
- Resource-Efficient Federated Multimodal Learning via Layer-wise and Progressive Training [15.462969044840868]
We introduce LW-FedMML, a layer-wise federated multimodal learning approach which decomposes the training process into multiple stages.
We conduct extensive experiments across various FL and multimodal learning settings to validate the effectiveness of our proposed method.
Specifically, LW-FedMML reduces memory usage by up to $2.7\times$, computational operations (FLOPs) by $2.4\times$, and total communication cost by $2.3\times$ (a generic sketch of this layer-wise scheme appears after this list).
arXiv Detail & Related papers (2024-07-22T07:06:17Z)
- MAS: Towards Resource-Efficient Federated Multiple-Task Learning [29.60567693814403]
Federated learning (FL) is an emerging distributed machine learning method that empowers in-situ model training on decentralized edge devices.
We propose the first FL system to effectively coordinate and train multiple simultaneous FL tasks.
We present our new approach, MAS (Merge and Split), to optimize the performance of training multiple simultaneous FL tasks.
arXiv Detail & Related papers (2023-07-21T01:04:52Z)
- Multi-Job Intelligent Scheduling with Cross-Device Federated Learning [65.69079337653994]
Federated Learning (FL) enables collaborative global machine learning model training without sharing sensitive raw data.
We propose a novel multi-job FL framework, which enables the training process of multiple jobs in parallel.
We propose a novel intelligent scheduling approach that combines multiple methods, including an original reinforcement learning-based scheduler and an original Bayesian optimization-based scheduler.
arXiv Detail & Related papers (2022-11-24T06:17:40Z)
- Multi-Model Federated Learning with Provable Guarantees [19.470024548995717]
Federated Learning (FL) is a variant of distributed learning where devices collaborate to learn a model without sharing their data with the central server or each other.
We refer to the process of training multiple independent models simultaneously in a federated setting using a common pool of clients as multi-model FL.
arXiv Detail & Related papers (2022-07-09T19:47:52Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- Efficient Device Scheduling with Multi-Job Federated Learning [64.21733164243781]
We propose a novel multi-job Federated Learning framework to enable the parallel training process of multiple jobs.
We propose a reinforcement learning-based method and a Bayesian optimization-based method to schedule devices for multiple jobs while minimizing the cost; a toy baseline illustrating this scheduling problem is sketched after this list.
Our proposed approaches significantly outperform baseline approaches in terms of training time (up to 8.67 times faster) and accuracy (up to 44.6% higher).
arXiv Detail & Related papers (2021-12-11T08:05:11Z)
- How Does Cell-Free Massive MIMO Support Multiple Federated Learning Groups? [42.63398054091038]
We propose a cell-free massive multiple-input multiple-output (MIMO) network to guarantee the stable operation of multiple FL processes.
We then develop a novel scheme that asynchronously executes the iterations of FL processes under multicasting downlink and conventional uplink transmission protocols.
arXiv Detail & Related papers (2021-07-20T15:46:53Z)
- MALib: A Parallel Framework for Population-based Multi-agent Reinforcement Learning [61.28547338576706]
Population-based multi-agent reinforcement learning (PB-MARL) refers to a family of methods that nest reinforcement learning (RL) algorithms within population-based training.
We present MALib, a scalable and efficient computing framework for PB-MARL.
arXiv Detail & Related papers (2021-06-05T03:27:08Z)
- UPDeT: Universal Multi-agent Reinforcement Learning via Policy Decoupling with Transformers [108.92194081987967]
We make the first attempt to explore a universal multi-agent reinforcement learning pipeline, designing a single architecture to fit different tasks.
Unlike previous RNN-based models, we utilize a transformer-based model to generate a flexible policy.
The proposed model, named Universal Policy Decoupling Transformer (UPDeT), further relaxes the action restriction and makes the decision process of multi-agent tasks more explainable.
arXiv Detail & Related papers (2021-01-20T07:24:24Z)
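For the layer-wise and progressive training idea referenced above (LW-FedMML), the sketch below shows the generic mechanism on one client: train a backbone one block per stage, running the frozen prefix without gradients so that only the active block and a small auxiliary head consume training memory. The model, heads, and stage schedule are hypothetical, not the paper's code.

```python
# Generic sketch of layer-wise progressive training (illustrative only).
import torch
import torch.nn as nn

# Backbone split into blocks; trained one block per stage.
blocks = nn.ModuleList([
    nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
])
# A small auxiliary head per stage supplies a training signal for the
# active block (a common layer-wise trick; the real method may differ).
heads = nn.ModuleList([nn.Linear(64, 10) for _ in blocks])

x = torch.randn(8, 32)          # toy batch
y = torch.randint(0, 10, (8,))  # toy labels
loss_fn = nn.CrossEntropyLoss()

for stage, (block, head) in enumerate(zip(blocks, heads)):
    opt = torch.optim.SGD(
        list(block.parameters()) + list(head.parameters()), lr=0.1)
    for _ in range(5):                     # a few local steps per stage
        h = x
        with torch.no_grad():              # frozen prefix: no grad memory
            for frozen in blocks[:stage]:
                h = frozen(h)
        logits = head(block(h))            # only this block backpropagates
        loss = loss_fn(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"stage {stage}: loss {loss.item():.3f}")
```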
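The two multi-job scheduling papers above use reinforcement-learning- and Bayesian-optimization-based schedulers; the toy baseline below (a plain greedy heuristic, not either paper's method) only illustrates the problem shape: several jobs compete for a shared device pool, and a synchronous round is as slow as the slowest device assigned to it. The device times and job demands are made up.

```python
# Toy device-to-job scheduler for multi-job FL (greedy baseline).
import heapq

device_time = [1.0, 1.2, 2.5, 3.0, 0.8, 1.1, 4.0, 2.2]  # per-round cost
jobs = {"job0": 3, "job1": 3}                            # devices needed

def greedy_schedule(device_time, jobs):
    """Assign the fastest free device to the job with the largest
    remaining demand, one device at a time."""
    free = [(t, d) for d, t in enumerate(device_time)]
    heapq.heapify(free)
    need = dict(jobs)
    assign = {j: [] for j in jobs}
    while any(need.values()) and free:
        job = max(need, key=need.get)  # most demanding job first
        t, d = heapq.heappop(free)     # fastest available device
        assign[job].append(d)
        need[job] -= 1
    return assign

plan = greedy_schedule(device_time, jobs)
for job, devs in plan.items():
    # Synchronous round time = slowest assigned device.
    print(job, devs, "round time:", max(device_time[d] for d in devs))
```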
This list is automatically generated from the titles and abstracts of the papers in this site.