MAS: Towards Resource-Efficient Federated Multiple-Task Learning
- URL: http://arxiv.org/abs/2307.11285v1
- Date: Fri, 21 Jul 2023 01:04:52 GMT
- Title: MAS: Towards Resource-Efficient Federated Multiple-Task Learning
- Authors: Weiming Zhuang, Yonggang Wen, Lingjuan Lyu, Shuai Zhang
- Abstract summary: Federated learning (FL) is an emerging distributed machine learning method that empowers in-situ model training on decentralized edge devices.
We propose the first FL system to effectively coordinate and train multiple simultaneous FL tasks.
We present our new approach, MAS (Merge and Split), to optimize the performance of training multiple simultaneous FL tasks.
- Score: 29.60567693814403
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an emerging distributed machine learning method
that empowers in-situ model training on decentralized edge devices. However,
multiple simultaneous FL tasks could overload resource-constrained devices. In
this work, we propose the first FL system to effectively coordinate and train
multiple simultaneous FL tasks. We first formalize the problem of training
simultaneous FL tasks. Then, we present our new approach, MAS (Merge and
Split), to optimize the performance of training multiple simultaneous FL tasks.
MAS starts by merging FL tasks into an all-in-one FL task with a multi-task
architecture. After training for a few rounds, MAS splits the all-in-one FL
task into two or more FL tasks by using the affinities among tasks measured
during the all-in-one training. It then continues training each split of FL
tasks based on model parameters from the all-in-one training. Extensive
experiments demonstrate that MAS outperforms other methods while reducing
training time by 2x and energy consumption by 40%. We hope this work
will inspire the community to further study and optimize training simultaneous
FL tasks.
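The merge-then-split procedure in the abstract can be made concrete with a short sketch. The snippet below is a minimal, illustrative take on the splitting step only: given pairwise task affinities measured during all-in-one training, pick the two-way split that keeps high-affinity tasks together. The function name, the toy affinity matrix, and the brute-force search are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of MAS's splitting step (not the authors' code).
# `affinity[i, j]` stands in for the inter-task affinity that MAS
# measures while training the merged all-in-one model.
from itertools import combinations
import numpy as np

def best_two_way_split(affinity: np.ndarray):
    """Exhaustively pick the 2-way split maximizing intra-group affinity.

    Brute force is fine for a handful of tasks; it is used here only to
    make the grouping objective explicit.
    """
    n = affinity.shape[0]
    tasks = set(range(n))
    best, best_score = None, -np.inf
    for k in range(1, n // 2 + 1):
        for group in combinations(range(n), k):
            g1, g2 = set(group), tasks - set(group)
            score = sum(affinity[i, j]
                        for g in (g1, g2)
                        for i, j in combinations(sorted(g), 2))
            if score > best_score:
                best, best_score = (g1, g2), score
    return best

# Toy affinities for 4 tasks: tasks 0/1 and 2/3 pair well together.
A = np.array([[0.0, 0.9, 0.1, 0.2],
              [0.9, 0.0, 0.2, 0.1],
              [0.1, 0.2, 0.0, 0.8],
              [0.2, 0.1, 0.8, 0.0]])
print(best_two_way_split(A))  # -> ({0, 1}, {2, 3})
```

Each resulting group would then continue federated training from the all-in-one model's parameters, as the abstract describes.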
Related papers
- Fair Concurrent Training of Multiple Models in Federated Learning [32.74516106486226] (2024-04-22)
Federated learning (FL) enables collaborative learning across multiple clients.
Recent proliferation of FL applications may increasingly require multiple FL tasks to be trained simultaneously.
Current multiple-model FL (MMFL) algorithms use naive average-based client-task allocation schemes.
We propose a difficulty-aware algorithm that dynamically allocates clients to tasks in each training round.
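As a rough picture of what difficulty-aware allocation could look like, the sketch below gives harder tasks (proxied by their current training loss) a larger share of the clients each round. The loss-proportional quota rule and all names are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch of difficulty-aware client-task allocation: harder
# tasks (higher current loss) receive proportionally more clients.
import random

def allocate_clients(clients, task_losses, rng):
    total = sum(task_losses.values())
    # Quota per task, proportional to its share of the total loss.
    # (Rounding can leave a client or two unassigned; ignored here.)
    quotas = {t: max(1, round(len(clients) * loss / total))
              for t, loss in task_losses.items()}
    pool = clients[:]
    rng.shuffle(pool)
    alloc, i = {}, 0
    for task, q in quotas.items():
        alloc[task] = pool[i:i + q]
        i += q
    return alloc

rng = random.Random(0)
print(allocate_clients([f"c{i}" for i in range(10)],
                       {"taskA": 0.4, "taskB": 1.6}, rng))
# taskB (4x the loss) gets roughly 4x the clients this round.
```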
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304] (2024-01-09)
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FMs), however, the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
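A back-of-the-envelope comparison shows why PEFT is attractive at FM scale in FL: clients can train and communicate small low-rank adapters (LoRA-style) instead of the full weights. The hidden size and rank below are illustrative, not tied to any model in the survey.

```python
# Why PEFT cuts FL communication: clients exchange only small adapter
# matrices rather than the frozen foundation-model weights.
import numpy as np

d, r = 4096, 8                    # hidden size vs. LoRA rank (illustrative)
W = np.zeros((d, d))              # frozen pretrained weight (never sent)
A = 0.01 * np.random.randn(d, r)  # trainable low-rank factor (uploaded)
B = np.zeros((r, d))              # trainable low-rank factor (uploaded)

# Effective layer: x @ (W + A @ B); only A and B travel over the network.
full_params = W.size
peft_params = A.size + B.size
print(f"communicated fraction: {peft_params / full_params:.4%}")  # ~0.39%
```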
- MetisFL: An Embarrassingly Parallelized Controller for Scalable & Efficient Federated Learning Workflows [1.9874264019909988] (2023-11-01)
A Federated Learning (FL) system typically consists of two core processing entities: the federation controller and the learners.
We designed and developed a novel FL system, MetisFL, in which the federation controller is a first-class citizen.
MetisFL re-engineers all the operations conducted by the federation controller to accelerate the training of large-scale FL.
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956] (2023-03-23)
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
FL has recently been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which gives FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
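A first-order, toy-loss sketch of the MAML view of FL (an illustration of the framing, not the paper's formulation): each client reports the gradient it sees after one local adaptation step, and the server meta-updates the shared initialization so that a single local step adapts well everywhere.

```python
# First-order MAML-style FL on a toy quadratic loss 0.5 * (w @ x - y)^2.
import numpy as np

def client_meta_grad(w, x, y, alpha=0.1):
    g_inner = (w @ x - y) * x        # gradient at the shared model
    w_adapted = w - alpha * g_inner  # one local adaptation step
    return (w_adapted @ x - y) * x   # gradient after adapting (FO-MAML)

w = np.zeros(3)                      # shared initialization
clients = [(np.array([1.0, 0.0, 2.0]), 1.0),
           (np.array([0.0, 1.0, 1.0]), -1.0)]
for _ in range(100):                 # server rounds (outer meta-updates)
    meta = np.mean([client_meta_grad(w, x, y) for x, y in clients], axis=0)
    w -= 0.1 * meta
print(w)  # an initialization from which one local step adapts quickly
```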
- Multi-Job Intelligent Scheduling with Cross-Device Federated Learning [65.69079337653994] (2022-11-24)
Federated Learning (FL) enables collaborative global machine learning model training without sharing sensitive raw data.
We propose a novel multi-job FL framework, which enables the training process of multiple jobs in parallel.
We propose a novel intelligent scheduling approach that combines multiple scheduling methods, including an original reinforcement learning-based scheduler and an original Bayesian optimization-based scheduler.
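The shape of the scheduling problem can be illustrated with a toy greedy rule: several FL jobs compete for the same devices, and each job's round time is gated by its slowest assigned device. The real framework learns this policy with RL and Bayesian optimization; `greedy_schedule` and its inputs below are hypothetical.

```python
# Toy multi-job device scheduler: deal fast devices across jobs
# round-robin so no job is left with only stragglers, which would
# dominate its round time.  Illustration only, not the paper's method.
def greedy_schedule(device_speed, demand):
    devices = sorted(device_speed, key=device_speed.get, reverse=True)
    schedule = {job: [] for job in demand}
    it = iter(devices)
    while any(len(schedule[j]) < demand[j] for j in demand):
        for job in demand:
            if len(schedule[job]) < demand[job]:
                schedule[job].append(next(it))
    return schedule

speeds = {"phone": 1.0, "laptop": 5.0, "tablet": 2.0, "tv": 0.5}
print(greedy_schedule(speeds, {"job1": 2, "job2": 2}))
# {'job1': ['laptop', 'phone'], 'job2': ['tablet', 'tv']}
```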
- M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design [95.41238363769892] (2022-10-26)
Multi-task learning (MTL) encapsulates multiple learned tasks in a single model and often lets those tasks learn better jointly.
Current MTL regimes have to activate nearly the entire model even to execute a single task.
We present a model-accelerator co-design framework to enable efficient on-device MTL.
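A sparse, task-gated mixture-of-experts layer makes per-task activation concrete. The sketch below is a generic top-k MoE router; the dimensions, k, and routing rule are chosen for illustration rather than being M$^3$ViT's exact design.

```python
# Generic task-gated top-k mixture-of-experts layer: executing one task
# activates only k of n expert MLPs, so compute scales with k, not n.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, k = 16, 8, 2
experts = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_experts)]
router = rng.standard_normal((3, n_experts))    # one routing row per task

def moe_forward(x: np.ndarray, task_id: int) -> np.ndarray:
    logits = router[task_id]
    top = np.argsort(logits)[-k:]               # experts used for this task
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()
    # Only k of n_experts matmuls actually run.
    return sum(g * (x @ experts[e]) for g, e in zip(gates, top))

y = moe_forward(rng.standard_normal(d), task_id=1)
print(y.shape)  # (16,) computed with 2 of 8 experts
```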
- Smart Multi-tenant Federated Learning [16.025681567222477] (2022-07-09)
We propose a smart multi-tenant FL system, MuFL, to effectively coordinate and execute simultaneous training activities.
We first formalize the problem of multi-tenant FL, define multi-tenant FL scenarios, and introduce a vanilla multi-tenant FL system that trains activities sequentially to form baselines.
Experiments demonstrate that MuFL outperforms other methods while consuming 40% less energy.
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183] (2022-03-18)
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
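One way to picture in-situ size customization: train several narrow base models, then at deployment combine however many the device can afford. The ensemble-style mixing below is a simplified stand-in for Split-Mix's base-model combination; all names are illustrative.

```python
# Simplified picture of size customization via base-model mixing:
# a larger compute budget simply mixes in more of the trained bases.
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, n_bases = 8, 4, 4
bases = [0.1 * rng.standard_normal((d_in, d_out)) for _ in range(n_bases)]

def mixed_predict(x: np.ndarray, budget: int) -> np.ndarray:
    """Use the first `budget` base models; bigger budget -> bigger model."""
    return np.mean([x @ W for W in bases[:budget]], axis=0)

x = rng.standard_normal(d_in)
print(mixed_predict(x, budget=1))  # smallest deployable model
print(mixed_predict(x, budget=4))  # full-capacity model, same weights
```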
- Efficient Device Scheduling with Multi-Job Federated Learning [64.21733164243781] (2021-12-11)
We propose a novel multi-job Federated Learning framework to enable the parallel training process of multiple jobs.
We propose a reinforcement learning-based method and a Bayesian optimization-based method to schedule devices for multiple jobs while minimizing the cost.
Our proposed approaches significantly outperform baseline approaches in terms of training time (up to 8.67 times faster) and accuracy (up to 44.6% higher).
- Papaya: Practical, Private, and Scalable Federated Learning [6.833772874570774] (2021-11-08)
Cross-device Federated Learning (FL) is a distributed learning paradigm with several challenges.
Most FL systems described in the literature are synchronous: they perform a synchronized aggregation of model updates from individual clients.
In this work, we outline a production asynchronous FL system design.
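To contrast with synchronized aggregation, here is a generic asynchronous server loop with FedAsync-style staleness down-weighting. It illustrates the asynchronous design space the abstract points to, not Papaya's production system.

```python
# Generic asynchronous FL server: fold in each client update as it
# arrives instead of waiting on a cohort, down-weighting stale updates.
import numpy as np

class AsyncServer:
    def __init__(self, w0: np.ndarray, base_lr: float = 0.5):
        self.w = w0
        self.version = 0              # bumped on every applied update
        self.base_lr = base_lr

    def on_client_update(self, client_w: np.ndarray, client_version: int):
        staleness = self.version - client_version
        mix = self.base_lr / (1 + staleness)   # stale updates count less
        self.w = (1 - mix) * self.w + mix * client_w
        self.version += 1

server = AsyncServer(np.zeros(3))
server.on_client_update(np.ones(3), client_version=0)       # fresh
server.on_client_update(np.full(3, 2.0), client_version=0)  # 1 round stale
print(server.w, server.version)
```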
- How Does Cell-Free Massive MIMO Support Multiple Federated Learning Groups? [42.63398054091038] (2021-07-20)
We propose a cell-free massive multiple-input multiple-output (MIMO) network to guarantee the stable operation of multiple FL processes.
We then develop a novel scheme that asynchronously executes the iterations of FL processes under multicasting downlink and conventional uplink transmission protocols.