FedLPS: Heterogeneous Federated Learning for Multiple Tasks with Local
Parameter Sharing
- URL: http://arxiv.org/abs/2402.08578v1
- Date: Tue, 13 Feb 2024 16:30:30 GMT
- Title: FedLPS: Heterogeneous Federated Learning for Multiple Tasks with Local
Parameter Sharing
- Authors: Yongzhe Jia, Xuyun Zhang, Amin Beheshti, Wanchun Dou
- Abstract summary: We propose Heterogeneous Federated Learning with Local Parameter Sharing (FedLPS).
FedLPS uses transfer learning to facilitate the deployment of multiple tasks on a single device by dividing the local model into a shareable encoder and task-specific predictors.
FedLPS significantly outperforms the state-of-the-art (SOTA) FL frameworks by up to 4.88% and reduces the computational resource consumption by 21.3%.
- Score: 14.938531944702193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) has emerged as a promising solution in Edge Computing
(EC) environments to process the proliferation of data generated by edge
devices. By collaboratively optimizing the global machine learning models on
distributed edge devices, FL circumvents the need for transmitting raw data and
enhances user privacy. Despite practical successes, FL still confronts
significant challenges, including constrained edge device resources, multi-task
deployment, and data heterogeneity. Existing studies, however, focus on
mitigating the FL training cost of each single task while neglecting the
resource consumption across multiple tasks in heterogeneous FL scenarios. In
this paper, we propose Heterogeneous Federated Learning with Local Parameter
Sharing (FedLPS) to fill this gap. FedLPS leverages principles from transfer
learning to facilitate the deployment of multiple tasks on a single device by
dividing the local model into a shareable encoder and task-specific predictors.
To further reduce resource consumption, FedLPS employs a channel-wise model
pruning algorithm that shrinks the footprint of local models while accounting
for both data and system heterogeneity. Additionally, a novel heterogeneous
model aggregation algorithm is proposed to aggregate the heterogeneous
predictors in FedLPS. We implemented the proposed FedLPS on a real FL platform
and compared it with state-of-the-art (SOTA) FL frameworks. The experimental
results on five popular datasets and two modern DNN models illustrate that the
proposed FedLPS significantly outperforms the SOTA FL frameworks by up to 4.88%
and reduces the computational resource consumption by 21.3%. Our code is
available at: https://github.com/jyzgh/FedLPS.
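To make the three components above concrete, the following minimal PyTorch sketch illustrates (i) a local model split into a shareable encoder and task-specific predictors, (ii) an L1-norm channel-pruning mask, and (iii) channel-aware averaging of the shared encoder across heterogeneously pruned clients. All names, the pruning criterion, and the aggregation rule are illustrative assumptions; the actual algorithms are in the repository linked above.

    import torch
    import torch.nn as nn

    # Hypothetical local model: one shareable encoder plus a lightweight
    # predictor head per task, letting several tasks share a single device.
    class LocalModel(nn.Module):
        def __init__(self, num_tasks, in_dim=32, hid=64, out_dim=10):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
            self.predictors = nn.ModuleList(
                nn.Linear(hid, out_dim) for _ in range(num_tasks))

        def forward(self, x, task_id):
            return self.predictors[task_id](self.encoder(x))

    def channel_keep_mask(weight, keep_ratio):
        # Channel-wise pruning stand-in: keep the output channels with the
        # largest L1 norms, sized to the device's resource budget.
        scores = weight.abs().sum(dim=1)
        k = max(1, int(keep_ratio * scores.numel()))
        mask = torch.zeros_like(scores, dtype=torch.bool)
        mask[scores.topk(k).indices] = True
        return mask

    def aggregate_shared_weight(weights, masks):
        # Heterogeneous-aggregation stand-in: average each encoder channel
        # only over the clients whose pruned model actually kept it.
        total = torch.zeros_like(weights[0])
        count = torch.zeros(weights[0].shape[0])
        for w, m in zip(weights, masks):
            total[m] += w[m]
            count += m.float()
        return total / count.clamp(min=1).unsqueeze(1)

In this reading, each client trains its pruned encoder together with only the predictors for its own tasks; the server averages the encoder channel by channel over the clients that kept each channel, while predictors remain local or are averaged per task.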
Related papers
- Client Contribution Normalization for Enhanced Federated Learning [4.726250115737579]
Mobile devices, including smartphones and laptops, generate decentralized and heterogeneous data.
Federated Learning (FL) offers a promising alternative by enabling collaborative training of a global model across decentralized devices without data sharing.
This paper focuses on data-dependent heterogeneity in FL and proposes a novel approach leveraging mean latent representations extracted from locally trained models.
arXiv Detail & Related papers (2024-11-10T04:03:09Z)
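The summary above does not spell out how the mean latent representations are used. One plausible, purely illustrative reading in NumPy: each client reports the mean of its locally trained encoder's outputs, and the server down-weights clients whose means sit far from the population centroid. The function names and the weighting rule are hypothetical, not the paper's actual method.

    import numpy as np

    def mean_latent(encode, local_samples):
        # Mean latent representation of a client's local data under its
        # locally trained encoder (encode: sample -> feature vector).
        return np.mean([encode(x) for x in local_samples], axis=0)

    def normalized_contributions(client_means):
        # Clients whose mean latent sits far from the centroid (strong
        # data heterogeneity) receive smaller aggregation weights.
        centroid = np.mean(client_means, axis=0)
        dists = np.linalg.norm(np.asarray(client_means) - centroid, axis=1)
        scores = 1.0 / (1.0 + dists)
        return scores / scores.sum()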
- Lightweight Industrial Cohorted Federated Learning for Heterogeneous Assets [0.0]
Federated Learning (FL) is the most widely adopted collaborative learning approach for training decentralized Machine Learning (ML) models.
However, because high data similarity or homogeneity is taken for granted in FL tasks, FL is still not specifically designed for the industrial setting.
We propose a Lightweight Industrial Cohorted FL (LICFL) algorithm that uses model parameters for cohorting without any additional on-edge (client-level) computations and communications.
arXiv Detail & Related papers (2024-07-25T12:48:56Z)
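Cohorting on model parameters alone, with no extra on-edge computation, could look like the sketch below: the server clusters flattened client parameters and then aggregates within each cohort. The k-means clustering used here is an assumption for illustration, not necessarily LICFL's actual cohorting rule.

    import numpy as np

    def flatten_params(state_dict):
        # Concatenate all of a client's parameter tensors into one vector.
        return np.concatenate([np.asarray(w).ravel() for w in state_dict.values()])

    def cohort_clients(client_states, num_cohorts=2, iters=10, seed=0):
        # Server-side k-means over flattened parameters; clients in the
        # same cohort are then aggregated together (e.g., via FedAvg).
        X = np.stack([flatten_params(s) for s in client_states])
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), num_cohorts, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
            for c in range(num_cohorts):
                if np.any(labels == c):
                    centers[c] = X[labels == c].mean(axis=0)
        return labels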
- Non-Federated Multi-Task Split Learning for Heterogeneous Sources [17.47679789733922]
We introduce a new architecture and methodology to perform multi-task learning for heterogeneous data sources efficiently.
We show through theoretical analysis that multi-task split learning (MTSL) can achieve fast convergence by tuning the learning rates of the server and the clients.
arXiv Detail & Related papers (2024-05-31T19:27:03Z)
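For context, split learning cuts the model at an intermediate layer: the client runs the front layers and ships the cut-layer activation to the server, which runs the (possibly task-specific) back layers. The minimal PyTorch sketch below shows that data flow with separately tuned client and server learning rates, echoing the convergence claim above; it is a generic illustration, not the paper's exact MTSL architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    client_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU())     # on-device layers
    server_heads = nn.ModuleList([nn.Linear(64, 10),             # task 0 head
                                  nn.Linear(64, 5)])             # task 1 head

    opt_client = torch.optim.SGD(client_net.parameters(), lr=0.05)   # client step size
    opt_server = torch.optim.SGD(server_heads.parameters(), lr=0.1)  # server step size

    x, y, task = torch.randn(8, 32), torch.randint(0, 10, (8,)), 0

    smashed = client_net(x)                # client sends this cut-layer activation
    loss = F.cross_entropy(server_heads[task](smashed), y)
    opt_client.zero_grad()
    opt_server.zero_grad()
    loss.backward()                        # gradients flow back across the cut
    opt_client.step()
    opt_server.step()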
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable problem, providing closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a pruned global part, shared with all devices to learn data representations, and a personalized part that is fine-tuned for each specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate a loss-minimization problem under long-term energy and latency constraints, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyze a novel aggregation framework that allows for formalizing and tackling computationally heterogeneous data.
The proposed aggregation algorithms are extensively analyzed from both theoretical and experimental perspectives.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
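Under the MAML reading, each client first adapts the global model on a small support batch and then derives its update from the adapted model's loss on a query batch. A first-order sketch in that spirit (function and batch names are hypothetical, and full MAML's second-order terms are deliberately dropped):

    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fomaml_client_step(model, support, query, inner_lr=0.05, outer_lr=0.1):
        # Inner loop: adapt a throwaway copy of the global model on the
        # client's support batch.
        adapted = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        inner_opt.zero_grad()
        F.cross_entropy(adapted(support[0]), support[1]).backward()
        inner_opt.step()

        # Outer step (first-order approximation): apply the adapted model's
        # query-batch gradients to the original parameters. The server then
        # averages these meta-updated client models as usual.
        adapted.zero_grad()
        F.cross_entropy(adapted(query[0]), query[1]).backward()
        with torch.no_grad():
            for p, q in zip(model.parameters(), adapted.parameters()):
                p -= outer_lr * q.grad

    # Toy usage with a hypothetical two-layer model:
    net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    make_batch = lambda: (torch.randn(8, 32), torch.randint(0, 10, (8,)))
    fomaml_client_step(net, make_batch(), make_batch())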
- Resource-Aware Heterogeneous Federated Learning using Neural Architecture Search [8.184714897613166]
Federated Learning (FL) is used to train AI/ML models in distributed and privacy-preserving settings.
We propose Resource-aware Federated Learning (RaFL), which allocates resource-aware specialized models to edge devices using Neural Architecture Search (NAS).
arXiv Detail & Related papers (2022-11-09T09:38:57Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
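As a generic illustration of the device-to-edge-to-cloud pattern discussed above, the sketch below performs two tiers of sample-size-weighted averaging under a fixed, hypothetical user assignment; the paper's contribution is optimizing that assignment and the resource allocation, which is not reproduced here.

    import numpy as np

    def fedavg(params, sizes):
        # Sample-size-weighted average of parameter vectors.
        return np.average(np.stack(params), axis=0, weights=np.asarray(sizes, float))

    def hierarchical_round(device_params, device_sizes, assignment, num_edges):
        # Tier 1: each edge node averages the devices assigned to it;
        # Tier 2: the cloud averages the edge-level models.
        edge_models, edge_sizes = [], []
        for e in range(num_edges):
            ids = [i for i, a in enumerate(assignment) if a == e]
            if not ids:
                continue
            edge_models.append(fedavg([device_params[i] for i in ids],
                                      [device_sizes[i] for i in ids]))
            edge_sizes.append(sum(device_sizes[i] for i in ids))
        return fedavg(edge_models, edge_sizes)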