Adaptive Parameterization of Deep Learning Models for Federated Learning
- URL: http://arxiv.org/abs/2302.02949v2
- Date: Thu, 25 May 2023 04:55:39 GMT
- Title: Adaptive Parameterization of Deep Learning Models for Federated Learning
- Authors: Morten From Elvebakken, Alexandros Iosifidis, Lukas Esterle
- Abstract summary: Federated Learning offers a way to train deep neural networks in a distributed fashion.
It incurs a communication overhead as the model parameters or gradients need to be exchanged regularly during training.
In this paper, we propose to utilise parallel Adapters for Federated Learning.
- Score: 85.82002651944254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning offers a way to train deep neural networks in a
distributed fashion. While this addresses limitations related to distributed
data, it incurs a communication overhead as the model parameters or gradients
need to be exchanged regularly during training. This can be an issue with
large-scale distribution of learning tasks and can negate the benefit of the
respective resource distribution. In this paper, we propose to utilise parallel
Adapters for Federated Learning. Using various datasets, we show that Adapters
can be incorporated into different Federated Learning techniques. We highlight
that our approach can achieve similar inference performance compared to
training the full model while reducing the communication overhead by roughly
90%. We further explore the applicability of Adapters in cross-silo and
cross-device settings, as well as different non-IID data distributions.
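The abstract describes the mechanism only at a high level. As a rough illustration of how parallel Adapters keep the per-round payload small, the hedged sketch below freezes a backbone layer, adds a small trainable branch in parallel, and averages only the adapter weights across clients; the class names, bottleneck width, and FedAvg helper are illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch of a parallel Adapter for Federated Learning (not the authors'
# reference implementation): the backbone stays frozen and only the small
# adapter branch is trained and exchanged, which is where the savings come from.
import copy
import torch
import torch.nn as nn

class ParallelAdapterLayer(nn.Module):
    """A frozen backbone layer with a small trainable branch added in parallel."""
    def __init__(self, backbone_layer: nn.Module, dim: int, bottleneck: int = 16):
        super().__init__()
        self.backbone = backbone_layer
        for p in self.backbone.parameters():      # freeze the pre-trained weights
            p.requires_grad = False
        self.adapter = nn.Sequential(             # small bottleneck branch
            nn.Linear(dim, bottleneck), nn.ReLU(), nn.Linear(bottleneck, dim)
        )

    def forward(self, x):
        return self.backbone(x) + self.adapter(x)  # parallel (additive) adapter

def adapter_state(model: nn.Module) -> dict:
    """Clients upload only the adapter parameters, not the frozen backbone."""
    return {k: v.detach().clone()
            for k, v in model.state_dict().items() if "adapter" in k}

def fedavg_adapters(states: list) -> dict:
    """Plain FedAvg over the (small) adapter state dicts collected from clients."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k] for s in states]).mean(dim=0)
    return avg
```

As a back-of-the-envelope check (illustrative numbers, not from the paper): for a 512-unit layer, a bottleneck of 16 puts roughly 2 x 512 x 16 ≈ 16k parameters in the adapter against 512 x 512 ≈ 262k in the frozen layer, which is consistent in spirit with the roughly 90% communication reduction the abstract reports.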
Related papers
- Personalized Federated Learning via Feature Distribution Adaptation [3.410799378893257]
Federated learning (FL) is a distributed learning framework that leverages commonalities between distributed client datasets to train a global model.
Personalized federated learning (PFL) seeks to address heterogeneity across client data by learning individual models tailored to each client.
We propose an algorithm, pFedFDA, that efficiently generates personalized models by adapting global generative classifiers to their local feature distributions.
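As a hedged reading of "adapting global generative classifiers to local feature distributions", the sketch below fits a simple Gaussian classifier (per-class means plus a shared covariance) and blends globally aggregated means with locally re-estimated ones; the blending rule and function names are assumptions for illustration, not pFedFDA's actual algorithm.

```python
# Illustrative sketch only: a generative (Gaussian) classifier adapted to a
# client's local feature statistics. Not the pFedFDA reference method.
import numpy as np

def fit_gaussian_classifier(features: np.ndarray, labels: np.ndarray, num_classes: int):
    """Per-class means plus a shared covariance, i.e. a simple generative classifier."""
    means = np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])
    cov = np.cov(features, rowvar=False) + 1e-3 * np.eye(features.shape[1])
    return means, cov

def personalize(global_means, local_means, alpha: float = 0.5):
    """Blend global and locally re-estimated class means (assumed adaptation step)."""
    return alpha * local_means + (1.0 - alpha) * global_means

def predict(features, means, cov):
    """Assign each sample to the class with the smallest Mahalanobis distance."""
    inv = np.linalg.inv(cov)
    diffs = features[:, None, :] - means[None, :, :]           # (n, C, d)
    dists = np.einsum("ncd,de,nce->nc", diffs, inv, diffs)      # squared Mahalanobis
    return dists.argmin(axis=1)
```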
arXiv Detail & Related papers (2024-11-01T03:03:52Z)
- Collaborative and Efficient Personalization with Mixtures of Adaptors [5.195669033269619]
We propose a parameter-efficient framework to tackle multi-task learning problems.
We call our framework Federated Low-Rank Adaptive Learning (FLoRAL).
We show promising experimental results on synthetic datasets and real-world federated multi-task problems.
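A minimal sketch of what a "mixture of adaptors" could look like on a single frozen linear layer, assuming LoRA-style low-rank factors and a softmax over per-client mixture logits; this illustrates the general idea, not FLoRAL's exact construction.

```python
# Illustrative mixture of low-rank adaptors over a frozen base layer.
import torch
import torch.nn as nn

class LoRAMixture(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, num_adaptors: int = 3):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                       # shared weights stay frozen
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(num_adaptors, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_adaptors, d_out, rank))
        self.mix = nn.Parameter(torch.zeros(num_adaptors))  # per-client mixture logits

    def forward(self, x):
        w = torch.softmax(self.mix, dim=0)                # convex adaptor weights
        delta = torch.einsum("k,kor,kri->oi", w, self.B, self.A)  # mixed low-rank update
        return self.base(x) + nn.functional.linear(x, delta)
```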
arXiv Detail & Related papers (2024-10-04T15:11:15Z)
- FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning [5.23984567704876]
Federated learning offers a paradigm for preserving privacy in distributed machine learning.
Traditional approaches fail to address the phenomenon of class-wise bias in global long-tailed data.
The new method, FedLF, introduces three modifications in the local training phase: adaptive logit adjustment, continuous class-centred optimization, and feature decorrelation.
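Of the three listed modifications, logit adjustment is the easiest to make concrete. The sketch below shows the standard prior-based logit adjustment that such a method builds on, with a temperature tau; FedLF's adaptive variant is not reproduced here.

```python
# Standard logit-adjusted cross-entropy for class-imbalanced local data
# (illustrative baseline, not FedLF's adaptive rule).
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, targets, class_counts, tau: float = 1.0):
    """Cross-entropy on logits shifted by tau * log(class prior).

    class_counts holds the per-class sample counts on this client; rare classes
    receive a smaller additive offset, pushing the model to reserve more of its
    logit range for them during local training.
    """
    priors = class_counts / class_counts.sum()
    adjusted = logits + tau * torch.log(priors + 1e-12)
    return F.cross_entropy(adjusted, targets)
```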
arXiv Detail & Related papers (2024-09-18T16:25:29Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
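A hedged sketch of the kind of client-side update this suggests: each client maintains its own AMSGrad-style moment statistics, so its effective step size is auto-tuned from its local gradient history. The scheduling and convergence machinery of FedLALR itself is not reproduced here.

```python
# Illustrative client-local AMSGrad update; the per-client statistics are what
# make the effective learning rate client-specific.
import numpy as np

class ClientAMSGrad:
    def __init__(self, dim, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(dim)       # first moment
        self.v = np.zeros(dim)       # second moment
        self.v_hat = np.zeros(dim)   # running max of second moments (AMSGrad)

    def step(self, params, grad):
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        # The denominator below is driven entirely by this client's own gradient
        # history, so each client ends up with its own effective step size.
        return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```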
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Federated Learning of Neural ODE Models with Different Iteration Counts [0.9444784653236158]
Federated learning is a distributed machine learning approach in which clients train models locally on their own data and upload only the trained models to a server, so results are shared among clients without uploading raw data.
In this paper, we utilize Neural ODE based models for federated learning.
We show that our approach can reduce communication size by up to 92.4% compared with a baseline ResNet model using CIFAR-10 dataset.
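One hedged intuition for the communication saving: a Neural-ODE-style block reuses the same weights at every integration step, so clients can run different numbers of steps while uploading only one block's worth of parameters. The fixed-step Euler block below illustrates this assumption; it is not the authors' model.

```python
# Illustrative ODE-style residual block with weight reuse across steps.
import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())  # shared dynamics

    def forward(self, x, steps: int = 4, dt: float = 0.25):
        # Fixed-step Euler integration; `steps` can differ per client without
        # changing the number of parameters that must be uploaded.
        for _ in range(steps):
            x = x + dt * self.f(x)
        return x
```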
arXiv Detail & Related papers (2022-08-19T17:57:32Z)
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach in which a shared server model learns by aggregating parameter updates computed locally on the training data of spatially distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as medical healthcare, computer vision, and the Internet of Things (IoT).
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed up models, and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Deep invariant networks with differentiable augmentation layers [87.22033101185201]
Methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- Flexible Parallel Learning in Edge Scenarios: Communication, Computational and Energy Cost [20.508003076947848]
Fog- and IoT-based scenarios often require combining both data and model parallelism.
We present a framework for flexible parallel learning (FPL), achieving both data and model parallelism.
Our experiments, carried out using state-of-the-art deep-network architectures and large-scale datasets, confirm that FPL allows for an excellent trade-off among computational (hence energy) cost, communication overhead, and learning performance.
arXiv Detail & Related papers (2022-01-19T03:47:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.