CC-FedAvg: Computationally Customized Federated Averaging
- URL: http://arxiv.org/abs/2212.13679v3
- Date: Sat, 1 Jul 2023 08:36:44 GMT
- Title: CC-FedAvg: Computationally Customized Federated Averaging
- Authors: Hao Zhang, Tingting Wu, Siyao Cheng, Jie Liu
- Abstract summary: Federated learning (FL) is an emerging paradigm for training models with distributed data from numerous Internet of Things (IoT) devices.
We propose a strategy for estimating local models without computationally intensive iterations.
We show that CC-FedAvg has the same convergence rate and comparable performance as FedAvg without resource constraints.
- Score: 11.687451505965655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an emerging paradigm for training models
with distributed data from numerous Internet of Things (IoT) devices. It
inherently assumes a uniform capacity among participants. However, due to
differing conditions, such as varying energy budgets or unrelated tasks running
in parallel, participants have diverse computational resources in practice.
Participants with insufficient computation budgets must plan the use of their
restricted computational resources appropriately; otherwise, they would be
unable to complete the entire training procedure, and model performance would
decline. To address this issue, we propose a strategy for
estimating local models without computationally intensive iterations. Based on
it, we propose Computationally Customized Federated Averaging (CC-FedAvg),
which allows participants to determine whether to perform traditional local
training or model estimation in each round based on their current computational
budgets. Both theoretical analysis and exhaustive experiments indicate that
CC-FedAvg has the same convergence rate and comparable performance as FedAvg
without resource constraints. Furthermore, CC-FedAvg can be viewed as a
computation-efficient version of FedAvg that retains model performance while
considerably lowering computation overhead.
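The decision rule described in the abstract, where each participant chooses per round between traditional local training and a cheap model estimate and the server then averages as in FedAvg, can be sketched in a few lines. The following Python snippet is a minimal illustration only: the estimator shown (reusing a participant's update from the previous round) and all names such as cc_fedavg_round, local_train, and last_updates are assumptions made for this example, not the paper's actual estimation rule or reference implementation.

```python
# Minimal sketch of a CC-FedAvg-style round in NumPy (illustrative only, not the
# authors' reference code). Assumption for this example: a participant that skips
# local training is represented by reusing its update from the previous round;
# the paper's actual estimation rule may differ.
import numpy as np

def local_train(model, data, lr=0.01, steps=10):
    """Placeholder for ordinary local SGD; returns the local update (delta)."""
    updated = model.copy()
    for _ in range(steps):
        grad = np.random.randn(*model.shape) * 0.01  # stand-in for a real gradient
        updated -= lr * grad
    return updated - model

def cc_fedavg_round(global_model, clients, last_updates):
    """One communication round with computationally customized participation.

    clients: dict mapping client id -> (local_data, has_budget_this_round)
    last_updates: dict mapping client id -> most recent local update (delta)
    """
    updates = []
    for cid, (data, has_budget) in clients.items():
        if has_budget:
            # Traditional local training on this client's data.
            delta = local_train(global_model, data)
            last_updates[cid] = delta
        else:
            # Cheap model estimation instead of iterating (assumed form: reuse
            # the previous round's update, or zero if none exists yet).
            delta = last_updates.get(cid, np.zeros_like(global_model))
        updates.append(delta)
    # FedAvg-style aggregation: average the actual or estimated local updates.
    return global_model + np.mean(updates, axis=0)

# Example usage: client 1 has no computation budget in this round.
clients = {0: (None, True), 1: (None, False)}
new_model = cc_fedavg_round(np.zeros(10), clients, last_updates={})
```

When every participant has sufficient budget, such a round reduces to plain FedAvg, which is consistent with the abstract's claim that CC-FedAvg attains the same convergence rate as FedAvg without resource constraints.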
Related papers
- FedHPL: Efficient Heterogeneous Federated Learning with Prompt Tuning and Logit Distillation [32.305134875959226]
Federated learning (FL) is a privacy-preserving paradigm that enables distributed clients to collaboratively train models with a central server.
We propose FedHPL, a parameter-efficient unified Federated learning framework for Heterogeneous settings.
We show that our framework outperforms state-of-the-art FL approaches with lower overhead and fewer training rounds.
arXiv Detail & Related papers (2024-05-27T15:25:32Z) - Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision to accelerate inference by dynamically assigning resources for each data instance.
Our method reduces inference cost while maintaining the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z) - Scalable Federated Unlearning via Isolated and Coded Sharding [76.12847512410767]
Federated unlearning has emerged as a promising paradigm for erasing the effect of client-level data.
This paper proposes a scalable federated unlearning framework based on isolated sharding and coded computing.
arXiv Detail & Related papers (2024-01-29T08:41:45Z) - Distributed Learning of Mixtures of Experts [0.0]
We deal with datasets that are either distributed by nature or so large that distributing the computations is the standard way to proceed.
We propose a distributed learning approach for mixture-of-experts (MoE) models with an aggregation strategy that constructs a reduction estimator from local estimators fitted in parallel to distributed subsets of the data.
arXiv Detail & Related papers (2023-12-15T15:26:13Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Equitable-FL: Federated Learning with Sparsity for Resource-Constrained
Environment [10.980548731600116]
We propose a sparse form of federated learning that performs well in a resource-constrained environment.
Our goal is to make learning possible, regardless of a node's space, computing, or bandwidth scarcity.
Results obtained from experiments performed for training convolutional neural networks validate the efficacy of Equitable-FL.
arXiv Detail & Related papers (2023-09-02T08:40:17Z) - Resource-Efficient Federated Hyperdimensional Computing [6.778675369739912]
In conventional hyperdimensional computing (HDC), training larger models usually results in higher predictive performance but also requires more computational, communication, and energy resources.
A proposed resource-efficient framework alleviates such constraints by training multiple smaller independent HDC sub-models.
Our numerical comparison demonstrates that the proposed framework achieves comparable or higher predictive performance while consuming fewer computational and wireless resources.
arXiv Detail & Related papers (2023-06-02T08:07:14Z) - Unifying Synergies between Self-supervised Learning and Dynamic
Computation [53.66628188936682]
We present a novel perspective on the interplay between self-supervised learning (SSL) and dynamic computation (DC) paradigms.
We show that it is feasible to simultaneously learn a dense and a gated sub-network from scratch in an SSL setting.
The co-evolution of the dense and gated encoders during pre-training offers a good accuracy-efficiency trade-off.
arXiv Detail & Related papers (2023-01-22T17:12:58Z) - Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Dif-MAML.
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z) - FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity
to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z) - Controlling Computation versus Quality for Neural Sequence Models [42.525463454120256]
Conditional computation makes neural sequence models (Transformers) more efficient and computation-aware during inference.
We evaluate our approach on two tasks: (i) WMT English-French translation and (ii) unsupervised representation learning (BERT).
arXiv Detail & Related papers (2020-02-17T17:54:27Z)