Distributed Learning in Heterogeneous Environment: federated learning
with adaptive aggregation and computation reduction
- URL: http://arxiv.org/abs/2302.10757v1
- Date: Thu, 16 Feb 2023 16:32:54 GMT
- Title: Distributed Learning in Heterogeneous Environment: federated learning
with adaptive aggregation and computation reduction
- Authors: Jingxin Li, Toktam Mahmoodi, Hak-Keung Lam
- Abstract summary: Heterogeneous data, time-varying wireless conditions, and computing-limited devices are the three main challenges.
We propose strategies to address these challenges.
The proposed framework can tolerate a communication delay of up to 15 rounds under a moderate delay environment.
- Score: 37.217844795181975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although federated learning has achieved many breakthroughs recently, the
heterogeneous nature of the learning environment greatly limits its performance
and hinders its real-world applications. The heterogeneous data, time-varying
wireless conditions and computing-limited devices are three main challenges,
which often result in an unstable training process and degraded accuracy.
Herein, we propose strategies to address these challenges. Targeting the
heterogeneous data distribution, we propose a novel adaptive mixing aggregation
(AMA) scheme that mixes the model updates from previous rounds with current
rounds to avoid large model shifts and thus, maintain training stability. We
further propose a novel staleness-based weighting scheme for the asynchronous
model updates caused by the dynamic wireless environment. Lastly, we propose a
novel CPU-friendly computation-reduction scheme based on transfer learning by
sharing the feature extractor (FES) and letting the computing-limited devices
update only the classifier. Simulation results show that the proposed
framework outperforms existing state-of-the-art solutions, increasing test
accuracy and training stability by up to 2.38% and 93.10%, respectively.
Additionally, the proposed framework can tolerate a communication delay of up
to 15 rounds under a moderate delay environment without significant accuracy
degradation.
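The abstract describes two server-side ideas: adaptive mixing aggregation (AMA), which blends the previous round's model into the new aggregate to avoid large model shifts, and a staleness-based weighting that discounts delayed asynchronous updates. The paper's exact formulas are not given in the abstract, so the sketch below is a minimal illustration under assumed forms: an exponential staleness decay and a single scalar mixing ratio (both hypothetical parameter choices).

```python
import numpy as np

def staleness_weight(staleness, decay=0.5):
    # Hypothetical staleness-based weight: updates that are more rounds
    # out of date contribute less. Exponential decay is one common choice;
    # the paper's actual scheme may differ.
    return decay ** staleness

def adaptive_mixing_aggregation(prev_global, client_updates, mix_ratio=0.8):
    """Sketch of an AMA-style aggregation round.

    prev_global    -- global model parameters from the previous round
    client_updates -- list of (params, staleness) pairs, where staleness
                      counts how many rounds old each async update is
    mix_ratio      -- assumed fraction of the fresh aggregate blended in
    """
    # Discount stale updates, then normalize the weights.
    weights = np.array([staleness_weight(s) for _, s in client_updates])
    weights /= weights.sum()
    aggregate = sum(w * p for w, (p, _) in zip(weights, client_updates))
    # Mix with the previous global model to damp large model shifts.
    return (1.0 - mix_ratio) * prev_global + mix_ratio * aggregate
```

With `mix_ratio` close to 1 this reduces to plain weighted averaging; smaller values trade convergence speed for the training stability the paper targets.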
Related papers
- FLARE: A New Federated Learning Framework with Adjustable Learning Rates over Resource-Constrained Wireless Networks [20.048146776405005]
Wireless federated learning (WFL) suffers from heterogeneity prevailing in the data distributions, computing powers, and channel conditions.
This paper presents a new idea, FLARE, a federated learning scheme with an adjusted learning rate.
Experiments show that FLARE consistently outperforms the baselines.
arXiv Detail & Related papers (2024-04-23T07:48:17Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider a FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem, under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- FedDCT: A Dynamic Cross-Tier Federated Learning Framework in Wireless Networks [5.914766366715661]
Federated Learning (FL) trains a global model across devices without exposing local data.
Resource heterogeneity and inevitable stragglers in wireless networks severely impact the efficiency and accuracy of FL training.
We propose a novel Dynamic Cross-Tier Federated Learning framework (FedDCT).
arXiv Detail & Related papers (2023-07-10T08:54:07Z)
- Environment Transformer and Policy Optimization for Model-Based Offline Reinforcement Learning [25.684201757101267]
We propose an uncertainty-aware sequence modeling architecture called Environment Transformer.
Benefiting from the accurate modeling of the transition dynamics and reward function, Environment Transformer can be combined with arbitrary planning, dynamic programming, or policy optimization algorithms for offline RL.
arXiv Detail & Related papers (2023-03-07T11:26:09Z)
- Adaptive Fairness-Aware Online Meta-Learning for Changing Environments [29.073555722548956]
The fairness-aware online learning framework has arisen as a powerful tool for the continual lifelong learning setting.
Existing methods make heavy use of the i.i.d. assumption on the data and hence provide only static regret analysis for the framework.
We propose a novel adaptive fairness-aware online meta-learning algorithm, namely FairSAOML, which is able to adapt to changing environments in both bias control and model precision.
arXiv Detail & Related papers (2022-05-20T15:29:38Z)
- Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
arXiv Detail & Related papers (2021-08-09T08:45:47Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.