Cost-Effective Federated Learning in Mobile Edge Networks
- URL: http://arxiv.org/abs/2109.05411v1
- Date: Sun, 12 Sep 2021 03:02:24 GMT
- Title: Cost-Effective Federated Learning in Mobile Edge Networks
- Authors: Bing Luo, Xiang Li, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas
- Abstract summary: Federated learning (FL) is a distributed learning paradigm that enables a large number of mobile devices to collaboratively learn a model without sharing their raw data.
We analyze how to design adaptive FL in mobile edge networks that optimally chooses essential control variables to minimize the total cost.
We develop a low-cost sampling-based algorithm to learn the convergence-related unknown parameters.
- Score: 37.16466118235272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a distributed learning paradigm that enables a
large number of mobile devices to collaboratively learn a model under the
coordination of a central server without sharing their raw data. Despite its
practical efficiency and effectiveness, the iterative on-device learning
process (e.g., local computations and global communications with the server)
incurs a considerable cost in terms of learning time and energy consumption,
which depends crucially on the number of selected clients and the number of
local iterations in each training round. In this paper, we analyze how to
design adaptive FL in mobile edge networks that optimally chooses these
essential control variables to minimize the total cost while ensuring
convergence. We establish the analytical relationship between the total cost
and the control variables with the convergence upper bound. To efficiently
solve the cost minimization problem, we develop a low-cost sampling-based
algorithm to learn the convergence-related unknown parameters. We derive
important solution properties that effectively identify the design principles
for different optimization metrics. Practically, we evaluate our theoretical
results both in a simulated environment and on a hardware prototype.
Experimental evidence verifies our derived properties and demonstrates that our
proposed solution achieves near-optimal performance for different optimization
metrics for various datasets and heterogeneous system and statistical settings.
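To make the control-variable trade-off concrete, here is a minimal Python sketch that minimizes a simplified total-cost model over the number of sampled clients K and the number of local iterations E per round. The cost coefficients and the convergence proxy below are hypothetical placeholders, not the paper's derived bound.

```python
import itertools

# Hypothetical per-round cost coefficients (placeholders, not from the paper).
COMP_COST = 0.02  # cost per local iteration per client
COMM_COST = 0.5   # communication cost per client per round

def rounds_to_converge(k: int, e: int, eps: float = 0.01) -> float:
    """Toy stand-in for a convergence upper bound: more clients (k) and more
    local iterations (e) reduce the rounds needed to reach accuracy eps.
    The paper derives this relationship analytically; this is illustrative."""
    return 1.0 / (eps * (k * e) ** 0.5)

def total_cost(k: int, e: int) -> float:
    """Total cost = (cost per round) x (rounds needed to converge)."""
    per_round = k * (e * COMP_COST + COMM_COST)
    return per_round * rounds_to_converge(k, e)

# Brute-force search over the two control variables.
best_k, best_e = min(
    itertools.product(range(1, 51), range(1, 101)),  # candidate (K, E) grid
    key=lambda ke: total_cost(*ke),
)
print(f"K={best_k}, E={best_e}, cost={total_cost(best_k, best_e):.3f}")
```

In the paper itself, the analogous choice is driven by the derived convergence upper bound, whose unknown constants are estimated with the low-cost sampling-based algorithm.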
Related papers
- Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision to accelerate inference by dynamically assigning resources to each data instance.
Our method lowers inference cost while maintaining accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z)
- DYNAMITE: Dynamic Interplay of Mini-Batch Size and Aggregation Frequency for Federated Learning with Static and Streaming Dataset [23.11152686493894]
Federated Learning (FL) is a distributed learning paradigm that can coordinate heterogeneous edge devices to perform model training without sharing private data.
This paper introduces novel analytical models and optimization algorithms that leverage the interplay between batch size and aggregation frequency to navigate the trade-offs among convergence, cost, and completion time for dynamic FL training.
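As a rough illustration of that interplay (the timing constants are made up, not DYNAMITE's analytical models), the toy accounting below shows how batch size and aggregation frequency shift time between computation and communication; it deliberately ignores the convergence side of the trade-off.

```python
# Illustrative constants only; DYNAMITE derives its models analytically.
T_SAMPLE = 1e-4   # seconds of computation per training sample
T_COMM = 0.5      # seconds per aggregation round
DATASET = 10_000  # samples per client per epoch

def epoch_time(batch_size: int, tau: int) -> float:
    """Wall time for one local epoch: computation plus communication.
    tau = local steps per aggregation (the aggregation-frequency knob)."""
    steps = DATASET // batch_size             # local steps per epoch
    rounds = steps / tau                      # aggregations per epoch
    compute = steps * batch_size * T_SAMPLE   # total computation time
    return compute + rounds * T_COMM

# Larger batches and less frequent aggregation cut communication time,
# but typically slow statistical convergence (not captured here).
for b in (16, 64, 256):
    for tau in (1, 10, 50):
        print(f"b={b:>3}, tau={tau:>2}: {epoch_time(b, tau):7.2f}s per epoch")
```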
arXiv Detail & Related papers (2023-10-20T08:36:12Z)
- Sample-Driven Federated Learning for Energy-Efficient and Real-Time IoT Sensing [22.968661040226756]
We introduce an online reinforcement learning algorithm named Sample-driven Control for Federated Learning (SCFL), built on the Soft Actor-Critic (SAC) framework.
SCFL enables the agent to dynamically adapt and find the global optimum even in changing environments.
arXiv Detail & Related papers (2023-10-11T13:50:28Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
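A minimal sketch of the flavor of update described here, assuming standard AMSGrad bookkeeping plus a client-specific step-size schedule; the scheduling rule below is an illustrative stand-in, not FedLALR's exact formulation.

```python
import numpy as np

class ClientAMSGrad:
    """One client's local AMSGrad state with its own learning-rate schedule.
    The decay rule in step() is a hypothetical stand-in for FedLALR's."""

    def __init__(self, dim, base_lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
        self.m = np.zeros(dim)      # first-moment (momentum) estimate
        self.v = np.zeros(dim)      # second-moment estimate
        self.v_hat = np.zeros(dim)  # running max of v (the AMSGrad fix)
        self.base_lr, self.t = base_lr, 0
        self.beta1, self.beta2, self.eps = beta1, beta2, eps

    def step(self, params, grad):
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad**2
        self.v_hat = np.maximum(self.v_hat, self.v)  # keep v non-decreasing
        lr = self.base_lr / np.sqrt(self.t)  # client-specific decay (assumed)
        return params - lr * self.m / (np.sqrt(self.v_hat) + self.eps)

# Usage: each client keeps its own optimizer state between rounds.
opt = ClientAMSGrad(dim=4)
w = opt.step(np.zeros(4), grad=np.ones(4))
```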
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
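A schematic sketch of such a split, assuming magnitude-based pruning for the shared part and a locally kept head; the layer layout and pruning rule are hypothetical, not the paper's.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries (illustrative pruning rule)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights
    thresh = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= thresh, 0.0, weights)

# Hypothetical two-part model: 'body' is shared and pruned, 'head' is local.
model = {
    "body": np.random.randn(256, 64),  # global part: shared representation
    "head": np.random.randn(64, 10),   # personalized part: fine-tuned on-device
}

# Server side: prune the global part before broadcasting it.
global_body = magnitude_prune(model["body"], sparsity=0.5)

# Device side: adopt the pruned body, keep and fine-tune its own head.
device_model = {"body": global_body.copy(), "head": model["head"]}
```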
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous Edge Devices [20.52519915112099]
We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates.
Experimental results indicate that our learning framework can reduce training latency and energy consumption by up to 1.9x while achieving a reasonable global testing accuracy.
arXiv Detail & Related papers (2023-01-08T15:25:55Z)
- Energy-Aware Edge Association for Cluster-based Personalized Federated Learning [2.3262774900834606]
Federated learning over wireless networks enables data-conscious services by leveraging ubiquitous intelligence at the network edge for privacy-preserving model training.
We propose clustered federated learning to group user devices with similar preference and provide each cluster with a personalized model.
We formulate an accuracy-cost trade-off optimization problem by jointly considering model accuracy, communication resource allocation and energy consumption.
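As an illustration of the clustering step (the similarity measure and algorithm here are assumptions, not necessarily the paper's), devices could be grouped by the similarity of their model updates:

```python
import numpy as np

def cluster_clients(updates: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Plain k-means over flattened client updates, as a stand-in for
    whatever preference-similarity measure the paper actually uses."""
    rng = np.random.default_rng(0)
    centers = updates[rng.choice(len(updates), size=k, replace=False)]
    for _ in range(iters):
        # Assign each client to its nearest cluster center.
        dists = np.linalg.norm(updates[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centers, keeping the old one if a cluster emptied.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = updates[labels == j].mean(axis=0)
    return labels

# Each row is one device's flattened model update (synthetic example).
labels = cluster_clients(np.random.randn(30, 128), k=3)
# Each cluster would then train its own personalized model via FedAvg
# restricted to the devices assigned to it.
```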
arXiv Detail & Related papers (2022-02-06T07:58:41Z)
- Energy-Efficient Multi-Orchestrator Mobile Edge Learning [54.28419430315478]
Mobile Edge Learning (MEL) is a collaborative learning paradigm that features distributed training of Machine Learning (ML) models over edge devices.
In MEL, multiple learning tasks with different datasets may coexist.
We propose lightweight algorithms that can achieve near-optimal performance and facilitate the trade-offs between energy consumption, accuracy, and solution complexity.
arXiv Detail & Related papers (2021-09-02T07:37:10Z)
- Cost-Effective Federated Learning Design [37.16466118235272]
Federated learning (FL) is a distributed learning paradigm that enables a large number of devices to collaboratively learn a model without sharing their raw data.
Despite its efficiency and effectiveness, the iterative on-device learning process incurs a considerable cost in terms of learning time and energy consumption.
We analyze how to design adaptive FL that optimally chooses essential control variables to minimize the total cost while ensuring convergence.
arXiv Detail & Related papers (2020-12-15T14:45:11Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
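A toy simulation of this setting, with quadratic local losses, Bernoulli agent participation, and all constants chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
NUM_AGENTS, DIM, PART_PROB, LR = 20, 5, 0.3, 0.1

# Each agent holds a noisy view of a common minimizer (synthetic data).
true_w = rng.normal(size=DIM)
targets = true_w + 0.1 * rng.normal(size=(NUM_AGENTS, DIM))

w = np.zeros(DIM)  # global model
for _ in range(200):
    # A random subset of available agents participates each iteration.
    active = rng.random(NUM_AGENTS) < PART_PROB
    if not active.any():
        continue
    # One local gradient step per active agent on 0.5*||w - target||^2,
    # followed by server-side averaging of the local iterates.
    local = w - LR * (w - targets[active])
    w = local.mean(axis=0)

print("distance to true minimizer:", np.linalg.norm(w - true_w))
```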
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.