Jointly Optimizing Dataset Size and Local Updates in Heterogeneous
Mobile Edge Learning
- URL: http://arxiv.org/abs/2006.07402v3
- Date: Mon, 22 Feb 2021 05:17:27 GMT
- Title: Jointly Optimizing Dataset Size and Local Updates in Heterogeneous
Mobile Edge Learning
- Authors: Umair Mohammad, Sameh Sorour and Mohamed Hefeida
- Abstract summary: This paper proposes to maximize the accuracy of a distributed machine learning (ML) model trained on learners connected via the resource-constrained wireless edge.
We jointly optimize the number of local/global updates and the task size allocation to minimize the loss while taking into account heterogeneous communication and computation capabilities of each learner.
- Score: 11.191719032853527
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes to maximize the accuracy of a distributed machine
learning (ML) model trained on learners connected via the resource-constrained
wireless edge. We jointly optimize the number of local/global updates and the
task size allocation to minimize the loss while taking into account
heterogeneous communication and computation capabilities of each learner. By
leveraging existing bounds on the difference between the training loss at any
given iteration and the theoretically optimal loss, we derive an expression for
the objective function in terms of the number of local updates. The resulting
convex program is solved to obtain the optimal number of local updates, which is
then used to determine the total number of updates and the batch size for each learner. The
merits
of the proposed solution, which is heterogeneity aware (HA), are exhibited by
comparing its performance to the heterogeneity unaware (HU) approach.
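
The joint design described in the abstract can be illustrated with a small numerical sketch. The snippet below is not the paper's derived bound or solution method: the per-learner timing constants, the surrogate objective loss_gap_proxy, and the proportional allocation rule are all illustrative assumptions. It only shows the general shape of the heterogeneity-aware (HA) optimization, namely choosing the number of local updates and per-learner batch sizes so that every learner finishes each global cycle within the same wall-clock budget.

```python
import numpy as np

# Minimal illustrative sketch (NOT the paper's exact model): all constants,
# the allocation rule, and the surrogate objective below are assumptions.
comp_per_sample = np.array([2e-4, 5e-4, 1e-3, 3e-4])  # sec/sample per local update
comm_per_cycle = np.array([0.8, 1.5, 2.0, 1.0])       # sec of communication per global cycle
T_budget = 60.0           # total wall-clock training budget (sec)
D_total = 20_000          # samples to distribute across the learners
divergence_weight = 5e-3  # hypothetical penalty for drift between aggregations


def allocate(tau, cycle_time):
    """Heterogeneity-aware task sizes: each learner gets the share it can
    process with tau local updates in the compute time left after
    communication, rescaled so the shares sum to D_total."""
    compute_time = np.maximum(cycle_time - comm_per_cycle, 0.0)
    raw = compute_time / (tau * comp_per_sample)
    return None if raw.sum() == 0 else raw * (D_total / raw.sum())


def loss_gap_proxy(tau, n_cycles):
    """Hypothetical convex surrogate for the loss-optimality gap: it shrinks
    with the total number of local updates and grows with tau because local
    models drift between global aggregations."""
    return 1.0 / (n_cycles * tau) + divergence_weight * (tau - 1)


best = None
for tau in range(1, 51):                            # candidate local-update counts
    for cycle_time in np.linspace(2.0, 20.0, 37):   # candidate global-cycle durations
        d = allocate(tau, cycle_time)
        if d is None:
            continue
        # Feasibility: every learner must finish its share within the cycle.
        if np.any(comm_per_cycle + tau * d * comp_per_sample > cycle_time + 1e-9):
            continue
        n_cycles = int(T_budget // cycle_time)
        if n_cycles == 0:
            continue
        score = loss_gap_proxy(tau, n_cycles)
        if best is None or score < best[0]:
            best = (score, tau, n_cycles, np.round(d).astype(int))

_, tau_opt, cycles, sizes = best
print(f"tau* = {tau_opt} local updates per cycle, {cycles} global cycles")
print("per-learner batch sizes:", sizes)
```

In the paper, the corresponding step is a convex program in the number of local updates obtained from the loss-gap bound; the grid search above merely mimics that trade-off numerically under the assumed timing profile.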
Related papers
- Asynchronous Message-Passing and Zeroth-Order Optimization Based Distributed Learning with a Use-Case in Resource Allocation in Communication Networks [11.182443036683225]
Distributed learning and adaptation have received significant interest and found wide-ranging applications in machine learning and signal processing.
This paper specifically focuses on a scenario where agents collaborate towards a common task.
Agents, acting as transmitters, collaboratively train their individual policies to maximize a global reward.
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
- Revisiting Communication-Efficient Federated Learning with Balanced Global and Local Updates [14.851898446967672]
We investigate and analyze the optimal trade-off between the number of local training rounds and the number of global aggregations.
Our proposed scheme achieves better prediction accuracy and converges much faster than the baseline schemes.
- Contextual Model Aggregation for Fast and Robust Federated Learning in Edge Computing [88.76112371510999]
Federated learning is a prime candidate for distributed machine learning at the network edge.
Existing algorithms face issues with slow convergence and/or robustness of performance.
We propose a contextual aggregation scheme that achieves the optimal context-dependent bound on loss reduction.
- BayGo: Joint Bayesian Learning and Information-Aware Graph Optimization [48.30183416069897]
BayGo is a novel fully decentralized joint Bayesian learning and graph optimization framework.
We show that our framework achieves faster convergence and higher accuracy compared to fully-connected and star topology graphs.
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
- Differentially Private ADMM for Convex Distributed Learning: Improved Accuracy via Multi-Step Approximation [10.742065340992525]
The Alternating Direction Method of Multipliers (ADMM) is a popular algorithm for distributed learning.
When the training data is sensitive, the exchanged iterates raise serious privacy concerns.
We propose a new differentially private distributed ADMM with improved accuracy for a wide range of convex learning problems.
- Joint Parameter-and-Bandwidth Allocation for Improving the Efficiency of Partitioned Edge Learning [73.82875010696849]
Machine learning algorithms are deployed at the network edge for training artificial intelligence (AI) models.
This paper focuses on the novel joint design of parameter (computation load) allocation and bandwidth allocation.
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.