Contextual Model Aggregation for Fast and Robust Federated Learning in
Edge Computing
- URL: http://arxiv.org/abs/2203.12738v1
- Date: Wed, 23 Mar 2022 21:42:31 GMT
- Title: Contextual Model Aggregation for Fast and Robust Federated Learning in
Edge Computing
- Authors: Hung T. Nguyen, H. Vincent Poor, Mung Chiang
- Abstract summary: Federated learning is a prime candidate for distributed machine learning at the network edge.
Existing algorithms face issues with slow convergence and/or robustness of performance.
We propose a contextual aggregation scheme that achieves the optimal context-dependent bound on loss reduction.
- Score: 88.76112371510999
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning is a prime candidate for distributed machine learning at
the network edge due to its low communication complexity and privacy protection, among
other attractive properties. However, existing algorithms suffer from slow convergence
and/or limited robustness of performance because of the considerable heterogeneity in
data distributions and in computation and communication capabilities at the edge. In
this work, we tackle both issues by focusing on the key component of model aggregation
in federated learning systems and studying optimal algorithms to perform this task. In
particular, we propose a contextual aggregation scheme that achieves the optimal
context-dependent bound on loss reduction in each round of optimization. This
context-dependent bound is derived from the particular devices participating in that
round and an assumption on the smoothness of the overall loss function. We show that
this aggregation leads to a definite reduction of the loss function at every round.
Furthermore, our aggregation can be integrated with many existing algorithms to obtain
their contextual versions. Our experimental results demonstrate significant improvements
in convergence speed and robustness of the contextual versions compared to the original
algorithms. We also consider different variants of the contextual aggregation and show
robust performance even in the most extreme settings.
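To make the aggregation step concrete, below is a minimal Python sketch of a FedAvg-style server round in which the combination weights depend only on the devices that actually participated in that round (the "context"). The data-size weighting, the smoothness constant `smoothness_L`, and the 1/L server step are illustrative assumptions for this sketch, not the optimal context-dependent rule derived in the paper.

```python
# Minimal sketch of context-dependent model aggregation in a FedAvg-style round.
# The weighting rule (data-size weights over the devices that participated this
# round) and the 1/L server step from an assumed smoothness constant are
# illustrative stand-ins, NOT the paper's derived optimal aggregation.
import numpy as np

def contextual_aggregate(global_model, client_updates, client_sizes, smoothness_L=1.0):
    """Combine one round's client updates with weights that depend on the
    round's context, i.e., on which devices participated.

    global_model   : np.ndarray, current global parameters
    client_updates : list of np.ndarray, (local_model - global_model) per client
    client_sizes   : list of int, local dataset sizes of the participating clients
    smoothness_L   : assumed smoothness constant of the overall loss (hypothetical)
    """
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()              # context: only this round's devices
    avg_update = sum(w * u for w, u in zip(weights, client_updates))
    step = 1.0 / smoothness_L                  # conservative step under L-smoothness
    return global_model + step * avg_update

# Example round with 3 participating devices and a 5-dimensional model.
rng = np.random.default_rng(0)
global_model = np.zeros(5)
updates = [rng.normal(scale=0.1, size=5) for _ in range(3)]
sizes = [120, 40, 200]
global_model = contextual_aggregate(global_model, updates, sizes)
print(global_model)
```

In this sketch the context enters only through which clients report back and their dataset sizes; the paper's scheme instead chooses the aggregation so as to attain the optimal context-dependent bound on per-round loss reduction.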
Related papers
- Unified Framework for Neural Network Compression via Decomposition and Optimal Rank Selection [3.3454373538792552]
We present a unified framework that applies decomposition and optimal rank selection, employing a composite compression loss within defined rank constraints.
Our approach includes an automatic rank search in a continuous space, efficiently identifying optimal rank configurations without the use of training data.
Using various benchmark datasets, we demonstrate the efficacy of our method through a comprehensive analysis.
arXiv Detail & Related papers (2024-09-05T14:15:54Z) - Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data [23.661713049508375]
We propose an algorithm that learns over a compact smooth submanifold in the federated setting.
We show that our proposed algorithm converges sub-linearly to a neighborhood of a first-order optimal solution by using a novel analysis.
arXiv Detail & Related papers (2024-06-12T17:53:28Z) - Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized Optimization over Time-Varying Networks [57.24087627267086]
We consider the task of minimizing the sum of convex functions stored in a decentralized manner across the nodes of a communication network.
Lower bounds on the number of decentralized communications and (sub)gradient computations required to solve the problem have been established.
We develop the first optimal algorithm that matches these lower bounds and offers substantially improved theoretical performance compared to the existing state of the art.
arXiv Detail & Related papers (2024-05-28T10:28:45Z) - Provably Efficient Learning in Partially Observable Contextual Bandit [4.910658441596583]
We show how causal bounds can be applied to improving classical bandit algorithms.
This research has the potential to enhance the performance of contextual bandit agents in real-world applications.
arXiv Detail & Related papers (2023-08-07T13:24:50Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - Federated Learning for Heterogeneous Bandits with Unobserved Contexts [0.0]
We study the problem of federated multi-arm contextual bandits with unknown contexts.
We propose an elimination-based algorithm and prove the regret bound for linearly parametrized reward functions.
arXiv Detail & Related papers (2023-03-29T22:06:24Z) - On the Convergence of Distributed Stochastic Bilevel Optimization
Algorithms over a Network [55.56019538079826]
Bilevel optimization has been applied to a wide variety of machine learning models.
Most existing algorithms are restricted to the single-machine setting, so they are incapable of handling distributed data.
We develop novel decentralized bilevel optimization algorithms based on a gradient tracking communication mechanism and two different gradient estimators.
arXiv Detail & Related papers (2022-06-30T05:29:52Z) - Decentralized Statistical Inference with Unrolled Graph Neural Networks [26.025935320024665]
We propose a learning-based framework, which unrolls decentralized optimization algorithms into graph neural networks (GNNs).
By minimizing the recovery error via end-to-end training, this learning-based framework resolves the model mismatch issue.
Our convergence analysis reveals that the learned model parameters may accelerate the convergence and reduce the recovery error to a large extent.
arXiv Detail & Related papers (2021-04-04T07:52:34Z) - Fitting the Search Space of Weight-sharing NAS with Graph Convolutional
Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z) - Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.