Faster On-Device Training Using New Federated Momentum Algorithm
- URL: http://arxiv.org/abs/2002.02090v1
- Date: Thu, 6 Feb 2020 04:12:43 GMT
- Title: Faster On-Device Training Using New Federated Momentum Algorithm
- Authors: Zhouyuan Huo, Qian Yang, Bin Gu, Lawrence Carin, Heng Huang
- Abstract summary: Mobile crowdsensing has gained significant attention in recent years and has become a critical paradigm for emerging Internet of Things applications.
To utilize these data to train machine learning models while not compromising user privacy, federated learning has become a promising solution.
- Score: 47.187934818456604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile crowdsensing has gained significant attention in recent years and has
become a critical paradigm for emerging Internet of Things applications. The
sensing devices continuously generate a significant quantity of data, which
provide tremendous opportunities to develop innovative intelligent
applications. To utilize these data to train machine learning models while not
compromising user privacy, federated learning has become a promising solution.
However, there is little understanding of whether federated learning algorithms
are guaranteed to converge. We reconsider model averaging in federated learning
and formulate it as a gradient-based method with biased gradients. This novel
perspective assists analysis of its convergence rate and provides a new
direction for more acceleration. We prove for the first time that the federated
averaging algorithm is guaranteed to converge for non-convex problems, without
imposing additional assumptions. We further propose a novel accelerated
federated learning algorithm and provide a convergence guarantee. Simulated
federated learning experiments are conducted to train deep neural networks on
benchmark datasets, and experimental results show that our proposed method
converges faster than previous approaches.
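To make the abstract's two ideas concrete, the sketch below first writes one FedAvg round as a gradient-style step on a biased pseudo-gradient (the gap between the current server model and the average of the locally trained models), and then applies a server-side momentum step to that same pseudo-gradient. This is an illustrative reading of the abstract on a toy problem, not the authors' exact FedMom update; the step sizes, the heavy-ball momentum rule, and all function names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy federated setup: client k holds loss f_k(x) = 0.5 * ||x - c_k||^2,
# so the global objective's minimizer is the mean of the centers c_k.
K, dim = 10, 5
centers = rng.normal(size=(K, dim))

def local_sgd(x, c, lr=0.1, steps=5):
    """A few local gradient steps on one client's quadratic loss."""
    for _ in range(steps):
        x = x - lr * (x - c)
    return x

def pseudo_grad(x):
    """Model averaging viewed as a gradient method: the averaging step
    x <- mean_k(local models) equals x - g, where
    g = x - mean_k(local models) is a biased pseudo-gradient."""
    locals_ = np.stack([local_sgd(x.copy(), c) for c in centers])
    return x - locals_.mean(axis=0)

def fedavg_round(x):
    """One FedAvg round: a unit step along the biased pseudo-gradient."""
    return x - pseudo_grad(x)

def momentum_round(x, m, beta=0.3):
    """Hypothetical server-side momentum on the same pseudo-gradient
    (generic heavy-ball form; constants are untuned illustrations)."""
    m = beta * m + pseudo_grad(x)
    return x - m, m

x_avg = rng.normal(size=dim)
x_mom, m = x_avg.copy(), np.zeros(dim)
for t in range(20):
    x_avg = fedavg_round(x_avg)
    x_mom, m = momentum_round(x_mom, m)

opt = centers.mean(axis=0)
print("FedAvg error:   ", np.linalg.norm(x_avg - opt))
print("Momentum error: ", np.linalg.norm(x_mom - opt))
```

On this toy problem the momentum variant contracts slightly faster than plain averaging, which matches the abstract's claim in spirit, though the real paper's rates concern non-convex objectives.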
Related papers
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL (a generic sketch of this estimator follows this entry).
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
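The "momentum-based variance reduction technique" named above typically refers to a STORM-style recursive estimator that corrects each fresh stochastic gradient with the previous search direction. The sketch below is a minimal, centralized illustration of that estimator class on a toy quadratic; it is an assumption about the technique family, not FAFED's actual algorithm, and all names in it are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective f(x) = 0.5 * ||x||^2, observed through noisy gradients.
def stoch_grad(x, noise):
    return x + noise  # true grad is x; `noise` models sampling noise

# STORM-style momentum-based variance-reduced direction:
#   d_t = g(x_t; xi_t) + (1 - a) * (d_{t-1} - g(x_{t-1}; xi_t))
# Evaluating both gradients on the SAME sample xi_t is what cancels
# most of the variance in the correction term.
dim, lr, a = 5, 0.1, 0.2
x = rng.normal(size=dim)
d = stoch_grad(x, 0.1 * rng.normal(size=dim))  # initial direction
for t in range(200):
    x_next = x - lr * d
    xi = 0.1 * rng.normal(size=dim)            # one shared sample
    d = stoch_grad(x_next, xi) + (1 - a) * (d - stoch_grad(x, xi))
    x = x_next
print("distance to optimum:", np.linalg.norm(x))  # should be small
```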
- Accelerating Federated Edge Learning via Topology Optimization [41.830942005165625]
Federated edge learning (FEEL) is envisioned as a promising paradigm to achieve privacy-preserving distributed learning.
However, FEEL consumes excessive learning time due to the existence of straggler devices.
A novel topology-optimized federated edge learning (TOFEL) scheme is proposed to tackle the heterogeneity issue in federated learning.
arXiv Detail & Related papers (2022-04-01T14:49:55Z)
- Fast Federated Learning in the Presence of Arbitrary Device Unavailability [26.368873771739715]
Federated Learning (FL) coordinates heterogeneous devices to collaboratively train a shared model while preserving user privacy.
One challenge arises when devices drop out of the training process beyond the control of the central server.
We propose Impatient Federated Averaging (MIFA) to solve this problem (a generic memorized-update sketch follows this entry).
arXiv Detail & Related papers (2021-06-08T07:46:31Z)
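The summary names MIFA but not its mechanism. One common way to cope with arbitrary device unavailability, and my reading of the memorized-update idea behind algorithms of this kind, is for the server to cache each device's most recent update and average the caches every round, so offline devices still contribute stale information. The sketch below illustrates that idea under these assumptions; it is not the paper's verified algorithm, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: client k holds loss f_k(x) = 0.5 * ||x - c_k||^2, so the
# global optimum is the mean of the centers c_k.
K, dim, lr = 8, 4, 0.5
centers = rng.normal(size=(K, dim))

def client_delta(x, c):
    return -lr * (x - c)  # one local gradient step's model delta

x = np.zeros(dim)
memory = np.zeros((K, dim))  # last known update from each device
for t in range(300):
    # Each device is reachable only with probability 0.5 this round.
    for k in np.flatnonzero(rng.random(K) < 0.5):
        memory[k] = client_delta(x, centers[k])
    # Average over ALL devices' memorized updates, reachable or not,
    # so unavailable devices still contribute their (stale) update.
    x = x + memory.mean(axis=0)

print("distance to optimum:", np.linalg.norm(x - centers.mean(axis=0)))
```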
- Concept drift detection and adaptation for federated and continual learning [55.41644538483948]
Smart devices can collect vast amounts of data from their environment.
This data is suitable for training machine learning models, which can significantly improve their behavior.
In this work, we present a new method, called Concept-Drift-Aware Federated Averaging.
arXiv Detail & Related papers (2021-05-27T17:01:58Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
- Improving Federated Relational Data Modeling via Basis Alignment and Weight Penalty [18.096788806121754]
Federated learning (FL) has attracted increasing attention in recent years.
We present a modified version of the graph neural network algorithm that performs federated modeling over Knowledge Graphs (KGs).
We propose a novel optimization algorithm, named FedAlign, with 1) optimal transportation (OT) for on-client personalization and 2) weight constraint to speed up the convergence.
Empirical results show that our proposed method outperforms the state-of-the-art FL methods, such as FedAVG and FedProx, with better convergence.
arXiv Detail & Related papers (2020-11-23T12:52:18Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.