Asynchronous Parallel Incremental Block-Coordinate Descent for
Decentralized Machine Learning
- URL: http://arxiv.org/abs/2202.03263v1
- Date: Mon, 7 Feb 2022 15:04:15 GMT
- Title: Asynchronous Parallel Incremental Block-Coordinate Descent for
Decentralized Machine Learning
- Authors: Hao Chen, Yu Ye, Ming Xiao and Mikael Skoglund
- Abstract summary: Machine learning (ML) is a key technique for big-data-driven modelling and analysis of massive Internet of Things (IoT)-based intelligent and ubiquitous computing.
For fast-increasing applications and data amounts, distributed learning is a promising emerging paradigm since it is often impractical or inefficient to share/aggregate data.
This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices.
- Score: 55.198301429316125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) is a key technique for big-data-driven modelling and
analysis of massive Internet of Things (IoT)-based intelligent and ubiquitous
computing. For fast-increasing applications and data amounts, distributed
learning is a promising emerging paradigm, since it is often impractical or
inefficient to share/aggregate data from distinct devices to a centralized
location. This paper studies the problem of training an ML model over
decentralized systems, where data are distributed over many user devices and
the learning algorithm runs on-device, with the aim of easing the burden on a
central entity/server. Although gossip-based approaches have been used for this
purpose in different use cases, they suffer from high communication costs,
especially when the number of devices is large. To mitigate this, incremental
methods are proposed. We first introduce incremental block-coordinate descent
(I-BCD) for decentralized ML, which reduces communication costs at the expense
of running time. To accelerate convergence, an asynchronous
parallel incremental BCD (API-BCD) method is proposed, where multiple
devices/agents are active in an asynchronous fashion. We derive convergence
properties for the proposed methods. Simulation results also show that our
API-BCD method outperforms the state of the art in terms of running time and
communication costs.
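To make the incremental scheme concrete, below is a minimal sketch of an I-BCD-style update over a ring of agents, assuming a least-squares local loss, one coordinate block per agent, and a fixed step size; the function names, data, and hyperparameters are illustrative and not taken from the paper. One agent is active at a time: it updates only its own block of the model from its local data and then forwards the current model to its neighbour, which keeps per-activation communication low compared with gossip, at the cost of longer running time.

```python
import numpy as np

# Minimal sketch of incremental block-coordinate descent (I-BCD) over a ring
# of agents. Assumptions (illustrative, not the paper's exact update rule):
# each agent i holds local least-squares data (A_i, b_i) and owns one
# coordinate block of the shared model x; agents are activated in a fixed
# ring order with a constant step size.

def local_block_grad(A_i, b_i, x, block):
    """Gradient of agent i's local least-squares loss w.r.t. its own block."""
    residual = A_i @ x - b_i
    return A_i[:, block].T @ residual

def i_bcd(agents, blocks, dim, n_passes=200, step=0.01):
    """Token-passing I-BCD: one agent is active at a time, updates only its
    own block of x, then forwards the current model to the next agent."""
    x = np.zeros(dim)
    for _ in range(n_passes):
        for (A_i, b_i), block in zip(agents, blocks):  # ring activation order
            x[block] -= step * local_block_grad(A_i, b_i, x, block)
            # only x (not the local data) is passed to the next agent
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, n_agents = 8, 4
    blocks = np.array_split(np.arange(dim), n_agents)  # one block per agent
    x_true = rng.normal(size=dim)
    agents = []
    for _ in range(n_agents):
        A_i = rng.normal(size=(20, dim))
        agents.append((A_i, A_i @ x_true + 0.01 * rng.normal(size=20)))
    print("recovery error:", np.linalg.norm(i_bcd(agents, blocks, dim) - x_true))
```

The asynchronous parallel variant (API-BCD) described in the abstract would instead allow several agents to perform such block updates concurrently rather than one at a time; the sketch above only illustrates the sequential, incremental case.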
Related papers
- High-Dimensional Distributed Sparse Classification with Scalable Communication-Efficient Global Updates [50.406127962933915]
We develop solutions to problems which enable us to learn a communication-efficient distributed logistic regression model.
In our experiments we demonstrate a large improvement in accuracy over distributed algorithms with only a few distributed update steps needed.
arXiv Detail & Related papers (2024-07-08T19:34:39Z) - Asynchronous Local Computations in Distributed Bayesian Learning [8.516532665507835]
We propose gossip-based communication to leverage fast computations and reduce communication overhead simultaneously.
We observe faster initial convergence and improved performance accuracy, especially in the low data range.
We achieve on average 78% and over 90% classification accuracy respectively on the Gamma Telescope and mHealth data sets from the UCI ML repository.
arXiv Detail & Related papers (2023-11-06T20:11:41Z) - Communication-Efficient Decentralized Federated Learning via One-Bit
Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z) - Time-Correlated Sparsification for Efficient Over-the-Air Model
Aggregation in Wireless Federated Learning [23.05003652536773]
Federated edge learning (FEEL) is a promising distributed machine learning (ML) framework to drive edge intelligence applications.
We propose time-correlated sparsification with hybrid aggregation (TCS-H) for communication-efficient FEEL.
arXiv Detail & Related papers (2022-02-17T02:48:07Z) - Collaborative Learning over Wireless Networks: An Introductory Overview [84.09366153693361]
We will mainly focus on collaborative training across wireless devices.
Many distributed optimization algorithms have been developed over the last decades.
They provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local.
arXiv Detail & Related papers (2021-12-07T20:15:39Z) - Dynamic Network-Assisted D2D-Aided Coded Distributed Learning [59.29409589861241]
We propose a novel device-to-device (D2D)-aided coded federated learning method (D2D-CFL) for load balancing across devices.
We derive an optimal compression rate for achieving minimum processing time and establish its connection with the convergence time.
Our proposed method is beneficial for real-time collaborative applications, where the users continuously generate training data.
arXiv Detail & Related papers (2021-11-26T18:44:59Z) - Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge
Computing [113.52575069030192]
Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z) - Scaling-up Distributed Processing of Data Streams for Machine Learning [10.581140430698103]
This paper reviews recently developed methods that focus on large-scale distributed optimization in the compute- and bandwidth-limited regime.
It focuses on methods that solve: (i) distributed convex problems, and (ii) distributed principal component analysis, which is a nonconvex problem with geometric structure that permits global convergence.
arXiv Detail & Related papers (2020-05-18T16:28:54Z) - Federated Learning with Cooperating Devices: A Consensus Approach for
Massive IoT Networks [8.456633924613456]
Federated learning (FL) is emerging as a new paradigm to train machine learning models in distributed systems.
The paper proposes a fully distributed (or server-less) learning approach: the proposed FL algorithms leverage the cooperation of devices that perform data operations inside the network.
The approach lays the groundwork for integration of FL within 5G and beyond networks characterized by decentralized connectivity and computing.
arXiv Detail & Related papers (2019-12-27T15:16:04Z)