Sparse-SignSGD with Majority Vote for Communication-Efficient
Distributed Learning
- URL: http://arxiv.org/abs/2302.07475v1
- Date: Wed, 15 Feb 2023 05:36:41 GMT
- Title: Sparse-SignSGD with Majority Vote for Communication-Efficient
Distributed Learning
- Authors: Chanho Park and Namyoon Lee
- Abstract summary: ${\sf S}^3$GD-MV is a communication-efficient distributed optimization algorithm.
We show that it converges at the same rate as signSGD while significantly reducing communication costs.
These findings highlight the potential of ${\sf S}^3$GD-MV as a promising solution for communication-efficient distributed optimization in deep learning.
- Score: 20.22227794319504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The training efficiency of complex deep learning models can be significantly
improved through the use of distributed optimization. However, this process is
often hindered by the large communication cost incurred between workers and a
parameter server at each iteration. To address this bottleneck, in this paper,
we present a new communication-efficient algorithm that offers the synergistic
benefits of both sparsification and sign quantization, called ${\sf S}^3$GD-MV.
The workers in ${\sf S}^3$GD-MV select the top-$K$ magnitude components of
their local gradient vector and only send the signs of these components to the
server. The server then aggregates the signs and returns the results via a
majority vote rule. Our analysis shows that, under certain mild conditions,
${\sf S}^3$GD-MV can converge at the same rate as signSGD while significantly
reducing communication costs, if the sparsification parameter $K$ is properly
chosen based on the number of workers and the size of the deep learning model.
Experimental results using both independent and identically distributed (IID)
and non-IID datasets demonstrate that ${\sf S}^3$GD-MV attains higher
accuracy than signSGD while significantly reducing communication costs. These
findings highlight the potential of ${\sf S}^3$GD-MV as a promising solution
for communication-efficient distributed optimization in deep learning.
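To make the compression pipeline concrete, here is a minimal NumPy sketch of one ${\sf S}^3$GD-MV round: each worker keeps the signs of its top-$K$ magnitude gradient entries, and the server combines the sparse sign votes by majority vote into a sign-descent direction. The function names, toy dimensions, and the omission of error feedback and per-layer treatment are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of one S^3GD-MV round, assuming dense per-worker gradients,
# a single flat parameter vector, and no error feedback.
import numpy as np

def worker_compress(grad, K):
    """Keep the top-K magnitude entries; transmit only their indices and signs."""
    idx = np.argpartition(np.abs(grad), -K)[-K:]     # indices of the K largest |g_i|
    return idx, np.sign(grad[idx]).astype(np.int8)   # one sign bit per kept coordinate

def server_majority_vote(messages, dim):
    """Sum the sparse sign votes from all workers and return the voted sign."""
    votes = np.zeros(dim, dtype=np.int64)
    for idx, signs in messages:
        votes[idx] += signs
    return np.sign(votes)                            # 0 where no votes arrive or votes cancel

def s3gd_mv_round(params, local_grads, K, lr):
    """One round: compress at the workers, vote at the server, apply the sign update."""
    messages = [worker_compress(g, K) for g in local_grads]
    direction = server_majority_vote(messages, params.size)
    return params - lr * direction

# Toy usage: 8 workers, a 1000-dimensional model, K = 50.
rng = np.random.default_rng(0)
params = rng.normal(size=1000)
grads = [params + 0.1 * rng.normal(size=1000) for _ in range(8)]
params = s3gd_mv_round(params, grads, K=50, lr=1e-3)
```

Alongside the $K$ sign bits, each worker must also convey the selected index positions in some encoded form; the sketch above simply sends them as plain integer arrays.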
Related papers
- FedScalar: A Communication efficient Federated Learning [0.0]
Federated learning (FL) has gained considerable popularity for distributed machine learning.
FedScalar enables agents to communicate updates using a single scalar.
arXiv Detail & Related papers (2024-10-03T07:06:49Z) - SignSGD with Federated Voting [69.06621279967865]
SignSGD with majority voting (signSGD-MV) is an effective distributed learning algorithm that can significantly reduce communication costs by one-bit quantization.
We propose a novel signSGD with federated voting (signSGD-FV).
The idea of federated voting is to exploit learnable weights to perform weighted majority voting (a toy sketch of weighted voting appears after this related-papers list).
We demonstrate that the proposed signSGD-FV algorithm has a theoretical convergence guarantee even when edge devices use heterogeneous mini-batch sizes.
arXiv Detail & Related papers (2024-03-25T02:32:43Z) - Communication Efficient ConFederated Learning: An Event-Triggered SAGA
Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data scattered over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (i.e., SketchedAMSGrad) utilizing sketching.
Our new algorithm achieves a fast convergence rate of $O\big(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T}\big)$ with a communication cost of $O(k \log(d))$ at each iteration.
arXiv Detail & Related papers (2022-10-14T01:42:05Z) - OFedQIT: Communication-Efficient Online Federated Learning via
Quantization and Intermittent Transmission [7.6058140480517356]
Online federated learning (OFL) is a promising framework to collaboratively learn a sequence of non-linear functions (or models) from distributed streaming data.
We propose a communication-efficient OFL algorithm (named OFedQIT) by means of quantization and intermittent transmission.
Our analysis reveals that OFedQIT successfully addresses the drawbacks of OFedAvg while maintaining superior learning accuracy.
arXiv Detail & Related papers (2022-05-13T07:46:43Z) - Acceleration in Distributed Optimization Under Similarity [72.54787082152278]
We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes.
An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\frac{\beta/\mu}{1-\rho}}\,\log\frac{1}{\varepsilon}\big)$ communication steps.
This rate matches, for the first time and up to poly-log factors, the lower communication complexity bounds of distributed gossip algorithms applied to the class of problems of interest.
arXiv Detail & Related papers (2021-10-24T04:03:00Z) - Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge
Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z) - Variance Reduced Local SGD with Lower Communication Complexity [52.44473777232414]
We propose Variance Reduced Local SGD to further reduce the communication complexity.
VRL-SGD achieves a linear iteration speedup with a lower communication complexity $O(T^{\frac{1}{2}} N^{\frac{3}{2}})$ even if workers access non-identical datasets.
arXiv Detail & Related papers (2019-12-30T08:15:21Z)
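Complementing the unweighted vote sketched after the abstract, the toy example below illustrates the weighted majority vote referenced in the SignSGD with Federated Voting entry, with fixed per-worker weights standing in for the learnable weights of that paper; the names and values are assumptions for illustration only.

```python
# Hypothetical sketch of a weighted majority vote in the spirit of signSGD-FV:
# each worker's sign vector is scaled by a reliability weight before voting.
# The weights here are fixed constants; signSGD-FV learns them.
import numpy as np

def weighted_majority_vote(sign_msgs, weights):
    """Combine per-worker sign vectors using per-worker weights."""
    weighted_sum = sum(w * s for w, s in zip(weights, sign_msgs))
    return np.sign(weighted_sum)

rng = np.random.default_rng(1)
signs = [np.sign(rng.normal(size=10)) for _ in range(4)]  # 4 workers, 10 coordinates
weights = [0.4, 0.3, 0.2, 0.1]                            # assumed per-worker trust
print(weighted_majority_vote(signs, weights))
```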