Communication Efficient Distributed Learning with Censored, Quantized,
and Generalized Group ADMM
- URL: http://arxiv.org/abs/2009.06459v2
- Date: Tue, 12 Jan 2021 05:37:15 GMT
- Title: Communication Efficient Distributed Learning with Censored, Quantized,
and Generalized Group ADMM
- Authors: Chaouki Ben Issaid, Anis Elgabli, Jihong Park, Mehdi Bennis,
Mérouane Debbah
- Abstract summary: We propose a communication-efficient decentralized machine learning framework that solves a consensus optimization problem defined over a network of inter-connected workers.
The proposed algorithm, Censored and Quantized Generalized GADMM (CQ-GGADMM), leverages the worker grouping and decentralized learning ideas of Group Alternating Direction Method of Multipliers (GADMM).
Numerical simulations corroborate that CQ-GGADMM exhibits higher communication efficiency in terms of the number of communication rounds and transmit energy consumption without compromising the accuracy and convergence speed.
- Score: 52.12831959365598
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a communication-efficient decentralized machine
learning framework that solves a consensus optimization problem defined over a
network of inter-connected workers. The proposed algorithm, Censored and
Quantized Generalized GADMM (CQ-GGADMM), leverages the worker grouping and
decentralized learning ideas of Group Alternating Direction Method of
Multipliers (GADMM), and pushes the frontier in communication efficiency by
extending its applicability to generalized network topologies, while
incorporating link censoring for negligible updates after quantization. We
theoretically prove that CQ-GGADMM achieves a linear convergence rate when
the local objective functions are strongly convex under some mild assumptions.
Numerical simulations corroborate that CQ-GGADMM exhibits higher communication
efficiency in terms of the number of communication rounds and transmit energy
consumption without compromising the accuracy and convergence speed, compared
to the censored decentralized ADMM, and the worker grouping method of GADMM.
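To make the link-censoring idea concrete, below is a minimal Python sketch of the transmit-or-skip decision described in the abstract: a worker quantizes the change in its model since its last broadcast and censors the transmission when that quantized change falls below a decaying threshold. The quantizer, the threshold schedule, and all function names are illustrative assumptions, not the paper's exact construction.

```python
# Hypothetical sketch of censoring after quantization (not the paper's exact rule).
import numpy as np

def uniform_quantize(x, num_levels=16):
    """Round the entries of x onto a uniform grid of num_levels levels."""
    scale = np.max(np.abs(x)) + 1e-12      # dynamic range of the update
    step = 2.0 * scale / (num_levels - 1)  # spacing between quantization levels
    return np.round((x + scale) / step) * step - scale

def censored_quantized_update(theta_new, theta_last_sent, k, tau0=1.0, rho=0.9):
    """Decide whether a worker broadcasts at communication round k.

    theta_new       : local model after the current primal (ADMM) update
    theta_last_sent : model most recently broadcast to neighbors
    Returns (transmit, message): the update is censored when the quantized
    change is smaller than the geometrically decaying threshold tau0 * rho**k.
    """
    delta_q = uniform_quantize(theta_new - theta_last_sent)
    if np.linalg.norm(delta_q) <= tau0 * rho ** k:
        return False, None                      # negligible update: skip this round
    return True, theta_last_sent + delta_q      # broadcast the quantized model

# Example: one worker's decision at round k = 5.
theta_new = np.random.randn(10)
theta_last_sent = theta_new + 1e-3 * np.random.randn(10)
transmit, message = censored_quantized_update(theta_new, theta_last_sent, k=5)
```

In a scheme like this, neighbors simply reuse the last received model for a censored worker, which is why skipped rounds save both communication rounds and transmit energy.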
Related papers
- Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration [66.43954501171292]
We introduce Catalyst Acceleration and propose an accelerated Decentralized Federated Learning algorithm called DFedCata.
DFedCata consists of two main components: the Moreau envelope function, which addresses parameter inconsistencies, and Nesterov's extrapolation step, which accelerates the aggregation phase.
Empirically, we demonstrate the advantages of the proposed algorithm in both convergence speed and generalization performance on CIFAR10/100 with various non-iid data distributions.
arXiv Detail & Related papers (2024-10-09T06:17:16Z) - Semantic Communication for Cooperative Perception using HARQ [51.148203799109304]
We leverage an importance map to distill critical semantic information, introducing a cooperative perception semantic communication framework.
To counter the challenges posed by time-varying multipath fading, our approach incorporates orthogonal frequency-division multiplexing (OFDM) along with channel estimation and equalization strategies.
We introduce a novel semantic error detection method that is integrated with our semantic communication framework in the spirit of hybrid automatic repeat request (HARQ).
arXiv Detail & Related papers (2024-08-29T08:53:26Z) - Learning Regionally Decentralized AC Optimal Power Flows with ADMM [16.843799157160063]
This paper studies how machine learning may help in speeding up the convergence of ADMM for solving AC-OPF.
It proposes a novel decentralized machine-learning approach, namely ML-ADMM, where each agent uses deep learning to learn the consensus parameters on the coupling branches.
arXiv Detail & Related papers (2022-05-08T05:30:35Z) - Communication Efficient Federated Learning via Ordered ADMM in a Fully
Decentralized Setting [32.41824379833395]
A communication efficient algorithm, called ordering-based alternating direction method of multipliers (OADMM), is devised.
A variant of OADMM, called SOADMM, is proposed in which transmissions are ordered but never stopped for any node at any iteration.
arXiv Detail & Related papers (2022-02-05T15:32:02Z) - Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in
Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive stochastic incremental ADMM (asI-ADMM) algorithm and apply it to decentralized RL with edge-computing-empowered IIoT networks.
Experiment results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and can well adapt to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z) - Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion-based MAML or Dif-MAML.
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z) - Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge
Computing [113.52575069030192]
Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z) - Distributed ADMM with Synergetic Communication and Computation [39.930150618785355]
We propose a novel distributed alternating direction method of multipliers (ADMM) algorithm with synergetic communication and computation.
In the proposed algorithm, each node interacts with only part of its neighboring nodes, the number of which is progressively determined according to a searching procedure.
We prove the convergence of the proposed algorithm and provide an upper bound of the convergence variance brought by randomness.
arXiv Detail & Related papers (2020-09-29T08:36:26Z) - Distributed Optimization, Averaging via ADMM, and Network Topology [0.0]
We study the connection between network topology and convergence rates for different algorithms on a real-world sensor localization problem.
We also show interesting connections between ADMM and lifted Markov chains, besides providing an explicit characterization of its convergence.
arXiv Detail & Related papers (2020-09-05T21:44:39Z)