Communication Efficient Federated Learning via Ordered ADMM in a Fully
Decentralized Setting
- URL: http://arxiv.org/abs/2202.02580v1
- Date: Sat, 5 Feb 2022 15:32:02 GMT
- Title: Communication Efficient Federated Learning via Ordered ADMM in a Fully
Decentralized Setting
- Authors: Yicheng Chen, Rick S. Blum, and Brian M. Sadler
- Abstract summary: A communication-efficient algorithm, called ordering-based alternating direction method of multipliers (OADMM), is devised.
A variant of OADMM, called SOADMM, is proposed in which transmissions are ordered but never stopped for any node at any iteration.
- Score: 32.41824379833395
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The challenge of communication-efficient distributed optimization has
attracted attention in recent years. In this paper, a communication-efficient
algorithm, called ordering-based alternating direction method of multipliers
(OADMM), is devised in a general fully decentralized network setting where a
worker can only exchange messages with neighbors. Compared to the classical
ADMM, a key feature of OADMM is that transmissions are ordered among workers at
each iteration such that a worker with the most informative data broadcasts its
local variable to neighbors first, and neighbors who have not transmitted yet
can update their local variables based on that received transmission. In OADMM,
we prohibit workers from transmitting if their current local variables are not
sufficiently different from their previously transmitted value. A variant of
OADMM, called SOADMM, is proposed in which transmissions are ordered but never
stopped for any node at any iteration. Numerical
results demonstrate that given a targeted accuracy, OADMM can significantly
reduce the number of communications compared to existing algorithms including
ADMM. We also show numerically that SOADMM can accelerate convergence,
resulting in communication savings compared to the classical ADMM.
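The ordering and censoring rules described in the abstract can be illustrated with a small sketch. The snippet below layers the two rules -- most-informative-first broadcasts, and skipping a broadcast when the local variable has barely changed since it was last transmitted -- on top of a simple decentralized consensus/averaging update rather than the paper's exact ADMM recursions; the informativeness score, the decaying threshold, and the mixing step are illustrative assumptions. Setting the threshold to zero corresponds to the SOADMM-style variant in which transmissions are ordered but never skipped.

import numpy as np

def ordered_censored_round(x, last_sent, neighbors, tau, step=0.5):
    """One communication round: ordered broadcasts with censoring.

    x         : (n_workers, dim) current local variables
    last_sent : (n_workers, dim) values most recently broadcast by each worker
    neighbors : list of neighbor-index lists (the decentralized topology)
    tau       : censoring threshold for this round (assumed schedule)
    """
    n = x.shape[0]
    # Informativeness = how far the local variable has moved since the last broadcast.
    scores = np.linalg.norm(x - last_sent, axis=1)
    order = np.argsort(-scores)                    # most informative worker first
    transmitted = np.zeros(n, dtype=bool)
    for i in order:
        # Censoring: stay silent if the change since the last transmission is small.
        if np.linalg.norm(x[i] - last_sent[i]) <= tau:
            continue
        transmitted[i] = True
        last_sent[i] = x[i].copy()
        # Neighbors that have not yet broadcast fold the received value into
        # their local variable before their own turn comes.
        for j in neighbors[i]:
            if not transmitted[j]:
                x[j] = (1.0 - step) * x[j] + step * last_sent[i]
    return x, last_sent, int(transmitted.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, dim = 6, 3
    x = rng.normal(size=(n, dim))                  # local variables (e.g., local models)
    last_sent = np.zeros_like(x)                   # nothing has been transmitted yet
    ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]   # ring topology
    total_msgs = 0
    for k in range(50):
        tau = 0.05 * 0.9 ** k                      # decaying censoring threshold (assumed)
        x, last_sent, sent = ordered_censored_round(x, last_sent, ring, tau)
        total_msgs += sent
    print("messages used:", total_msgs, "spread across workers:", np.ptp(x, axis=0))

The message count reported at the end is what the censoring rule trims relative to a scheme where every worker broadcasts every round, which is the quantity the paper's numerical results compare against classical ADMM.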
Related papers
- Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax [73.03684002513218]
We enhance Deep InfoMax (DIM) to enable automatic matching of learned representations to a selected prior distribution.
We show that such modification allows for learning uniformly and normally distributed representations.
The results indicate a moderate trade-off between performance on the downstream tasks and the quality of distribution matching (DM).
arXiv Detail & Related papers (2024-10-09T15:40:04Z)
- Limited Communications Distributed Optimization via Deep Unfolded Distributed ADMM [27.09017677987757]
We propose unfolded D-ADMM to enable D-ADMM to operate reliably with a small number of messages exchanged by each agent.
We specialize unfolded D-ADMM for two representative settings: a distributed estimation task, and a distributed learning scenario.
Our numerical results demonstrate that the proposed approach dramatically reduces the number of communications utilized by D-ADMM, without compromising its performance.
arXiv Detail & Related papers (2023-09-21T08:05:28Z)
- Dynamic Size Message Scheduling for Multi-Agent Communication under Limited Bandwidth [5.590219593864609]
We present the Dynamic Size Message Scheduling (DSMS) method, which introduces a finer-grained approach to scheduling.
Our contribution lies in adaptively adjusting message sizes using Fourier transform-based compression techniques.
Experimental results demonstrate that DSMS significantly improves performance in multi-agent cooperative tasks.
arXiv Detail & Related papers (2023-06-16T18:33:11Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate superiority of FedVRA over the existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- Multi-Prompt Alignment for Multi-Source Unsupervised Domain Adaptation [86.02485817444216]
We introduce Multi-Prompt Alignment (MPA), a simple yet efficient framework for multi-source UDA.
MPA denoises the learned prompts through an auto-encoding process and aligns them by maximizing the agreement of all the reconstructed prompts.
Experiments show that MPA achieves state-of-the-art results on three popular datasets with an impressive average accuracy of 54.1% on DomainNet.
arXiv Detail & Related papers (2022-09-30T03:40:10Z)
- Learning Regionally Decentralized AC Optimal Power Flows with ADMM [16.843799157160063]
This paper studies how machine learning may help in speeding up the convergence of ADMM for solving AC-OPF.
It proposes a novel decentralized machine-learning approach, namely ML-ADMM, where each agent uses deep learning to learn the consensus parameters on the coupling branches.
arXiv Detail & Related papers (2022-05-08T05:30:35Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
- Distributed ADMM with Synergetic Communication and Computation [39.930150618785355]
We propose a novel distributed alternating direction method of multipliers (ADMM) algorithm with synergetic communication and computation.
In the proposed algorithm, each node interacts with only a subset of its neighboring nodes, whose number is progressively determined by a searching procedure.
We prove the convergence of the proposed algorithm and provide an upper bound of the convergence variance brought by randomness.
arXiv Detail & Related papers (2020-09-29T08:36:26Z)
- Communication Efficient Distributed Learning with Censored, Quantized, and Generalized Group ADMM [52.12831959365598]
We propose a communication-efficient decentralized machine learning framework that solves a consensus optimization problem defined over a network of inter-connected workers.
The proposed algorithm, Censored and Quantized Generalized GADMM (CQ-GGADMM), leverages the worker grouping and decentralized learning ideas of Group Alternating Direction Method of Multipliers (GADMM).
Numerical simulations corroborate that CQ-GGADMM exhibits higher communication efficiency in terms of the number of communication rounds and transmit energy consumption without compromising the accuracy and convergence speed (a minimal sketch of this censor-then-quantize rule follows this list).
arXiv Detail & Related papers (2020-09-14T14:18:19Z)
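The censoring and quantization ideas in CQ-GGADMM above are close in spirit to OADMM's rule of stopping uninformative transmissions. Below is a minimal, self-contained sketch of a per-worker "censor, then quantize" transmission filter; the decaying threshold and the unbiased stochastic quantizer are illustrative assumptions, not the exact operators used in any of the listed papers.

import numpy as np

def stochastic_quantize(v, v_ref, bits=4, rng=None):
    """Unbiased stochastic quantization of v around a reference point v_ref.

    Only the quantized difference (v - v_ref) plus one scale factor would be
    transmitted; the receiver reconstructs an approximation of v from its own
    copy of v_ref.
    """
    rng = rng or np.random.default_rng()
    diff = v - v_ref
    scale = np.max(np.abs(diff)) + 1e-12
    levels = 2 ** bits - 1
    grid = diff / scale * levels                          # scaled into [-levels, levels]
    low = np.floor(grid)
    q = low + (rng.random(diff.shape) < (grid - low))     # randomized rounding, E[q] = grid
    return v_ref + q / levels * scale

def censor_then_quantize(v, last_sent, round_idx, bits=4, c0=0.1, rho=0.9, rng=None):
    """Return (message or None, updated last_sent) for one worker in one round.

    The worker skips its transmission (censoring) when its variable has changed
    too little since the last transmitted value; otherwise it sends a quantized
    version. The threshold schedule c0 * rho**round_idx is an assumption.
    """
    if np.linalg.norm(v - last_sent) <= c0 * rho ** round_idx:
        return None, last_sent                            # censored: no message this round
    msg = stochastic_quantize(v, last_sent, bits=bits, rng=rng)
    return msg, msg                                       # sender and receivers track the quantized value

Censoring reduces how often a worker speaks, while quantization reduces how many bits each message carries; the two are complementary, which is why the CQ-GGADMM abstract reports savings in both communication rounds and transmit energy.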
This list is automatically generated from the titles and abstracts of the papers on this site.