Adaptive Consensus: A network pruning approach for decentralized
optimization
- URL: http://arxiv.org/abs/2309.02626v1
- Date: Wed, 6 Sep 2023 00:28:10 GMT
- Title: Adaptive Consensus: A network pruning approach for decentralized
optimization
- Authors: Suhail M. Shah, Albert S. Berahas, Raghu Bollapragada
- Abstract summary: We consider network-based decentralized optimization problems, where each node in the network possesses a local function.
The objective is to collectively attain a consensus solution that minimizes the sum of all the local functions.
We propose an adaptive randomized communication-efficient algorithmic framework that reduces the volume of communication.
- Score: 0.5432724320036953
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider network-based decentralized optimization problems, where each
node in the network possesses a local function and the objective is to
collectively attain a consensus solution that minimizes the sum of all the
local functions. A major challenge in decentralized optimization is the
reliance on communication which remains a considerable bottleneck in many
applications. To address this challenge, we propose an adaptive randomized
communication-efficient algorithmic framework that reduces the volume of
communication by periodically tracking the disagreement error and judiciously
selecting the most influential and effective edges at each node for
communication. Within this framework, we present two algorithms: Adaptive
Consensus (AC) to solve the consensus problem and Adaptive Consensus based
Gradient Tracking (AC-GT) to solve smooth strongly convex decentralized
optimization problems. We establish strong theoretical convergence guarantees
for the proposed algorithms and quantify their performance in terms of various
algorithmic parameters under standard assumptions. Finally, numerical
experiments showcase the effectiveness of the framework in significantly
reducing the information exchange required to achieve a consensus solution.
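The mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation: the disagreement metric, the pruning schedule (`check_every`), the fraction of edges kept (`keep_frac`), and the row-renormalization of the mixing matrix are all illustrative assumptions.

```python
import numpy as np

def adaptive_consensus(x0, weights, steps=200, check_every=20, keep_frac=0.5):
    """Average local values x0 (n x d) over a graph, periodically
    restricting each node to its most influential edges.

    weights: n x n symmetric, doubly stochastic mixing matrix with a
    positive diagonal. Parameter values here are illustrative, not the
    choices from the paper.
    """
    x = x0.copy()
    active = weights > 0  # edge mask currently used for communication
    for t in range(steps):
        if t > 0 and t % check_every == 0:
            # Track the disagreement error and keep, per edge, only the
            # links whose endpoint values still differ the most -- the
            # edges that still carry information.
            influence = np.abs(weights) * np.linalg.norm(
                x[:, None, :] - x[None, :, :], axis=2)
            thresh = np.quantile(influence[weights > 0], 1 - keep_frac)
            active = (influence >= thresh) & (weights > 0)
        # Masked mixing step: communicate only over active edges.
        w = np.where(active, weights, 0.0)
        np.fill_diagonal(w, 0.0)
        np.fill_diagonal(w, 1.0 - w.sum(axis=1))  # keep rows stochastic
        x = w @ x
    return x
```

On a complete graph with uniform weights, the values collapse to the global mean in one step and later pruned rounds leave the consensus value unchanged.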
Related papers
- Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized Optimization over Time-Varying Networks [57.24087627267086]
We consider the task of minimizing the sum of convex functions stored in a decentralized manner across the nodes of a communication network.
Lower bounds on the number of decentralized communications and (sub)gradient computations required to solve the problem have been established.
We develop the first optimal algorithm that matches these lower bounds and offers substantially improved theoretical performance compared to the existing state of the art.
arXiv Detail & Related papers (2024-05-28T10:28:45Z) - Improving Point-based Crowd Counting and Localization Based on Auxiliary Point Guidance [59.71186244597394]
We introduce an effective approach to stabilize the proposal-target matching in point-based methods.
We propose Auxiliary Point Guidance (APG) to provide clear and effective guidance for proposal selection and optimization.
We also develop Implicit Feature Interpolation (IFI) to enable adaptive feature extraction in diverse crowd scenarios.
arXiv Detail & Related papers (2024-05-17T07:23:27Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
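A toy example of the equivalence that summary describes, on a scalar fixed point rather than a deep network: differentiating through the Babylonian square-root iteration reduces to solving the linear system (1 - f_x) g = f_a, and iterating the linearized update solves that system by exactly the kind of iterative method the backward pass of an unrolled solver performs. The specific fixed point and iteration counts are illustrative assumptions.

```python
def sqrt_fixed_point(a, iters=50):
    # Babylonian iteration: x_{k+1} = (x_k + a / x_k) / 2 converges to sqrt(a).
    x = a
    for _ in range(iters):
        x = 0.5 * (x + a / x)
    return x

def backward_via_iteration(a, iters=50):
    """Gradient d sqrt(a)/da obtained by iterating the linearized update
    g <- f_x * g + f_a, i.e. solving (1 - f_x) g = f_a iteratively."""
    x = sqrt_fixed_point(a)
    f_x = 0.5 * (1.0 - a / x**2)  # d f / d x at the fixed point
    f_a = 0.5 / x                 # d f / d a at the fixed point
    g = 0.0
    for _ in range(iters):
        g = f_x * g + f_a         # converges to f_a / (1 - f_x)
    return g
```

At a = 4 this recovers d sqrt(a)/da = 1/(2*sqrt(a)) = 0.25 without ever forming or inverting the linear system explicitly.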
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Federated Multi-Level Optimization over Decentralized Networks [55.776919718214224]
We study the problem of distributed multi-level optimization over a network, where agents can only communicate with their immediate neighbors.
We propose a novel gossip-based distributed multi-level optimization algorithm that enables networked agents to solve optimization problems at different levels in a single timescale.
Our algorithm achieves optimal sample complexity, scaling linearly with the network size, and demonstrates state-of-the-art performance on various applications.
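As a toy illustration of the gossip primitive such algorithms build on (not the paper's algorithm itself), a single randomized gossip round averages the values held by the two endpoints of one randomly chosen edge; repeated rounds drive all nodes toward the global mean while preserving the sum.

```python
import random

def gossip_round(values, edges, rng=random):
    """One randomized gossip step over an undirected graph.

    values: mutable list of per-node scalars.
    edges:  list of (i, j) node-index pairs.
    Picks a random edge and replaces both endpoint values with their
    average, which conserves the total sum.
    """
    i, j = rng.choice(edges)
    avg = (values[i] + values[j]) / 2.0
    values[i] = values[j] = avg
    return values
```

Since agents only ever touch immediate neighbors, the step uses exactly the neighbor-to-neighbor communication model assumed in the summary above.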
arXiv Detail & Related papers (2023-10-10T00:21:10Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Decentralized Multi-Level Compositional Optimization Algorithms with Level-Independent Convergence Rate [26.676582181833584]
Decentralized multi-level optimization is challenging because of the multilevel structure and decentralized communication.
We develop two novel decentralized optimization algorithms to optimize the multi-level compositional problem.
arXiv Detail & Related papers (2023-06-06T00:23:28Z) - A Penalty-Based Method for Communication-Efficient Decentralized Bilevel Programming [14.35928967799696]
This paper introduces a penalty function-based decentralized algorithm for solving bilevel programming problems over a decentralized network.
A key feature of the proposed algorithm is the estimation of the hyper-gradient of the penalty function.
Our theoretical framework ensures non-asymptotic convergence to the optimal solution of the original problem under various convexity conditions.
arXiv Detail & Related papers (2022-11-08T08:39:30Z) - DESTRESS: Computation-Optimal and Communication-Efficient Decentralized
Nonconvex Finite-Sum Optimization [43.31016937305845]
Internet-of-things, networked sensing, autonomous systems and federated learning call for decentralized algorithms for finite-sum optimization.
We develop a DEcentralized STochastic REcurSive gradient method (DESTRESS) for nonconvex finite-sum optimization.
Detailed theoretical and numerical comparisons show that DESTRESS improves upon prior decentralized algorithms.
arXiv Detail & Related papers (2021-10-04T03:17:41Z) - A Decentralized Adaptive Momentum Method for Solving a Class of Min-Max
Optimization Problems [9.653157073271021]
We develop a decentralized adaptive momentum (ADAM)-type algorithm for solving a class of min-max optimization problems.
We obtain non-asymptotic rates of convergence of the proposed algorithm for finding a (stochastic) first-order Nash equilibrium point.
arXiv Detail & Related papers (2021-06-10T22:32:01Z) - Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex
Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We establish lower bounds and design two optimal algorithms that attain them.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.