Federated Multi-Level Optimization over Decentralized Networks
- URL: http://arxiv.org/abs/2310.06217v1
- Date: Tue, 10 Oct 2023 00:21:10 GMT
- Title: Federated Multi-Level Optimization over Decentralized Networks
- Authors: Shuoguang Yang, Xuezhou Zhang, Mengdi Wang
- Abstract summary: We study the problem of distributed multi-level optimization over a network, where agents can only communicate with their immediate neighbors.
We propose a novel gossip-based distributed multi-level optimization algorithm that enables networked agents to solve optimization problems at different levels in a single timescale.
Our algorithm achieves optimal sample complexity, scaling linearly with the network size, and demonstrates state-of-the-art performance on various applications.
- Score: 55.776919718214224
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-level optimization has gained increasing attention in recent years, as
it provides a powerful framework for solving complex optimization problems that
arise in many fields, such as meta-learning, multi-player games, reinforcement
learning, and nested composition optimization. In this paper, we study the
problem of distributed multi-level optimization over a network, where agents
can only communicate with their immediate neighbors. This setting is motivated
by the need for distributed optimization in large-scale systems, where
centralized optimization may not be practical or feasible. To address this
problem, we propose a novel gossip-based distributed multi-level optimization
algorithm that enables networked agents to solve optimization problems at
different levels in a single timescale and share information through network
propagation. Our algorithm achieves optimal sample complexity, scaling linearly
with the network size, and demonstrates state-of-the-art performance on various
applications, including hyper-parameter tuning, decentralized reinforcement
learning, and risk-averse optimization.
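As a rough illustration (not the paper's multi-level algorithm), the sketch below shows the single-timescale gossip pattern that this class of methods builds on: each agent mixes its local iterate with those of its immediate neighbors via a doubly stochastic gossip matrix, then takes a local stochastic gradient step. The ring topology, toy quadratic objectives, and all names are assumptions for illustration.

```python
# Minimal sketch of gossip-based decentralized stochastic optimization.
# This is NOT the paper's multi-level algorithm; it only illustrates the
# single-timescale "gossip mixing + local gradient" pattern the abstract
# describes. Topology, objectives, and step size are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, n_iters, step = 8, 5, 500, 0.05

# Agent i holds a private quadratic f_i(x) = 0.5 * ||x - b_i||^2;
# the network goal is to minimize the average of all f_i.
b = rng.normal(size=(n_agents, dim))

# Doubly stochastic gossip (mixing) matrix for a ring: each agent
# averages with its two immediate neighbors only, with no coordinator.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = rng.normal(size=(n_agents, dim))  # one local iterate per agent
for _ in range(n_iters):
    grads = (x - b) + 0.1 * rng.normal(size=x.shape)  # noisy local gradients
    x = W @ x - step * grads  # gossip and gradient step in one timescale

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - b.mean(axis=0)))
```

With a connected topology and a small (or diminishing) step size, the local iterates reach consensus while their network average drifts toward the minimizer of the average objective.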
Related papers
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization by learning parameters.
We propose a new framework that leverages the intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
arXiv Detail & Related papers (2024-08-13T07:56:21Z) - Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized Optimization over Time-Varying Networks [57.24087627267086]
We consider the task of minimizing the sum of convex functions stored in a decentralized manner across the nodes of a communication network.
Lower bounds on the number of decentralized communications and (sub)gradient computations required to solve the problem have been established.
We develop the first optimal algorithm that matches these lower bounds and offers substantially improved theoretical performance compared to the existing state of the art.
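For reference, the decentralized problem these bounds target is usually written as follows (a standard formulation, stated as an assumption rather than quoted from the paper):

```latex
\min_{x \in \mathbb{R}^d} \quad f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x)
```

where each convex, possibly non-smooth f_i is held by node i, and information travels only along the edges of a time-varying communication graph.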
arXiv Detail & Related papers (2024-05-28T10:28:45Z) - Fast and Scalable Network Slicing by Integrating Deep Learning with
Lagrangian Methods [8.72339110741777]
Network slicing is a key technique in 5G and beyond for efficiently supporting diverse services.
Deep learning models suffer from limited generalization and adaptability to dynamic slicing configurations.
We propose a novel framework that integrates constrained optimization methods and deep learning models.
arXiv Detail & Related papers (2024-01-22T07:19:16Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
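A toy numerical check of this equivalence (my own example, not the paper's Folded Optimization system): for the inner problem min_x 0.5*a*x^2 - theta*x, the derivative of T unrolled gradient-descent steps with respect to theta obeys the recursion g <- (1 - eta*a)*g + eta, which is exactly a Richardson fixed-point iteration for the linear system a*g = 1, whose solution 1/a is the implicit hypergradient.

```python
# Toy demonstration that the backward pass of unrolled gradient descent
# solves a linear system by a fixed-point iteration. The example and all
# names are illustrative assumptions, not the paper's proposed system.
# Inner problem: min_x f(x; theta) = 0.5 * a * x**2 - theta * x,
# with solution x*(theta) = theta / a, so dx*/dtheta = 1 / a.
a, theta, eta, T = 4.0, 2.0, 0.2, 50

x, dx_dtheta = 0.0, 0.0
for _ in range(T):
    # forward unrolled step: x <- x - eta * grad_x f = x - eta*(a*x - theta)
    x = x - eta * (a * x - theta)
    # differentiating the update w.r.t. theta gives the backward recursion
    # g <- (1 - eta*a) * g + eta, a Richardson iteration for a * g = 1
    dx_dtheta = (1.0 - eta * a) * dx_dtheta + eta

print("unrolled solution x_T:", x, " vs x* =", theta / a)
print("unrolled derivative:", dx_dtheta, " vs implicit 1/a =", 1.0 / a)
```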
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the distributed federated learning setting.
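For context, conditional stochastic optimization (in its standard form, stated here as an assumption about the setup rather than a quote from this paper) minimizes a nested expectation whose inner expectation is conditioned on the outer sample:

```latex
\min_{x} \quad F(x) = \mathbb{E}_{\xi}\left[ f_{\xi}\big( \mathbb{E}_{\eta \mid \xi}[\, g_{\eta}(x, \xi) \,] \big) \right]
```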
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Adaptive Consensus: A network pruning approach for decentralized
optimization [0.5432724320036953]
We consider network-based decentralized optimization problems, where each node in the network possesses a local function.
The objective is to collectively attain a consensus solution that minimizes the sum of all the local functions.
We propose an adaptive randomized communication-efficient algorithmic framework that reduces the volume of communication.
arXiv Detail & Related papers (2023-09-06T00:28:10Z) - Decentralized Multi-Level Compositional Optimization Algorithms with Level-Independent Convergence Rate [26.676582181833584]
Decentralized multi-level optimization is challenging because of the multi-level structure and decentralized communication.
We develop two novel decentralized optimization algorithms to optimize the multi-level compositional problem.
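The multi-level compositional problem referenced here typically has the nested form below (a standard formulation, stated as an assumption); a level-independent rate means the guarantee does not degrade as the number of levels K grows:

```latex
\min_{x} \quad f_K \big( f_{K-1}( \cdots f_1(x) \cdots ) \big), \qquad f_k(\cdot) = \mathbb{E}_{\xi_k}\big[ f_k(\cdot\,; \xi_k) \big]
```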
arXiv Detail & Related papers (2023-06-06T00:23:28Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Decentralized Gossip-Based Stochastic Bilevel Optimization over
Communication Networks [42.76623191830371]
We propose a gossip-based distributed bilevel optimization algorithm.
Agents can solve both the networked inner and outer optimization problems in a single timescale.
Our algorithm achieves state-of-the-art efficiency and test accuracy.
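For reference, the decentralized stochastic bilevel problem usually takes the form below (a standard formulation, assumed rather than quoted), with agent i holding local outer and inner objectives f_i and g_i; a single-timescale gossip scheme updates the outer and inner variables simultaneously while mixing both across neighbors:

```latex
\min_{x} \quad \Phi(x) = \frac{1}{n} \sum_{i=1}^{n} f_i\big(x,\, y^{*}(x)\big)
\quad \text{s.t.} \quad y^{*}(x) = \operatorname*{arg\,min}_{y} \; \frac{1}{n} \sum_{i=1}^{n} g_i(x, y)
```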
arXiv Detail & Related papers (2022-06-22T06:38:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.