Distributed Finite-Sum Constrained Optimization subject to Nonlinearity
on the Node Dynamics
- URL: http://arxiv.org/abs/2203.14527v1
- Date: Mon, 28 Mar 2022 06:47:01 GMT
- Title: Distributed Finite-Sum Constrained Optimization subject to Nonlinearity
on the Node Dynamics
- Authors: Mohammadreza Doostmohammadian, Maria Vrakopoulou, Alireza Aghasi,
Themistoklis Charalambous
- Abstract summary: We consider a distributed finite-sum (or fixed-sum) allocation technique to solve convex optimization problems over multi-agent networks (MANs).
This paper discusses how various nonlinearity constraints on the optimization problem can be addressed for different applications via a distributed setup (over a network).
- Score: 6.211043407287827
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motivated by recent development in networking and parallel data-processing,
we consider a distributed and localized finite-sum (or fixed-sum) allocation
technique to solve resource-constrained convex optimization problems over
multi-agent networks (MANs). Such networks include (smart) agents representing
an intelligent entity capable of communication, processing, and
decision-making. In particular, we consider problems subject to practical
nonlinear constraints on the dynamics of the agents in terms of their
communications and actuation capabilities (referred to as the node dynamics),
e.g., networks of mobile robots subject to actuator saturation and quantized
communication. The considered distributed sum-preserving optimization solution
further enables adding purposeful nonlinear constraints, for example,
sign-based nonlinearities, to reach convergence in predefined-time or robust to
impulsive noise and disturbances in faulty environments. Moreover, convergence
can be achieved under minimal network connectivity requirements among the
agents; thus, the solution is applicable over dynamic networks where the
channels come and go due to the agent's mobility and limited range. This paper
discusses how various nonlinearity constraints on the optimization problem
(e.g., collaborative allocation of resources) can be addressed for different
applications via a distributed setup (over a network).
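The sum-preserving idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical Laplacian-type allocation update, not the paper's exact protocol: each agent holds a local quadratic cost (the costs, network, and step size below are invented for illustration), exchanges gradient differences with its neighbors, and passes each exchange through an odd saturation nonlinearity modeling actuator limits. Because the nonlinearity is odd and each edge flow is added at one endpoint and subtracted at the other, the total allocated resource stays constant at every iteration.

```python
import numpy as np

def saturate(u, limit=1.0):
    """Odd saturation nonlinearity: g(-u) = -g(u), |g(u)| <= limit."""
    return np.clip(u, -limit, limit)

def run_allocation(a, c, x0, edges, eta=0.05, steps=2000):
    """Sum-preserving allocation over an undirected graph.

    Each agent i minimizes f_i(x_i) = a_i * (x_i - c_i)**2 subject to
    the coupling constraint sum(x) == sum(x0) (the fixed budget).
    """
    x = x0.astype(float).copy()
    for _ in range(steps):
        grad = 2.0 * a * (x - c)            # local gradients
        dx = np.zeros_like(x)
        for i, j in edges:
            flow = saturate(grad[j] - grad[i])
            dx[i] += flow                    # +g(.) at node i
            dx[j] -= flow                    # -g(.) at node j: anti-symmetric,
        x += eta * dx                        # so sum(dx) == 0 exactly
    return x

# Illustrative 4-agent path network with a total budget of 4.
a = np.array([1.0, 2.0, 0.5, 1.5])
c = np.array([2.0, -1.0, 3.0, 0.0])
x0 = np.array([1.0, 1.0, 1.0, 1.0])
edges = [(0, 1), (1, 2), (2, 3)]
x = run_allocation(a, c, x0, edges)
```

At the fixed point all local gradients are equal (the usual optimality condition for sum-constrained allocation), while `x.sum()` never drifts from the initial budget, even though every exchanged signal is saturated.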
Related papers
- AI Flow at the Network Edge [58.31090055138711]
AI Flow is a framework that streamlines the inference process by jointly leveraging the heterogeneous resources available across devices, edge nodes, and cloud servers.
This article serves as a position paper for identifying the motivation, challenges, and principles of AI Flow.
arXiv Detail & Related papers (2024-11-19T12:51:17Z) - Performance-Aware Self-Configurable Multi-Agent Networks: A Distributed Submodular Approach for Simultaneous Coordination and Network Design [3.5527561584422465]
We present the AlterNAting COordination and Network-Design Algorithm (Anaconda),
a scalable algorithm that also enjoys near-optimality guarantees.
We demonstrate Anaconda in simulated area-monitoring scenarios and compare it with a state-of-the-art algorithm.
arXiv Detail & Related papers (2024-09-02T18:11:33Z) - Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized Optimization over Time-Varying Networks [57.24087627267086]
We consider the task of minimizing the sum of convex functions stored in a decentralized manner across the nodes of a communication network.
Lower bounds on the number of decentralized communications and (sub)gradient computations required to solve the problem have been established.
We develop the first optimal algorithm that matches these lower bounds and offers substantially improved theoretical performance compared to the existing state of the art.
arXiv Detail & Related papers (2024-05-28T10:28:45Z) - Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z) - Multi-Agent Reinforcement Learning for Power Control in Wireless
Networks via Adaptive Graphs [1.1861167902268832]
Multi-agent deep reinforcement learning (MADRL) has emerged as a promising method to address a wide range of complex optimization problems like power control.
We present the use of graphs as communication-inducing structures among distributed agents as an effective means to mitigate these challenges.
arXiv Detail & Related papers (2023-11-27T14:25:40Z) - Federated Multi-Level Optimization over Decentralized Networks [55.776919718214224]
We study the problem of distributed multi-level optimization over a network, where agents can only communicate with their immediate neighbors.
We propose a novel gossip-based distributed multi-level optimization algorithm that enables networked agents to solve optimization problems at different levels in a single timescale.
Our algorithm achieves optimal sample complexity, scaling linearly with the network size, and demonstrates state-of-the-art performance on various applications.
arXiv Detail & Related papers (2023-10-10T00:21:10Z) - Communication-Efficient Zeroth-Order Distributed Online Optimization:
Algorithm, Theory, and Applications [9.045332526072828]
This paper focuses on a multi-agent zeroth-order online optimization problem in a federated learning setting for target tracking.
The proposed solution is further analyzed in terms of errors in two relevant applications.
arXiv Detail & Related papers (2023-06-09T03:51:45Z) - Compressed Regression over Adaptive Networks [58.79251288443156]
We derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
We devise an optimized allocation strategy where the parameters necessary for the optimization can be learned online by the agents.
arXiv Detail & Related papers (2023-04-07T13:41:08Z) - Multi-Resource Allocation for On-Device Distributed Federated Learning
Systems [79.02994855744848]
This work poses a distributed multi-resource allocation scheme for minimizing the weighted sum of latency and energy consumption in the on-device distributed federated learning (FL) system.
Each mobile device in the system engages the model training process within the specified area and allocates its computation and communication resources for deriving and uploading parameters, respectively.
arXiv Detail & Related papers (2022-11-01T14:16:05Z) - Physics and Equality Constrained Artificial Neural Networks: Application
to Partial Differential Equations [1.370633147306388]
Physics-informed neural networks (PINNs) have been proposed to learn the solution of partial differential equations (PDE)
Here, we show that this specific way of formulating the objective function is the source of severe limitations in the PINN approach.
We propose a versatile framework that can tackle both inverse and forward problems.
arXiv Detail & Related papers (2021-09-30T05:55:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.