Distributed Policy Gradient for Linear Quadratic Networked Control with
Limited Communication Range
- URL: http://arxiv.org/abs/2403.03055v1
- Date: Tue, 5 Mar 2024 15:38:54 GMT
- Title: Distributed Policy Gradient for Linear Quadratic Networked Control with
Limited Communication Range
- Authors: Yuzi Yan and Yuan Shen
- Abstract summary: We show that it is possible to approximate the exact gradient only using local information.
Compared with the centralized optimal controller, the performance gap decreases to zero exponentially as the communication and control ranges increase.
- Score: 23.500806437272487
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a scalable distributed policy gradient method and proves
its convergence to a near-optimal solution in multi-agent linear quadratic
networked systems. The agents engage within a specified network under local
communication constraints, implying that each agent can only exchange
information with a limited number of neighboring agents. On the underlying
graph of the network, each agent implements its control input depending on its
nearby neighbors' states in the linear quadratic control setting. We show that
it is possible to approximate the exact gradient only using local information.
Compared with the centralized optimal controller, the performance gap decreases
to zero exponentially as the communication and control ranges increase. We also
demonstrate how increasing the communication range enhances system stability in
the gradient descent process, thereby elucidating a critical trade-off. The
simulation results verify our theoretical findings.
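
To make the idea concrete, the following minimal sketch (not the authors' exact algorithm) runs policy gradient on a small discrete-time networked LQR while restricting the feedback gain to a kappa-hop sparsity pattern of the communication graph; the exact gradient is simply projected onto that pattern at each step. The chain network, dynamics matrices, step size, and range kappa below are illustrative assumptions.

```python
# A minimal sketch, assuming a toy 6-agent chain network: policy gradient for a
# discrete-time LQR where the gain K is restricted to a kappa-hop pattern of the
# communication graph and the exact gradient is projected onto that pattern.
import numpy as np


def khop_mask(adjacency: np.ndarray, kappa: int) -> np.ndarray:
    """Boolean mask: entry (i, j) is True iff node j is within kappa hops of node i."""
    n = adjacency.shape[0]
    hop = (adjacency + np.eye(n)) > 0          # 1-hop reachability incl. self
    reach = np.eye(n) > 0
    for _ in range(kappa):
        reach = (reach.astype(int) @ hop.astype(int)) > 0
    return reach


def lyapunov_fixed_point(A_cl, W, iters=400):
    """Solve X = W + A_cl^T X A_cl by fixed-point iteration (valid when A_cl is stable)."""
    X = np.zeros_like(W)
    for _ in range(iters):
        X = W + A_cl.T @ X @ A_cl
    return X


def lqr_cost_and_grad(K, A, B, Q, R, Sigma0):
    """Exact cost J(K) and gradient dJ/dK for u = -K x (standard LQR policy-gradient formulas)."""
    A_cl = A - B @ K
    P = lyapunov_fixed_point(A_cl, Q + K.T @ R @ K)     # value matrix
    Sigma = lyapunov_fixed_point(A_cl.T, Sigma0)        # aggregate state covariance
    grad = 2.0 * ((R + B.T @ P @ B) @ K - B.T @ P @ A) @ Sigma
    return np.trace(P @ Sigma0), grad


# Illustrative 6-agent chain network, one state and one input per agent (assumptions).
n = 6
adj = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
A = 0.4 * np.eye(n) + 0.1 * adj       # weakly coupled, open-loop stable dynamics
B = np.eye(n)
Q, R, Sigma0 = np.eye(n), np.eye(n), np.eye(n)

kappa = 2                             # communication / control range
mask = khop_mask(adj, kappa)          # K[i, j] may be nonzero only within kappa hops

K = np.zeros((n, n))                  # the zero gain is stabilizing here
for _ in range(200):
    _, grad = lqr_cost_and_grad(K, A, B, Q, R, Sigma0)
    K -= 0.05 * np.where(mask, grad, 0.0)   # projected (truncated) gradient step

J_kappa, _ = lqr_cost_and_grad(K, A, B, Q, R, Sigma0)
print(f"cost of the kappa={kappa} constrained gain: {J_kappa:.4f}")
```

Increasing kappa enlarges the allowed sparsity pattern, so the constrained cost should approach the centralized LQR cost, in the spirit of the exponential gap decay stated in the abstract.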
Related papers
- Decentralized Federated Learning with Gradient Tracking over Time-Varying Directed Networks [42.92231921732718]
We propose a consensus-based algorithm called DSGTm-TV.
It incorporates gradient tracking and heavy-ball momentum to optimize a global objective function.
Under DSGTm-TV, agents update local model parameters and gradient estimates by exchanging information with neighboring agents; a generic gradient-tracking sketch in this spirit appears after this list.
arXiv Detail & Related papers (2024-09-25T06:23:16Z)
- Networked Communication for Mean-Field Games with Function Approximation and Empirical Mean-Field Estimation [59.01527054553122]
Decentralised agents can learn equilibria in Mean-Field Games from a single, non-episodic run of the empirical system.
We introduce function approximation to the existing setting, drawing on the Munchausen Online Mirror Descent method.
We additionally provide new algorithms that allow agents to estimate the global empirical distribution based on a local neighbourhood.
arXiv Detail & Related papers (2024-08-21T13:32:46Z)
- Accelerating Distributed Optimization: A Primal-Dual Perspective on Local Steps [4.471962177124311]
In distributed machine learning, coordinating variables across multiple agents that hold different data poses significant challenges.
In this paper, we show that a framework achieving Lagrangian convergence on the primal variable requires no inter-agent communication.
arXiv Detail & Related papers (2024-07-02T22:14:54Z)
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
- Unsupervised Graph-based Learning Method for Sub-band Allocation in 6G Subnetworks [2.0583251142940377]
We present an unsupervised approach for frequency sub-band allocation in wireless networks using graph-based learning.
We model the subnetwork deployment as a conflict graph and propose an unsupervised learning approach inspired by graph colouring and the Potts model to optimize the sub-band allocation; a toy Potts-energy relaxation is sketched after this list.
arXiv Detail & Related papers (2023-12-13T12:57:55Z)
- Communication-Efficient Zeroth-Order Distributed Online Optimization: Algorithm, Theory, and Applications [9.045332526072828]
This paper focuses on a multi-agent zeroth-order online optimization problem in a federated learning setting for target tracking.
The proposed solution is further analyzed in terms of the resulting errors in two relevant applications.
arXiv Detail & Related papers (2023-06-09T03:51:45Z)
- Compressed Regression over Adaptive Networks [58.79251288443156]
We derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
We devise an optimized allocation strategy where the parameters necessary for the optimization can be learned online by the agents.
arXiv Detail & Related papers (2023-04-07T13:41:08Z)
- Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach [54.311495894129585]
We study the limit of communication cost of model aggregation in distributed learning from a rate-distortion perspective.
It is found that the communication gain by exploiting the correlation between worker nodes is significant for SignSGD.
arXiv Detail & Related papers (2022-06-28T13:10:40Z)
- Learning Resilient Radio Resource Management Policies with Graph Neural Networks [124.89036526192268]
We formulate a resilient radio resource management problem with per-user minimum-capacity constraints.
We show that we can parameterize the user selection and power control policies using a finite set of parameters.
Thanks to such adaptation, our proposed method achieves a superior tradeoff between the average rate and the 5th percentile rate.
arXiv Detail & Related papers (2022-03-07T19:40:39Z)
- Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
arXiv Detail & Related papers (2021-12-03T19:23:48Z)
- Distributed Voltage Regulation of Active Distribution System Based on Enhanced Multi-agent Deep Reinforcement Learning [9.7314654861242]
This paper proposes a data-driven distributed voltage control approach based on spectrum clustering and an enhanced multi-agent deep reinforcement learning (MADRL) algorithm.
The proposed method can significantly reduce the requirements of communications and knowledge of system parameters.
It also effectively deals with uncertainties and can provide online coordinated control based on the latest local information.
arXiv Detail & Related papers (2020-05-31T15:48:27Z)
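
For the gradient-tracking entry above (DSGTm-TV), the following sketch shows a generic decentralized gradient-tracking update with heavy-ball momentum on a static undirected ring; it is not DSGTm-TV itself, which additionally handles time-varying directed graphs. The ring topology, mixing weights, local objectives, and step sizes are all illustrative assumptions.

```python
# A generic decentralized gradient-tracking sketch with heavy-ball momentum,
# loosely in the spirit of the DSGTm-TV entry above (NOT that algorithm itself).
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 8, 4

# Simple local objectives f_i(x) = 0.5 * w_i * ||x - c_i||^2 (assumptions);
# the network-wide goal is to minimize the sum of all f_i.
c = rng.normal(size=(n_agents, dim))
w = rng.uniform(0.5, 1.5, size=n_agents)
grad = lambda i, x: w[i] * (x - c[i])

# Doubly-stochastic mixing matrix for a ring graph (uniform neighbor weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 1.0 / 3.0

alpha, beta = 0.1, 0.3                 # step size and heavy-ball momentum
x = np.zeros((n_agents, dim))          # local iterates, one row per agent
x_prev = x.copy()
y = np.stack([grad(i, x[i]) for i in range(n_agents)])  # gradient trackers

for _ in range(500):
    # Mix with neighbors, descend along the tracked gradient, add momentum.
    x_new = W @ x - alpha * y + beta * (x - x_prev)
    # Mix trackers and add the local gradient innovation (gradient tracking).
    y = W @ y + np.stack([grad(i, x_new[i]) - grad(i, x[i])
                          for i in range(n_agents)])
    x_prev, x = x, x_new

x_star = (w[:, None] * c).sum(axis=0) / w.sum()   # minimizer of sum_i f_i
print("max consensus error :", np.abs(x - x.mean(axis=0)).max())
print("distance to optimum :", np.linalg.norm(x.mean(axis=0) - x_star))
```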
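
For the sub-band allocation entry above, here is a toy relaxation of the Potts-model / graph-colouring idea on an assumed random conflict graph; it is not that paper's unsupervised GNN method, only an illustration of minimizing a relaxed Potts energy so that interfering subnetworks avoid sharing a band.

```python
# Toy Potts-energy relaxation for sub-band allocation on an assumed conflict graph
# (illustrative only; not the referenced paper's GNN-based approach).
import numpy as np

rng = np.random.default_rng(1)
n_subnets, n_bands = 12, 3

# Random conflict graph: edge (i, j) means i and j interfere if they share a band.
conflict = np.triu(rng.random((n_subnets, n_subnets)) < 0.3, 1)
edges = np.argwhere(conflict)

logits = rng.normal(size=(n_subnets, n_bands))   # unconstrained parameters


def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)


lr = 0.2
for _ in range(500):
    p = softmax(logits)                          # soft band assignments
    # Relaxed Potts energy: sum over edges of P(same band) = p_i . p_j
    grad_p = np.zeros_like(p)
    for i, j in edges:
        grad_p[i] += p[j]
        grad_p[j] += p[i]
    # Backprop through the row-wise softmax: dE/dz = p * (grad_p - sum(p * grad_p)).
    inner = (p * grad_p).sum(axis=1, keepdims=True)
    logits -= lr * p * (grad_p - inner)

bands = softmax(logits).argmax(axis=1)           # hardened allocation
conflicts = sum(bands[i] == bands[j] for i, j in edges)
print("sub-band per subnetwork:", bands)
print("remaining same-band conflicts:", int(conflicts))
```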
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.