Asymptotic Convergence of Deep Multi-Agent Actor-Critic Algorithms
- URL: http://arxiv.org/abs/2201.00570v1
- Date: Mon, 3 Jan 2022 10:33:52 GMT
- Title: Asymptotic Convergence of Deep Multi-Agent Actor-Critic Algorithms
- Authors: Adrian Redder, Arunselvan Ramaswamy, Holger Karl
- Abstract summary: We present sufficient conditions that ensure convergence of the multi-agent Deep Deterministic Policy Gradient (DDPG) algorithm.
It is an example of one of the most popular paradigms of Deep Reinforcement Learning (DeepRL) for tackling continuous action spaces: the actor-critic paradigm.
- Score: 0.6961253535504979
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present sufficient conditions that ensure convergence of the multi-agent
Deep Deterministic Policy Gradient (DDPG) algorithm. It is an example of one of
the most popular paradigms of Deep Reinforcement Learning (DeepRL) for tackling
continuous action spaces: the actor-critic paradigm. In the setting considered
herein, each agent observes a part of the global state space in order to take
local actions, for which it receives local rewards. For every agent, DDPG
trains a local actor (policy) and a local critic (Q-function). The analysis
shows that the local policies and critics trained by multi-agent DDPG with
neural-network function approximation converge to limits with the following properties: The
critic limits minimize the average squared Bellman loss; the actor limits
parameterize a policy that maximizes the local critic's approximation of
$Q_i^*$, where $i$ is the agent index. The averaging is with respect to a
probability distribution over the global state-action space. It captures the
asymptotics of all local training processes. Finally, we extend the analysis to
a fully decentralized setting where agents communicate over a wireless network
prone to delays and losses; a typical scenario in, e.g., robotic applications.
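For orientation, the following is a minimal sketch of the two limiting objectives described above, written in standard multi-agent DDPG notation rather than the paper's own symbols; $\theta_i$ and $\phi_i$ (critic and actor parameters), $o_i(s)$ (agent $i$'s partial observation), $\mu$ (the distribution over the global state-action space), $P$ (the transition kernel), and $\gamma$ (the discount factor) are all assumptions made here for illustration.

```latex
% Sketch only: standard multi-agent DDPG notation, not the paper's own symbols.
\begin{align}
  % Critic limit for agent i: minimizer of the average squared Bellman loss,
  % averaged with respect to the global state-action distribution \mu.
  L_i(\theta_i) &= \mathbb{E}_{(s,a) \sim \mu,\; s' \sim P(\cdot \mid s,a)}
    \Big[ \big( Q_{\theta_i}(s,a) - r_i(s,a)
      - \gamma\, Q_{\theta_i}\big(s', \pi_{\phi_1}(o_1(s')), \dots, \pi_{\phi_N}(o_N(s'))\big) \big)^2 \Big], \\
  % Actor limit for agent i: parameters \phi_i maximizing the local critic's
  % approximation of Q_i^*, each agent acting on its partial observation.
  J_i(\phi_i) &= \mathbb{E}_{s \sim \mu}
    \Big[ Q_{\theta_i}\big(s, \pi_{\phi_1}(o_1(s)), \dots, \pi_{\phi_N}(o_N(s))\big) \Big].
\end{align}
```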
Related papers
- Networked Communication for Mean-Field Games with Function Approximation and Empirical Mean-Field Estimation [59.01527054553122]
Decentralised agents can learn equilibria in Mean-Field Games from a single, non-episodic run of the empirical system.
We introduce function approximation to the existing setting, drawing on the Munchausen Online Mirror Descent method.
We additionally provide new algorithms that allow agents to estimate the global empirical distribution based on a local neighbourhood.
arXiv Detail & Related papers (2024-08-21T13:32:46Z) - Efficient Reinforcement Learning for Global Decision Making in the Presence of Local Agents at Scale [5.3526997662068085]
We study reinforcement learning for global decision-making in the presence of local agents.
In this setting, scalability has been a long-standing challenge due to the size of the state space.
We show that this learned policy converges to the optimal policy in the order of $\tilde{O}(1/\sqrt{k} + \epsilon_{k,m})$ as the number of sub-sampled agents increases.
arXiv Detail & Related papers (2024-03-01T01:49:57Z) - Distributed Optimization via Kernelized Multi-armed Bandits [6.04275169308491]
We model a distributed optimization problem as a multi-agent kernelized multi-armed bandit problem with a heterogeneous reward setting.
We present a fully decentralized algorithm, Multi-agent IGP-UCB (MA-IGP-UCB), which achieves a sub-linear regret bound for popular classes of kernels.
We also propose an extension, Multi-agent Delayed IGP-UCB (MAD-IGP-UCB) algorithm, which reduces the dependence of the regret bound on the number of agents in the network.
arXiv Detail & Related papers (2023-12-07T21:57:48Z) - Local Optimization Achieves Global Optimality in Multi-Agent
Reinforcement Learning [139.53668999720605]
We present a multi-agent PPO algorithm in which the local policy of each agent is updated similarly to vanilla PPO.
We prove that with standard regularity conditions on the Markov game and problem-dependent quantities, our algorithm converges to the globally optimal policy at a sublinear rate.
arXiv Detail & Related papers (2023-05-08T16:20:03Z) - Convergence Rates for Localized Actor-Critic in Networked Markov
Potential Games [12.704529528199062]
We introduce a class of networked Markov potential games in which agents are associated with nodes in a network.
Each agent has its own local potential function, and the reward of each agent depends only on the states and actions of the agents within a neighborhood.
This is the first finite-sample bound for multi-agent competitive games that does not depend on the number of agents.
arXiv Detail & Related papers (2023-03-08T20:09:58Z) - Scalable Multi-Agent Reinforcement Learning with General Utilities [30.960413388976438]
We study scalable multi-agent reinforcement learning (MARL) with general utilities.
The objective is to find a localized policy that maximizes the average of the team's local utility functions without requiring full observability of each agent in the team.
This is the first result in the literature on multi-agent RL with general utilities that does not require full observability.
arXiv Detail & Related papers (2023-02-15T20:47:43Z) - Global Convergence of Localized Policy Iteration in Networked
Multi-Agent Reinforcement Learning [25.747559058350557]
We study a multi-agent reinforcement learning (MARL) problem where the agents interact over a given network.
The goal of the agents is to cooperatively maximize the average of their entropy-regularized long-term rewards.
To overcome the curse of dimensionality and to reduce communication, we propose a Localized Policy Iteration (LPI) that provably learns a near-globally-optimal policy using only local information.
arXiv Detail & Related papers (2022-11-30T15:58:00Z) - Plan Better Amid Conservatism: Offline Multi-Agent Reinforcement
Learning with Actor Rectification [74.10976684469435]
Whether offline reinforcement learning (RL) algorithms can be transferred directly to multi-agent settings remains an open question.
We propose a simple yet effective method, Offline Multi-Agent RL with Actor Rectification (OMAR), to tackle this critical challenge.
OMAR significantly outperforms strong baselines with state-of-the-art performance in multi-agent continuous control benchmarks.
arXiv Detail & Related papers (2021-11-22T13:27:42Z) - Distributed Q-Learning with State Tracking for Multi-agent Networked
Control [61.63442612938345]
This paper studies distributed Q-learning for Linear Quadratic Regulator (LQR) in a multi-agent network.
We devise a state tracking (ST) based Q-learning algorithm to design optimal controllers for agents.
arXiv Detail & Related papers (2020-12-22T22:03:49Z) - Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy [122.01837436087516]
We study the global convergence and global optimality of actor-critic, one of the most popular families of reinforcement learning algorithms.
We establish the rate of convergence and global optimality of single-timescale actor-critic with linear function approximation for the first time.
arXiv Detail & Related papers (2020-08-02T14:01:49Z) - Implicit Distributional Reinforcement Learning [61.166030238490634]
Implicit distributional actor-critic (IDAC) is built on two deep generator networks (DGNs) and a semi-implicit actor (SIA) powered by a flexible policy distribution.
We observe that IDAC outperforms state-of-the-art algorithms on representative OpenAI Gym environments.
arXiv Detail & Related papers (2020-07-13T02:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.