Shapley Value Based Multi-Agent Reinforcement Learning: Theory, Method
and Its Application to Energy Network
- URL: http://arxiv.org/abs/2402.15324v1
- Date: Fri, 23 Feb 2024 13:43:15 GMT
- Title: Shapley Value Based Multi-Agent Reinforcement Learning: Theory, Method
and Its Application to Energy Network
- Authors: Jianhong Wang
- Abstract summary: This thesis investigates the foundation of credit assignment in multi-agent reinforcement learning via cooperative game theory.
We first extend a game model called convex game and a payoff distribution scheme called Shapley value in cooperative game theory to Markov decision processes.
Based on Markov Shapley value, we propose three multi-agent reinforcement learning algorithms called SHAQ, SQDDPG and SMFPPO.
- Score: 7.50196317304035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-agent reinforcement learning is an area of rapid advancement in
artificial intelligence and machine learning. One of the important questions to
be answered is how to conduct credit assignment in a multi-agent system. There
have been many schemes designed to conduct credit assignment by multi-agent
reinforcement learning algorithms. Although these credit assignment schemes
have proved useful in improving the performance of multi-agent reinforcement
learning, most of them are designed heuristically without a rigorous
theoretical basis, making it difficult to understand how agents cooperate. In
this thesis, we aim to investigate the foundation of credit assignment in
multi-agent reinforcement learning via cooperative game theory.
We first extend a game model called the convex game and a payoff distribution
scheme called the Shapley value in cooperative game theory to Markov decision
processes, named the Markov convex game and Markov Shapley value respectively. We
represent a global reward game as a Markov convex game under the grand
coalition. As a result, Markov Shapley value can be reasonably used as a credit
assignment scheme in the global reward game. Markov Shapley value possesses the
following virtues: (i) efficiency; (ii) identifiability of dummy agents; (iii)
reflection of individual contributions; and (iv) symmetry, which together
constitute fair credit assignment. Based on Markov Shapley value, we propose
three multi-agent
reinforcement learning algorithms called SHAQ, SQDDPG and SMFPPO. Furthermore,
we extend the Markov convex game to partial observability, named the partially
observable Markov convex game. In application, we evaluate SQDDPG and SMFPPO
on a real-world problem in energy networks.
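The fairness virtues listed above (efficiency, dummy-agent identifiability, contribution, symmetry) are classical axioms of the Shapley value in cooperative game theory. As an illustrative sketch only (the game and value function below are hypothetical examples, not from the thesis), the Shapley value of a small game can be computed exactly by averaging each player's marginal contribution over all orderings:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings of the players."""
    shap = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            after = value(frozenset(coalition))
            shap[p] += after - before
    return {p: s / len(orderings) for p, s in shap.items()}

# Hypothetical 3-player game with v(S) = |S|^2, which is
# supermodular, so the game is convex.
v = lambda S: len(S) ** 2
phi = shapley_values(["a", "b", "c"], v)

# Efficiency: credits sum to the grand-coalition value v({a,b,c}) = 9.
print(sum(phi.values()))  # 9.0
# Symmetry: interchangeable players receive equal credit.
print(phi["a"] == phi["b"] == phi["c"])  # True
```

Exact computation enumerates all n! orderings, which is why practical multi-agent algorithms such as those in the thesis rely on learned or sampled approximations instead.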
Related papers
- Shapley Machine: A Game-Theoretic Framework for N-Agent Ad Hoc Teamwork [25.31548780542065]
We investigate a recently proposed problem for open multi-agent systems, referred to as n-agent ad hoc teamwork (NAHT). Existing methods tend to be heuristically designed and consequently lack theoretical rigor, with ambiguous credit assignment among agents. To address these limitations, we model and solve NAHT through the lens of cooperative game theory.
arXiv Detail & Related papers (2025-06-12T20:44:09Z) - SCAR: Shapley Credit Assignment for More Efficient RLHF [57.09358918390916]
Reinforcement Learning from Human Feedback (RLHF) is a widely used technique for aligning Large Language Models with human preferences. It often suffers from sparse reward signals, making effective credit assignment challenging. We propose Shapley Credit Assignment Rewards (SCAR), a novel method that leverages Shapley values from cooperative game theory.
arXiv Detail & Related papers (2025-05-26T18:06:52Z) - Tractable Equilibrium Computation in Markov Games through Risk Aversion [12.980882140751895]
We show that risk-averse quantal response equilibria (RQE) become tractable to compute in all $n$-player matrix and finite-horizon Markov games.
RQE is independent of the underlying game structure and only depends on agents' degree of risk-aversion and bounded rationality.
arXiv Detail & Related papers (2024-06-20T09:53:56Z) - Optimistic Policy Gradient in Multi-Player Markov Games with a Single
Controller: Convergence Beyond the Minty Property [89.96815099996132]
We develop a new framework to characterize optimistic policy gradient methods in multi-player games with a single controller.
Our approach relies on a natural generalization of the classical Minty property that we introduce, which we anticipate to have further applications beyond Markov games.
arXiv Detail & Related papers (2023-12-19T11:34:10Z) - Breaking the Curse of Multiagents in a Large State Space: RL in Markov
Games with Independent Linear Function Approximation [56.715186432566576]
We propose a new model, independent linear Markov game, for reinforcement learning with a large state space and a large number of agents.
We design new algorithms for learning coarse correlated equilibria (CCE) and Markov correlated equilibria (CE) with sample complexity bounds that scale only with each agent's own function class complexity.
Our algorithms rely on two key technical innovations: (1) utilizing policy replay to tackle non-stationarity incurred by multiple agents and the use of function approximation; and (2) separating learning Markov equilibria and exploration in the Markov games.
arXiv Detail & Related papers (2023-02-07T18:47:48Z) - Representation Learning for General-sum Low-rank Markov Games [63.119870889883224]
We study multi-agent general-sum Markov games with nonlinear function approximation.
We focus on low-rank Markov games whose transition matrix admits a hidden low-rank structure on top of an unknown non-linear representation.
arXiv Detail & Related papers (2022-10-30T22:58:22Z) - Collective eXplainable AI: Explaining Cooperative Strategies and Agent
Contribution in Multiagent Reinforcement Learning with Shapley Values [68.8204255655161]
This study proposes a novel approach to explain cooperative strategies in multiagent RL using Shapley values.
Results could have implications for non-discriminatory decision making, ethical and responsible AI-derived decisions or policy making under fairness constraints.
arXiv Detail & Related papers (2021-10-04T10:28:57Z) - Shapley Counterfactual Credits for Multi-Agent Reinforcement Learning [34.856522993714535]
We propose Shapley Counterfactual Credit Assignment, a novel method for explicit credit assignment which accounts for the coalition of agents.
Our method outperforms existing cooperative MARL algorithms significantly and achieves the state-of-the-art, with especially large margins on tasks with more severe difficulties.
arXiv Detail & Related papers (2021-06-01T07:38:34Z) - SHAQ: Incorporating Shapley Value Theory into Q-Learning for Multi-Agent
Reinforcement Learning [40.882696266783505]
We generalise the Shapley value in the coalitional game theory to a Markov convex game (MCG)
We show that the generalised Shapley value possesses several desirable features: (1) accurate estimation of the maximum global value, (2) fairness in the factorisation of the global value, and (3) sensitivity to dummy agents.
The proposed theory yields a new learning algorithm called Shapley Q-learning (SHAQ), which inherits the important merits of ordinary Q-learning but extends it to MARL.
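Because exact Shapley values require enumerating all coalitions, methods in this family typically estimate them by sampling. As an illustrative sketch only (the game, the `skills` value function, and the `mc_shapley` helper are hypothetical, not the SHAQ algorithm itself), a Monte Carlo estimate samples random orderings and averages marginal contributions; the dummy-agent and efficiency properties survive the approximation:

```python
import random

def mc_shapley(players, value, n_samples=2000, seed=0):
    """Monte Carlo Shapley estimate: sample random orderings and
    average each player's marginal contribution."""
    rng = random.Random(seed)
    shap = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = list(players)
        rng.shuffle(order)
        coalition = set()
        prev = value(frozenset(coalition))
        for p in order:
            coalition.add(p)
            cur = value(frozenset(coalition))
            shap[p] += cur - prev
            prev = cur
    return {p: s / n_samples for p, s in shap.items()}

# Hypothetical 4-player game: a coalition's value is the number of
# distinct "skills" it covers; player "d" is a dummy with no skills.
skills = {"a": {1}, "b": {2}, "c": {1, 2}, "d": set()}
v = lambda S: len(set().union(*(skills[p] for p in S))) if S else 0
est = mc_shapley(["a", "b", "c", "d"], v)

# Dummy player receives exactly zero credit in every sampled ordering.
print(est["d"])  # 0.0
```

Each sampled ordering costs n value-function evaluations, trading the n! cost of exact computation for Monte Carlo variance in the estimate.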
arXiv Detail & Related papers (2021-05-31T14:50:52Z) - Softmax with Regularization: Better Value Estimation in Multi-Agent
Reinforcement Learning [72.28520951105207]
Overestimation in $Q$-learning is an important problem that has been extensively studied in single-agent reinforcement learning.
We propose a novel regularization-based update scheme that penalizes large joint action-values deviating from a baseline.
We show that our method provides a consistent performance improvement on a set of challenging StarCraft II micromanagement tasks.
arXiv Detail & Related papers (2021-03-22T14:18:39Z) - A Sharp Analysis of Model-based Reinforcement Learning with Self-Play [49.88233710867315]
We present a sharp analysis of model-based self-play algorithms for multi-agent Markov games.
We design an algorithm -- Optimistic Nash Value Iteration (Nash-VI) for two-player zero-sum Markov games.
We further adapt our analysis to designing a provably efficient task-agnostic algorithm for zero-sum Markov games.
arXiv Detail & Related papers (2020-10-04T15:27:39Z) - Learning Zero-Sum Simultaneous-Move Markov Games Using Function
Approximation and Correlated Equilibrium [116.56359444619441]
We develop provably efficient reinforcement learning algorithms for two-player zero-sum finite-horizon Markov games.
In the offline setting, we control both players and aim to find the Nash Equilibrium by minimizing the duality gap.
In the online setting, we control a single player playing against an arbitrary opponent and aim to minimize the regret.
arXiv Detail & Related papers (2020-02-17T17:04:16Z) - Learning Non-Markovian Reward Models in MDPs [0.0]
We show how to formalise the non-Markovian reward function using a Mealy machine.
In our formal setting, we consider a Markov decision process (MDP) that models the dynamic of the environment in which the agent evolves.
While the MDP is known by the agent, the reward function is unknown to the agent and must be learnt.
arXiv Detail & Related papers (2020-01-25T10:51:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.