A Hierarchical Game-Theoretic Decision-Making for Cooperative
Multi-Agent Systems Under the Presence of Adversarial Agents
- URL: http://arxiv.org/abs/2303.16641v1
- Date: Tue, 28 Mar 2023 15:16:23 GMT
- Title: A Hierarchical Game-Theoretic Decision-Making for Cooperative
Multi-Agent Systems Under the Presence of Adversarial Agents
- Authors: Qin Yang and Ramviyas Parasuraman
- Abstract summary: Underlying relationships among Multi-Agent Systems (MAS) in hazardous scenarios can be represented as Game-theoretic models.
This paper proposes a new hierarchical network-based model called Game-theoretic Utility Tree (GUT).
It decomposes high-level strategies into executable low-level actions for cooperative MAS decisions.
- Score: 1.52292571922932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underlying relationships among Multi-Agent Systems (MAS) in hazardous
scenarios can be represented as Game-theoretic models. This paper proposes a
new hierarchical network-based model called Game-theoretic Utility Tree (GUT),
which decomposes high-level strategies into executable low-level actions for
cooperative MAS decisions. It incorporates a new payoff measure based on agent
needs for real-time strategy games. We present an Explore game domain, where we
measure the performance of MAS achieving tasks from the perspective of
balancing the success probability and system costs. We evaluate the GUT
approach against state-of-the-art methods that greedily rely on rewards of the
composite actions. Conclusive results on extensive numerical simulations
indicate that GUT can organize more complex relationships among MAS
cooperation, helping the group achieve challenging tasks with lower costs and
higher winning rates. Furthermore, we demonstrated the applicability of the GUT
using the simulator-hardware testbed Robotarium. The results verified the
effectiveness of the GUT in real-robot applications and validated that the GUT
can effectively organize MAS cooperation strategies, helping a group with fewer
advantages achieve higher performance.
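As a rough illustration of the decomposition idea, a utility tree can be pictured as internal nodes for high-level strategies and leaves for executable low-level actions, with the group picking the highest-utility branch at each level. This is a minimal sketch only; the node names, payoffs, and flat greedy walk below are hypothetical, not the paper's actual GUT model or payoff measure:

```python
# Hypothetical two-level utility tree: strategy nodes decompose into
# executable actions, and selection greedily follows the highest payoff.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    payoff: float = 0.0                      # illustrative group utility
    children: list = field(default_factory=list)


def best_leaf(node: Node) -> str:
    """Walk the tree: at each level pick the child with the highest
    payoff, turning a high-level strategy into a low-level action."""
    while node.children:
        node = max(node.children, key=lambda c: c.payoff)
    return node.name


root = Node("explore", children=[
    Node("attack", payoff=0.4, children=[
        Node("flank", payoff=0.6), Node("charge", payoff=0.3)]),
    Node("defend", payoff=0.7, children=[
        Node("regroup", payoff=0.5), Node("hold", payoff=0.8)]),
])
```

Here `best_leaf(root)` first selects the "defend" strategy, then its "hold" action; in the actual GUT, payoffs would come from game-theoretic equilibria rather than fixed constants.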
Related papers
- Human-Agent Coordination in Games under Incomplete Information via Multi-Step Intent [21.170542003568674]
Strategic coordination between autonomous agents and human partners can be modeled as turn-based cooperative games.
We extend a turn-based game under incomplete information to allow players to take multiple actions per turn rather than a single action.
arXiv Detail & Related papers (2024-10-23T19:37:19Z)
- Efficient Adaptation in Mixed-Motive Environments via Hierarchical Opponent Modeling and Planning [51.52387511006586]
We propose Hierarchical Opponent modeling and Planning (HOP), a novel multi-agent decision-making algorithm.
HOP is hierarchically composed of two modules: an opponent modeling module that infers others' goals and learns corresponding goal-conditioned policies, and a planning module.
HOP exhibits superior few-shot adaptation capabilities when interacting with various unseen agents, and excels in self-play scenarios.
arXiv Detail & Related papers (2024-06-12T08:48:06Z)
- Reaching Consensus in Cooperative Multi-Agent Reinforcement Learning with Goal Imagination [16.74629849552254]
We propose a model-based consensus mechanism to explicitly coordinate multiple agents.
The proposed Multi-agent Goal Imagination (MAGI) framework guides agents to reach consensus with an Imagined common goal.
We show that such an efficient consensus mechanism can guide all agents to cooperatively reach valuable future states.
arXiv Detail & Related papers (2024-03-05T18:07:34Z)
- Aligning Individual and Collective Objectives in Multi-Agent Cooperation [18.082268221987956]
Mixed-motive cooperation is one of the most prominent challenges in multi-agent learning.
We introduce a novel optimization method named Altruistic Gradient Adjustment (AgA) that employs gradient adjustments to progressively align individual and collective objectives.
We evaluate the effectiveness of our algorithm AgA through benchmark environments for testing mixed-motive collaboration with small-scale agents.
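The general idea of blending individual and collective objectives via gradient adjustment can be sketched as follows; this toy version simply nudges each agent's gradient toward the collective one, and the blending rule and weight `lam` are assumptions, not the paper's actual AgA update:

```python
# Toy gradient adjustment: mix an agent's individual-objective gradient
# with the collective-objective gradient (the real AgA rule is more involved).
import numpy as np


def adjusted_gradient(g_individual, g_collective, lam=0.5):
    """Return a gradient that follows the individual objective while
    being pulled toward the collective objective by weight lam."""
    return g_individual + lam * g_collective


g_i = np.array([1.0, 0.0])   # individual objective pulls along x
g_c = np.array([0.0, 1.0])   # collective objective pulls along y
```

With `lam=0.5`, the adjusted update keeps the full individual direction while adding half the collective direction; larger `lam` trades individual progress for alignment.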
arXiv Detail & Related papers (2024-02-19T08:18:53Z)
- Cooperation Dynamics in Multi-Agent Systems: Exploring Game-Theoretic Scenarios with Mean-Field Equilibria [0.0]
This paper investigates strategies to invoke cooperation in game-theoretic scenarios, namely the Iterated Prisoner's Dilemma.
Existing cooperative strategies are analyzed for their effectiveness in promoting group-oriented behavior in repeated games.
The study extends to scenarios with exponentially growing agent populations.
arXiv Detail & Related papers (2023-09-28T08:57:01Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors [93.38830440346783]
We propose a multi-agent framework, AgentVerse, that can collaboratively adjust its composition as a greater-than-the-sum-of-its-parts system.
Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent.
In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups.
arXiv Detail & Related papers (2023-08-21T16:47:11Z)
- Learning Reward Machines in Cooperative Multi-Agent Tasks [75.79805204646428]
This paper presents a novel approach to Multi-Agent Reinforcement Learning (MARL).
It combines cooperative task decomposition with the learning of reward machines (RMs) encoding the structure of the sub-tasks.
The proposed method helps deal with the non-Markovian nature of the rewards in partially observable environments.
arXiv Detail & Related papers (2023-03-24T15:12:28Z)
- HAVEN: Hierarchical Cooperative Multi-Agent Reinforcement Learning with Dual Coordination Mechanism [17.993973801986677]
Multi-agent reinforcement learning often suffers from the exponentially larger action space caused by a large number of agents.
We propose a novel value decomposition framework HAVEN based on hierarchical reinforcement learning for the fully cooperative multi-agent problems.
arXiv Detail & Related papers (2021-10-14T10:43:47Z)
- Softmax with Regularization: Better Value Estimation in Multi-Agent Reinforcement Learning [72.28520951105207]
Overestimation in $Q$-learning is an important problem that has been extensively studied in single-agent reinforcement learning.
We propose a novel regularization-based update scheme that penalizes large joint action-values deviating from a baseline.
We show that our method provides a consistent performance improvement on a set of challenging StarCraft II micromanagement tasks.
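A regularized value update of this flavor can be sketched in a single-state toy setting: the step below adds a penalty pulling the updated action-value toward a softmax-weighted baseline, discouraging overestimated values. The exact penalty form, temperature, and constants here are assumptions for illustration, not the paper's formulation:

```python
# Toy Q-learning step with a softmax-baseline regularizer that penalizes
# action-values deviating far from a soft average of all action-values.
import numpy as np


def softmax_baseline(q, tau=1.0):
    """Softmax-weighted average of action-values (a soft baseline)."""
    w = np.exp((q - q.max()) / tau)   # shift by max for numerical stability
    w /= w.sum()
    return float(w @ q)


def regularized_update(q, a, reward, q_next, alpha=0.1, gamma=0.99, lam=0.5):
    """One update of q[a]: the usual TD error plus a penalty term that
    pulls q[a] toward the softmax baseline over current action-values."""
    target = reward + gamma * q_next.max()
    td_error = target - q[a]
    penalty = softmax_baseline(q) - q[a]      # regularizer toward baseline
    q = q.copy()
    q[a] += alpha * (td_error + lam * penalty)
    return q


q_new = regularized_update(np.zeros(3), 0, reward=1.0, q_next=np.zeros(3))
```

Starting from all-zero values, only the taken action's value moves, and by the plain TD amount, since the baseline penalty vanishes when all values are equal.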
arXiv Detail & Related papers (2020-06-17T15:38:40Z)
- Forgetful Experience Replay in Hierarchical Reinforcement Learning from Demonstrations [55.41644538483948]
In this paper, we propose a combination of approaches that allow the agent to use low-quality demonstrations in complex vision-based environments.
Our proposed goal-oriented structuring of replay buffer allows the agent to automatically highlight sub-goals for solving complex hierarchical tasks in demonstrations.
The solution based on our algorithm beats all other entries in the well-known MineRL competition and allows the agent to mine a diamond in the Minecraft environment.
arXiv Detail & Related papers (2020-06-17T15:38:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.