K-SHAP: Policy Clustering Algorithm for Anonymous Multi-Agent
State-Action Pairs
- URL: http://arxiv.org/abs/2302.11996v5
- Date: Mon, 26 Jun 2023 12:36:03 GMT
- Title: K-SHAP: Policy Clustering Algorithm for Anonymous Multi-Agent
State-Action Pairs
- Authors: Andrea Coletta, Svitlana Vyetrenko, Tucker Balch
- Abstract summary: In financial markets, labeled data that identifies market participant strategies is typically proprietary.
In this paper, we propose a Policy Clustering algorithm that learns to group anonymous state-action pairs according to the agent policies.
- Score: 1.2891210250935146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning agent behaviors from observational data has been shown to improve our
understanding of their decision-making processes, advancing our ability to
explain their interactions with the environment and other agents. While
multiple learning techniques have been proposed in the literature, there is one
particular setting that has not been explored yet: multi-agent systems where
agent identities remain anonymous. For instance, in financial markets, labeled
data that identifies market participant strategies is typically proprietary,
and only the anonymous state-action pairs that result from the interaction of
multiple market participants are publicly available. As a result, sequences of
agent actions are not observable, restricting the applicability of existing
work. In this paper, we propose a Policy Clustering algorithm, called K-SHAP,
that learns to group anonymous state-action pairs according to the agent
policies. We frame the problem as an Imitation Learning (IL) task, and we learn
a world-policy able to mimic all the agent behaviors across different
environmental states. We leverage the world-policy to explain each anonymous
observation through an additive feature attribution method called SHAP (SHapley
Additive exPlanations). Finally, by clustering the explanations we show that we
are able to identify different agent policies and group observations
accordingly. We evaluate our approach on simulated synthetic market data and a
real-world financial dataset. We show that our proposal significantly and
consistently outperforms the existing methods, identifying different agent
strategies.
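The three-step pipeline described in the abstract (imitation learning of a world-policy, SHAP attribution of each state-action pair, and clustering of the attributions) can be sketched in a few lines of code. The snippet below is a minimal illustration, not the authors' implementation: it assumes a scikit-learn MLP as the world-policy, shap.KernelExplainer for the attributions, k-means for the clustering, and a scalar action space; the paper's actual models, explainer, and hyperparameters may differ.

```python
# Minimal sketch of the K-SHAP pipeline, assuming an MLP world-policy,
# shap.KernelExplainer for attributions, and k-means for clustering.
# These choices are illustrative; the paper's implementation may differ.
import shap
from sklearn.neural_network import MLPRegressor
from sklearn.cluster import KMeans


def k_shap(states, actions, n_policies, n_background=100):
    """Group anonymous (state, action) pairs by their latent agent policy.

    Assumes `actions` is a 1-D array (scalar actions) for simplicity.
    """
    # 1) Imitation learning: fit a single world-policy that maps states to
    #    the observed actions of all (anonymous) agents.
    world_policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    world_policy.fit(states, actions)

    # 2) Explanation: attribute each predicted action to the state features,
    #    yielding one SHAP vector per anonymous state-action pair.
    background = shap.sample(states, n_background)
    explainer = shap.KernelExplainer(world_policy.predict, background)
    explanations = explainer.shap_values(states)

    # 3) Clustering: observations generated by the same policy are expected
    #    to receive similar attributions, so cluster the SHAP vectors.
    labels = KMeans(n_clusters=n_policies, n_init=10).fit_predict(explanations)
    return labels
```

In practice, the number of clusters would be set to the assumed number of distinct agent strategies, or chosen with a standard model-selection criterion.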
Related papers
- Enhancing Cooperative Multi-Agent Reinforcement Learning with State Modelling and Adversarial Exploration [9.80657085835352]
Learning to cooperate in distributed partially observable environments poses significant challenges for multi-agent deep reinforcement learning (MARL).
This paper addresses key concerns in this domain, focusing on inferring state representations from individual agent observations.
We propose a novel state modelling framework for cooperative MARL, where agents infer meaningful belief representations of the non-observable state.
We show that SMPE outperforms state-of-the-art MARL algorithms in complex fully cooperative tasks from the MPE, LBF, and RWARE benchmarks.
arXiv Detail & Related papers (2025-05-08T14:07:20Z) - MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents [59.825725526176655]
Large Language Models (LLMs) have shown remarkable capabilities as autonomous agents.
Existing benchmarks either focus on single-agent tasks or are confined to narrow domains, failing to capture the dynamics of multi-agent coordination and competition.
We introduce MultiAgentBench, a benchmark designed to evaluate LLM-based multi-agent systems across diverse, interactive scenarios.
arXiv Detail & Related papers (2025-03-03T05:18:50Z) - Offline Multi-Agent Reinforcement Learning via In-Sample Sequential Policy Optimization [8.877649895977479]
Offline Multi-Agent Reinforcement Learning (MARL) is an emerging field that aims to learn optimal multi-agent policies from pre-collected datasets.
In this work, we revisit the existing offline MARL methods and show that in certain scenarios they can be problematic.
We propose a new offline MARL algorithm, named In-Sample Sequential Policy Optimization (InSPO)
arXiv Detail & Related papers (2024-12-10T16:19:08Z) - Uniting contrastive and generative learning for event sequences models [51.547576949425604]
This study investigates the integration of two self-supervised learning techniques - instance-wise contrastive learning and a generative approach based on restoring masked events in latent space.
Experiments conducted on several public datasets, focusing on sequence classification and next-event type prediction, show that the integrated method achieves superior performance compared to individual approaches.
arXiv Detail & Related papers (2024-08-19T13:47:17Z) - Deep Multi-Agent Reinforcement Learning for Decentralized Active
Hypothesis Testing [11.639503711252663]
We tackle the multi-agent active hypothesis testing (AHT) problem by introducing a novel algorithm rooted in the framework of deep multi-agent reinforcement learning.
We present a comprehensive set of experimental results that effectively showcase the agents' ability to learn collaborative strategies and enhance performance.
arXiv Detail & Related papers (2023-09-14T01:18:04Z) - SACHA: Soft Actor-Critic with Heuristic-Based Attention for Partially
Observable Multi-Agent Path Finding [3.4260993997836753]
We propose a novel multi-agent actor-critic method called Soft Actor-Critic with Heuristic-Based Attention (SACHA)
SACHA learns a neural network for each agent to selectively pay attention to the shortest path guidance from multiple agents within its field of view.
We demonstrate decent improvements over several state-of-the-art learning-based MAPF methods with respect to success rate and solution quality.
arXiv Detail & Related papers (2023-07-05T23:36:33Z) - Graph Exploration for Effective Multi-agent Q-Learning [46.723361065955544]
This paper proposes an exploration technique for multi-agent reinforcement learning (MARL) with graph-based communication among agents.
We assume the individual rewards received by the agents are independent of the actions by the other agents, while their policies are coupled.
In the proposed framework, neighbouring agents collaborate to estimate the uncertainty about the state-action space in order to execute more efficient explorative behaviour.
arXiv Detail & Related papers (2023-04-19T10:28:28Z) - Learning From Good Trajectories in Offline Multi-Agent Reinforcement
Learning [98.07495732562654]
Offline multi-agent reinforcement learning (MARL) aims to learn effective multi-agent policies from pre-collected datasets.
An agent trained by offline MARL can inherit a random policy present in the dataset, jeopardizing the performance of the entire team.
We propose a novel framework called Shared Individual Trajectories (SIT) to address this problem.
arXiv Detail & Related papers (2022-11-28T18:11:26Z) - Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
arXiv Detail & Related papers (2021-12-03T19:23:48Z) - Decentralized Multi-Agent Reinforcement Learning: An Off-Policy Method [6.261762915564555]
We discuss the problem of decentralized multi-agent reinforcement learning (MARL) in this work.
In our setting, the global state, action, and reward are assumed to be fully observable, while each agent's local policy is kept private and thus cannot be shared with others.
The policy evaluation and policy improvement algorithms are designed for discrete and continuous state-action-space Markov Decision Processes (MDPs), respectively.
arXiv Detail & Related papers (2021-10-31T09:08:46Z) - Revisiting Parameter Sharing in Multi-Agent Deep Reinforcement Learning [14.017603575774361]
We formalize the notion of agent indication and, for the first time, prove that it enables convergence to optimal policies.
Next, we formally introduce methods to extend parameter sharing to learning in heterogeneous observation and action spaces.
arXiv Detail & Related papers (2020-05-27T20:14:28Z) - Variational Policy Propagation for Multi-agent Reinforcement Learning [68.26579560607597]
We propose a collaborative multi-agent reinforcement learning algorithm named variational policy propagation (VPP) to learn a joint policy through the interactions over agents.
We prove that the joint policy is a Markov Random Field under some mild conditions, which in turn reduces the policy space effectively.
We integrate variational inference as special differentiable layers in the policy, so that actions can be efficiently sampled from the Markov Random Field and the overall policy remains differentiable.
arXiv Detail & Related papers (2020-04-19T15:42:55Z) - Multi-Agent Interactions Modeling with Correlated Policies [53.38338964628494]
In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework.
We develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL).
Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators.
arXiv Detail & Related papers (2020-01-04T17:31:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.