CAMMARL: Conformal Action Modeling in Multi Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2306.11128v2
- Date: Fri, 9 Feb 2024 01:06:17 GMT
- Title: CAMMARL: Conformal Action Modeling in Multi Agent Reinforcement Learning
- Authors: Nikunj Gupta, Somjit Nath and Samira Ebrahimi Kahou
- Abstract summary: We propose CAMMARL, a novel multi-agent reinforcement learning algorithm.
It involves modeling the actions of other agents in different situations in the form of confident sets.
We show that CAMMARL elevates the capabilities of an autonomous agent in MARL by modeling conformal prediction sets.
- Score: 5.865719902445064
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Before taking actions in an environment with more than one intelligent agent,
an autonomous agent may benefit from reasoning about the other agents and
utilizing a notion of a guarantee or confidence about the behavior of the
system. In this article, we propose a novel multi-agent reinforcement learning
(MARL) algorithm CAMMARL, which involves modeling the actions of other agents
in different situations in the form of confident sets, i.e., sets containing
their true actions with a high probability. We then use these estimates to
inform an agent's decision-making. To estimate such sets, we use conformal
prediction, by means of which we not only obtain an estimate of the most
probable outcome but also quantify the operable uncertainty. For instance, we
can predict a set that provably covers the true action with high probability
(e.g., 95%). Through several experiments in two fully
cooperative multi-agent tasks, we show that CAMMARL elevates the capabilities
of an autonomous agent in MARL by modeling conformal prediction sets over the
behavior of other agents in the environment and utilizing such estimates to
enhance its policy learning.
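The abstract does not spell out the construction, but the coverage property it cites (a set containing the true action with probability at least 1 - alpha, e.g., 95%) is the standard split conformal guarantee. Below is a minimal sketch, assuming a learned agent model that outputs probabilities over a discrete action space; the function name, the nonconformity score (one minus the probability assigned to the true action), and the toy data are illustrative assumptions, not CAMMARL's exact design.

```python
import numpy as np

def conformal_action_sets(cal_probs, cal_actions, test_probs, alpha=0.05):
    """Split conformal prediction over a discrete action space.

    cal_probs   : (n, A) action probabilities from the learned agent model
                  on a held-out calibration set
    cal_actions : (n,) actions the modeled agent actually took
    test_probs  : (m, A) model probabilities for new observations
    alpha       : miscoverage level; alpha=0.05 targets 95% coverage
    """
    n = len(cal_actions)
    # Nonconformity score: 1 minus the probability given to the true action.
    scores = 1.0 - cal_probs[np.arange(n), cal_actions]
    # Conformal quantile with the finite-sample (n + 1) correction.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, level, method="higher")
    # Keep every action whose score is within the calibrated threshold;
    # the resulting set contains the true action with prob. >= 1 - alpha.
    return [np.flatnonzero(1.0 - p <= qhat) for p in test_probs]

# Toy usage with random stand-ins for a trained agent model's outputs:
rng = np.random.default_rng(0)
sets = conformal_action_sets(
    cal_probs=rng.dirichlet(np.ones(5), size=200),
    cal_actions=rng.integers(0, 5, size=200),
    test_probs=rng.dirichlet(np.ones(5), size=3),
)
print(sets)  # e.g., [array([0, 2, 4]), array([1, 3]), ...]
```

Per the abstract, these set estimates then inform the ego agent's decision-making; how exactly they enter the policy input is not specified here, though a larger set naturally signals higher uncertainty about the other agent's behavior.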
Related papers
- Episodic Future Thinking Mechanism for Multi-agent Reinforcement Learning [2.992602379681373]
We introduce an episodic future thinking (EFT) mechanism for a reinforcement learning (RL) agent.
We first develop a multi-character policy that captures diverse characters with an ensemble of heterogeneous policies.
Once the character is inferred, the agent predicts the upcoming actions of target agents and simulates the potential future scenario.
arXiv Detail & Related papers (2024-10-22T19:12:42Z)
- Contrastive learning-based agent modeling for deep reinforcement learning [31.293496061727932]
Agent modeling is essential when designing adaptive policies for intelligent machine agents in multiagent systems.
We devised a Contrastive Learning-based Agent Modeling (CLAM) method that relies only on the local observations from the ego agent during training and execution.
CLAM is capable of generating consistent high-quality policy representations in real-time right from the beginning of each episode.
arXiv Detail & Related papers (2023-12-30T03:44:12Z)
- DCIR: Dynamic Consistency Intrinsic Reward for Multi-Agent Reinforcement Learning [84.22561239481901]
We propose a new approach that enables agents to learn whether their behaviors should be consistent with that of other agents.
We evaluate DCIR in multiple environments including Multi-agent Particle, Google Research Football and StarCraft II Micromanagement.
arXiv Detail & Related papers (2023-12-10T06:03:57Z)
- On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring [105.13668993076801]
A central problem in the theory of multi-agent reinforcement learning (MARL) is to understand what structural conditions and algorithmic principles lead to sample-efficient learning guarantees.
We study this question in a general framework for interactive decision making with multiple agents.
We show that characterizing the statistical complexity for multi-agent decision making is equivalent to characterizing the statistical complexity of single-agent decision making.
arXiv Detail & Related papers (2023-05-01T06:46:22Z)
- PAC: Assisted Value Factorisation with Counterfactual Predictions in Multi-Agent Reinforcement Learning [43.862956745961654]
Multi-agent reinforcement learning (MARL) has witnessed significant progress with the development of value function factorization methods.
In this paper, we show that in partially observable MARL problems, an agent's ordering over its own actions could impose concurrent constraints.
We propose PAC, a new framework leveraging information generated from Counterfactual Predictions of optimal joint action selection.
arXiv Detail & Related papers (2022-06-22T23:34:30Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining, from the infinitely many predictions the agent could possibly make, which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Differential Assessment of Black-Box AI Agents [29.98710357871698]
We propose a novel approach to differentially assess black-box AI agents that have drifted from their previously known models.
We leverage sparse observations of the drifted agent's current behavior and knowledge of its initial model to generate an active querying policy.
Empirical evaluation shows that our approach is much more efficient than re-learning the agent model from scratch.
arXiv Detail & Related papers (2022-03-24T17:48:58Z)
- Deceptive Decision-Making Under Uncertainty [25.197098169762356]
We study the design of autonomous agents that are capable of deceiving outside observers about their intentions while carrying out tasks.
By modeling the agent's behavior as a Markov decision process, we consider a setting where the agent aims to reach one of multiple potential goals.
We propose a novel approach to model observer predictions based on the principle of maximum entropy and to efficiently generate deceptive strategies.
arXiv Detail & Related papers (2021-09-14T14:56:23Z)
- Instance-Aware Predictive Navigation in Multi-Agent Environments [93.15055834395304]
We propose an Instance-Aware Predictive Control (IPC) approach, which forecasts interactions between agents as well as future scene structures.
We adopt a novel multi-instance event prediction module to estimate the possible interaction among agents in the ego-centric view.
We design a sequential action sampling strategy to better leverage predicted states on both scene-level and instance-level.
arXiv Detail & Related papers (2021-01-14T22:21:25Z)
- Deep Interactive Bayesian Reinforcement Learning via Meta-Learning [63.96201773395921]
The optimal adaptive behaviour under uncertainty over the other agents' strategies can be computed using the Interactive Bayesian Reinforcement Learning framework.
We propose to meta-learn approximate belief inference and Bayes-optimal behaviour for a given prior.
We show empirically that our approach outperforms existing methods that use a model-free approach, sample from the approximate posterior, maintain memory-free models of others, or do not fully utilise the known structure of the environment.
arXiv Detail & Related papers (2021-01-11T13:25:13Z)
- Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.