Contextual and Possibilistic Reasoning for Coalition Formation
- URL: http://arxiv.org/abs/2006.11097v2
- Date: Tue, 6 Oct 2020 18:45:37 GMT
- Title: Contextual and Possibilistic Reasoning for Coalition Formation
- Authors: Antonis Bikakis, Patrice Caire
- Abstract summary: This article addresses the question of how to find and evaluate coalitions among agents in multiagent systems.
We first compute the solution space for the formation of coalitions using a contextual reasoning approach.
Second, we model agents as contexts in Multi-Context Systems (MCS), and dependence relations among agents seeking to achieve their goals, as bridge rules.
Third, we systematically compute all potential coalitions using algorithms for MCS equilibria, and given a set of functional and non-functional requirements, we propose ways to select the best solutions.
- Score: 0.9790236766474201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multiagent systems, agents often have to rely on other agents to reach
their goals, for example when they lack a needed resource or do not have the
capability to perform a required action. Agents therefore need to cooperate.
Then, some of the questions raised are: Which agent(s) to cooperate with? What
are the potential coalitions in which agents can achieve their goals? As the
number of possibilities is potentially quite large, how to automate the
process? And then, how to select the most appropriate coalition, taking into
account the uncertainty in the agents' abilities to carry out certain tasks? In
this article, we address the question of how to find and evaluate coalitions
among agents in multiagent systems using MCS tools, while taking into
consideration the uncertainty around the agents' actions. Our methodology is
the following: We first compute the solution space for the formation of
coalitions using a contextual reasoning approach. Second, we model agents as
contexts in Multi-Context Systems (MCS), and dependence relations among agents
seeking to achieve their goals, as bridge rules. Third, we systematically
compute all potential coalitions using algorithms for MCS equilibria, and given
a set of functional and non-functional requirements, we propose ways to select
the best solutions. Finally, in order to handle the uncertainty in the agents'
actions, we extend our approach with features of possibilistic reasoning. We
illustrate our approach with an example from robotics.
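The methodology in the abstract can be sketched in code. The following is a minimal, hypothetical illustration only: agents' dependence relations stand in for bridge rules, candidate coalitions are enumerated by brute force (in place of the MCS equilibria algorithms the paper uses), and a min-based possibility degree scores each coalition. All names, the example data, and the brute-force search are assumptions for illustration, not the authors' actual algorithms.

```python
from itertools import chain, combinations

# Hypothetical robotics example: depends[a] is the set of agents whose
# cooperation agent a needs to achieve its goal (bridge-rule analogue).
depends = {
    "robot1": {"robot2"},           # robot1 needs robot2 (e.g., to carry a part)
    "robot2": set(),                # robot2 is self-sufficient
    "robot3": {"robot1", "robot2"}  # robot3 needs both others
}

# Possibility degree that each agent successfully performs its action.
possibility = {"robot1": 0.9, "robot2": 0.8, "robot3": 0.6}

def is_coalition(agents):
    """A set of agents is a candidate coalition if every member's
    dependencies are satisfied inside the set."""
    return all(depends[a] <= agents for a in agents)

def coalitions(all_agents):
    """Enumerate all non-empty candidate coalitions (a brute-force
    stand-in for computing MCS equilibria)."""
    subsets = chain.from_iterable(
        combinations(all_agents, r) for r in range(1, len(all_agents) + 1))
    return [set(s) for s in subsets if is_coalition(set(s))]

def score(coalition):
    """Possibilistic score: the least reliable member bounds the
    coalition's possibility of success (min-based combination)."""
    return min(possibility[a] for a in coalition)

found = coalitions(set(depends))
best = max(found, key=score)
```

In this toy instance the valid coalitions are {robot2}, {robot1, robot2}, and the grand coalition; adding robot3 drags the possibility of success down to 0.6, so a min-based selection prefers a coalition without it. The min combination reflects the standard possibilistic reading that a plan is only as possible as its weakest step; the paper's actual selection also weighs functional and non-functional requirements.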
Related papers
- Agent-as-a-Judge: Evaluate Agents with Agents [61.33974108405561]
We introduce the Agent-as-a-Judge framework, wherein agentic systems are used to evaluate agentic systems.
This is an organic extension of the LLM-as-a-Judge framework, incorporating agentic features that enable intermediate feedback for the entire task-solving process.
We present DevAI, a new benchmark of 55 realistic automated AI development tasks.
arXiv Detail & Related papers (2024-10-14T17:57:02Z) - Principal-Agent Reinforcement Learning: Orchestrating AI Agents with Contracts [20.8288955218712]
We propose a framework where a principal guides an agent in a Markov Decision Process (MDP) using a series of contracts.
We present and analyze a meta-algorithm that iteratively optimizes the policies of the principal and the agent.
We then scale our algorithm with deep Q-learning and analyze its convergence in the presence of approximation error.
arXiv Detail & Related papers (2024-07-25T14:28:58Z) - Reaching Consensus in Cooperative Multi-Agent Reinforcement Learning with Goal Imagination [16.74629849552254]
We propose a model-based consensus mechanism to explicitly coordinate multiple agents.
The proposed Multi-agent Goal Imagination (MAGI) framework guides agents to reach consensus with an imagined common goal.
We show that such an efficient consensus mechanism can guide all agents to cooperatively reach valuable future states.
arXiv Detail & Related papers (2024-03-05T18:07:34Z) - Multi-Agent Consensus Seeking via Large Language Models [6.922356864800498]
Multi-agent systems driven by large language models (LLMs) have shown promising abilities for solving complex tasks in a collaborative manner.
This work considers a fundamental problem in multi-agent collaboration: consensus seeking.
arXiv Detail & Related papers (2023-10-31T03:37:11Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z) - ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring [105.13668993076801]
A central problem in the theory of multi-agent reinforcement learning (MARL) is to understand what structural conditions and algorithmic principles lead to sample-efficient learning guarantees.
We study this question in a general framework for interactive decision making with multiple agents.
We show that characterizing the statistical complexity for multi-agent decision making is equivalent to characterizing the statistical complexity of single-agent decision making.
arXiv Detail & Related papers (2023-05-01T06:46:22Z) - Multi-agent Deep Covering Skill Discovery [50.812414209206054]
We propose Multi-agent Deep Covering Option Discovery, which constructs the multi-agent options through minimizing the expected cover time of the multiple agents' joint state space.
Also, we propose a novel framework to adopt the multi-agent options in the MARL process.
We show that the proposed algorithm can effectively capture the agent interactions with the attention mechanism, successfully identify multi-agent options, and significantly outperform prior works using single-agent options or no options.
arXiv Detail & Related papers (2022-10-07T00:40:59Z) - Emergence of Theory of Mind Collaboration in Multiagent Systems [65.97255691640561]
We propose an adaptive training algorithm to develop effective collaboration between agents with Theory of Mind (ToM).
We evaluate our algorithms with two games, where our algorithm surpasses all previous decentralized execution algorithms without modeling ToM.
arXiv Detail & Related papers (2021-09-30T23:28:00Z) - DSDF: An approach to handle stochastic agents in collaborative multi-agent reinforcement learning [0.0]
We show how the stochasticity of agents, which could be a result of malfunction or aging of robots, can add to the uncertainty in coordination.
Our solution, DSDF, tunes the discount factor for each agent according to its uncertainty and uses the resulting values to update the utility networks of the individual agents.
arXiv Detail & Related papers (2021-09-14T12:02:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.