Subjective Knowledge and Reasoning about Agents in Multi-Agent Systems
- URL: http://arxiv.org/abs/2001.08016v1
- Date: Wed, 22 Jan 2020 13:50:26 GMT
- Title: Subjective Knowledge and Reasoning about Agents in Multi-Agent Systems
- Authors: Shikha Singh, Deepak Khemani
- Abstract summary: In multi-agent systems, agents can influence other agents' mental states by (mis)informing them about the presence/absence of collaborators or adversaries.
In this paper, we investigate how Kripke structure-based epistemic models can be extended to express the above notion based on an agent's subjective knowledge.
- Score: 5.983405936883194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Though much work in multi-agent systems focuses on reasoning about
the knowledge and beliefs of artificial agents, explicit representation of,
and reasoning about, the presence/absence of agents is mostly overlooked by
the MAS community, especially in scenarios where agents may be unaware of
other agents joining or going offline, which leads to partial or asymmetric
knowledge among the agents. Such scenarios lay the foundations for cases where
an agent can influence other agents' mental states by (mis)informing them
about the presence/absence of collaborators or adversaries. In this paper, we
investigate how Kripke structure-based epistemic models can be extended to
express this notion based on an agent's subjective knowledge, and we discuss
the challenges that arise.
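The abstract's core notion, reasoning over Kripke structures, can be illustrated with a minimal sketch. Everything below (the `KripkeModel` class and proposition names such as `b_online`) is hypothetical and not taken from the paper; it only shows the standard semantics of the knowledge operator K_a, under which an agent knows a proposition iff it holds in every world that agent's accessibility relation reaches.

```python
class KripkeModel:
    """Minimal Kripke structure for multi-agent epistemic reasoning.

    Illustrative sketch only; the paper's proposal concerns extending
    such models with agents' subjective knowledge of which other agents
    are present.
    """

    def __init__(self, worlds, relations, valuation):
        self.worlds = worlds          # set of world names
        self.relations = relations    # {agent: set of (w1, w2) accessibility pairs}
        self.valuation = valuation    # {world: set of atomic propositions true there}

    def holds(self, world, prop):
        """Atomic proposition `prop` is true at `world`."""
        return prop in self.valuation[world]

    def knows(self, agent, world, prop):
        """K_agent(prop): true iff `prop` holds in every world the agent
        considers possible (i.e., accessible) from `world`."""
        accessible = {v for (u, v) in self.relations[agent] if u == world}
        return all(self.holds(v, prop) for v in accessible)


# Two worlds differing on whether agent b is present ("b_online").
# Agent a cannot distinguish them, so a does not know that b is online.
m = KripkeModel(
    worlds={"w1", "w2"},
    relations={"a": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")}},
    valuation={"w1": {"b_online"}, "w2": set()},
)

print(m.knows("a", "w1", "b_online"))  # False: a also considers w2 possible
```

In this toy example, agent a cannot rule out the world where b is offline, so K_a(b_online) is false even at the world where b actually is online; this is the kind of partial/asymmetric knowledge of agent presence the paper sets out to model.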
Related papers
- EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms [55.77492625524141]
EvoAgent is a generic method to automatically extend expert agents to multi-agent systems via the evolutionary algorithm.
We show that EvoAgent can automatically generate multiple expert agents and significantly enhance the task-solving capabilities of LLM-based agents.
arXiv Detail & Related papers (2024-06-20T11:49:23Z)
- AgentGym: Evolving Large Language Model-based Agents across Diverse Environments [116.97648507802926]
Large language models (LLMs) are considered a promising foundation to build such agents.
We take the first step towards building generally-capable LLM-based agents with self-evolution ability.
We propose AgentGym, a new framework featuring a variety of environments and tasks for broad, real-time, uni-format, and concurrent agent exploration.
arXiv Detail & Related papers (2024-06-06T15:15:41Z)
- PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [73.51336434996931]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as the collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- Causal Explanations for Sequential Decision-Making in Multi-Agent Systems [31.674391914683888]
CEMA is a framework for creating causal natural language explanations of an agent's decisions in sequential multi-agent systems.
We show that CEMA correctly identifies the causes behind the agent's decisions, even when many other agents are present.
We show via a user study that CEMA's explanations have a positive effect on participants' trust in autonomous vehicles.
arXiv Detail & Related papers (2023-02-21T16:34:07Z)
- Diversifying Agent's Behaviors in Interactive Decision Models [11.125175635860169]
Modelling other agents' behaviors plays an important role in decision models for interactions among multiple agents.
In this article, we investigate diversifying the behaviors of other agents in the subject agent's decision model prior to their interactions.
arXiv Detail & Related papers (2022-03-06T23:05:00Z)
- Human-Inspired Multi-Agent Navigation using Knowledge Distillation [4.659427498118277]
We propose a framework for learning a human-like general collision avoidance policy for agent-agent interactions.
Our approach uses knowledge distillation with reinforcement learning to shape the reward function.
We show that agents trained with our approach can take human-like trajectories in collision avoidance and goal-directed steering tasks.
arXiv Detail & Related papers (2021-03-18T03:24:38Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- Multi-Agent Systems based on Contextual Defeasible Logic considering Focus [0.0]
We extend previous work on distributed reasoning using Contextual Defeasible Logic (CDL).
This work presents a multi-agent model based on CDL that allows agents to reason with their local knowledge bases and mapping rules.
We present a use case scenario, some formalisations of the model proposed, and an initial implementation based on the BDI (Belief-Desire-Intention) agent model.
arXiv Detail & Related papers (2020-10-01T01:50:08Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.