Subjective Knowledge and Reasoning about Agents in Multi-Agent Systems
- URL: http://arxiv.org/abs/2001.08016v1
- Date: Wed, 22 Jan 2020 13:50:26 GMT
- Title: Subjective Knowledge and Reasoning about Agents in Multi-Agent Systems
- Authors: Shikha Singh, Deepak Khemani
- Abstract summary: In multi-agent systems, agents can influence other agents' mental states by (mis)informing them about the presence/absence of collaborators or adversaries.
In this paper, we investigate how Kripke structure-based epistemic models can be extended to express the above notion based on an agent's subjective knowledge.
- Score: 5.983405936883194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Though much work in multi-agent systems focuses on reasoning about
the knowledge and beliefs of artificial agents, the MAS community has largely
overlooked the explicit representation of, and reasoning about, the presence or
absence of agents, especially in scenarios where agents may be unaware of other
agents joining or going offline, leading to partial and asymmetric knowledge
among the agents. Such scenarios lay the
foundations of cases where an agent can influence other agents' mental states
by (mis)informing them about the presence/absence of collaborators or
adversaries. In this paper, we investigate how Kripke structure-based epistemic
models can be extended to express the above notion based on an agent's
subjective knowledge, and we discuss the challenges that arise.
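The Kripke-structure machinery the abstract builds on can be sketched in a few lines. The following is a minimal illustrative model, not the paper's extended formalism: worlds, one accessibility relation per agent, and the standard clause that agent a knows p at world w iff p holds in every world a considers possible from w. All names here are our own.

```python
# Minimal Kripke model for epistemic reasoning (illustrative sketch only).
class KripkeModel:
    def __init__(self, worlds, relations, valuation):
        self.worlds = set(worlds)      # e.g. {"w1", "w2"}
        self.relations = relations     # agent -> set of (world, world) pairs
        self.valuation = valuation     # world -> set of atoms true there

    def holds(self, world, atom):
        return atom in self.valuation[world]

    def knows(self, agent, world, atom):
        # K_a p holds at w iff p is true at every world a considers
        # possible from w under its accessibility relation.
        successors = {v for (u, v) in self.relations[agent] if u == world}
        return all(self.holds(v, atom) for v in successors)

m = KripkeModel(
    worlds={"w1", "w2"},
    relations={"a": {("w1", "w1"), ("w1", "w2")},  # a cannot tell w1 from w2
               "b": {("w1", "w1")}},               # b considers only w1 possible
    valuation={"w1": {"p"}, "w2": set()},
)
print(m.knows("a", "w1", "p"))  # False: p fails in the accessible world w2
print(m.knows("b", "w1", "p"))  # True
```

The paper's contribution concerns extending such models with an agent's subjective view of which agents are present at all; in this sketch that would amount to restricting, per agent, the set of relations the agent is even aware of.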
Related papers
- Multi-Agents are Social Groups: Investigating Social Influence of Multiple Agents in Human-Agent Interactions [7.421573539569854]
We investigate whether a group of AI agents can create social pressure on users to agree with them.
We found that conversing with multiple agents increased the social pressure felt by participants.
Our study shows the potential advantages of multi-agent systems over single-agent platforms in causing opinion change.
arXiv Detail & Related papers (2024-11-07T10:00:46Z)
- Inverse Attention Agent for Multi-Agent System [6.196239958087161]
A major challenge for Multi-Agent Systems is enabling agents to adapt dynamically to diverse environments in which opponents and teammates may continually change.
We introduce Inverse Attention Agents that adopt concepts from the Theory of Mind, implemented algorithmically using an attention mechanism and trained in an end-to-end manner.
We demonstrate that the inverse attention network successfully infers the attention of other agents, and that this information improves agent performance.
arXiv Detail & Related papers (2024-10-29T06:59:11Z)
- On the Resilience of Multi-Agent Systems with Malicious Agents [58.79302663733702]
This paper investigates the resilience of multi-agent system structures in the presence of malicious agents.
We devise two methods, AutoTransform and AutoInject, to transform any agent into a malicious one.
We show that two defense methods, introducing a mechanism for each agent to challenge others' outputs, or an additional agent to review and correct messages, can enhance system resilience.
arXiv Detail & Related papers (2024-08-02T03:25:20Z)
- EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms [55.77492625524141]
EvoAgent is a generic method to automatically extend expert agents to multi-agent systems via the evolutionary algorithm.
We show that EvoAgent can automatically generate multiple expert agents and significantly enhance the task-solving capabilities of LLM-based agents.
arXiv Detail & Related papers (2024-06-20T11:49:23Z)
- AgentGym: Evolving Large Language Model-based Agents across Diverse Environments [116.97648507802926]
Large language models (LLMs) are considered a promising foundation to build such agents.
We take the first step towards building generally-capable LLM-based agents with self-evolution ability.
We propose AgentGym, a new framework featuring a variety of environments and tasks for broad, real-time, uni-format, and concurrent agent exploration.
arXiv Detail & Related papers (2024-06-06T15:15:41Z)
- PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [70.84902425123406]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as the collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- Diversifying Agent's Behaviors in Interactive Decision Models [11.125175635860169]
Modelling other agents' behaviors plays an important role in decision models for interactions among multiple agents.
In this article, we investigate diversifying the behaviors of other agents in the subject agent's decision model prior to their interactions.
arXiv Detail & Related papers (2022-03-06T23:05:00Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- Multi-Agent Systems based on Contextual Defeasible Logic considering Focus [0.0]
We extend previous work on distributed reasoning using Contextual Defeasible Logic (CDL).
This work presents a multi-agent model based on CDL that allows agents to reason with their local knowledge bases and mapping rules.
We present a use case scenario, some formalisations of the model proposed, and an initial implementation based on the BDI (Belief-Desire-Intention) agent model.
arXiv Detail & Related papers (2020-10-01T01:50:08Z)
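The defeasible reasoning mentioned in the CDL entry above can be illustrated with a toy priority-based rule evaluator. This is our own drastic simplification, not the paper's CDL formalism or its mapping rules: each rule concludes a literal, and a conclusion is blocked when a higher-priority applicable rule supports the opposite literal.

```python
# Toy defeasible-rule sketch (illustrative only; not the paper's CDL).
from collections import namedtuple

Rule = namedtuple("Rule", "name premises conclusion priority")

def negate(literal):
    # "~p" and "p" are each other's complements.
    return literal[1:] if literal.startswith("~") else "~" + literal

def conclude(rules, facts):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for r in rules:
            if all(p in derived for p in r.premises):
                # Blocked if a strictly higher-priority applicable rule
                # supports the opposite literal.
                defeated = any(
                    s.conclusion == negate(r.conclusion)
                    and s.priority > r.priority
                    and all(p in derived for p in s.premises)
                    for s in rules
                )
                if not defeated and r.conclusion not in derived:
                    derived.add(r.conclusion)
                    changed = True
    return derived

rules = [
    Rule("r1", ["bird"], "flies", priority=1),     # birds typically fly
    Rule("r2", ["penguin"], "~flies", priority=2), # penguin rule wins conflicts
]
out = conclude(rules, {"bird", "penguin"})
print(out)  # the more specific rule r2 defeats r1: "~flies" is derived
```

Real CDL additionally handles distributed contexts and mapping rules between agents' local knowledge bases; this sketch only shows the conflict-resolution idea.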
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.