Instigating Cooperation among LLM Agents Using Adaptive Information Modulation
- URL: http://arxiv.org/abs/2409.10372v3
- Date: Wed, 30 Oct 2024 16:45:15 GMT
- Title: Instigating Cooperation among LLM Agents Using Adaptive Information Modulation
- Authors: Qiliang Chen, Sepehr Ilami, Nunzio Lore, Babak Heydari
- Abstract summary: This paper introduces a novel framework combining LLM agents as proxies for human strategic behavior with reinforcement learning.
We show that a pro-social promoting RL agent modulates information access across agents in a network, optimizing social welfare and promoting pro-social behavior.
This framework offers significant insights into AI-mediated social dynamics, contributing to the deployment of AI in real-world team settings.
- Score: 0.6749750044497731
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a novel framework combining LLM agents as proxies for human strategic behavior with reinforcement learning (RL) to engage these agents in evolving strategic interactions within team environments. Our approach extends traditional agent-based simulations by using strategic LLM agents (SLA) and introducing dynamic and adaptive governance through a pro-social promoting RL agent (PPA) that modulates information access across agents in a network, optimizing social welfare and promoting pro-social behavior. Through validation in iterative games, including the prisoner's dilemma, we demonstrate that SLA agents exhibit nuanced strategic adaptations. The PPA agent effectively learns to adjust information transparency, resulting in enhanced cooperation rates. This framework offers significant insights into AI-mediated social dynamics, contributing to the deployment of AI in real-world team settings.
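The governance loop the abstract describes can be sketched as a toy simulation: a governor agent (standing in for the RL-trained PPA) decides each round whether players in an iterated prisoner's dilemma can see each other's last move. This is a minimal sketch, not the paper's implementation; the fixed threshold rule, the tit-for-tat players, and the defection noise are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): an iterated
# prisoner's dilemma where a governor modulates whether players
# can observe each other's last move. All names are illustrative.
import random

PAYOFF = {  # (my_move, their_move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_last):
    """Cooperate by default; mirror the opponent's last visible move."""
    return opponent_last if opponent_last else "C"

def governor(coop_rate):
    """Stand-in for the RL-trained PPA: grant transparency when
    cooperation drops below a threshold (a fixed rule here, not RL)."""
    return coop_rate < 0.8

def run(rounds=100, seed=0):
    rng = random.Random(seed)
    history = []  # list of (move_a, move_b) per round
    welfare = 0
    for _ in range(rounds):
        coop_rate = (sum(m == ("C", "C") for m in history) / len(history)
                     if history else 1.0)
        transparent = governor(coop_rate)
        last = history[-1] if (history and transparent) else None
        a = tit_for_tat(last[1] if last else None)
        b = tit_for_tat(last[0] if last else None)
        # When moves are hidden, player B occasionally defects.
        if not transparent and rng.random() < 0.3:
            b = "D"
        history.append((a, b))
        welfare += PAYOFF[(a, b)] + PAYOFF[(b, a)]
    return welfare, history

welfare, history = run()
```

Under these assumptions, hidden-information rounds invite defection, the governor reacts by opening transparency, and the tit-for-tat players then re-establish cooperation, which is the qualitative dynamic the abstract reports.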
Related papers
- From Prompt-Response to Goal-Directed Systems: The Evolution of Agentic AI Software Architecture [0.0]
Agentic AI denotes an architectural transition from stateless, prompt-driven generative models toward goal-directed systems. This paper examines this transition by connecting intelligent agent theories with contemporary LLM-centric approaches. The study identifies a convergence toward standardized agent loops, registries, and auditable control mechanisms.
arXiv Detail & Related papers (2026-02-11T03:34:48Z) - Agentifying Agentic AI [4.40254362206179]
Agentic AI seeks to endow systems with sustained autonomy, reasoning, and interaction capabilities. To realize this vision, its assumptions about agency must be complemented by explicit models of cognition, cooperation, and governance.
arXiv Detail & Related papers (2025-11-21T15:54:44Z) - A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems [53.37728204835912]
Most existing AI systems rely on manually crafted configurations that remain static after deployment. Recent research has explored agent evolution techniques that aim to automatically enhance agent systems based on interaction data and environmental feedback. This survey aims to provide researchers and practitioners with a systematic understanding of self-evolving AI agents.
arXiv Detail & Related papers (2025-08-10T16:07:32Z) - Agentic Web: Weaving the Next Web with AI Agents [109.13815627467514]
The emergence of AI agents powered by large language models (LLMs) marks a pivotal shift toward the Agentic Web. In this paradigm, agents interact directly with one another to plan, coordinate, and execute complex tasks on behalf of users. We present a structured framework for understanding and building the Agentic Web.
arXiv Detail & Related papers (2025-07-28T17:58:12Z) - Symbiotic Agents: A Novel Paradigm for Trustworthy AGI-driven Networks [2.5782420501870296]
Large Language Model (LLM)-based autonomous agents are expected to play a vital role in the evolution of 6G networks. We introduce a novel agentic paradigm that combines LLMs with real-time optimization algorithms towards Trustworthy AI. We propose an end-to-end architecture for AGI networks and evaluate it on a 5G testbed capturing channel fluctuations from moving vehicles.
arXiv Detail & Related papers (2025-07-23T17:01:23Z) - Distinguishing Autonomous AI Agents from Collaborative Agentic Systems: A Comprehensive Framework for Understanding Modern Intelligent Architectures [0.0]
The emergence of large language models has catalyzed two distinct yet interconnected paradigms in artificial intelligence: standalone AI Agents and collaborative Agentic AI ecosystems. This study establishes a definitive framework for distinguishing these architectures through systematic analysis of their operational principles, structural compositions, and deployment methodologies.
arXiv Detail & Related papers (2025-06-02T08:52:23Z) - Internet of Agents: Fundamentals, Applications, and Challenges [68.9543153075464]
We introduce the Internet of Agents (IoA) as a foundational framework that enables seamless interconnection, dynamic discovery, and collaborative orchestration among heterogeneous agents at scale. We analyze the key operational enablers of IoA, including capability notification and discovery, adaptive communication protocols, dynamic task matching, consensus and conflict-resolution mechanisms, and incentive models.
arXiv Detail & Related papers (2025-05-12T02:04:37Z) - Emergence of Roles in Robotic Teams with Model Sharing and Limited Communication [0.0]
We present a reinforcement learning strategy for use in multi-agent foraging systems in which the learning is centralised to a single agent.
This approach aims to significantly reduce the computational and energy demands compared to approaches such as MARL and centralised learning models.
arXiv Detail & Related papers (2025-05-01T14:05:46Z) - Do LLMs trust AI regulation? Emerging behaviour of game-theoretic LLM agents [61.132523071109354]
This paper investigates the interplay between AI developers, regulators and users, modelling their strategic choices under different regulatory scenarios.
Our research identifies emerging behaviours of strategic AI agents, which tend to adopt more "pessimistic" stances than pure game-theoretic agents.
arXiv Detail & Related papers (2025-04-11T15:41:21Z) - Toward Agentic AI: Generative Information Retrieval Inspired Intelligent Communications and Networking [87.82985288731489]
Agentic AI has emerged as a key paradigm for intelligent communications and networking. This article emphasizes the role of knowledge acquisition, processing, and retrieval in agentic AI for telecom systems.
arXiv Detail & Related papers (2025-02-24T06:02:25Z) - EPO: Explicit Policy Optimization for Strategic Reasoning in LLMs via Reinforcement Learning [69.55982246413046]
We propose explicit policy optimization (EPO) for strategic reasoning.
EPO provides strategies in open-ended action space and can be plugged into arbitrary LLM agents to motivate goal-directed behavior.
Experiments across social and physical domains demonstrate EPO's ability of long-term goal alignment.
arXiv Detail & Related papers (2025-02-18T03:15:55Z) - From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
arXiv Detail & Related papers (2024-11-06T10:35:11Z) - Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z) - Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Models (LLMs)-based human-agent collaboration for complex task-solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
arXiv Detail & Related papers (2024-02-20T11:03:36Z) - Investigate-Consolidate-Exploit: A General Strategy for Inter-Task Agent Self-Evolution [92.84441068115517]
Investigate-Consolidate-Exploit (ICE) is a novel strategy for enhancing the adaptability and flexibility of AI agents.
ICE promotes the transfer of knowledge between tasks for genuine self-evolution.
Our experiments on the XAgent framework demonstrate ICE's effectiveness, reducing API calls by as much as 80%.
arXiv Detail & Related papers (2024-01-25T07:47:49Z) - MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents [27.911816995891726]
We introduce collaborative generative agents, endowing LLM-based Agents with consistent behavior patterns and task-solving abilities.
We propose a novel framework that equips collaborative generative agents with human-like reasoning abilities and specialized skills.
Our work provides valuable insights into the role and evolution of Large Language Models in task-oriented social simulations.
arXiv Detail & Related papers (2023-10-10T10:17:58Z) - ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems [0.0]
We propose the integration of large language models (LLMs) into multiagent systems.
We anchor our methodology on the MAPE-K model, which is renowned for its robust support in monitoring, analyzing, planning, and executing system adaptations.
arXiv Detail & Related papers (2023-07-12T14:26:46Z) - Depthwise Convolution for Multi-Agent Communication with Enhanced Mean-Field Approximation [9.854975702211165]
We propose a new method based on local communication learning to tackle the multi-agent RL (MARL) challenge.
First, we design a new communication protocol that exploits the ability of depthwise convolution to efficiently extract local relations.
Second, we introduce the mean-field approximation into our method to reduce the scale of agent interactions.
arXiv Detail & Related papers (2022-03-06T07:42:43Z) - Iterated Reasoning with Mutual Information in Cooperative and Byzantine Decentralized Teaming [0.0]
We show that reformulating an agent's policy to be conditional on the policies of its teammates inherently maximizes the Mutual Information (MI) lower bound when optimizing under Policy Gradient (PG).
Our approach, InfoPG, outperforms baselines in learning emergent collaborative behaviors and sets the state-of-the-art in decentralized cooperative MARL tasks.
arXiv Detail & Related papers (2022-01-20T22:54:32Z) - Conditional Imitation Learning for Multi-Agent Games [89.897635970366]
We study the problem of conditional multi-agent imitation learning, where we have access to joint trajectory demonstrations at training time.
We propose a novel approach to address the difficulties of scalability and data scarcity.
Our model learns a low-rank subspace over ego and partner agent strategies, then infers and adapts to a new partner strategy by interpolating in the subspace.
arXiv Detail & Related papers (2022-01-05T04:40:13Z) - Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
arXiv Detail & Related papers (2021-12-03T19:23:48Z) - A game-theoretic analysis of networked system control for common-pool resource management using multi-agent reinforcement learning [54.55119659523629]
Multi-agent reinforcement learning has recently shown great promise as an approach to networked system control.
Common-pool resources include arable land, fresh water, wetlands, wildlife, fish stock, forests and the atmosphere.
arXiv Detail & Related papers (2020-10-15T14:12:26Z)
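The common-pool resource setting named in the last entry above can be made concrete with a toy harvesting game: agents draw from a shared stock that regrows, and uncoordinated over-harvesting depletes it. This is an illustrative sketch under hypothetical parameters, not the cited paper's model.

```python
# Illustrative common-pool resource game (not the cited paper's model):
# agents harvest from a shared stock that regrows logistically; the
# social dilemma is over-harvesting. All parameters are hypothetical.
def regrow(stock, rate=0.25, capacity=100.0):
    """Logistic regrowth of the shared resource, capped at capacity."""
    return min(capacity, stock + rate * stock * (1 - stock / capacity))

def step(stock, harvests):
    """Apply each agent's requested harvest, scaled down if the stock
    cannot cover the total, then let the stock regrow.
    Returns (new_stock, realized_harvests)."""
    total = sum(harvests)
    scale = min(1.0, stock / total) if total > 0 else 0.0
    realized = [h * scale for h in harvests]
    return regrow(stock - sum(realized)), realized

stock = 50.0
for _ in range(10):
    stock, payoffs = step(stock, [3.0, 3.0, 3.0])  # three greedy agents
```

With three agents each requesting 3.0 per round, extraction outpaces regrowth and the stock declines toward collapse, which is the dilemma that MARL-based control of such systems aims to avert.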
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.