On the limits of agency in agent-based models
- URL: http://arxiv.org/abs/2409.10568v3
- Date: Sun, 10 Nov 2024 21:31:43 GMT
- Title: On the limits of agency in agent-based models
- Authors: Ayush Chopra, Shashank Kumar, Nurullah Giray-Kuru, Ramesh Raskar, Arnau Quera-Bofarull
- Abstract summary: Agent-based modeling offers powerful insights into complex systems, but its practical utility has been limited by computational constraints.
Recent advancements in large language models (LLMs) could enhance ABMs with adaptive agents, but their integration into large-scale simulations remains challenging.
We present LLM archetypes, a technique that balances behavioral complexity with computational efficiency, allowing for nuanced agent behavior in large-scale simulations.
- Abstract: Agent-based modeling (ABM) offers powerful insights into complex systems, but its practical utility has been limited by computational constraints and simplistic agent behaviors, especially when simulating large populations. Recent advancements in large language models (LLMs) could enhance ABMs with adaptive agents, but their integration into large-scale simulations remains challenging. This work introduces a novel methodology that bridges this gap by efficiently integrating LLMs into ABMs, enabling the simulation of millions of adaptive agents. We present LLM archetypes, a technique that balances behavioral complexity with computational efficiency, allowing for nuanced agent behavior in large-scale simulations. Our analysis explores the crucial trade-off between simulation scale and individual agent expressiveness, comparing different agent architectures ranging from simple heuristic-based agents to fully adaptive LLM-powered agents. We demonstrate the real-world applicability of our approach through a case study of the COVID-19 pandemic, simulating 8.4 million agents representing New York City and capturing the intricate interplay between health behaviors and economic outcomes. Our method significantly enhances ABM capabilities for predictive and counterfactual analyses, addressing limitations of historical data in policy design. By implementing these advances in an open-source framework, we facilitate the adoption of LLM archetypes across diverse ABM applications. Our results show that LLM archetypes can markedly improve the realism and utility of large-scale ABMs while maintaining computational feasibility, opening new avenues for modeling complex societal challenges and informing data-driven policy decisions.
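The central idea described in the abstract is that agents sharing key attributes can be grouped into archetypes, so the LLM is queried once per archetype per simulation step rather than once per agent. Below is a minimal, hypothetical Python sketch of that idea under stated assumptions; the names (`Agent`, `query_llm`, `decide_behaviors`) are illustrative and are not the API of the paper's open-source framework.

```python
# Sketch of the "LLM archetype" idea: agents that share key attributes are
# grouped into archetypes, and the LLM is prompted once per archetype per
# step instead of once per agent. All names here are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass
import random

@dataclass
class Agent:
    age_group: str       # e.g. "18-29", "65+"
    occupation: str      # e.g. "healthcare", "retail"
    isolating: bool = False

def query_llm(prompt: str) -> float:
    """Placeholder for an LLM call; returns a probability of self-isolating."""
    return random.random()  # swap in a real LLM client here

def archetype_key(agent: Agent) -> tuple:
    """Agents with the same key share one LLM decision this step."""
    return (agent.age_group, agent.occupation)

def decide_behaviors(agents: list[Agent], case_count: int) -> None:
    groups = defaultdict(list)
    for a in agents:
        groups[archetype_key(a)].append(a)

    # One LLM call per archetype instead of one per agent: O(#archetypes) calls.
    for (age, job), members in groups.items():
        prompt = (f"You are a {age}-year-old {job} worker in New York City. "
                  f"There are {case_count} new COVID-19 cases today. "
                  f"On a scale of 0 to 1, how likely are you to self-isolate?")
        p_isolate = query_llm(prompt)
        for m in members:
            # Per-agent sampling keeps individual heterogeneity within an archetype.
            m.isolating = random.random() < p_isolate

if __name__ == "__main__":
    # Scaled-down toy population; the paper's case study simulates millions of agents.
    population = [Agent(random.choice(["18-29", "30-64", "65+"]),
                        random.choice(["healthcare", "retail", "office"]))
                  for _ in range(100_000)]
    decide_behaviors(population, case_count=12_000)
    print(sum(a.isolating for a in population), "agents are isolating")
```

The key design choice this sketch illustrates is the trade-off named in the abstract: the number of LLM calls scales with the number of archetypes, not with population size, which is what makes million-agent simulations computationally feasible.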
Related papers
- SRAP-Agent: Simulating and Optimizing Scarce Resource Allocation Policy with LLM-based Agent [45.41401816514924]
We propose an innovative framework, SRAP-Agent, which integrates Large Language Models (LLMs) into economic simulations.
We conduct extensive policy simulation experiments to verify the feasibility and effectiveness of the SRAP-Agent.
arXiv Detail & Related papers (2024-10-18T03:43:42Z)
- EVOLvE: Evaluating and Optimizing LLMs For Exploration [76.66831821738927]
Large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty.
We measure LLMs' (in)ability to make optimal decisions in bandits, a state-less reinforcement learning setting relevant to many applications.
Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs.
arXiv Detail & Related papers (2024-10-08T17:54:03Z)
- On the Modeling Capabilities of Large Language Models for Sequential Decision Making [52.128546842746246]
Large pretrained models are showing increasingly better performance in reasoning and planning tasks.
We evaluate their ability to produce decision-making policies, either directly, by generating actions, or indirectly.
In environments with unfamiliar dynamics, we explore how fine-tuning LLMs with synthetic data can significantly improve their reward modeling capabilities.
arXiv Detail & Related papers (2024-10-08T03:12:57Z)
- GenSim: A General Social Simulation Platform with Large Language Model based Agents [111.00666003559324]
We propose a novel large language model (LLM)-based simulation platform called GenSim.
Our platform supports one hundred thousand agents to better simulate large-scale populations in real-world contexts.
To our knowledge, GenSim represents an initial step toward a general, large-scale, and correctable social simulation platform.
arXiv Detail & Related papers (2024-10-06T05:02:23Z)
- Coalitions of Large Language Models Increase the Robustness of AI Agents [3.216132991084434]
Large Language Models (LLMs) have fundamentally altered the way we interact with digital systems.
LLMs are powerful and capable of demonstrating some emergent properties, but struggle to perform well at all sub-tasks carried out by an AI agent.
We assess whether a system comprising a coalition of pretrained LLMs, each exhibiting specialised performance at individual sub-tasks, can match the performance of single-model agents.
arXiv Detail & Related papers (2024-08-02T16:37:44Z)
- Enhancing the General Agent Capabilities of Low-Parameter LLMs through Tuning and Multi-Branch Reasoning [56.82041895921434]
Open-source pre-trained Large Language Models (LLMs) exhibit strong language understanding and generation capabilities.
When used as agents for dealing with complex problems in the real world, their performance is far inferior to large commercial models such as ChatGPT and GPT-4.
arXiv Detail & Related papers (2024-03-29T03:48:12Z)
- Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Models (LLMs)-based human-agent collaboration for complex task-solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
arXiv Detail & Related papers (2024-02-20T11:03:36Z)
- Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL [57.745700271150454]
We study the sample complexity of reinforcement learning in Mean-Field Games (MFGs) with model-based function approximation.
We introduce the Partial Model-Based Eluder Dimension (P-MBED), a more effective notion to characterize the model class complexity.
arXiv Detail & Related papers (2024-02-08T14:54:47Z)
- Solution-oriented Agent-based Models Generation with Verifier-assisted Iterative In-context Learning [10.67134969207797]
Agent-based models (ABMs) stand as an essential paradigm for proposing and validating hypothetical solutions or policies.
Large language models (LLMs) encapsulating cross-domain knowledge and programming proficiency could potentially alleviate the difficulty of this process.
We present SAGE, a general solution-oriented ABM generation framework designed for automatic modeling and generating solutions for targeted problems.
arXiv Detail & Related papers (2024-02-04T07:59:06Z)
- Computational Experiments Meet Large Language Model Based Agents: A Survey and Perspective [16.08517740276261]
Computational experiments have emerged as a valuable method for studying complex systems.
However, accurately representing real social systems in Agent-based Modeling (ABM) is challenging due to the diverse and intricate characteristics of humans.
The integration of Large Language Models (LLMs) has been proposed, enabling agents to possess anthropomorphic abilities.
arXiv Detail & Related papers (2024-02-01T01:17:46Z)
- Smart Agent-Based Modeling: On the Use of Large Language Models in Computer Simulations [19.84766478633828]
Agent-Based Modeling (ABM) harnesses the interactions of individual agents to emulate intricate system dynamics.
This paper seeks to transcend these boundaries by integrating Large Language Models (LLMs) like GPT into ABM.
This amalgamation gives rise to a novel framework, Smart Agent-Based Modeling (SABM).
arXiv Detail & Related papers (2023-11-10T18:54:33Z)