GenSim: A General Social Simulation Platform with Large Language Model based Agents
- URL: http://arxiv.org/abs/2410.04360v2
- Date: Wed, 9 Oct 2024 09:03:48 GMT
- Title: GenSim: A General Social Simulation Platform with Large Language Model based Agents
- Authors: Jiakai Tang, Heyang Gao, Xuchen Pan, Lei Wang, Haoran Tan, Dawei Gao, Yushuo Chen, Xu Chen, Yankai Lin, Yaliang Li, Bolin Ding, Jingren Zhou, Jun Wang, Ji-Rong Wen
- Abstract summary: We propose a novel large language model (LLM)-based simulation platform called GenSim.
Our platform supports one hundred thousand agents to better simulate large-scale populations in real-world contexts.
To our knowledge, GenSim represents an initial step toward a general, large-scale, and correctable social simulation platform.
- Score: 111.00666003559324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid advancement of large language models (LLMs), recent years have witnessed many promising studies on leveraging LLM-based agents to simulate human social behavior. While prior work has demonstrated significant potential across various domains, much of it has focused on specific scenarios involving a limited number of agents and has lacked the ability to adapt when errors occur during simulation. To overcome these limitations, we propose a novel LLM-agent-based simulation platform called GenSim, which: (1) Abstracts a set of general functions to simplify the simulation of customized social scenarios; (2) Supports one hundred thousand agents to better simulate large-scale populations in real-world contexts; (3) Incorporates error-correction mechanisms to ensure more reliable and long-term simulations. To evaluate our platform, we assess both the efficiency of large-scale agent simulations and the effectiveness of the error-correction mechanisms. To our knowledge, GenSim represents an initial step toward a general, large-scale, and correctable social simulation platform based on LLM agents, promising to further advance the field of social science.
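The abstract names these capabilities only at a high level. As a minimal sketch, assuming a generic round-based design (the Agent, Simulator, and validator names below are hypothetical illustrations, not the actual GenSim API), the general-function abstraction and the error-correction idea might look like this:

```python
# Hypothetical sketch of a general agent-simulation loop with an error-correction
# (retry-on-invalid-action) hook. Names and interfaces are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    agent_id: int
    profile: dict
    memory: List[str] = field(default_factory=list)

    def act(self, observation: str, llm: Callable[[str], str]) -> str:
        """Produce an action by prompting an LLM with profile, recent memory, and observation."""
        prompt = (
            f"Profile: {self.profile}\n"
            f"Memory: {self.memory[-5:]}\n"
            f"Observation: {observation}\n"
            "Action:"
        )
        action = llm(prompt)
        self.memory.append(f"obs={observation} act={action}")
        return action

class Simulator:
    """A generic round-based simulator: validate each action and retry on error."""

    def __init__(self, agents, llm, validator, max_retries=2):
        self.agents = agents
        self.llm = llm
        self.validator = validator        # returns (ok, reason)
        self.max_retries = max_retries

    def step(self, observation: str) -> List[str]:
        actions = []
        for agent in self.agents:
            action = agent.act(observation, self.llm)
            for _ in range(self.max_retries):
                ok, reason = self.validator(action)
                if ok:
                    break
                # Error correction: feed the failure reason back and regenerate.
                action = agent.act(f"{observation}\nPrevious action rejected: {reason}", self.llm)
            actions.append(action)
        return actions

if __name__ == "__main__":
    mock_llm = lambda prompt: "post a short message"          # stand-in for a real LLM call
    validator = lambda action: (len(action) > 0, "empty action")
    agents = [Agent(i, {"interest": "news"}) for i in range(3)]
    sim = Simulator(agents, mock_llm, validator)
    print(sim.step("A trending topic appears in the feed."))
```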
Related papers
- AI Metropolis: Scaling Large Language Model-based Multi-Agent Simulation with Out-of-order Execution [15.596642151634319]
AI Metropolis is a simulation engine that improves the efficiency of LLM agent simulations by incorporating out-of-order execution scheduling.
Our evaluations demonstrate that AI Metropolis achieves speedups from 1.3x to 4.15x over standard parallel simulation with global synchronization.
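As a rough illustration of out-of-order execution scheduling (not the AI Metropolis implementation; the depends_on and step_agent callbacks are hypothetical), the sketch below lets each agent advance on its own local clock and blocks an agent only when another agent it depends on lags behind, falling back to a global barrier if nothing can progress:

```python
# Illustrative out-of-order stepping for agent simulation (not the AI Metropolis code).
# Instead of a global barrier per tick, each agent keeps its own clock and advances
# whenever no agent it depends on is lagging behind it.
from typing import Callable, Dict

def run_out_of_order(
    clocks: Dict[str, int],                          # agent -> current local time
    depends_on: Callable[[str, str, int], bool],     # does `a` need `b` resolved at time t?
    step_agent: Callable[[str, int], None],          # advance one agent by one tick (e.g. an LLM call)
    horizon: int,
) -> None:
    while any(t < horizon for t in clocks.values()):
        progressed = False
        for a, t in clocks.items():
            if t >= horizon:
                continue
            # a may advance only if every agent it depends on has reached time t.
            blocked = any(
                depends_on(a, b, t) and clocks[b] < t
                for b in clocks if b != a
            )
            if not blocked:
                step_agent(a, t)
                clocks[a] = t + 1
                progressed = True
        if not progressed:           # fall back to a global barrier if a cycle blocks everyone
            for a in clocks:
                if clocks[a] < horizon:
                    step_agent(a, clocks[a])
                    clocks[a] += 1

if __name__ == "__main__":
    clocks = {"alice": 0, "bob": 0, "carol": 0}
    # Toy dependency: alice and bob interact; carol is independent and can run ahead.
    dep = lambda a, b, t: {a, b} == {"alice", "bob"}
    run_out_of_order(clocks, dep, lambda a, t: print(f"step {a} at t={t}"), horizon=3)
```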
arXiv Detail & Related papers (2024-11-05T21:54:14Z)
- Synergistic Simulations: Multi-Agent Problem Solving with Large Language Models [36.571597246832326]
Large Language Models (LLMs) have increasingly demonstrated the ability to facilitate the development of multi-agent systems.
This paper aims to integrate agents & world interaction into a single simulation where multiple agents can work together to solve a problem.
We implement two simulations: a physical studio apartment with two roommates, and another where agents collaborate to complete a programming task.
arXiv Detail & Related papers (2024-09-14T21:53:35Z)
- Sparse Rewards Can Self-Train Dialogue Agents [22.799506097310008]
We introduce a novel self-improvement paradigm that empowers LLM agents to autonomously enhance their performance without external human feedback.
We present ToolWOZ, a sparse reward tool-calling simulation environment derived from MultiWOZ.
We demonstrate that models trained with JOSH, both small and frontier, significantly improve tool-based interactions while preserving general model capabilities across diverse benchmarks.
arXiv Detail & Related papers (2024-09-06T21:00:57Z)
- LLM-Augmented Agent-Based Modelling for Social Simulations: Challenges and Opportunities [0.0]
Integrating large language models with agent-based simulations offers a transformational potential for understanding complex social systems.
We explore architectures and methods to systematically develop LLM-augmented social simulations.
We conclude that integrating LLMs with agent-based simulations offers a powerful toolset for researchers and scientists.
arXiv Detail & Related papers (2024-05-08T08:57:54Z)
- BASES: Large-scale Web Search User Simulation with Large Language Model based Agents [108.97507653131917]
BASES is a novel user simulation framework with large language model (LLM)-based agents.
Our simulation framework can generate unique user profiles at scale, which subsequently leads to diverse search behaviors.
WARRIORS is a new large-scale dataset encompassing web search user behaviors, including both Chinese and English versions.
arXiv Detail & Related papers (2024-02-27T13:44:09Z)
- Large Language Models Empowered Agent-based Modeling and Simulation: A Survey and Perspectives [35.04018349811483]
Integrating large language models into agent-based modeling and simulation presents a promising avenue for enhancing simulation capabilities.
We first introduce the background of agent-based modeling and simulation and large language model-empowered agents.
Most importantly, we provide a comprehensive overview of recent work on large language model-empowered agent-based modeling and simulation in multiple scenarios.
arXiv Detail & Related papers (2023-12-19T09:06:45Z)
- Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
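To make "in-graph simulation on accelerators" concrete, here is a toy JAX sketch, not the Waymax API: the entire multi-step rollout is compiled as one computation via jax.lax.scan, so no Python-level loop runs between steps. The constant-velocity dynamics are made up purely for illustration:

```python
# Toy example of an "in-graph" simulation rollout in JAX (illustrative; not the Waymax API).
import jax
import jax.numpy as jnp

def step(state, _):
    # state: (positions, velocities); a trivial constant-velocity dynamics update.
    pos, vel = state
    return (pos + 0.1 * vel, vel), pos            # carry new state, record positions

@jax.jit
def rollout(init_pos, init_vel, num_steps=20):
    # The whole rollout runs inside one jitted graph on CPU/GPU/TPU.
    _, trajectory = jax.lax.scan(step, (init_pos, init_vel), None, length=num_steps)
    return trajectory                              # shape: (num_steps, num_agents, 2)

traj = rollout(jnp.zeros((8, 2)), jnp.ones((8, 2)))   # 8 agents in 2-D
print(traj.shape)
```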
arXiv Detail & Related papers (2023-10-12T20:49:15Z)
- User Behavior Simulation with Large Language Model based Agents [116.74368915420065]
We propose an LLM-based agent framework and design a sandbox environment to simulate real user behaviors.
Based on extensive experiments, we find that the behaviors simulated by our method closely match those of real humans.
arXiv Detail & Related papers (2023-06-05T02:58:35Z)
- ERMAS: Becoming Robust to Reward Function Sim-to-Real Gaps in Multi-Agent Simulations [110.72725220033983]
Epsilon-Robust Multi-Agent Simulation (ERMAS) is a framework for learning AI policies that are robust to such multiagent sim-to-real gaps.
In particular, ERMAS learns tax policies that are robust to changes in agent risk aversion, improving social welfare by up to 15% in complex spatiotemporal simulations.
arXiv Detail & Related papers (2021-06-10T04:32:20Z)
- TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors [74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show TrafficSim generates significantly more realistic and diverse traffic scenarios as compared to a diverse set of baselines.
arXiv Detail & Related papers (2021-01-17T00:29:30Z)
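For intuition about the implicit latent variable model mentioned in the TrafficSim entry above, a toy Python sketch (not the actual model; the linear decoders are placeholders) samples one scene-level latent and decodes it into an action for every actor, which is what couples their behaviors:

```python
# Toy sketch of a joint latent-variable actor policy (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sample_joint_actions(states, w_state, w_latent, latent_dim=4):
    """Sample one shared scene latent and decode an action for every actor from it."""
    z = rng.normal(size=latent_dim)               # shared scene-level latent
    # Each actor's action depends on its own state plus the common latent,
    # so the sampled behaviors across actors are mutually consistent.
    return states @ w_state + z @ w_latent

states = rng.normal(size=(5, 3))                  # 5 actors, 3 state features each
w_state = rng.normal(size=(3, 2))                 # placeholder state -> action decoder
w_latent = rng.normal(size=(4, 2))                # placeholder latent -> action decoder
print(sample_joint_actions(states, w_state, w_latent))
```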
This list is automatically generated from the titles and abstracts of the papers in this site.