GATSim: Urban Mobility Simulation with Generative Agents
- URL: http://arxiv.org/abs/2506.23306v2
- Date: Fri, 18 Jul 2025 04:20:16 GMT
- Title: GATSim: Urban Mobility Simulation with Generative Agents
- Authors: Qi Liu, Can Li, Wanjing Ma
- Abstract summary: We introduce GATSim, a novel framework to simulate urban mobility using generative agents with rich, human-like behaviors. Unlike conventional approaches, GATSim agents are characterized by diverse socioeconomic profiles, individual lifestyles, and evolving preferences shaped through psychologically informed memory systems. We implement a prototype system and conduct systematic validation, demonstrating that generative agents produce believable and coherent travel behaviors.
- Score: 12.893057419094932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional agent-based urban mobility simulations often rely on rigid rule-based systems that struggle to capture the complexity, adaptability, and behavioral diversity inherent in human travel decision-making. Recent advancements in large language models and AI agent technologies present new opportunities to develop agents with enhanced reasoning capabilities, persistent memory, and adaptive learning. We introduce GATSim (Generative-Agent Transport Simulation), a novel framework that leverages these advancements to simulate urban mobility using generative agents with rich, human-like behaviors. Unlike conventional approaches, GATSim agents are characterized by diverse socioeconomic profiles, individual lifestyles, and evolving preferences shaped through psychologically informed memory systems, tool usage, and lifelong learning. The main contributions of this work are: (1) a comprehensive architecture that integrates an urban mobility foundation model with agent cognitive systems and a transport simulation environment; (2) a hierarchical memory designed for efficient retrieval of contextually relevant information, incorporating spatial and temporal associations, keyword matching, and semantic relevance; (3) innovative planning and reactive mechanisms for modeling adaptive mobility behaviors, which integrate a multi-scale reflection process to transform specific travel experiences into generalized behavioral insights. We implement a prototype system and conduct systematic validation, demonstrating that generative agents produce believable and coherent travel behaviors. Experimental results indicate that generative agents perform at least as well as human annotators with 92% posterior probability, while naturally producing realistic macroscopic traffic patterns. The code for the prototype implementation is publicly available at https://github.com/qiliuchn/gatsim.
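The abstract describes the hierarchical memory only at a high level. As a rough illustration (not the authors' implementation, which is available at https://github.com/qiliuchn/gatsim), a retrieval step of this kind might combine the four signals named in the abstract, temporal recency, spatial proximity, keyword overlap, and semantic relevance, into a single ranking score. All class names, function names, and weights below are assumptions for illustration only.

```python
# Illustrative sketch (not the GATSim implementation): rank memory items by a
# weighted sum of temporal recency, spatial proximity, keyword overlap, and
# semantic relevance, the signals named in the abstract's memory description.
import math
import time
from dataclasses import dataclass

@dataclass
class MemoryItem:
    text: str
    keywords: set            # e.g. {"commute", "bus", "delay"}
    location: tuple          # (x, y) in arbitrary map units
    timestamp: float         # seconds since epoch
    embedding: list          # placeholder semantic vector

def _cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def retrieve(memory, query_keywords, query_location, query_embedding,
             k=3, weights=(0.25, 0.25, 0.25, 0.25), now=None):
    """Return the top-k memory items under assumed equal weights for the four signals."""
    now = now if now is not None else time.time()
    w_time, w_space, w_kw, w_sem = weights
    scored = []
    for item in memory:
        recency = math.exp(-(now - item.timestamp) / 3600.0)        # decays per hour
        proximity = 1.0 / (1.0 + math.dist(item.location, query_location))
        overlap = (len(item.keywords & query_keywords)
                   / max(1, len(item.keywords | query_keywords)))   # Jaccard similarity
        semantic = _cosine(item.embedding, query_embedding)
        score = (w_time * recency + w_space * proximity
                 + w_kw * overlap + w_sem * semantic)
        scored.append((score, item))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:k]]
```

The released repository defines how these signals are actually weighted and how the memory hierarchy is organized; the sketch only shows one plausible way to fuse them.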
Related papers
- CitySim: Modeling Urban Behaviors and City Dynamics with Large-Scale LLM-Driven Agent Simulation [1.2430809884830318]
We envision an urban simulator (CitySim), capitalizing on breakthroughs in human-level intelligence exhibited by large language models. In CitySim, agents generate realistic daily schedules using a value-driven approach that balances mandatory activities, personal habits, and situational factors. CitySim exhibits closer alignment with real humans than prior work, both at micro and macro levels.
arXiv Detail & Related papers (2025-06-26T23:11:42Z)
- CAMS: A CityGPT-Powered Agentic Framework for Urban Human Mobility Simulation [9.907406552578607]
CAMS is an agentic framework that leverages the language-based urban foundation model to simulate human mobility in urban space. CAMS achieves superior performance without relying on externally provided geospatial information.
arXiv Detail & Related papers (2025-06-16T15:24:07Z)
- Modeling Earth-Scale Human-Like Societies with One Billion Agents [54.465233996410156]
Light Society is an agent-based simulation framework. It formalizes social processes as structured transitions of agent and environment states. It supports efficient simulation of societies with over one billion agents.
arXiv Detail & Related papers (2025-06-07T09:14:12Z)
- MobileCity: An Efficient Framework for Large-Scale Urban Behavior Simulation [22.340422693575547]
We present a virtual city that features multiple functional buildings and transportation modes. We then conduct extensive surveys to model behavioral choices and mobility preferences among population groups. We introduce a simulation framework that captures the complexity of urban mobility while remaining scalable, enabling the simulation of over 4,000 agents.
arXiv Detail & Related papers (2025-04-18T07:01:05Z)
- CHARMS: A Cognitive Hierarchical Agent for Reasoning and Motion Stylization in Autonomous Driving [7.672737334176452]
This paper proposes a Cognitive Hierarchical Agent for Reasoning and Motion Stylization (CHARMS). CHARMS captures human-like reasoning patterns through a two-stage training pipeline comprising reinforcement learning pretraining and supervised fine-tuning. It is capable of both making intelligent driving decisions as an ego vehicle and generating diverse, realistic driving scenarios as environment vehicles.
arXiv Detail & Related papers (2025-04-03T10:15:19Z)
- TrajLLM: A Modular LLM-Enhanced Agent-Based Framework for Realistic Human Trajectory Simulation [3.8106509573548286]
This work leverages Large Language Models (LLMs) to simulate human mobility, addressing challenges like high costs and privacy concerns in traditional models. Our hierarchical framework integrates persona generation, activity selection, and destination prediction, using real-world demographic and psychological data.
arXiv Detail & Related papers (2025-02-26T00:13:26Z)
- LMAgent: A Large-scale Multimodal Agents Society for Multi-user Simulation [66.52371505566815]
Large language model (LLM)-based AI agents have made significant progress, enabling them to achieve human-like intelligence. We present LMAgent, a very large-scale multimodal agent society based on multimodal LLMs. In LMAgent, besides chatting with friends, the agents can autonomously browse, purchase, and review products, and even perform live-streaming e-commerce.
arXiv Detail & Related papers (2024-12-12T12:47:09Z)
- GenSim: A General Social Simulation Platform with Large Language Model based Agents [111.00666003559324]
We propose a novel large language model (LLM)-based simulation platform called GenSim. Our platform supports one hundred thousand agents to better simulate large-scale populations in real-world contexts. To our knowledge, GenSim represents an initial step toward a general, large-scale, and correctable social simulation platform.
arXiv Detail & Related papers (2024-10-06T05:02:23Z)
- User Behavior Simulation with Large Language Model based Agents [116.74368915420065]
We propose an LLM-based agent framework and design a sandbox environment to simulate real user behaviors.
Based on extensive experiments, we find that the simulated behaviors of our method are very close to those of real humans.
arXiv Detail & Related papers (2023-06-05T02:58:35Z)
- TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors [74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show TrafficSim generates significantly more realistic and diverse traffic scenarios compared to a diverse set of baselines.
arXiv Detail & Related papers (2021-01-17T00:29:30Z)
- Adaptive Synthetic Characters for Military Training [0.9802137009065037]
Behaviors of synthetic characters in current military simulations are limited since they are generally generated by rule-based and reactive computational models.
This paper introduces a framework that aims to create autonomous synthetic characters that can perform coherent sequences of believable behavior.
arXiv Detail & Related papers (2021-01-06T18:45:48Z)
- How Do We Move: Modeling Human Movement with System Dynamics [34.13127840909941]
We learn human movement with Generative Adversarial Imitation Learning.
We are the first to learn to model the state transition of moving agents with system dynamics.
arXiv Detail & Related papers (2020-03-01T23:43:22Z)
- Learning to Move with Affordance Maps [57.198806691838364]
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
arXiv Detail & Related papers (2020-01-08T04:05:11Z)