CrimeMind: Simulating Urban Crime with Multi-Modal LLM Agents
- URL: http://arxiv.org/abs/2506.05981v2
- Date: Tue, 10 Jun 2025 02:29:07 GMT
- Title: CrimeMind: Simulating Urban Crime with Multi-Modal LLM Agents
- Authors: Qingbin Zeng, Ruotong Zhao, Jinzhu Mao, Haoyang Li, Fengli Xu, Yong Li
- Abstract summary: We propose CrimeMind, a novel framework for simulating urban crime within a multi-modal urban context. A key innovation of our design is the integration of Routine Activity Theory (RAT) into the agentic workflow of CrimeMind. Experiments across four major U.S. cities demonstrate that CrimeMind outperforms both traditional ABMs and deep learning baselines in crime hotspot prediction and spatial distribution accuracy.
- Score: 15.700232503447737
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling urban crime is an important yet challenging task that requires understanding the subtle visual, social, and cultural cues embedded in urban environments. Previous work has mainly focused on rule-based agent-based modeling (ABM) and deep learning methods. ABMs offer interpretability of internal mechanisms but exhibit limited predictive accuracy. In contrast, deep learning methods are often effective in prediction but are less interpretable and require extensive training data. Moreover, both lines of work lack the cognitive flexibility to adapt to changing environments. Leveraging the capabilities of large language models (LLMs), we propose CrimeMind, a novel LLM-driven ABM framework for simulating urban crime within a multi-modal urban context. A key innovation of our design is the integration of Routine Activity Theory (RAT) into the agentic workflow of CrimeMind, enabling it to process rich multi-modal urban features and reason about criminal behavior. However, RAT requires LLM agents to infer subtle cues in evaluating environmental safety as part of assessing guardianship, which can be challenging for LLMs. To address this, we collect a small-scale human-annotated dataset and align CrimeMind's perception with human judgment via a training-free textual gradient method. Experiments across four major U.S. cities demonstrate that CrimeMind outperforms both traditional ABMs and deep learning baselines in crime hotspot prediction and spatial distribution accuracy, achieving up to a 24% improvement over the strongest baseline. Furthermore, we conduct counterfactual simulations of external incidents and policy interventions, and CrimeMind successfully captures the expected changes in crime patterns, demonstrating its ability to reflect counterfactual scenarios. Overall, CrimeMind enables fine-grained modeling of individual behaviors and facilitates evaluation of real-world interventions.
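Routine Activity Theory holds that a crime event requires the convergence of three conditions: a motivated offender, a suitable target, and the absence of a capable guardian. A minimal sketch of how such a decision gate might look inside an agent-based simulation loop is shown below; all names, thresholds, and scoring logic are illustrative assumptions, not CrimeMind's actual implementation.

```python
# Minimal sketch of a Routine Activity Theory (RAT) decision gate for an
# agent-based crime simulation. Hypothetical; not CrimeMind's actual code.
from dataclasses import dataclass


@dataclass
class Scene:
    offender_motivation: float  # in [0, 1], e.g. inferred by an LLM agent
    target_suitability: float   # in [0, 1], from multi-modal urban features
    guardianship: float         # in [0, 1], perceived environmental safety


def crime_occurs(scene: Scene, threshold: float = 0.5) -> bool:
    """RAT: a crime requires a motivated offender and a suitable target
    converging in the absence of a capable guardian."""
    return (scene.offender_motivation > threshold
            and scene.target_suitability > threshold
            and scene.guardianship < threshold)


# High motivation, suitable target, weak guardianship -> crime event
print(crime_occurs(Scene(0.8, 0.7, 0.2)))  # True
# Same scene but strong guardianship -> no crime event
print(crime_occurs(Scene(0.8, 0.7, 0.9)))  # False
```

In the paper's framing, the guardianship term is the hard part: it requires the LLM agent to judge environmental safety from subtle multi-modal cues, which is why the authors align that perception with human annotations.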
Related papers
- PRISON: Unmasking the Criminal Potential of Large Language Models [34.15161583767304]
We quantify large language models' criminal potential across five traits: False Statements, Frame-Up, Psychological Manipulation, Emotional Disguise, and Moral Disengagement. Results show that state-of-the-art LLMs frequently exhibit emergent criminal tendencies, such as proposing misleading statements or evasion tactics. When placed in a detective role, models recognize deceptive behavior with only 44% accuracy on average, revealing a striking mismatch between conducting and detecting criminal behavior.
arXiv Detail & Related papers (2025-06-19T09:06:27Z) - LLM Post-Training: A Deep Dive into Reasoning Large Language Models [131.10969986056]
Large Language Models (LLMs) have transformed the natural language processing landscape and brought to life diverse applications. Post-training methods enable LLMs to refine their knowledge, improve reasoning, enhance factual accuracy, and align more effectively with user intents and ethical considerations.
arXiv Detail & Related papers (2025-02-28T18:59:54Z) - TrajLLM: A Modular LLM-Enhanced Agent-Based Framework for Realistic Human Trajectory Simulation [3.8106509573548286]
This work leverages Large Language Models (LLMs) to simulate human mobility, addressing challenges like high costs and privacy concerns in traditional models. Our hierarchical framework integrates persona generation, activity selection, and destination prediction, using real-world demographic and psychological data.
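The hierarchical pipeline described above (persona generation, then activity selection, then destination prediction) can be sketched as three chained stages. The sketch below is a deterministic toy under assumed rules; every function name and mapping is hypothetical, not TrajLLM's actual API.

```python
# Hypothetical sketch of a hierarchical mobility-simulation pipeline
# (persona generation -> activity selection -> destination prediction).
# All names and rules are illustrative assumptions, not TrajLLM's code.

def generate_persona(age: int, occupation: str) -> dict:
    """Stage 1: build a persona from demographic attributes (the paper
    draws these from real-world demographic and psychological data)."""
    return {"age": age, "occupation": occupation}


def select_activity(persona: dict, hour: int) -> str:
    """Stage 2: choose an activity given the persona and time of day."""
    if persona["occupation"] == "worker" and 9 <= hour < 17:
        return "work"
    return "leisure"


def predict_destination(activity: str) -> str:
    """Stage 3: map the chosen activity to a destination type."""
    return {"work": "office_district", "leisure": "park"}[activity]


persona = generate_persona(age=34, occupation="worker")
activity = select_activity(persona, hour=10)
print(predict_destination(activity))  # office_district
```

The point of the hierarchy is that each stage conditions on the output of the one above it, so persona-level traits propagate down into plausible activity and destination choices.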
arXiv Detail & Related papers (2025-02-26T00:13:26Z) - An Overview of Large Language Models for Statisticians [109.38601458831545]
Large Language Models (LLMs) have emerged as transformative tools in artificial intelligence (AI). This paper explores potential areas where statisticians can make important contributions to the development of LLMs. We focus on issues such as uncertainty quantification, interpretability, fairness, privacy, watermarking, and model adaptation.
arXiv Detail & Related papers (2025-02-25T03:40:36Z) - Latent-Predictive Empowerment: Measuring Empowerment without a Simulator [56.53777237504011]
We present Latent-Predictive Empowerment (LPE), an algorithm that can compute empowerment in a more practical manner.
LPE learns large skillsets by maximizing an objective that is a principled replacement for the mutual information between skills and states.
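For context, empowerment is conventionally defined as the channel capacity between a skill (or action sequence) $Z$ and the resulting states $S'$, i.e. the maximal mutual information achievable over skill distributions. This is the textbook definition the summary alludes to, not necessarily LPE's exact objective:

```latex
\mathcal{E}(s) \;=\; \max_{p(z)} I(Z; S' \mid s)
\;=\; \max_{p(z)} \bigl[ H(S' \mid s) - H(S' \mid Z, s) \bigr]
```

LPE's contribution, per the summary, is replacing this mutual-information objective with a principled surrogate that can be computed without a simulator.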
arXiv Detail & Related papers (2024-10-15T00:41:18Z) - Machine Learning for Public Good: Predicting Urban Crime Patterns to Enhance Community Safety [0.0]
This paper explores the effectiveness of ML techniques to predict spatial and temporal patterns of crimes in urban areas.
The research goal is to achieve a high degree of accuracy in categorizing calls into priority levels.
arXiv Detail & Related papers (2024-09-17T02:07:14Z) - PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, a framework for better data construction and model tuning. For insufficient data usage, we incorporate strategies such as Chain-of-Thought prompting and anti-induction. For rigid behavior patterns, we design the tuning process and introduce automated DPO to enhance the specificity and dynamism of the models' personalities.
arXiv Detail & Related papers (2024-07-17T08:13:22Z) - Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning [61.2224355547598]
Open-sourcing of large language models (LLMs) accelerates application development, innovation, and scientific progress, and is often assumed safe because base models lack instruction tuning.
Our investigation exposes a critical oversight in this belief.
By deploying carefully designed demonstrations, we show that base LLMs can effectively interpret and execute malicious instructions.
arXiv Detail & Related papers (2024-04-16T13:22:54Z) - Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z) - Spatial-Temporal Meta-path Guided Explainable Crime Prediction [40.03641583647572]
We present a Spatial-Temporal Metapath guided Explainable Crime prediction (STMEC) framework to capture dynamic patterns of crime behaviours.
We show the superiority of STMEC compared with other advanced spatiotemporal models, especially in predicting felonies.
arXiv Detail & Related papers (2022-05-04T05:42:23Z) - Spatial-Temporal Hypergraph Self-Supervised Learning for Crime Prediction [60.508960752148454]
This work proposes a Spatial-Temporal Hypergraph Self-Supervised Learning framework to tackle the label scarcity issue in crime prediction.
We propose the cross-region hypergraph structure learning to encode region-wise crime dependency under the entire urban space.
We also design the dual-stage self-supervised learning paradigm, to not only jointly capture local- and global-level spatial-temporal crime patterns, but also supplement the sparse crime representation by augmenting region self-discrimination.
arXiv Detail & Related papers (2022-04-18T23:46:01Z) - AIST: An Interpretable Attention-based Deep Learning Model for Crime Prediction [0.30458514384586394]
We develop AIST, an Attention-based Interpretable Spatio-Temporal Network for crime prediction.
AIST models the dynamic spatial dependency and temporal patterns of a specific crime category.
Experiments show the superiority of our model in terms of both accuracy and interpretability using real datasets.
arXiv Detail & Related papers (2020-12-16T03:01:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.