Large language models for spreading dynamics in complex systems
- URL: http://arxiv.org/abs/2602.08085v1
- Date: Sun, 08 Feb 2026 18:58:43 GMT
- Title: Large language models for spreading dynamics in complex systems
- Authors: Shuyu Jiang, Hao Ren, Yichang Gao, Yi-Cheng Zhang, Li Qi, Dayong Xiao, Jie Fan, Rui Tang, Wei Wang,
- Abstract summary: Spreading dynamics is a central topic in the physics of complex systems and network science. Large language models (LLMs) have exhibited strong capabilities in natural language understanding, reasoning, and generation. LLMs can act as interactive agents embedded in propagation systems, potentially influencing spreading pathways and feedback structures.
- Score: 15.581915022853337
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Spreading dynamics is a central topic in the physics of complex systems and network science, providing a unified framework for understanding how information, behaviors, and diseases propagate through interactions among system units. In many propagation contexts, spreading processes are influenced by multiple interacting factors, such as information expression patterns, cultural contexts, living environments, cognitive preferences, and public policies, which are difficult to incorporate directly into classical modeling frameworks. Recently, large language models (LLMs) have exhibited strong capabilities in natural language understanding, reasoning, and generation, enabling explicit perception of semantic content and contextual cues in spreading processes and thereby supporting the analysis of these influencing factors. Beyond serving as external analytical tools, LLMs can also act as interactive agents embedded in propagation systems, potentially influencing spreading pathways and feedback structures. Consequently, the roles and impacts of LLMs on spreading dynamics have become an active and rapidly growing research area across multiple disciplines. This review provides a comprehensive overview of recent advances in applying LLMs to the study of spreading dynamics across two representative domains: digital epidemics, such as misinformation and rumors, and biological epidemics, including infectious disease outbreaks. We first examine the foundations of epidemic modeling from a complex-systems perspective and discuss how LLM-based approaches relate to traditional frameworks. We then systematically review recent studies from three key perspectives: epidemic modeling, epidemic detection and surveillance, and epidemic prediction and management, to clarify how LLMs enhance these areas. Finally, open challenges and potential research directions are discussed.
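The "classical modeling frameworks" the abstract contrasts with LLM-based approaches are typically compartmental models such as SIR. As a point of reference, here is a minimal forward-Euler sketch of SIR dynamics; the parameter values are illustrative and not taken from the paper.

```python
# Minimal SIR compartmental model integrated with forward Euler.
# beta: transmission rate, gamma: recovery rate; populations are fractions.
# All parameter values below are illustrative, not drawn from the reviewed work.

def simulate_sir(beta, gamma, s0, i0, r0, dt=0.1, steps=1000):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(steps):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Example run: R0 = beta/gamma = 3, so an outbreak occurs.
traj = simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, r0=0.0)
peak_infected = max(i for _, i, _ in traj)
```

Factors such as cultural context or policy feedback have no natural place in these three equations, which is the gap the review argues LLMs can help address.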
Related papers
- A time for monsters: Organizational knowing after LLMs [0.0]
Large Language Models (LLMs) are reshaping organizational knowing by unsettling foundations of representational and practice-based perspectives. We conceptualize LLMs as Haraway-ian monsters, that is, hybrid, boundary-crossing entities that destabilize established categories while opening new possibilities for inquiry.
arXiv Detail & Related papers (2025-11-19T14:07:47Z) - MTOS: A LLM-Driven Multi-topic Opinion Simulation Framework for Exploring Echo Chamber Dynamics [4.784214920683191]
In real-world networks, information often spans multiple interrelated topics, posing challenges for opinion evolution. Existing studies based on large language models (LLMs) focus largely on single topics, limiting the capture of cognitive transfer in multi-topic, cross-domain contexts. Traditional numerical models, meanwhile, simplify complex linguistic attitudes into discrete values, lacking interpretability, behavioral consistency, and the ability to integrate multiple topics. We propose Multi-topic Opinion Simulation (MTOS), a social simulation framework integrating multi-topic contexts with LLMs.
arXiv Detail & Related papers (2025-10-14T11:59:47Z) - From Perception to Cognition: A Survey of Vision-Language Interactive Reasoning in Multimodal Large Language Models [66.36007274540113]
Multimodal Large Language Models (MLLMs) strive to achieve a profound, human-like understanding of and interaction with the physical world. However, they often exhibit a shallow and incoherent integration when acquiring information (Perception) and conducting reasoning (Cognition). This survey introduces a novel and unified analytical framework: "From Perception to Cognition."
arXiv Detail & Related papers (2025-09-29T18:25:40Z) - Revealing Multimodal Causality with Large Language Models [80.95511545591107]
We propose MLLM-CD, a novel framework for multimodal causal discovery from unstructured data. It consists of three key components: (1) a novel contrastive factor discovery module to identify genuine multimodal factors; (2) a statistical causal structure discovery module to infer causal relationships among discovered factors; and (3) an iterative multimodal counterfactual reasoning module to refine the discovery outcomes. Extensive experiments on both synthetic and real-world datasets demonstrate the effectiveness of the proposed MLLM-CD.
arXiv Detail & Related papers (2025-09-22T13:45:17Z) - Unraveling the cognitive patterns of Large Language Models through module communities [45.399985422756224]
Large Language Models (LLMs) have reshaped our world with significant advancements in science, engineering, and society. Despite their ubiquity and utility, the underlying mechanisms of LLMs remain concealed within billions of parameters and complex structures. We address this gap by adopting approaches to understanding emerging cognition in biology.
arXiv Detail & Related papers (2025-08-25T16:49:38Z) - How do Large Language Models Understand Relevance? A Mechanistic Interpretability Perspective [64.00022624183781]
Large language models (LLMs) can assess relevance and support information retrieval (IR) tasks. We investigate how different LLM modules contribute to relevance judgment through the lens of mechanistic interpretability.
arXiv Detail & Related papers (2025-04-10T16:14:55Z) - Large Language Models for Zero-shot Inference of Causal Structures in Biology [4.650342334505084]
We present a framework to evaluate large language models (LLMs) for zero-shot inference of causal relationships in biology. We systematically evaluate causal claims obtained from an LLM using real-world interventional data. Our results show that even relatively small LLMs can capture meaningful aspects of causal structure in biological systems.
arXiv Detail & Related papers (2025-03-06T11:43:30Z) - BioMaze: Benchmarking and Enhancing Large Language Models for Biological Pathway Reasoning [49.487327661584686]
We introduce BioMaze, a dataset with 5.1K complex pathway problems from real research. Our evaluation of methods such as CoT and graph-augmented reasoning shows that LLMs struggle with pathway reasoning. To address this, we propose PathSeeker, an LLM agent that enhances reasoning through interactive subgraph-based navigation.
arXiv Detail & Related papers (2025-02-23T17:38:10Z) - A Survey on Mechanistic Interpretability for Multi-Modal Foundation Models [74.48084001058672]
The rise of foundation models has transformed machine learning research. Multimodal foundation models (MMFMs) pose unique interpretability challenges beyond unimodal frameworks. This survey explores two key aspects: (1) the adaptation of LLM interpretability methods to multimodal models and (2) understanding the mechanistic differences between unimodal language models and cross-modal systems.
arXiv Detail & Related papers (2025-02-22T20:55:26Z) - Digital Epidemiology: A review [0.0]
Epidemiology has recently witnessed great advances based on computational models.
Big Data, along with apps, enables models to be validated and refined with real-world data at scale.
Epidemics such as Ebola have to be approached through the lens of complexity, as they require systemic solutions.
arXiv Detail & Related papers (2021-04-08T08:45:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.