Simulating Generative Social Agents via Theory-Informed Workflow Design
- URL: http://arxiv.org/abs/2508.08726v1
- Date: Tue, 12 Aug 2025 08:14:48 GMT
- Title: Simulating Generative Social Agents via Theory-Informed Workflow Design
- Authors: Yuwei Yan, Jinghua Piao, Xiaochong Lan, Chenyang Shao, Pan Hui, Yong Li
- Abstract summary: We propose a theory-informed framework that provides a systematic design process for social agents. Our framework is grounded in principles from Social Cognition Theory and introduces three key modules: motivation, action planning, and learning. Experiments demonstrate that our theory-driven agents reproduce realistic human behavior patterns under complex conditions.
- Score: 11.992123170134185
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in large language models have demonstrated strong reasoning and role-playing capabilities, opening new opportunities for agent-based social simulations. However, most existing agent implementations are tailored to specific scenarios, with no unified framework to guide their design. The lack of a general social agent limits generalization across different social contexts and the production of consistent, realistic behaviors. To address this challenge, we propose a theory-informed framework that provides a systematic design process for LLM-based social agents. Our framework is grounded in principles from Social Cognition Theory and introduces three key modules: motivation, action planning, and learning. These modules jointly enable agents to reason about their goals, plan coherent actions, and adapt their behavior over time, leading to more flexible and contextually appropriate responses. Comprehensive experiments demonstrate that our theory-driven agents reproduce realistic human behavior patterns under complex conditions, achieving up to 75% lower deviation from real-world behavioral data across multiple fidelity metrics compared to classical generative baselines. Ablation studies further show that removing the motivation, planning, or learning module increases errors by 1.5 to 3.2 times, confirming their distinct and essential contributions to generating realistic and coherent social behaviors.
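The abstract describes an agent architecture built from three cooperating modules: motivation (goal selection), action planning, and learning (behavioral adaptation). A minimal sketch of how such a loop could be wired together is below; the class, the need/action names, and the update rule are illustrative assumptions, not the authors' implementation (which drives these modules with an LLM rather than hand-coded rules).

```python
# Hypothetical sketch of a three-module social agent loop
# (motivation -> action planning -> learning), following the
# abstract's description. All names and logic are assumptions.
from dataclasses import dataclass, field


@dataclass
class SocialAgent:
    # Drive strengths in [0, 1]; higher means more pressing.
    needs: dict = field(default_factory=lambda: {"social": 0.8, "rest": 0.2})
    memory: list = field(default_factory=list)

    def motivation(self) -> str:
        # Motivation module: select the most pressing need as the goal.
        return max(self.needs, key=self.needs.get)

    def plan(self, goal: str) -> list:
        # Action-planning module: map the goal to a coherent action sequence.
        actions = {
            "social": ["message friend", "join group chat"],
            "rest": ["log off"],
        }
        return actions[goal]

    def learn(self, goal: str, action: str, satisfied: bool) -> None:
        # Learning module: record the outcome and damp satisfied drives,
        # so behavior adapts over repeated interactions.
        self.memory.append((action, satisfied))
        if satisfied:
            self.needs[goal] = max(0.0, self.needs[goal] - 0.5)


agent = SocialAgent()
goal = agent.motivation()              # "social" dominates initially
for action in agent.plan(goal):
    agent.learn(goal, action, satisfied=True)
# After the social need is satisfied, the next goal shifts to "rest".
```

In the paper's framework these decisions would be produced by LLM reasoning rather than lookup tables; the sketch only shows how the three modules feed one another in a single decision cycle.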
Related papers
- Agentic Reasoning for Large Language Models [122.81018455095999]
Reasoning is a fundamental cognitive process underlying inference, problem-solving, and decision-making. Large language models (LLMs) demonstrate strong reasoning capabilities in closed-world settings, but struggle in open-ended and dynamic environments. Agentic reasoning marks a paradigm shift by reframing LLMs as autonomous agents that plan, act, and learn through continual interaction.
arXiv Detail & Related papers (2026-01-18T18:58:23Z) - SVBench: Evaluation of Video Generation Models on Social Reasoning [35.06131184286366]
We introduce the first benchmark for social reasoning in video generation. We develop a fully training-free agent-based pipeline that distills the reasoning mechanism of each experiment. We conduct the first large-scale study across seven state-of-the-art video generation systems.
arXiv Detail & Related papers (2025-12-25T04:44:59Z) - Social World Model-Augmented Mechanism Design Policy Learning [58.739456918502704]
We introduce SWM-AP (Social World Model-Augmented Mechanism Design Policy Learning), which learns a social world model hierarchically to enhance mechanism design. We show that SWM-AP outperforms established model-based and model-free RL baselines in cumulative rewards and sample efficiency.
arXiv Detail & Related papers (2025-10-22T06:01:21Z) - Emotional Cognitive Modeling Framework with Desire-Driven Objective Optimization for LLM-empowered Agent in Social Simulation [9.34696493928592]
This paper constructs an emotional cognition framework incorporating desire generation and objective management. It models the complete decision-making process of LLM-based agents, encompassing state evolution, desire generation, objective optimization, decision generation, and action execution. Experimental results demonstrate that agents governed by our framework exhibit behaviors congruent with their emotional states and, in comparative assessments against other agent types, show superior ecological validity and generate decision outcomes that more closely approximate human behavioral patterns.
arXiv Detail & Related papers (2025-10-15T06:33:11Z) - Implicit Behavioral Alignment of Language Agents in High-Stakes Crowd Simulations [3.0112218223206173]
Language-driven generative agents have enabled social simulations with transformative uses, from interpersonal training to aiding global policy-making. Recent studies indicate that generative agent behaviors often deviate from expert expectations and real-world data, a phenomenon we term the Behavior-Realism Gap. We introduce a theoretical framework called Persona-Environment Behavioral Alignment (PEBA), formulated as a distribution matching problem grounded in Lewin's behavior equation. We propose PersonaEvolve (PEvo), an LLM-based optimization algorithm that iteratively refines agent personas, implicitly aligning their collective behaviors with realistic expert benchmarks within a specified environmental context.
arXiv Detail & Related papers (2025-09-19T22:35:13Z) - DynamiX: Large-Scale Dynamic Social Network Simulator [101.65679342680542]
DynamiX is a novel large-scale social network simulator dedicated to dynamic social network modeling. For opinion leaders, we propose an information-stream-based link prediction method recommending potential users with similar stances. For ordinary users, we construct an inequality-oriented behavior decision-making module.
arXiv Detail & Related papers (2025-07-26T12:13:30Z) - LLM-Based Social Simulations Require a Boundary [3.351170542925928]
This position paper argues that large language model (LLM)-based social simulations should establish clear boundaries. We examine three key boundary problems: alignment (simulated behaviors matching real-world patterns), consistency (maintaining coherent agent behavior over time), and robustness.
arXiv Detail & Related papers (2025-06-24T17:14:47Z) - Position: Simulating Society Requires Simulating Thought [9.150119344618497]
Simulating society with large language models (LLMs) requires cognitively grounded reasoning that is structured, revisable, and traceable. We present a conceptual modeling paradigm, Generative Minds (GenMinds), which draws from cognitive science to support structured belief representations in generative agents. These contributions advance a broader shift from surface-level mimicry to generative agents that simulate thought, not just language, for social simulations.
arXiv Detail & Related papers (2025-06-08T00:59:02Z) - Modeling Earth-Scale Human-Like Societies with One Billion Agents [54.465233996410156]
Light Society is an agent-based simulation framework. It formalizes social processes as structured transitions of agent and environment states, and supports efficient simulation of societies with over one billion agents.
arXiv Detail & Related papers (2025-06-07T09:14:12Z) - SocialMaze: A Benchmark for Evaluating Social Reasoning in Large Language Models [41.68365456601248]
We introduce SocialMaze, a new benchmark specifically designed to evaluate social reasoning. SocialMaze systematically incorporates three core challenges: deep reasoning, dynamic interaction, and information uncertainty. It provides six diverse tasks across three key settings: social reasoning games, daily-life interactions, and digital community platforms.
arXiv Detail & Related papers (2025-05-29T17:47:36Z) - SocioVerse: A World Model for Social Simulation Powered by LLM Agents and A Pool of 10 Million Real-World Users [70.02370111025617]
We introduce SocioVerse, an agent-driven world model for social simulation. Our framework features four powerful alignment components and a user pool of 10 million real individuals. Results demonstrate that SocioVerse can reflect large-scale population dynamics while ensuring diversity, credibility, and representativeness.
arXiv Detail & Related papers (2025-04-14T12:12:52Z) - GenSim: A General Social Simulation Platform with Large Language Model based Agents [111.00666003559324]
We propose GenSim, a novel simulation platform based on large language models (LLMs). Our platform supports one hundred thousand agents to better simulate large-scale populations in real-world contexts. To our knowledge, GenSim represents an initial step toward a general, large-scale, and correctable social simulation platform.
arXiv Detail & Related papers (2024-10-06T05:02:23Z) - Training Socially Aligned Language Models on Simulated Social Interactions [99.39979111807388]
Social alignment in AI systems aims to ensure that these models behave according to established societal values.
Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
arXiv Detail & Related papers (2023-05-26T14:17:36Z) - Models we Can Trust: Toward a Systematic Discipline of (Agent-Based) Model Interpretation and Validation [0.0]
We advocate the development of a discipline of interacting with and extracting information from models.
We outline some directions for the development of such a discipline.
arXiv Detail & Related papers (2021-02-23T10:52:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.