Predicting Effects, Missing Distributions: Evaluating LLMs as Human Behavior Simulators in Operations Management
- URL: http://arxiv.org/abs/2510.03310v1
- Date: Tue, 30 Sep 2025 20:20:58 GMT
- Title: Predicting Effects, Missing Distributions: Evaluating LLMs as Human Behavior Simulators in Operations Management
- Authors: Runze Zhang, Xiaowei Zhang, Mingyang Zhao
- Abstract summary: LLMs are emerging tools for simulating human behavior in business, economics, and social science. This paper evaluates how well LLMs replicate human behavior in operations management.
- Score: 11.302500716500893
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: LLMs are emerging tools for simulating human behavior in business, economics, and social science, offering a lower-cost complement to laboratory experiments, field studies, and surveys. This paper evaluates how well LLMs replicate human behavior in operations management. Using nine published experiments in behavioral operations, we assess two criteria: replication of hypothesis-test outcomes and distributional alignment via Wasserstein distance. LLMs reproduce most hypothesis-level effects, capturing key decision biases, but their response distributions diverge from human data, including for strong commercial models. We also test two lightweight interventions -- chain-of-thought prompting and hyperparameter tuning -- which reduce misalignment and can sometimes let smaller or open-source models match or surpass larger systems.
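To make the paper's two evaluation criteria concrete, the sketch below (not the authors' code) checks a hypothesis-level effect by comparing sample means and measures distributional alignment with the one-dimensional Wasserstein distance from SciPy. The data, scales, and the numpy/scipy usage are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: hypothesis-level check plus Wasserstein-distance alignment
# between human and LLM-simulated decisions. All numbers are hypothetical.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical human decisions from a behavioral-operations experiment
# (e.g., quantities chosen on a 0-100 scale).
human = rng.normal(loc=62, scale=12, size=200).clip(0, 100)

# Hypothetical LLM-simulated decisions: the mean effect is roughly reproduced,
# but the spread is too narrow -- the kind of mismatch the abstract describes.
llm = rng.normal(loc=60, scale=4, size=200).clip(0, 100)

# Criterion 1 (hypothesis-level): compare simulated and human means.
print(f"human mean = {human.mean():.1f}, llm mean = {llm.mean():.1f}")

# Criterion 2 (distributional): 1-D Wasserstein (earth mover's) distance;
# 0 means identical empirical distributions, larger means more divergence.
print(f"Wasserstein distance = {wasserstein_distance(human, llm):.2f}")
```

Because the Wasserstein distance compares whole distributions, a simulator can reproduce a mean effect and pass the hypothesis-level check while still scoring poorly on alignment, which is exactly the gap the abstract reports.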
Related papers
- This human study did not involve human subjects: Validating LLM simulations as behavioral evidence [15.56427716190418]
Heuristic approaches seek to establish that simulated and observed human behavior are interchangeable. Statistical calibration combines auxiliary human data with statistical adjustments to account for discrepancies between observed and simulated responses.
arXiv Detail & Related papers (2026-02-17T18:18:38Z) - Can Finetuning LLMs on Small Human Samples Increase Heterogeneity, Alignment, and Belief-Action Coherence? [9.310571879281186]
Large language models (LLMs) can serve as substitutes for human participants in survey and experimental research. LLMs often fail to align with real human behavior, exhibiting limited diversity, systematic misalignment for minority subgroups, insufficient within-group variance, and discrepancies between stated beliefs and actions. This study examines whether fine-tuning on a small subset of human survey data, such as that obtainable from a pilot study, can mitigate these issues and yield realistic simulated outcomes.
arXiv Detail & Related papers (2025-11-26T09:50:42Z) - Large language models replicate and predict human cooperation across experiments in game theory [0.8166364251367626]
How closely large language models mirror actual human decision-making remains poorly understood. We develop a digital twin of game-theoretic experiments and introduce a systematic prompting and probing framework for machine-behavioral evaluation. We find that Llama reproduces human cooperation patterns with high fidelity, capturing human deviations from rational choice theory.
arXiv Detail & Related papers (2025-11-06T16:21:27Z) - Leveraging LLM-based agents for social science research: insights from citation network simulations [132.4334196445918]
We introduce the CiteAgent framework, designed to generate citation networks based on human-behavior simulation. CiteAgent captures predominant phenomena in real-world citation networks, including power-law distribution, citational distortion, and shrinking diameter. We establish two LLM-based research paradigms in social science, allowing us to validate and challenge existing theories.
arXiv Detail & Related papers (2025-11-05T08:47:04Z) - Mitigating Spurious Correlations in LLMs via Causality-Aware Post-Training [57.03005244917803]
Large language models (LLMs) often fail on out-of-distribution (OOD) samples due to spurious correlations acquired during pre-training. Here, we aim to mitigate such spurious correlations through causality-aware post-training (CAPT). Experiments on the formal causal inference benchmark CLadder and the logical reasoning dataset PrOntoQA show that 3B-scale language models fine-tuned with CAPT can outperform both traditional SFT and larger LLMs on in-distribution (ID) and OOD tasks.
arXiv Detail & Related papers (2025-06-11T06:30:28Z) - Can Generative AI agents behave like humans? Evidence from laboratory market experiments [0.0]
We explore the potential of Large Language Models to replicate human behavior in economic market experiments. We compare LLM behavior to market dynamics observed in laboratory settings and assess their alignment with human participants' behavior. These results suggest that LLMs hold promise as tools for simulating realistic human behavior in economic contexts.
arXiv Detail & Related papers (2025-05-12T11:44:46Z) - Prompting is Not All You Need! Evaluating LLM Agent Simulation Methodologies with Real-World Online Customer Behavior Data [62.61900377170456]
We focus on evaluating LLMs' "objective accuracy" rather than the subjective "believability" in simulating human behavior. We present the first comprehensive evaluation of state-of-the-art LLMs on the task of web shopping action generation.
arXiv Detail & Related papers (2025-03-26T17:33:27Z) - Evaluating Interventional Reasoning Capabilities of Large Language Models [58.52919374786108]
Large language models (LLMs) are used to automate decision-making tasks. In this paper, we evaluate whether LLMs can accurately update their knowledge of a data-generating process in response to an intervention. We create benchmarks that span diverse causal graphs (e.g., confounding, mediation) and variable types. These benchmarks allow us to isolate the ability of LLMs to accurately predict changes resulting from an intervention, as distinct from their ability to memorize facts or find other shortcuts.
arXiv Detail & Related papers (2024-04-08T14:15:56Z) - Explaining Large Language Models Decisions Using Shapley Values [1.223779595809275]
Large language models (LLMs) have opened up exciting possibilities for simulating human behavior and cognitive processes.
However, the validity of utilizing LLMs as stand-ins for human subjects remains uncertain.
This paper presents a novel approach based on Shapley values to interpret LLM behavior and quantify the relative contribution of each prompt component to the model's output.
arXiv Detail & Related papers (2024-03-29T22:49:43Z) - A Theory of Response Sampling in LLMs: Part Descriptive and Part Prescriptive [53.08398658452411]
Large Language Models (LLMs) are increasingly utilized in autonomous decision-making. We show that their response-sampling behavior resembles that of human decision-making. This deviation of a sample from the statistical norm towards a prescriptive component appears consistently in concepts across diverse real-world domains.
arXiv Detail & Related papers (2024-02-16T18:28:43Z) - Systematic Biases in LLM Simulations of Debates [12.933509143906141]
We study the limitations of Large Language Models in simulating human interactions. Our findings indicate a tendency for LLM agents to conform to the model's inherent social biases. These results underscore the need for further research to develop methods that help agents overcome these biases.
arXiv Detail & Related papers (2024-02-06T14:51:55Z) - Are You Sure? Challenging LLMs Leads to Performance Drops in The FlipFlop Experiment [82.60594940370919]
We propose the FlipFlop experiment to study the multi-turn behavior of Large Language Models (LLMs).
We show that models flip their answers on average 46% of the time and that all models see a deterioration of accuracy between their first and final prediction, with an average drop of 17% (the FlipFlop effect).
We conduct finetuning experiments on an open-source LLM and find that finetuning on synthetically created data can mitigate sycophantic behavior (reducing performance deterioration by 60%) but not resolve it entirely.
arXiv Detail & Related papers (2023-11-14T23:40:22Z) - Do LLMs exhibit human-like response biases? A case study in survey design [66.1850490474361]
We investigate the extent to which large language models (LLMs) reflect human response biases, if at all.
We design a dataset and framework to evaluate whether LLMs exhibit human-like response biases in survey questionnaires.
Our comprehensive evaluation of nine models shows that popular open and commercial LLMs generally fail to reflect human-like behavior.
arXiv Detail & Related papers (2023-11-07T15:40:43Z) - On Learning to Summarize with Large Language Models as References [101.79795027550959]
Summaries generated by large language models (LLMs) are favored by human annotators over the original reference summaries in commonly used summarization datasets.
We study an LLM-as-reference learning setting for smaller text summarization models to investigate whether their performance can be substantially improved.
arXiv Detail & Related papers (2023-05-23T16:56:04Z)