Socio-Economic Model of AI Agents
- URL: http://arxiv.org/abs/2509.23270v1
- Date: Sat, 27 Sep 2025 11:56:48 GMT
- Title: Socio-Economic Model of AI Agents
- Authors: Yuxinyue Qian, Jun Liu
- Abstract summary: We study the impact of AI collaboration under resource constraints on aggregate social output. We find that the introduction of AI agents can significantly increase aggregate social output.
- Score: 6.345776306229298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern socio-economic systems are undergoing deep integration with artificial intelligence technologies. This paper constructs a heterogeneous agent-based modeling framework that incorporates both human workers and autonomous AI agents, to study the impact of AI collaboration under resource constraints on aggregate social output. We build five progressively extended models: Model 1 serves as the baseline of pure human collaboration; Model 2 introduces AI as collaborators; Model 3 incorporates network effects among agents; Model 4 treats agents as independent producers; and Model 5 integrates both network effects and independent agent production. Through theoretical derivation and simulation analysis, we find that the introduction of AI agents can significantly increase aggregate social output. When considering network effects among agents, this increase exhibits nonlinear growth far exceeding the simple sum of individual contributions. Under the same resource inputs, treating agents as independent producers provides higher long-term growth potential; introducing network effects further demonstrates strong characteristics of increasing returns to scale.
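The five-model progression in the abstract can be illustrated with a minimal sketch. The paper's exact production functions are not given in the abstract, so the functional forms below (a linear human baseline, a multiplicative AI-augmentation factor, and a quadratic Metcalfe-style network term) are hypothetical assumptions, chosen only to show why network effects among agents can produce gains exceeding the sum of individual contributions:

```python
# Illustrative sketch only: functional forms are assumptions, not the paper's models.

def output_baseline(humans, productivity=1.0):
    """Model 1: pure human collaboration -- output linear in human labor."""
    return productivity * humans

def output_with_ai(humans, agents, productivity=1.0, ai_boost=0.5):
    """Model 2: AI agents as collaborators that augment each human's output."""
    return productivity * humans * (1.0 + ai_boost * agents / max(humans, 1))

def output_network(humans, agents, productivity=1.0, ai_boost=0.5, net=0.01):
    """Model 3: add a network-effect term that grows with pairwise agent links,
    so output rises quadratically (not linearly) in the number of agents."""
    links = agents * (agents - 1) / 2
    return output_with_ai(humans, agents, productivity, ai_boost) + net * links

h, a = 100, 50
base = output_baseline(h)        # 100.0
with_ai = output_with_ai(h, a)   # 125.0: collaboration lifts output
networked = output_network(h, a) # 137.25: quadratic link term adds more
```

Under these assumed forms, doubling the agent count roughly quadruples the network term, which is the flavor of increasing returns to scale the abstract reports.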
Related papers
- AgentEvolver: Towards Efficient Self-Evolving Agent System [51.54882384204726]
We present AgentEvolver, a self-evolving agent system that drives autonomous agent learning. AgentEvolver introduces three synergistic mechanisms: self-questioning, self-navigating, and self-attributing. Preliminary experiments indicate that AgentEvolver achieves more efficient exploration, better sample utilization, and faster adaptation compared to traditional RL-based baselines.
arXiv Detail & Related papers (2025-11-13T15:14:47Z) - Social World Model-Augmented Mechanism Design Policy Learning [58.739456918502704]
We introduce SWM-AP (Social World Model-Augmented Mechanism Design Policy Learning), which learns a social world model hierarchically to enhance mechanism design. We show that SWM-AP outperforms established model-based and model-free RL baselines in cumulative rewards and sample efficiency.
arXiv Detail & Related papers (2025-10-22T06:01:21Z) - Modeling AI-Driven Production and Competitiveness A Multi-Agent Economic Simulation of China and the United States [6.345776306229298]
With the rapid development of artificial intelligence (AI) technology, socio-economic systems are entering a new stage of "human-AI co-creation". This paper conducts simulation-based comparisons of macroeconomic output evolution in China and the United States under different mechanisms. The results show that when AI functions as an independent productive entity, the overall growth rate of social output far exceeds that of traditional human-labor-based models.
arXiv Detail & Related papers (2025-10-13T07:28:14Z) - The Collaboration Paradox: Why Generative AI Requires Both Strategic Intelligence and Operational Stability in Supply Chain Management [0.0]
The rise of autonomous, AI-driven agents in economic settings raises critical questions about their emergent strategic behavior. This paper investigates these dynamics in the cooperative context of a multi-echelon supply chain. Our central finding is the "collaboration paradox": a novel, catastrophic failure mode where theoretically superior collaborative AI agents perform even worse than non-AI baselines.
arXiv Detail & Related papers (2025-08-19T15:31:23Z) - Agentic Web: Weaving the Next Web with AI Agents [109.13815627467514]
The emergence of AI agents powered by large language models (LLMs) marks a pivotal shift toward the Agentic Web. In this paradigm, agents interact directly with one another to plan, coordinate, and execute complex tasks on behalf of users. We present a structured framework for understanding and building the Agentic Web.
arXiv Detail & Related papers (2025-07-28T17:58:12Z) - Modeling AI-Human Collaboration as a Multi-Agent Adaptation [0.0]
We develop an agent-based simulation to formalize AI-human collaboration as a function of a task. We show that in modular tasks, AI often substitutes for humans - delivering higher payoffs unless human expertise is very high. We also show that even "hallucinatory" AI - lacking memory or structure - can improve outcomes when augmenting low-capability humans by helping escape local optima.
arXiv Detail & Related papers (2025-04-29T16:19:53Z) - Variance reduction in output from generative AI [11.248899695350323]
We demonstrate that generative AI models are inherently prone to the phenomenon of "regression toward the mean". We discuss potential social implications of this phenomenon across three levels - societal, group, and individual - and two dimensions - material and non-material.
arXiv Detail & Related papers (2025-03-02T21:34:10Z) - Designing AI-Agents with Personalities: A Psychometric Approach [2.854338743097065]
We introduce a methodology for assigning quantifiable and psychometrically validated personalities to AI-Agents. Across three studies, we evaluate its feasibility and limitations.
arXiv Detail & Related papers (2024-10-25T01:05:04Z) - xLAM: A Family of Large Action Models to Empower AI Agent Systems [111.5719694445345]
We release xLAM, a series of large action models designed for AI agent tasks.
xLAM consistently delivers exceptional performance across multiple agent ability benchmarks.
arXiv Detail & Related papers (2024-09-05T03:22:22Z) - Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Model (LLM)-based human-agent collaboration for complex task-solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
arXiv Detail & Related papers (2024-02-20T11:03:36Z) - ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - Modeling Bounded Rationality in Multi-Agent Simulations Using Rationally Inattentive Reinforcement Learning [85.86440477005523]
We study more human-like RL agents which incorporate an established model of human irrationality, the Rational Inattention (RI) model.
RIRL models the cost of cognitive information processing using mutual information.
We show that using RIRL yields a rich spectrum of new equilibrium behaviors that differ from those found under rational assumptions.
arXiv Detail & Related papers (2022-01-18T20:54:00Z)
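The last entry above prices cognitive effort using the mutual information between states and actions. As a minimal illustrative sketch (not the RIRL paper's implementation), the information cost of a discrete stochastic policy can be computed from its state-action joint distribution:

```python
import math

def mutual_information(joint):
    """I(S;A) = sum over (s,a) of p(s,a) * log[p(s,a) / (p(s) * p(a))], in nats.

    `joint` maps (state, action) pairs to probabilities summing to 1."""
    p_s, p_a = {}, {}
    for (s, a), p in joint.items():  # accumulate the two marginals
        p_s[s] = p_s.get(s, 0.0) + p
        p_a[a] = p_a.get(a, 0.0) + p
    return sum(p * math.log(p / (p_s[s] * p_a[a]))
               for (s, a), p in joint.items() if p > 0)

# A policy that ignores the state carries zero information (zero cost)...
uninformative = {(s, a): 0.25 for s in "01" for a in "LR"}
# ...while a deterministic state-dependent policy costs log(2) nats.
deterministic = {("0", "L"): 0.5, ("1", "R"): 0.5}
```

Under rational inattention, an agent trades this cost against expected reward, so cheap-to-compute (state-insensitive) policies are favored unless attention pays for itself.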
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.