Computational Experiments Meet Large Language Model Based Agents: A
Survey and Perspective
- URL: http://arxiv.org/abs/2402.00262v1
- Date: Thu, 1 Feb 2024 01:17:46 GMT
- Title: Computational Experiments Meet Large Language Model Based Agents: A
Survey and Perspective
- Authors: Qun Ma, Xiao Xue, Deyu Zhou, Xiangning Yu, Donghua Liu, Xuwen Zhang,
Zihan Zhao, Yifan Shen, Peilin Ji, Juanjuan Li, Gang Wang, Wanpeng Ma
- Abstract summary: Computational experiments have emerged as a valuable method for studying complex systems.
However, accurately representing real social systems in Agent-based Modeling (ABM) is challenging due to the diverse and intricate characteristics of humans.
To address this, the integration of Large Language Models (LLMs) has been proposed, enabling agents to possess anthropomorphic abilities.
- Score: 16.08517740276261
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computational experiments have emerged as a valuable method for studying
complex systems, involving the algorithmization of counterfactuals. However,
accurately representing real social systems in Agent-based Modeling (ABM) is
challenging due to the diverse and intricate characteristics of humans,
including bounded rationality and heterogeneity. To address this limitation,
the integration of Large Language Models (LLMs) has been proposed, enabling
agents to possess anthropomorphic abilities such as complex reasoning and
autonomous learning. These agents, known as LLM-based Agents, offer the
potential to enhance the anthropomorphism lacking in ABM. Nonetheless, the
absence of explicit explainability in LLMs significantly hinders their
application in the social sciences. Conversely, computational experiments excel
in providing causal analysis of individual behaviors and complex phenomena.
Thus, combining computational experiments with LLM-based Agents holds
substantial research potential. This paper aims to present a comprehensive
exploration of this fusion. First, it outlines the historical development
of agent structures and their evolution into artificial societies, emphasizing
their importance in computational experiments. Then it elucidates the
advantages that computational experiments and LLM-based Agents offer each
other, considering the perspectives of LLM-based Agents for computational
experiments and vice versa. Finally, this paper addresses the challenges and
future trends in this research domain, offering guidance for subsequent related
studies.
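To make the proposed fusion concrete, the sketch below shows one minimal way an LLM-backed agent could be dropped into an agent-based simulation loop and used for a counterfactual ("what-if") computational experiment. The names (LLMAgent, query_llm, run_experiment) and the policy-subsidy toggle are illustrative assumptions, not an API defined in the survey.

```python
# Minimal sketch (assumed design, not the survey's implementation): an agent-based
# simulation where each agent's decision is delegated to an LLM, and a counterfactual
# is run by toggling a policy variable. `query_llm` stands in for any chat-model call.

import random
from dataclasses import dataclass, field

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; a coin flip keeps the sketch runnable offline."""
    return random.choice(["cooperate", "defect"])

@dataclass
class LLMAgent:
    name: str
    persona: str                               # bounded-rational, heterogeneous profile
    memory: list[str] = field(default_factory=list)

    def decide(self, observation: str) -> str:
        prompt = (
            f"You are {self.persona}.\n"
            f"Recent memory: {self.memory[-3:]}\n"
            f"Observation: {observation}\n"
            "Choose one action: cooperate or defect."
        )
        action = query_llm(prompt)
        self.memory.append(f"saw '{observation}', chose '{action}'")
        return action

def run_experiment(policy_subsidy: bool, steps: int = 5) -> float:
    """One computational experiment: cooperation rate under a counterfactual policy."""
    agents = [
        LLMAgent("a1", "a cautious, risk-averse trader"),
        LLMAgent("a2", "an impulsive, optimistic trader"),
    ]
    cooperation = 0
    for t in range(steps):
        obs = f"step {t}, cooperation subsidy {'ON' if policy_subsidy else 'OFF'}"
        actions = [a.decide(obs) for a in agents]
        cooperation += actions.count("cooperate")
    return cooperation / (steps * len(agents))

if __name__ == "__main__":
    baseline = run_experiment(policy_subsidy=False)
    counterfactual = run_experiment(policy_subsidy=True)
    print(f"cooperation rate: baseline={baseline:.2f}, counterfactual={counterfactual:.2f}")
```

Replacing the hand-coded rules of classical ABM with LLM-backed decisions is the "anthropomorphism" side of the fusion; running the same population under the baseline and the toggled policy is the "algorithmization of counterfactuals" side.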
Related papers
- From Linguistic Giants to Sensory Maestros: A Survey on Cross-Modal Reasoning with Large Language Models [56.9134620424985]
Cross-modal reasoning (CMR) is increasingly recognized as a crucial capability in the progression toward more sophisticated artificial intelligence systems.
The recent trend of deploying Large Language Models (LLMs) to tackle CMR tasks has marked a new mainstream of approaches for enhancing CMR effectiveness.
This survey offers a nuanced exposition of current methodologies applied in CMR using LLMs, classifying these into a detailed three-tiered taxonomy.
arXiv Detail & Related papers (2024-09-19T02:51:54Z) - Interpreting and Improving Large Language Models in Arithmetic Calculation [72.19753146621429]
Large language models (LLMs) have demonstrated remarkable potential across numerous applications.
In this work, we delve into uncovering a specific mechanism by which LLMs execute calculations.
We investigate the potential benefits of selectively fine-tuning the attention heads and MLPs involved in this mechanism to boost the LLMs' computational performance.
arXiv Detail & Related papers (2024-09-03T07:01:46Z) - Predicting and Understanding Human Action Decisions: Insights from Large Language Models and Cognitive Instance-Based Learning [0.0]
Large Language Models (LLMs) have demonstrated their capabilities across various tasks.
This paper exploits the reasoning and generative capabilities of the LLMs to predict human behavior in two sequential decision-making tasks.
We compare the performance of LLMs with a cognitive instance-based learning model, which imitates human experiential decision-making.
arXiv Detail & Related papers (2024-07-12T14:13:06Z) - LLM-Augmented Agent-Based Modelling for Social Simulations: Challenges and Opportunities [0.0]
Integrating large language models with agent-based simulations offers a transformational potential for understanding complex social systems.
We explore architectures and methods to systematically develop LLM-augmented social simulations.
We conclude that integrating LLMs with agent-based simulations offers a powerful toolset for researchers and scientists.
arXiv Detail & Related papers (2024-05-08T08:57:54Z) - Large Language Model-based Human-Agent Collaboration for Complex Task
Solving [94.3914058341565]
We introduce the problem of Large Language Model (LLM)-based human-agent collaboration for complex task solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
arXiv Detail & Related papers (2024-02-20T11:03:36Z) - LLM-driven Imitation of Subrational Behavior : Illusion or Reality? [3.2365468114603937]
Existing work highlights the ability of Large Language Models to address complex reasoning tasks and mimic human communication.
We propose to investigate the use of LLMs to generate synthetic human demonstrations, which are then used to learn subrational agent policies.
We experimentally evaluate the ability of our framework to model sub-rationality through four simple scenarios.
arXiv Detail & Related papers (2024-02-13T19:46:39Z) - Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL [57.745700271150454]
We study the sample complexity of reinforcement learning in Mean-Field Games (MFGs) with model-based function approximation.
We introduce the Partial Model-Based Eluder Dimension (P-MBED), a more effective notion to characterize the model class complexity.
arXiv Detail & Related papers (2024-02-08T14:54:47Z) - Systematic Biases in LLM Simulations of Debates [12.933509143906141]
We study the limitations of Large Language Models in simulating human interactions.
Our findings indicate a tendency for LLM agents to conform to the model's inherent social biases.
These results underscore the need for further research to develop methods that help agents overcome these biases.
arXiv Detail & Related papers (2024-02-06T14:51:55Z) - Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View [60.80731090755224]
This paper probes the collaboration mechanisms among contemporary NLP systems through practical experiments combined with theoretical insights.
We fabricate four unique 'societies' composed of LLM agents, where each agent is characterized by a specific 'trait' (easy-going or overconfident) and engages in collaboration with a distinct 'thinking pattern' (debate or reflection); a minimal configuration sketch appears after this list.
Our results further illustrate that LLM agents manifest human-like social behaviors, such as conformity and consensus reaching, mirroring social psychology theories.
arXiv Detail & Related papers (2023-10-03T15:05:52Z) - A Survey on Large Language Model based Autonomous Agents [105.2509166861984]
Large language models (LLMs) have demonstrated remarkable potential in achieving human-level intelligence.
This paper delivers a systematic review of the field of LLM-based autonomous agents from a holistic perspective.
We present a comprehensive overview of the diverse applications of LLM-based autonomous agents in the fields of social science, natural science, and engineering.
arXiv Detail & Related papers (2023-08-22T13:30:37Z)