Can Large Language Models Replace Economic Choice Prediction Labs?
- URL: http://arxiv.org/abs/2401.17435v3
- Date: Thu, 7 Mar 2024 16:47:00 GMT
- Title: Can Large Language Models Replace Economic Choice Prediction Labs?
- Authors: Eilam Shapira, Omer Madmon, Roi Reichart, Moshe Tennenholtz
- Abstract summary: We show that a model trained solely on LLM-generated data can effectively predict human behavior in a language-based persuasion game.
- Score: 24.05034588588407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Economic choice prediction is an essential yet challenging task, often
constrained by the difficulty of acquiring human choice data. Indeed,
experimental economics studies have focused mostly on simple choice settings.
The AI community has recently contributed to this effort in two ways: by
considering whether LLMs can substitute for humans in the simple choice
prediction settings mentioned above, and by studying, through an ML lens, more
elaborate yet still rigorous experimental economics settings that involve
incomplete information, repeated play, and natural language communication,
notably language-based persuasion games. This raises a major question: can
LLMs be used to fully simulate the economic environment and generate data for
efficient human choice prediction, substituting for elaborate economic lab
studies? We pioneer the study of this question and demonstrate its
feasibility. In particular, we show that a model trained solely on
LLM-generated data can effectively predict human behavior in a language-based
persuasion game, and can even outperform models trained on actual human data.
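To make the core pipeline concrete, here is a minimal, self-contained sketch (not the authors' actual protocol): an LLM plays both the expert and the decision-maker in a simplified hotel persuasion game, and a standard classifier is then trained purely on the synthetic choices. The `query_llm` stub, the prompts, and the feature set are placeholder assumptions.

```python
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def query_llm(prompt: str) -> str:
    # Placeholder for a real chat-completion call; returns canned
    # replies here so the sketch runs end to end.
    if "yes or no" in prompt:
        return random.choice(["yes", "no"])
    return "This hotel offers great value and a central location."

def simulate_round(hotel_quality: str) -> tuple[str, int]:
    # Expert (sender) crafts a persuasive message about the hotel.
    message = query_llm(
        f"You are a travel agent. Write one sentence persuading a "
        f"traveler to book a {hotel_quality} hotel."
    )
    # Decision-maker (receiver) accepts or rejects given the message alone.
    reply = query_llm(
        f"A travel agent says: '{message}'. "
        f"Do you book the hotel? Answer yes or no."
    )
    return message, int(reply.strip().lower().startswith("yes"))

# Build a fully synthetic dataset of (message, choice) pairs.
rounds = [simulate_round(random.choice(["good", "bad"])) for _ in range(1000)]
messages, choices = zip(*rounds)

# Train a simple choice predictor on LLM-generated data only; evaluation
# against human play would use held-out lab data.
predictor = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
predictor.fit(messages, list(choices))
```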
Related papers
- LLM-Select: Feature Selection with Large Language Models [64.5099482021597]
Large language models (LLMs) are capable of selecting the most predictive features, with performance rivaling the standard tools of data science.
Our findings suggest that LLMs may be useful not only for selecting the best features for training but also for deciding which features to collect in the first place.
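A hedged sketch of LLM-driven feature selection in the spirit of LLM-Select: ask the model to score each candidate feature's predictive value for a target, then keep the top-k. The prompt wording and the `query_llm` helper are assumptions, not the paper's exact protocol.

```python
def score_feature(feature: str, target: str, query_llm) -> float:
    prompt = (f"On a scale from 0 to 1, how useful is the feature "
              f"'{feature}' for predicting '{target}'? "
              f"Reply with a number only.")
    try:
        return float(query_llm(prompt))
    except ValueError:
        return 0.0  # treat unparseable replies as uninformative

def select_features(features, target, query_llm, k=5):
    # Score every candidate, then keep the k highest-scoring features.
    scores = {f: score_feature(f, target, query_llm) for f in features}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```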
arXiv Detail & Related papers (2024-07-02T22:23:40Z)
- EconNLI: Evaluating Large Language Models on Economics Reasoning [22.754757518792395]
Large Language Models (LLMs) are widely used for writing economic analysis reports or providing financial advice.
We propose a new dataset, natural language inference on economic events (EconNLI), to evaluate LLMs' knowledge and reasoning abilities in the economic domain.
Our experiments reveal that LLMs are not sophisticated in economic reasoning and may generate wrong or hallucinated answers.
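A minimal sketch of an EconNLI-style evaluation loop, assuming premise-hypothesis pairs labeled "entailment" or "not entailment"; the example pair and prompt format are invented for illustration, not the dataset's actual schema.

```python
def evaluate_econ_nli(pairs, query_llm):
    # pairs: (premise, hypothesis, gold_label) triples.
    correct = 0
    for premise, hypothesis, gold in pairs:
        reply = query_llm(
            f"Premise: {premise}\nHypothesis: {hypothesis}\n"
            "Does the premise imply the hypothesis? "
            "Answer 'entailment' or 'not entailment'."
        )
        pred = "not entailment" if "not" in reply.lower() else "entailment"
        correct += int(pred == gold)
    return correct / len(pairs)

sample = [("The central bank raises interest rates.",
           "Borrowing becomes more expensive.", "entailment")]
```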
arXiv Detail & Related papers (2024-07-01T11:58:24Z)
- LABOR-LLM: Language-Based Occupational Representations with Large Language Models [8.909328013944567]
This paper considers an alternative where the fine-tuning of the CAREER foundation model is replaced by fine-tuning LLMs.
We show that our fine-tuned LLM-based models' predictions are more representative of the career trajectories of various workforce subpopulations than off-the-shelf LLM models and CAREER.
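A sketch of the core data transformation this approach implies: render a worker's job history as natural-language text so an LLM can be fine-tuned or prompted to predict the next occupation. The field names and format below are illustrative, not the paper's schema.

```python
def trajectory_to_prompt(history):
    # history: list of (year, job_title) pairs in chronological order.
    lines = [f"{year}: {title}" for year, title in history]
    return "Career history:\n" + "\n".join(lines) + "\nNext occupation:"

prompt = trajectory_to_prompt([(2015, "Retail Salesperson"),
                               (2018, "Sales Manager"),
                               (2021, "Regional Director")])
```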
arXiv Detail & Related papers (2024-06-25T23:07:18Z)
- Character is Destiny: Can Large Language Models Simulate Persona-Driven Decisions in Role-Playing? [59.0123596591807]
We benchmark the ability of Large Language Models in persona-driven decision-making.
We investigate whether LLMs can predict characters' decisions provided with the preceding stories in high-quality novels.
The results demonstrate that state-of-the-art LLMs exhibit promising capabilities in this task, yet there is substantial room for improvement.
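An illustrative probe in the spirit of this benchmark: give the model the preceding story and a character profile, then ask it to pick the character's next decision from fixed options. The prompt structure and parsing are assumptions, not the benchmark's actual format.

```python
def predict_decision(persona, story, options, query_llm):
    numbered = "\n".join(f"{i + 1}. {opt}" for i, opt in enumerate(options))
    reply = query_llm(
        f"Character profile: {persona}\n"
        f"Story so far: {story}\n"
        f"Which option would this character choose?\n{numbered}\n"
        "Answer with the option number only."
    )
    # Parse the first digit in the reply; return None if unparseable.
    digits = [c for c in reply if c.isdigit()]
    idx = int(digits[0]) - 1 if digits else -1
    return options[idx] if 0 <= idx < len(options) else None
```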
arXiv Detail & Related papers (2024-04-18T12:40:59Z)
- Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in LLMs [1.1704154007740835]
This work investigates the impact of fine-tuning and data selection on economic and political biases in Large Language Models (LLMs).
We employ Parameter-Efficient Fine-Tuning (PEFT) techniques to align LLMs with targeted ideologies by modifying a small subset of parameters.
Our work contributes to the dialogue on the ethical application of AI, highlighting the importance of deploying AI in a manner that aligns with societal values.
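As a concrete example of the PEFT setup described above, here is a LoRA sketch using the Hugging Face `peft` library. LoRA is one common PEFT technique; the base model, rank, and target modules below are illustrative choices, not necessarily the paper's configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in model
config = LoraConfig(
    r=8,                        # low-rank adapter dimension
    lora_alpha=16,              # adapter scaling factor
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of weights
# Fine-tuning `model` on a curated corpus would then shift its outputs
# while leaving the base weights frozen.
```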
arXiv Detail & Related papers (2024-04-10T16:30:09Z)
- Where Would I Go Next? Large Language Models as Human Mobility Predictors [21.100313868232995]
We introduce a novel method, LLM-Mob, which leverages the language understanding and reasoning capabilities of LLMs for analysing human mobility data.
Comprehensive evaluations of our method reveal that LLM-Mob excels in providing accurate and interpretable predictions.
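A minimal sketch of an LLM-Mob-style prompt: summarize a user's historical and recent stays, then ask the model for the next location plus a short rationale. The prompt wording is paraphrased for illustration; consult the paper for the exact template.

```python
def next_place_prompt(historical_stays, recent_stays):
    # Each stay is an illustrative (place, hour, weekday) tuple.
    return (
        f"Historical stays (place, hour, weekday): {historical_stays}\n"
        f"Most recent stays: {recent_stays}\n"
        "Predict the user's next place and briefly explain why."
    )

print(next_place_prompt([("cafe", 9, "Mon"), ("office", 10, "Mon")],
                        [("gym", 18, "Fri")]))
```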
arXiv Detail & Related papers (2023-08-29T10:24:23Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
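A conceptual sketch of the two ingredients this paradigm combines (not the ASPEST algorithm itself): the model abstains below a confidence threshold, and the least confident target-domain points are queried for labels.

```python
import numpy as np

def selective_predict(probs, threshold=0.8):
    # probs: (n_samples, n_classes) array of predicted probabilities.
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1).astype(float)
    preds[conf < threshold] = np.nan  # abstain when uncertain
    return preds

def query_for_labels(probs, budget=10):
    # Active learning step: send the least confident examples to a labeler.
    conf = probs.max(axis=1)
    return np.argsort(conf)[:budget]
```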
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- A Survey of Large Language Models [81.06947636926638]
Language modeling has been widely studied for language understanding and generation in the past two decades.
Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora.
To mark the difference in parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size.
arXiv Detail & Related papers (2023-03-31T17:28:46Z)
- What Artificial Neural Networks Can Tell Us About Human Language Acquisition [47.761188531404066]
Rapid progress in machine learning for natural language processing has the potential to transform debates about how humans learn language.
To increase the relevance of learnability results from computational models, we need to train model learners without significant advantages over humans.
arXiv Detail & Related papers (2022-08-17T00:12:37Z)
- Data Augmentation for Spoken Language Understanding via Pretrained Language Models [113.56329266325902]
Training of spoken language understanding (SLU) models often faces the problem of data scarcity.
We put forward a data augmentation method using pretrained language models to boost the variability and accuracy of generated utterances.
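A hedged sketch of LM-based utterance augmentation for SLU: generate paraphrases of seed utterances to enlarge the training set. The model choice, prompt prefix, and generation settings below are illustrative assumptions, not the paper's method.

```python
from transformers import pipeline

paraphraser = pipeline("text2text-generation", model="t5-small")

def augment(utterance, n=3):
    # Sample several variants of the seed utterance to boost variability.
    outputs = paraphraser(f"paraphrase: {utterance}",
                          num_return_sequences=n, do_sample=True,
                          max_new_tokens=32)
    return [o["generated_text"] for o in outputs]

print(augment("book a table for two at seven"))
```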
arXiv Detail & Related papers (2020-04-29T04:07:12Z)