Predicting Human Choice Between Textually Described Lotteries
- URL: http://arxiv.org/abs/2503.14004v1
- Date: Tue, 18 Mar 2025 08:10:33 GMT
- Title: Predicting Human Choice Between Textually Described Lotteries
- Authors: Eyal Marantz, Ori Plonsky
- Abstract summary: This study conducts the first large-scale exploration of human decision-making in such tasks. We evaluate multiple computational approaches, including fine-tuning Large Language Models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting human decision-making under risk and uncertainty is a long-standing challenge in cognitive science, economics, and AI. While prior research has focused on numerically described lotteries, real-world decisions often rely on textual descriptions. This study conducts the first large-scale exploration of human decision-making in such tasks using a large dataset of one-shot binary choices between textually described lotteries. We evaluate multiple computational approaches, including fine-tuning Large Language Models (LLMs), leveraging embeddings, and integrating behavioral theories of choice under risk. Our results show that fine-tuned LLMs, specifically RoBERTa and GPT-4o, outperform hybrid models that incorporate behavioral theory, challenging methods established in numerical settings. These findings highlight fundamental differences in how textual and numerical information influence decision-making and underscore the need for new modeling strategies to bridge this gap.
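The embedding-based approaches the abstract mentions can be illustrated with a minimal sketch. The data, feature pipeline, and classifier below are illustrative assumptions, not the paper's actual setup: each problem is a text describing two lotteries, the label is the majority human choice, and a simple text-feature classifier is fit on those pairs.

```python
# Minimal sketch of an embedding-style baseline for predicting binary
# choices from textually described lotteries. Data, features, and model
# are hypothetical, not the paper's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical one-shot choice problems: each text describes lotteries A
# and B; the label is 1 if participants mostly chose A, else 0.
texts = [
    "A: a sure gain of $50. B: a coin flip between $0 and $120.",
    "A: a sure loss of $50. B: a coin flip between losing $0 and $120.",
    "A: a guaranteed $10. B: a 10% chance of $90, otherwise nothing.",
    "A: a guaranteed loss of $10. B: a 10% chance of losing $90.",
]
labels = [1, 0, 1, 0]  # assumed majority choices (risk-averse for gains)

# Text features stand in for learned embeddings in this toy version.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(texts, labels)

# Predict the majority choice for a new textually described problem.
pred = model.predict(
    ["A: a sure gain of $40. B: a coin flip between $0 and $100."]
)[0]
print(pred)
```

A fine-tuned RoBERTa or GPT-4o model, as in the paper, replaces the TF-IDF features with contextual representations of the same lottery texts, but the input/output framing (text pair in, binary choice out) is the same.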
Related papers
- Contextual Online Uncertainty-Aware Preference Learning for Human Feedback [13.478503755314344]
Reinforcement Learning from Human Feedback (RLHF) has become a pivotal paradigm in artificial intelligence.
We propose a novel statistical framework to simultaneously conduct the online decision-making and statistical inference on the optimal model.
We apply the proposed framework to analyze the human preference data for ranking large language models on the Massive Multitask Language Understanding dataset.
arXiv Detail & Related papers (2025-04-27T19:59:11Z) - Why Reasoning Matters? A Survey of Advancements in Multimodal Reasoning (v1) [66.51642638034822]
Reasoning is central to human intelligence, enabling structured problem-solving across diverse tasks.
Recent advances in large language models (LLMs) have greatly enhanced their reasoning abilities in arithmetic, commonsense, and symbolic domains.
This paper offers a concise yet insightful overview of reasoning techniques in both textual and multimodal LLMs.
arXiv Detail & Related papers (2025-04-04T04:04:56Z) - FFAA: Multimodal Large Language Model based Explainable Open-World Face Forgery Analysis Assistant [59.2438504610849]
We introduce FFAA: Face Forgery Analysis Assistant, consisting of a fine-tuned Multimodal Large Language Model (MLLM) and a Multi-answer Intelligent Decision System (MIDS).
Our method not only provides user-friendly and explainable results but also significantly boosts accuracy and robustness compared to previous methods.
arXiv Detail & Related papers (2024-08-19T15:15:20Z) - Enhancing Language Model Rationality with Bi-Directional Deliberation Reasoning [73.77288647011295]
This paper introduces BI-Directional DEliberation Reasoning (BIDDER) to enhance the decision rationality of language models.
Our approach involves three key processes:
(1) inferring hidden states to represent uncertain information in the decision-making process from historical data;
(2) using hidden states to predict future potential states and potential outcomes;
(3) integrating historical information (past contexts) and long-term outcomes (future contexts) to inform reasoning.
arXiv Detail & Related papers (2024-07-08T16:48:48Z) - MR-Ben: A Meta-Reasoning Benchmark for Evaluating System-2 Thinking in LLMs [55.20845457594977]
Large language models (LLMs) have shown increasing capability in problem-solving and decision-making.
We present MR-Ben, a process-based benchmark that demands meta-reasoning skills.
Our meta-reasoning paradigm is especially suited for system-2 slow thinking.
arXiv Detail & Related papers (2024-06-20T03:50:23Z) - Re-Reading Improves Reasoning in Large Language Models [87.46256176508376]
We introduce a simple, yet general and effective prompting method, Re2, to enhance the reasoning capabilities of off-the-shelf Large Language Models (LLMs).
Unlike most thought-eliciting prompting methods, such as Chain-of-Thought (CoT), Re2 shifts the focus to the input by processing questions twice, thereby enhancing the understanding process.
We evaluate Re2 on extensive reasoning benchmarks across 14 datasets, spanning 112 experiments, to validate its effectiveness and generality.
arXiv Detail & Related papers (2023-09-12T14:36:23Z) - A Survey of Contextual Optimization Methods for Decision Making under Uncertainty [47.73071218563257]
This review article identifies three main frameworks for learning policies from data and discusses their strengths and limitations.
We present the existing models and methods under a uniform notation and terminology and classify them according to the three main frameworks.
arXiv Detail & Related papers (2023-06-17T15:21:02Z) - Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making [29.071173441651734]
We identify five sources of target variable bias that can impact the validity of proxy labels in human-AI decision-making tasks.
We develop a causal framework to disentangle the relationships between these biases.
We conclude by discussing opportunities to better address target variable bias in future research.
arXiv Detail & Related papers (2023-02-13T16:29:11Z) - An Empirical Comparison of Explainable Artificial Intelligence Methods for Clinical Data: A Case Study on Traumatic Brain Injury [8.913544654492696]
We implement two prediction models for short- and long-term outcomes of traumatic brain injury.
Six different interpretation techniques were used to describe both prediction models at the local and global levels.
The implemented methods were compared to one another in terms of several XAI characteristics such as understandability, fidelity, and stability.
arXiv Detail & Related papers (2022-08-13T19:44:00Z) - From Cognitive to Computational Modeling: Text-based Risky Decision-Making Guided by Fuzzy Trace Theory [5.154015755506085]
Fuzzy trace theory (FTT) is a powerful paradigm that explains human decision-making by incorporating gists.
We propose a computational framework which combines the effects of the underlying semantics and sentiments on text-based decision-making.
In particular, we introduce Category-2-Vector to learn categorical gists and categorical sentiments, and demonstrate how our computational model can be optimised to predict risky decision-making in groups and individuals.
arXiv Detail & Related papers (2022-05-15T02:25:28Z) - The Statistical Complexity of Interactive Decision Making [126.04974881555094]
We provide a complexity measure, the Decision-Estimation Coefficient, that is proven to be both necessary and sufficient for sample-efficient interactive learning.
A unified algorithm design principle, Estimation-to-Decisions (E2D), transforms any algorithm for supervised estimation into an online algorithm for decision making.
arXiv Detail & Related papers (2021-12-27T02:53:44Z) - Predicting human decisions with behavioral theories and machine learning [13.000185375686325]
We introduce BEAST Gradient Boosting (BEAST-GB), a novel hybrid model that synergizes behavioral theories with machine learning techniques.
We show that BEAST-GB achieves state-of-the-art performance on the largest publicly available dataset of human risky choice.
We also show BEAST-GB displays robust domain generalization capabilities as it effectively predicts choice behavior in new experimental contexts.
arXiv Detail & Related papers (2019-04-15T06:12:44Z)
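The hybrid idea behind BEAST-GB, feeding behavioral-theory quantities into a gradient-boosting learner, can be sketched roughly as follows. The feature set, synthetic data, and model settings here are illustrative assumptions, not the authors' implementation.

```python
# Rough sketch of a hybrid behavioral-theory + machine-learning model in
# the spirit of BEAST-GB. Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def behavioral_features(p_a, x_a, p_b, x_b):
    """Theory-inspired features for a choice between lottery A (win x_a
    with probability p_a, else 0) and lottery B (win x_b with p_b)."""
    ev_a, ev_b = p_a * x_a, p_b * x_b    # expected values
    var_a = p_a * (1 - p_a) * x_a ** 2   # payoff variance, a risk proxy
    var_b = p_b * (1 - p_b) * x_b ** 2
    return [ev_a - ev_b, var_a - var_b, p_a - p_b, x_a - x_b]

# Synthetic choice problems answered by a noisy EV-maximizing "population".
X, y = [], []
for _ in range(500):
    p_a, p_b = rng.uniform(size=2)
    x_a, x_b = rng.uniform(1, 100, size=2)
    feats = behavioral_features(p_a, x_a, p_b, x_b)
    X.append(feats)
    y.append(int(feats[0] + rng.normal(scale=10) > 0))  # 1 = chose A

# Gradient boosting learns the residual structure on top of the
# theory-derived features.
model = GradientBoostingClassifier().fit(X, y)
acc = model.score(X, y)
print(f"training accuracy: {acc:.2f}")
```

The contrast drawn in the main abstract is that this style of hybrid, which works well for numerically described lotteries, was outperformed by directly fine-tuned LLMs once the lotteries are described in text.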
This list is automatically generated from the titles and abstracts of the papers in this site.