Determinants of LLM-assisted Decision-Making
- URL: http://arxiv.org/abs/2402.17385v1
- Date: Tue, 27 Feb 2024 10:24:50 GMT
- Title: Determinants of LLM-assisted Decision-Making
- Authors: Eva Eigner and Thorsten Händler
- Abstract summary: Large Language Models (LLMs) provide multifaceted support in enhancing human decision-making processes.
This study provides a structural overview and detailed analysis of determinants impacting decision-making with LLM support.
Our findings are crucial for improving decision quality in human-AI collaboration.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision-making is a fundamental capability in everyday life. Large Language
Models (LLMs) provide multifaceted support in enhancing human decision-making
processes. However, understanding the factors that influence LLM-assisted
decision-making is crucial for enabling individuals to exploit the advantages
LLMs offer, to minimize the associated risks, and thereby to make
better-informed decisions. This study presents the results of a comprehensive literature
analysis, providing a structural overview and detailed analysis of determinants
impacting decision-making with LLM support. In particular, we explore the
effects of technological aspects of LLMs, including transparency and prompt
engineering, psychological factors such as emotions and decision-making styles,
as well as decision-specific determinants such as task difficulty and
accountability. In addition, the impact of the determinants on the
decision-making process is illustrated via multiple application scenarios.
Drawing from our analysis, we develop a dependency framework that systematizes
possible interactions in terms of reciprocal interdependencies between these
determinants. Our research reveals that, owing to their multifaceted
interactions with other determinants, factors such as trust in or reliance on
LLMs, the user's mental model, and the characteristics of information
processing significantly influence LLM-assisted decision-making processes. Our
findings are crucial for improving decision quality in human-AI collaboration,
for empowering both users and organizations, and for designing more effective
LLM interfaces. Additionally, our work provides a
foundation for future empirical investigations on the determinants of
decision-making assisted by LLMs.
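The dependency framework itself is not reproduced in this abstract. As a purely illustrative sketch, the Python snippet below shows one way such reciprocal interdependencies between determinants could be represented and queried; the determinant names are taken from the abstract, while the specific edges are invented assumptions, not the paper's actual graph.

```python
# Illustrative sketch of a dependency framework between decision-making
# determinants. Determinant names come from the abstract; the edges below
# are assumptions for illustration only.

# Directed "influences" edges: determinant -> determinants it affects.
influences = {
    "transparency": {"trust", "mental_model"},
    "prompt_engineering": {"information_processing"},
    "emotions": {"decision_style", "trust"},
    "decision_style": {"information_processing"},
    "task_difficulty": {"reliance", "information_processing"},
    "accountability": {"reliance"},
    "trust": {"reliance"},
    "reliance": {"trust"},  # assumed reciprocal with trust
    "mental_model": {"prompt_engineering"},
    "information_processing": {"mental_model"},
}

def reciprocal_pairs(graph):
    """Return determinant pairs that influence each other, i.e. the
    reciprocal interdependencies the framework systematizes."""
    pairs = set()
    for a, targets in graph.items():
        for b in targets:
            if a in graph.get(b, set()):
                pairs.add(frozenset((a, b)))
    return pairs

if __name__ == "__main__":
    for pair in reciprocal_pairs(influences):
        print(" <-> ".join(sorted(pair)))
```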
Related papers
- STRUX: An LLM for Decision-Making with Structured Explanations [17.518955158367305]
We introduce a new framework called STRUX, which enhances LLM decision-making by providing structured explanations.
STRUX begins by distilling lengthy information into a concise table of key facts.
It then employs a series of self-reflection steps to determine which of these facts are pivotal, categorizing them as either favorable or adverse in relation to a specific decision.
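STRUX's concrete prompts and table schema are not given in this summary; the following is a minimal sketch of the distill-then-reflect pipeline it describes, assuming a generic `complete(prompt)` text-completion function as a hypothetical stand-in for the underlying LLM client.

```python
# Minimal sketch of a STRUX-style pipeline: distill a long input into key
# facts, then self-reflect to label each fact favorable or adverse for a
# given decision. `complete` is a hypothetical stand-in for an LLM call.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def distill_facts(document: str, n: int = 10) -> list[str]:
    """Step 1: compress lengthy information into a concise list of key facts."""
    out = complete(f"List the {n} most decision-relevant facts, one per line:\n\n{document}")
    return [line.strip("- ").strip() for line in out.splitlines() if line.strip()]

def reflect_on_fact(fact: str, decision: str) -> str:
    """Step 2: a self-reflection step classifying one fact's stance."""
    out = complete(
        f"Decision under consideration: {decision}\n"
        f"Fact: {fact}\n"
        "Is this fact favorable or adverse to the decision? Answer in one word."
    )
    return "favorable" if "favor" in out.lower() else "adverse"

def structured_decision(document: str, decision: str) -> dict[str, list[str]]:
    """Build the structured explanation: key facts grouped by stance."""
    table: dict[str, list[str]] = {"favorable": [], "adverse": []}
    for fact in distill_facts(document):
        table[reflect_on_fact(fact, decision)].append(fact)
    return table
```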
arXiv Detail & Related papers (2024-10-16T14:01:22Z)
- Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making [85.24399869971236]
We aim to evaluate Large Language Models (LLMs) for embodied decision making.
Existing evaluations tend to rely solely on a final success rate.
We propose a generalized interface (Embodied Agent Interface) that supports the formalization of various types of tasks.
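The interface itself is not specified in this summary. The paper evaluates ability modules such as goal interpretation, subgoal decomposition, action sequencing, and transition modeling; below is a minimal sketch of what such a generalized interface could look like in Python, where the method names and type aliases are illustrative assumptions.

```python
# Sketch of a generalized embodied-decision-making interface in the spirit
# of the Embodied Agent Interface. Names and types are assumptions.
from abc import ABC, abstractmethod

State = dict       # symbolic world state, e.g. {"on(apple, table)": True}
Goal = list[str]   # goal conditions as logical literals
Action = str       # grounded action, e.g. "pick(apple)"

class EmbodiedDecisionMaker(ABC):
    @abstractmethod
    def interpret_goal(self, instruction: str, state: State) -> Goal:
        """Turn a natural-language instruction into formal goal conditions."""

    @abstractmethod
    def decompose(self, goal: Goal, state: State) -> list[Goal]:
        """Break a goal into an ordered list of subgoals."""

    @abstractmethod
    def sequence_actions(self, subgoal: Goal, state: State) -> list[Action]:
        """Produce grounded actions that achieve a subgoal from a state."""

    @abstractmethod
    def predict_transition(self, state: State, action: Action) -> State:
        """Model the state that results from executing an action."""
```

Formalizing each ability separately is what lets a benchmark score the individual steps of a decision rather than only the final success rate.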
arXiv Detail & Related papers (2024-10-09T17:59:00Z)
- Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making [51.737762570776006]
LLM-ACTR is a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making.
Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations.
Our experiments on novel Design for Manufacturing tasks show both improved task performance as well as improved grounded decision-making capability.
arXiv Detail & Related papers (2024-08-17T11:49:53Z)
- Predicting and Understanding Human Action Decisions: Insights from Large Language Models and Cognitive Instance-Based Learning [0.0]
Large Language Models (LLMs) have demonstrated their capabilities across various tasks.
This paper exploits the reasoning and generative capabilities of LLMs to predict human behavior in two sequential decision-making tasks.
We compare the performance of LLMs with a cognitive instance-based learning model, which imitates human experiential decision-making.
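The instance-based learning model is not detailed here; the sketch below follows the standard ACT-R-style IBL formulation (activation-weighted blending of remembered outcomes). The parameter values are conventional defaults, not values from the paper.

```python
# Minimal sketch of a cognitive instance-based learning (IBL) agent of the
# kind used to imitate human experiential decision-making. Activation and
# blending follow the standard ACT-R formulation; d and s are conventional
# defaults, not the paper's fitted parameters.
import math
import random

class IBLAgent:
    """Chooses by blending remembered outcomes, weighted by memory activation."""

    def __init__(self, d=0.5, s=0.25, default_utility=10.0):
        self.d, self.s = d, s                   # memory decay and noise
        self.default_utility = default_utility  # optimistic prior for unseen options
        self.memory = {}                        # (option, outcome) -> occurrence times
        self.t = 0                              # current timestep

    def _activation(self, times):
        # ACT-R base-level activation plus logistic noise.
        base = math.log(sum((self.t - ts) ** -self.d for ts in times))
        u = random.random()
        return base + self.s * math.log((1 - u) / u)

    def blended_value(self, option):
        instances = {o: ts for (opt, o), ts in self.memory.items() if opt == option}
        if not instances:
            return self.default_utility
        acts = {o: self._activation(ts) for o, ts in instances.items()}
        tau = self.s * math.sqrt(2)
        z = sum(math.exp(a / tau) for a in acts.values())
        # Blending: outcomes weighted by their retrieval probability.
        return sum(o * math.exp(a / tau) / z for o, a in acts.items())

    def choose(self, options):
        self.t += 1
        return max(options, key=self.blended_value)

    def observe(self, option, outcome):
        self.memory.setdefault((option, outcome), []).append(self.t)
```

Agents of this kind are known to reproduce hallmark features of human experiential choice, such as underweighting of rarely experienced outcomes, which is what makes them a useful comparison point for LLM predictions.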
arXiv Detail & Related papers (2024-07-12T14:13:06Z)
- Evaluating Interventional Reasoning Capabilities of Large Language Models [58.52919374786108]
We study whether large language models (LLMs) can estimate causal effects under interventions on different parts of a system.
We conduct empirical analyses to evaluate whether LLMs can accurately update their knowledge of a data-generating process in response to an intervention.
We create benchmarks that span diverse causal graphs (e.g., confounding, mediation) and variable types, and enable a study of intervention-based reasoning.
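The benchmark construction is only summarized here. As an illustration of the kind of ground truth that intervention-based reasoning is scored against, the sketch below builds a tiny structural causal model with confounding, estimates the interventional mean under do(X=1) by simulation, and contrasts it with the observational conditional mean. All coefficients and numbers are invented for illustration.

```python
# Illustrative ground truth for interventional reasoning on a confounded
# graph Z -> X, Z -> Y, X -> Y. Coefficients are invented; this is not the
# paper's benchmark, just the kind of quantity such a benchmark tests.
import random

def sample(do_x=None):
    z = random.gauss(0, 1)                       # confounder
    x = do_x if do_x is not None else 2 * z + random.gauss(0, 1)
    y = 3 * x + 4 * z + random.gauss(0, 1)       # true causal effect of X is 3
    return x, y

random.seed(0)
n = 200_000

# Interventional mean E[Y | do(X=1)]: cut the Z -> X edge by forcing X = 1.
do_mean = sum(sample(do_x=1.0)[1] for _ in range(n)) / n

# Observational mean E[Y | X ~ 1]: condition on X instead of intervening.
obs = [y for x, y in (sample() for _ in range(n)) if abs(x - 1.0) < 0.1]
obs_mean = sum(obs) / len(obs)

print(f"E[Y | do(X=1)] ~ {do_mean:.2f}")   # ~3.0, confounding removed
print(f"E[Y | X=1]     ~ {obs_mean:.2f}")  # ~4.6, biased upward by Z
```

An LLM that correctly updates its model of the data-generating process after the intervention should report the first quantity, not the second.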
arXiv Detail & Related papers (2024-04-08T14:15:56Z)
- Explaining Large Language Models Decisions Using Shapley Values [1.223779595809275]
Large language models (LLMs) have opened up exciting possibilities for simulating human behavior and cognitive processes.
However, the validity of utilizing LLMs as stand-ins for human subjects remains uncertain.
This paper presents a novel approach based on Shapley values to interpret LLM behavior and quantify the relative contribution of each prompt component to the model's output.
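A Shapley value attributes the model's output score to each prompt component by averaging that component's marginal contribution over orderings. The summary gives no implementation details, so below is a minimal Monte-Carlo sketch, assuming a hypothetical user-supplied value(subset) function that assembles a prompt from the given components, queries the LLM, and returns a scalar score for the output of interest.

```python
# Monte-Carlo Shapley values over prompt components. `value` is a
# hypothetical user-supplied function: it assembles a prompt from the given
# components, queries the LLM, and returns a scalar score.
import random
from typing import Callable, Sequence

def shapley_values(components: Sequence[str],
                   value: Callable[[list[str]], float],
                   n_permutations: int = 200) -> list[float]:
    """Estimate each component's Shapley value by sampling permutations."""
    n = len(components)
    phi = [0.0] * n
    for _ in range(n_permutations):
        order = random.sample(range(n), n)   # random arrival order
        included: set[int] = set()
        prev = value([])                     # score of the empty prompt
        for idx in order:
            included.add(idx)
            # Assemble the subset in canonical order so only membership,
            # not insertion order, affects the score.
            subset = [components[i] for i in sorted(included)]
            marginal = value(subset) - prev
            phi[idx] += marginal
            prev += marginal
    return [p / n_permutations for p in phi]
```

By construction the estimates sum to value(all) minus value(empty), so each prompt component receives an additive share of the model's output score.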
arXiv Detail & Related papers (2024-03-29T22:49:43Z)
- On the Decision-Making Abilities in Role-Playing using Large Language Models [6.550638804145713]
Large language models (LLMs) are increasingly utilized for role-playing tasks.
This paper focuses on evaluating the decision-making abilities of LLMs post role-playing.
arXiv Detail & Related papers (2024-02-29T02:22:23Z)
- Rational Decision-Making Agent with Internalized Utility Judgment [91.80700126895927]
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications.
This paper proposes RadAgent, an agent that develops rationality through an iterative framework of Experience Exploration and Utility Learning.
Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks.
arXiv Detail & Related papers (2023-08-24T03:11:45Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
To understand the decision-making processes underlying a set of observed trajectories, we cast policy inference as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)