Decision Theoretic Foundations for Experiments Evaluating Human
Decisions
- URL: http://arxiv.org/abs/2401.15106v2
- Date: Thu, 15 Feb 2024 16:51:16 GMT
- Title: Decision Theoretic Foundations for Experiments Evaluating Human
Decisions
- Authors: Jessica Hullman, Alex Kale, Jason Hartline
- Abstract summary: We present a widely applicable definition of a decision problem synthesized from statistical decision theory and information economics.
We argue that to attribute loss in human performance to forms of bias, an experiment must provide participants with the information that a rational agent would need to identify the normative decision.
- Score: 20.5402873175161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision-making with information displays is a key focus of research in areas
like explainable AI, human-AI teaming, and data visualization. However, what
constitutes a decision problem, and what is required for an experiment to be
capable of concluding that human decisions are flawed in some way, remain open
to speculation. We present a widely applicable definition of a decision problem
synthesized from statistical decision theory and information economics. We
argue that to attribute loss in human performance to forms of bias, an
experiment must provide participants with the information that a rational agent
would need to identify the normative decision. We evaluate the extent to which
recent evaluations of decision-making from the literature on AI-assisted
decisions meet this criterion. We find that only 10 (26%) of 39 studies that
claim to identify biased behavior present participants with sufficient
information to characterize their behavior as deviating from good
decision-making in at least one treatment condition. We motivate the value of
studying well-defined decision problems by describing a characterization of
performance losses they allow us to conceive. In contrast, the ambiguities of a
poorly communicated decision problem preclude normative interpretation. We
conclude with recommendations for practice.
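The abstract's core construct — a decision problem in the sense of statistical decision theory, together with the normative decision a rational agent would identify — can be made concrete with a minimal sketch. Everything below (the binary diagnosis setting, the state names, the loss values) is an invented illustration, not taken from the paper:

```python
# Minimal sketch of a decision problem in the statistical decision
# theory sense: states, a signal with known likelihoods, actions, and
# a loss function. All names and numbers are illustrative assumptions.

def posterior(prior, likelihood, signal):
    """Bayes update: P(state | signal) over a discrete state space."""
    joint = {s: prior[s] * likelihood[s][signal] for s in prior}
    total = sum(joint.values())
    return {s: p / total for s, p in joint.items()}

def normative_action(prior, likelihood, loss, actions, signal):
    """The action minimizing posterior expected loss -- the benchmark
    against which human performance losses can be measured."""
    post = posterior(prior, likelihood, signal)
    def expected_loss(a):
        return sum(post[s] * loss[(a, s)] for s in post)
    return min(actions, key=expected_loss)

# Hypothetical binary diagnosis problem.
prior = {"disease": 0.1, "healthy": 0.9}
likelihood = {"disease": {"pos": 0.9, "neg": 0.1},
              "healthy": {"pos": 0.2, "neg": 0.8}}
loss = {("treat", "disease"): 0, ("treat", "healthy"): 1,
        ("wait", "disease"): 10, ("wait", "healthy"): 0}

best = normative_action(prior, likelihood, loss, ["treat", "wait"], "pos")
# After a positive signal, P(disease | pos) = 1/3, so treating
# (expected loss 2/3) beats waiting (expected loss 10/3).
```

The paper's criterion can be read against this sketch: an experiment can attribute a participant's deviation from `best` to bias only if the participant was actually given the prior, likelihoods, and loss needed to compute it.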
Related papers
- A Decision Theoretic Framework for Measuring AI Reliance [23.353778024330165]
Humans frequently make decisions with the aid of artificially intelligent (AI) systems.
Researchers have identified ensuring that a human has appropriate reliance on an AI as a critical component of achieving complementary performance.
We propose a formal definition of reliance, based on statistical decision theory, which defines reliance as the probability that the decision-maker follows the AI's recommendation.
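Under this definition, reliance can be estimated empirically as the fraction of trials in which the human's decision matches the AI's recommendation. A toy sketch with invented trial data:

```python
# Illustrative estimate of reliance as defined above: the probability
# that the decision-maker follows the AI's recommendation.
# The trial data are invented for the example.
trials = [
    ("approve", "approve"),
    ("approve", "deny"),
    ("deny",    "deny"),
    ("approve", "approve"),
    ("deny",    "approve"),
]  # (ai_recommendation, human_decision)

followed = sum(ai == human for ai, human in trials)
reliance = followed / len(trials)  # empirical estimate: 3/5
```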
arXiv Detail & Related papers (2024-01-27T09:13:09Z)
- Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that is transparent by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Personalized Decision Making -- A Conceptual Introduction [8.008051073614174]
We show that by combining experimental and observational studies we can obtain valuable information about individual behavior.
We conclude that by combining experimental and observational studies we can improve decisions over those obtained from experimental studies alone.
arXiv Detail & Related papers (2022-08-19T22:21:29Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Explainability's Gain is Optimality's Loss? -- How Explanations Bias Decision-making [0.0]
Explanations help to facilitate communication between the algorithm and the human decision-maker.
The causal-model semantics of feature-based explanations induce leakage from the decision-maker's prior beliefs.
Such differences can lead to sub-optimal and biased decision outcomes.
arXiv Detail & Related papers (2022-06-17T11:43:42Z)
- On the Fairness of Machine-Assisted Human Decisions [3.4069627091757178]
We show that the inclusion of a biased human decision-maker can revert common relationships between the structure of the algorithm and the qualities of resulting decisions.
In a lab experiment, we demonstrate how predictions informed by gender-specific information can reduce average gender disparities in decisions.
arXiv Detail & Related papers (2021-10-28T17:24:45Z)
- Learning the Preferences of Uncertain Humans with Inverse Decision Theory [10.926992035470372]
We study the setting of inverse decision theory (IDT), a framework where a human is observed making non-sequential binary decisions under uncertainty.
In IDT, the human's preferences are conveyed through their loss function, which expresses a tradeoff between different types of mistakes.
We show that it is actually easier to identify preferences when the decision problem is more uncertain.
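The IDT setup above can be sketched concretely. Under a loss charging `c_fp` per false positive and `c_fn` per false negative, a Bayes-rational decision-maker answers "positive" exactly when the event probability exceeds the threshold t = c_fp / (c_fp + c_fn), so the observed switch point identifies the cost tradeoff. The function name and data below are invented for illustration:

```python
# Illustrative IDT-style inference: recover the cost tradeoff implied
# by observed binary decisions. A Bayes decision-maker flags "positive"
# iff P(event) > t = c_fp / (c_fp + c_fn), so locating the switch
# point reveals c_fn / c_fp = (1 - t) / t. Hypothetical data below.

def implied_cost_ratio(decisions):
    """decisions: list of (event_probability, chose_positive).
    Estimate the threshold as the midpoint between the highest
    probability with a negative choice and the lowest probability
    with a positive choice, then convert to the cost ratio."""
    pos = [p for p, chose in decisions if chose]
    neg = [p for p, chose in decisions if not chose]
    t = (max(neg) + min(pos)) / 2
    return (1 - t) / t  # c_fn / c_fp

decisions = [(0.1, False), (0.2, False), (0.35, True), (0.6, True)]
ratio = implied_cost_ratio(decisions)
# Estimated threshold 0.275, so false negatives are implied to cost
# roughly 2.6x as much as false positives.
```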
arXiv Detail & Related papers (2021-06-19T00:11:13Z)
- Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- "A cold, technical decision-maker": Can AI provide explainability, negotiability, and humanity? [47.36687555570123]
We present results of a qualitative study of algorithmic decision-making, comprising five workshops conducted with a total of 60 participants.
We discuss participants' consideration of humanity in decision-making, and introduce the concept of 'negotiability,' the ability to go beyond formal criteria and work flexibly around the system.
arXiv Detail & Related papers (2020-12-01T22:36:54Z)
- Inverse Active Sensing: Modeling and Understanding Timely Decision-Making [111.07204912245841]
We develop a framework for the general setting of evidence-based decision-making under endogenous, context-dependent time pressure.
We demonstrate how it enables modeling intuitive notions of surprise, suspense, and optimality in decision strategies.
arXiv Detail & Related papers (2020-06-25T02:30:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.