When is using AI the rational choice? The importance of counterfactuals in AI deployment decisions
- URL: http://arxiv.org/abs/2504.05333v1
- Date: Fri, 04 Apr 2025 14:59:29 GMT
- Title: When is using AI the rational choice? The importance of counterfactuals in AI deployment decisions
- Authors: Paul Lehner, Elinor Yeo
- Abstract summary: Counterfactual misses may have disproportionate disutility to AI deployment decision makers. This paper explores how to incorporate counterfactual outcomes into usage-decision expected utility assessments.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decisions to deploy AI capabilities are often driven by counterfactuals - a comparison of decisions made using AI to decisions that would have been made if the AI were not used. Counterfactual misses, which are poor decisions attributable to using AI, may have disproportionate disutility to AI deployment decision makers. Counterfactual hits, which are good decisions attributable to AI usage, may provide little benefit beyond that of better decisions. This paper explores how to include counterfactual outcomes in usage-decision expected utility assessments. Several properties emerge when counterfactuals are explicitly included. First, there are many contexts where the expected utility of AI usage is positive for intended beneficiaries and strongly negative for stakeholders and deployment decision makers. Second, high levels of complementarity, where differing AI and user assessments are merged beneficially, often lead to substantial disutility for stakeholders. Third, apparently small changes in how users interact with an AI capability can substantially impact stakeholder utility. Fourth, cognitive biases such as expert overconfidence and hindsight bias exacerbate the perceived frequency of costly counterfactual misses. The expected utility assessment approach presented here is intended to help AI developers and deployment decision makers navigate the subtle but substantial impact of counterfactuals, so as to better ensure that beneficial AI capabilities are used.
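The listing does not include the paper's utility model, but the core idea of the abstract, weighing counterfactual hits and misses asymmetrically when computing the expected utility of deploying an AI, can be sketched in a few lines. The following is a minimal sketch; all probabilities, utility values, and names are illustrative assumptions, not figures or code from the paper.

```python
# Minimal sketch of an expected-utility assessment that includes
# counterfactual outcomes, in the spirit of the abstract above.
# All probabilities and utilities are hypothetical placeholders.

def expected_utility(p_ai_good, p_user_good, utilities):
    """Expected utility of deploying the AI, assuming (for simplicity)
    that AI and unaided-user decision quality are independent."""
    # Joint probabilities over (AI outcome, counterfactual user outcome).
    p_both_good = p_ai_good * p_user_good        # AI good; user would also be good
    p_cf_hit = p_ai_good * (1 - p_user_good)     # counterfactual hit: good only because of AI
    p_cf_miss = (1 - p_ai_good) * p_user_good    # counterfactual miss: bad because of AI
    p_both_bad = (1 - p_ai_good) * (1 - p_user_good)

    return (p_both_good * utilities["both_good"]
            + p_cf_hit * utilities["cf_hit"]
            + p_cf_miss * utilities["cf_miss"]
            + p_both_bad * utilities["both_bad"])

# Illustrative asymmetry from the abstract: counterfactual misses carry
# disproportionate disutility for the deployment decision maker, while
# counterfactual hits add little beyond the value of a good decision.
decision_maker_utils = {"both_good": 1.0, "cf_hit": 1.2,
                        "cf_miss": -20.0, "both_bad": -1.0}
beneficiary_utils = {"both_good": 1.0, "cf_hit": 1.0,
                     "cf_miss": -1.0, "both_bad": -1.0}

for label, utils in [("decision maker", decision_maker_utils),
                     ("beneficiary", beneficiary_utils)]:
    eu = expected_utility(p_ai_good=0.90, p_user_good=0.85, utilities=utils)
    print(f"EU for {label}: {eu:+.3f}")
```

With these placeholder numbers, deploying the AI is beneficial on average for intended beneficiaries (EU of roughly +0.80) yet sharply negative for the decision maker (roughly -0.79), mirroring the abstract's first property: the same deployment can be rational for one party and irrational for another once counterfactual misses are priced in.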
Related papers
- Human-Alignment Influences the Utility of AI-assisted Decision Making [16.732483972136418]
We investigate to what extent the degree of alignment actually influences the utility of AI-assisted decision making. Our results show a positive association between the degree of alignment and the utility of AI-assisted decision making.
arXiv Detail & Related papers (2025-01-23T19:01:47Z)
- How Performance Pressure Influences AI-Assisted Decision Making [57.53469908423318]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Should XAI Nudge Human Decisions with Explanation Biasing? [2.6396287656676725]
This paper reviews our previous trials of Nudge-XAI, an approach that introduces automatic biases into explanations from explainable AIs (XAIs).
Nudge-XAI uses a user model that predicts the influence of providing an explanation or emphasizing it and attempts to guide users toward AI-suggested decisions without coercion.
arXiv Detail & Related papers (2024-06-11T14:53:07Z)
- Overcoming Anchoring Bias: The Potential of AI and XAI-based Decision Support [0.0]
Information systems (IS) are frequently designed to leverage the negative effect of anchoring bias to influence individuals' decision-making.
Recent advances in Artificial Intelligence (AI) have opened new opportunities for mitigating biased decisions.
arXiv Detail & Related papers (2024-05-08T11:25:04Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- AI Reliance and Decision Quality: Fundamentals, Interdependence, and the Effects of Interventions [6.356355538824237]
We argue that reliance and decision quality are often inappropriately conflated in the current literature on AI-assisted decision-making. Our research highlights the importance of distinguishing between reliance behavior and decision quality in AI-assisted decision-making.
arXiv Detail & Related papers (2023-04-18T08:08:05Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems [13.484359389266864]
This paper addresses whether the Dunning-Kruger Effect (DKE) can hinder appropriate reliance on AI systems.
DKE is a metacognitive bias due to which less-competent individuals overestimate their own skill and performance.
We found that participants who overestimate their performance tend to exhibit under-reliance on AI systems.
arXiv Detail & Related papers (2023-01-25T14:26:10Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)