Boosting human decision-making with AI-generated decision aids
- URL: http://arxiv.org/abs/2203.02776v2
- Date: Tue, 19 Jul 2022 00:02:43 GMT
- Title: Boosting human decision-making with AI-generated decision aids
- Authors: Frederic Becker, Julian Skirzyński, Bas van Opheusden, Falk Lieder
- Abstract summary: We developed an algorithm for translating the output of our previous method into procedural instructions.
Experiments showed that these automatically generated decision-aids significantly improved people's performance in planning a road trip and choosing a mortgage.
These findings suggest that AI-powered boosting might have potential for improving human decision-making in the real world.
- Score: 8.373151777137792
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human decision-making is plagued by many systematic errors. Many of these
errors can be avoided by providing decision aids that guide decision-makers to
attend to the important information and integrate it according to a rational
decision strategy. Designing such decision aids used to be a tedious manual
process. Advances in cognitive science might make it possible to automate this
process in the future. We recently introduced machine learning methods for
discovering optimal strategies for human decision-making automatically and an
automatic method for explaining those strategies to people. Decision aids
constructed by this method were able to improve human decision-making. However,
following the descriptions generated by this method is very tedious. We
hypothesized that this problem can be overcome by conveying the automatically
discovered decision strategy as a series of natural language instructions for
how to reach a decision. Experiment 1 showed that people do indeed understand
such procedural instructions more easily than the decision aids generated by
our previous method. Encouraged by this finding, we developed an algorithm for
translating the output of our previous method into procedural instructions. We
applied the improved method to automatically generate decision aids for a
naturalistic planning task (i.e., planning a road trip) and a naturalistic
decision task (i.e., choosing a mortgage). Experiment 2 showed that these
automatically generated decision-aids significantly improved people's
performance in planning a road trip and choosing a mortgage. These findings
suggest that AI-powered boosting might have potential for improving human
decision-making in the real world.
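As a rough illustration of the translation step described in the abstract, the following Python sketch renders a rule-based decision strategy as numbered procedural instructions. The DecisionRule structure, its field names, and the wording templates are hypothetical assumptions for illustration only, not the algorithm from the paper.

```python
# Minimal illustrative sketch: turn a discovered decision strategy into
# numbered natural-language steps. The DecisionRule structure, field names,
# and phrasing templates are hypothetical, NOT the paper's implementation.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DecisionRule:
    """One step of an automatically discovered decision strategy."""
    action: str                      # e.g. "inspect the possible final destinations"
    condition: Optional[str] = None  # e.g. "none of them has a high value"
    then_stop: bool = False          # whether to stop planning after this step


def to_procedural_instructions(rules: List[DecisionRule]) -> str:
    """Render a rule-based strategy as a numbered list of instructions."""
    steps = []
    for i, rule in enumerate(rules, start=1):
        text = rule.action[0].upper() + rule.action[1:]
        if rule.condition:
            text += f" as long as {rule.condition}"
        if rule.then_stop:
            text += ", then stop planning and act"
        steps.append(f"{i}. {text}.")
    return "\n".join(steps)


if __name__ == "__main__":
    # Hypothetical strategy for the road-trip planning example.
    strategy = [
        DecisionRule("inspect the possible final destinations"),
        DecisionRule("keep inspecting destinations",
                     condition="none of them has a high value"),
        DecisionRule("plan the route backwards from the best destination",
                     then_stop=True),
    ]
    print(to_procedural_instructions(strategy))
```

For the example strategy above, the sketch prints three numbered steps, starting with "1. Inspect the possible final destinations."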
Related papers
- Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary [19.884253335528317]
Recent advances in AI models have increased the integration of AI-based decision aids into the human decision making process.
To fully unlock the potential of AI-assisted decision making, researchers have computationally modeled how humans incorporate AI recommendations into their final decisions.
Providing AI explanations to human decision makers to help them rely on AI recommendations more appropriately has become a common practice.
arXiv Detail & Related papers (2024-11-02T18:33:28Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Leveraging automatic strategy discovery to teach people how to select better projects [0.9821874476902969]
The decisions of individuals and organizations are often suboptimal because normative decision strategies are too demanding in the real world.
Recent work suggests that some errors can be prevented by leveraging artificial intelligence to discover and teach prescriptive decision strategies.
This article is the first to extend this approach to a real-world decision problem, namely project selection.
arXiv Detail & Related papers (2024-06-06T13:51:44Z)
- Decoding AI's Nudge: A Unified Framework to Predict Human Behavior in AI-assisted Decision Making [24.258056813524167]
We propose a computational framework that can provide an interpretable characterization of the influence of different forms of AI assistance on decision makers.
By conceptualizing AI assistance as the "nudge" in human decision-making processes, our approach centers on modelling how different forms of AI assistance modify humans' strategy in weighing different information when making their decisions.
arXiv Detail & Related papers (2024-01-11T11:22:36Z)
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make the decisions.
The proposed system, Ardent, enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps propagate the experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- Improving Human Decision-Making by Discovering Efficient Strategies for Hierarchical Planning [0.6882042556551609]
People need efficient planning strategies because their computational resources are limited.
Our ability to compute those strategies used to be limited to very small and very simple planning tasks.
We introduce a cognitively-inspired reinforcement learning method that can overcome this limitation.
arXiv Detail & Related papers (2021-01-31T19:46:00Z)
- Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- Automatic Discovery of Interpretable Planning Strategies [9.410583483182657]
We introduce AI-Interpret, a method for transforming idiosyncratic policies into simple and interpretable descriptions.
We show that providing the decision rules generated by AI-Interpret as flowcharts significantly improved people's planning strategies and decisions.
arXiv Detail & Related papers (2020-05-24T12:24:52Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.