Automatic Discovery of Interpretable Planning Strategies
- URL: http://arxiv.org/abs/2005.11730v3
- Date: Sat, 10 Apr 2021 05:28:59 GMT
- Title: Automatic Discovery of Interpretable Planning Strategies
- Authors: Julian Skirzyński, Frederic Becker and Falk Lieder
- Abstract summary: We introduce AI-Interpret, a method for transforming idiosyncratic policies into simple and interpretable descriptions.
We show that providing the decision rules generated by AI-Interpret as flowcharts significantly improved people's planning strategies and decisions.
- Score: 9.410583483182657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When making decisions, people often overlook critical information or are
overly swayed by irrelevant information. A common approach to mitigate these
biases is to provide decision-makers, especially professionals such as medical
doctors, with decision aids, such as decision trees and flowcharts. Designing
effective decision aids is a difficult problem. We propose that recently
developed reinforcement learning methods for discovering clever heuristics for
good decision-making can be partially leveraged to assist human experts in this
design process. One of the biggest remaining obstacles to leveraging the
aforementioned methods is that the policies they learn are opaque to people. To
solve this problem, we introduce AI-Interpret: a general method for
transforming idiosyncratic policies into simple and interpretable descriptions.
Our algorithm combines recent advances in imitation learning and program
induction with a new clustering method for identifying a large subset of
demonstrations that can be accurately described by a simple, high-performing
decision rule. We evaluate our new algorithm and employ it to translate
information-acquisition policies discovered through metalevel reinforcement
learning. The results of large behavioral experiments showed that providing the
decision rules generated by AI-Interpret as flowcharts significantly improved
people's planning strategies and decisions across three different classes of
sequential decision problems. Moreover, another experiment revealed that this
approach is significantly more effective than training people by giving them
performance feedback. Finally, a series of ablation studies confirmed that
AI-Interpret is critical to the discovery of interpretable decision rules. We
conclude that the methods and findings presented herein are an important step
towards leveraging automatic strategy discovery to improve human
decision-making.
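To make the approach concrete, here is a minimal sketch of the core AI-Interpret loop under stated assumptions: fit a simple rule to the policy's demonstrations, then repeatedly drop the worst-explained demonstrations until a large, accurately described subset remains. A shallow scikit-learn decision tree stands in for the paper's imitation-learning and program-induction machinery; `interpret_policy`, `keep_fraction`, and the depth limit are illustrative choices, not the authors' implementation.
```python
# Minimal sketch of the AI-Interpret idea: repeatedly fit a simple rule to
# the demonstrations and drop the worst-explained ones until a large subset
# is described accurately. A shallow decision tree stands in for the paper's
# program-induction step; all names here are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def interpret_policy(states, actions, keep_fraction=0.8, max_rounds=20):
    """Find a simple rule describing a large subset of (state, action) pairs.

    states:  (n, d) array of state features from policy rollouts
    actions: (n,)   array of the actions the learned policy took
    """
    mask = np.ones(len(states), dtype=bool)      # demonstrations still kept
    min_keep = int(keep_fraction * len(states))  # target subset size
    rule = None
    for _ in range(max_rounds):
        rule = DecisionTreeClassifier(max_depth=2)  # "simple" = shallow tree
        rule.fit(states[mask], actions[mask])
        correct = rule.predict(states) == actions
        if correct[mask].all() or mask.sum() <= min_keep:
            break                                # rule explains the subset
        # Drop kept demonstrations the current rule misclassifies, but
        # never shrink the subset below the target size.
        mispredicted = mask & ~correct
        drop = np.flatnonzero(mispredicted)[: mask.sum() - min_keep]
        if len(drop) == 0:
            break
        mask[drop] = False
    return rule, mask
```
The recovered rule can then be rendered in a human-readable, flowchart-like form, e.g. with `sklearn.tree.export_text(rule)`.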
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Leveraging automatic strategy discovery to teach people how to select better projects [0.9821874476902969]
The decisions of individuals and organizations are often suboptimal because normative decision strategies are too demanding in the real world.
Recent work suggests that some errors can be prevented by leveraging artificial intelligence to discover and teach prescriptive decision strategies.
This article is the first to extend this approach to a real-world decision problem, namely project selection.
arXiv Detail & Related papers (2024-06-06T13:51:44Z)
- Designing Algorithmic Recommendations to Achieve Human-AI Complementarity [2.4247752614854203]
We formalize the design of recommendation algorithms that assist human decision-makers.
We use a potential-outcomes framework to model the effect of recommendations on a human decision-maker's binary treatment choice.
We derive minimax optimal recommendation algorithms that can be implemented with machine learning.
arXiv Detail & Related papers (2024-05-02T17:15:30Z)
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make the decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Boosting human decision-making with AI-generated decision aids [8.373151777137792]
We developed an algorithm for translating the output of our previous method into procedural instructions.
Experiments showed that these automatically generated decision-aids significantly improved people's performance in planning a road trip and choosing a mortgage.
These findings suggest that AI-powered boosting might have potential for improving human decision-making in the real world.
arXiv Detail & Related papers (2022-03-05T15:57:20Z)
- Improving Human Sequential Decision-Making with Reinforcement Learning [29.334511328067777]
We design a novel machine learning algorithm that is capable of extracting "best practices" from trace data.
Our algorithm selects the tip that best bridges the gap between the actions taken by human workers and those taken by the optimal policy.
Experiments show that the tips generated by our algorithm can significantly improve human performance; a minimal sketch of this tip-selection criterion follows the list below.
arXiv Detail & Related papers (2021-08-19T02:57:58Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps propagate expert knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
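As noted in the entry above on Improving Human Sequential Decision-Making with Reinforcement Learning, the tip-selection criterion admits a short sketch: among candidate tips, prefer the one that most often turns an observed human action into the optimal policy's action on the trace data. The `Tip` representation, `select_best_tip`, and `optimal_action` below are hypothetical stand-ins for that paper's constructs, not its published method.
```python
# Illustrative sketch: pick the tip that most often corrects a human action
# toward the optimal policy's action on the observed traces.
from typing import Callable, List, Tuple

State = dict  # e.g. features describing one decision point in a trace
Tip = Tuple[str, Callable[[State], bool], str]  # (text, applies_to, action)

def select_best_tip(traces: List[Tuple[State, str]],
                    optimal_action: Callable[[State], str],
                    candidate_tips: List[Tip]) -> Tip:
    """Return the tip that best bridges the human/optimal action gap."""
    def gap_closed(tip: Tip) -> int:
        _, applies_to, recommended = tip
        closed = 0
        for state, human_action in traces:
            best = optimal_action(state)
            # Count decision points where the human deviated from the optimal
            # policy and following the tip would have yielded the optimal action.
            if applies_to(state) and human_action != best and recommended == best:
                closed += 1
        return closed
    return max(candidate_tips, key=gap_closed)
```
In practice the candidate tips would themselves be mined from the trace data; here they are supplied by the caller for simplicity.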