Predicting and Understanding Human Action Decisions during Skillful
Joint-Action via Machine Learning and Explainable-AI
- URL: http://arxiv.org/abs/2206.02739v1
- Date: Mon, 6 Jun 2022 16:54:43 GMT
- Authors: Fabrizia Auletta, Rachel W. Kallen, Mario di Bernardo, Michael J. Richardson
- Abstract summary: This study uses supervised machine learning and explainable artificial intelligence to model, predict and understand human decision-making.
Long short-term memory networks were trained to predict the target selection decisions of expert and novice actors completing a dyadic herding task.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study uses supervised machine learning (SML) and explainable artificial
intelligence (AI) to model, predict and understand human decision-making during
skillful joint-action. Long short-term memory networks were trained to predict
the target selection decisions of expert and novice actors completing a dyadic
herding task. Results revealed that the trained models were expertise specific
and could not only accurately predict the target selection decisions of expert
and novice herders but could do so at timescales that preceded an actor's
conscious intent. To understand what differentiated the target selection
decisions of expert and novice actors, we then employed the explainable-AI
technique SHapley Additive exPlanations (SHAP) to identify the importance of
informational features (variables) on model predictions. This analysis revealed
that experts were more influenced by information about the state of their
co-herders compared to novices. The utility of employing SML and explainable-AI
techniques for investigating human decision-making is discussed.
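The pipeline described in the abstract pairs a sequence model (an LSTM classifier over time-series state features) with a post-hoc attribution method (SHAP). A minimal sketch of such a pairing is below, using PyTorch and the shap library; the class name, the feature and target counts, the window length, and all hyperparameters are invented for illustration and are not the authors' configuration.

```python
# Minimal sketch: train an LSTM to predict target-selection decisions,
# then attribute its predictions to input features with SHAP.
# All names, dimensions, and hyperparameters are illustrative
# assumptions, not the authors' actual configuration.
import numpy as np
import torch
import torch.nn as nn
import shap

N_FEATURES = 12   # assumed state variables (e.g., herder/co-herder/target kinematics)
N_TARGETS = 4     # assumed number of selectable targets
SEQ_LEN = 50      # assumed window of time steps preceding a decision

class TargetSelectionLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_TARGETS)

    def forward(self, x):             # x: (batch, SEQ_LEN, N_FEATURES)
        _, (h, _) = self.lstm(x)      # h: (1, batch, hidden)
        return self.head(h[-1])       # logits over candidate targets

# Toy stand-in data; real inputs would come from motion-tracked herding trials.
X = torch.randn(256, SEQ_LEN, N_FEATURES)
y = torch.randint(0, N_TARGETS, (256,))

model = TargetSelectionLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(20):                   # brief full-batch loop, for illustration only
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# SHAP attribution: GradientExplainer works with differentiable PyTorch models.
explainer = shap.GradientExplainer(model, X[:64])   # background sample
sv = explainer.shap_values(X[:8])                   # per-class attributions
sv = sv[0] if isinstance(sv, list) else sv[..., 0]  # keep class 0 (API varies by version)
# Averaging |SHAP| over trials and time steps ranks the informational
# features, e.g., to test whether co-herder state variables matter more
# to expert models than to novice models.
print("feature importance:", np.abs(sv).mean(axis=(0, 1)).round(3))
```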
Related papers
- Explain To Decide: A Human-Centric Review on the Role of Explainable
Artificial Intelligence in AI-assisted Decision Making [1.0878040851638]
Machine learning models are error-prone and cannot be relied on to make decisions autonomously.
Explainable Artificial Intelligence (XAI) aids end-user understanding of the model.
This paper surveyed the recent empirical studies on XAI's impact on human-AI decision-making.
arXiv Detail & Related papers (2023-12-11T22:35:21Z) - Training Towards Critical Use: Learning to Situate AI Predictions
Relative to Human Knowledge [22.21959942886099]
We introduce a process-oriented notion of appropriate reliance called critical use that centers the human's ability to situate AI predictions against knowledge that is uniquely available to them but unavailable to the AI model.
We conduct a randomized online experiment in a complex social decision-making setting: child maltreatment screening.
We find that, by providing participants with accelerated, low-stakes opportunities to practice AI-assisted decision-making, novices came to exhibit patterns of disagreement with AI that resemble those of experienced workers.
arXiv Detail & Related papers (2023-08-30T01:54:31Z) - From DDMs to DNNs: Using process data and models of decision-making to
improve human-AI interactions [1.1510009152620668]
We argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time.
First, we introduce a well-established computational framework, the drift-diffusion model (DDM), which assumes that decisions emerge from the noisy accumulation of evidence.
Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making.
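A minimal simulation of the drift-diffusion model's core mechanism, noisy accumulation of evidence toward one of two decision bounds, is sketched below; the parameter values are arbitrary illustrations, not fitted estimates.

```python
# Drift-diffusion toy: a decision emerges when noisily accumulated
# evidence crosses a bound; the crossing time doubles as the response time.
import numpy as np

def simulate_ddm(drift=0.3, bound=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= bound else 0), t    # choice (upper bound = 1) and RT

rng = np.random.default_rng(0)
choices, rts = zip(*(simulate_ddm(rng=rng) for _ in range(1000)))
print(f"P(upper) = {np.mean(choices):.2f}, mean RT = {np.mean(rts):.2f}s")
```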
arXiv Detail & Related papers (2023-08-29T11:27:22Z) - What Should I Know? Using Meta-gradient Descent for Predictive Feature
Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns: 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
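A faithful implementation of that three-part process is beyond a short sketch, but the core idea, meta-learning what to predict by descending the error of a downstream estimate that consumes the predictions as features, can be caricatured as follows. The observation stream, the per-prediction discounts, and the crude meta-gradient approximation are all invented simplifications, not the paper's algorithm.

```python
# Toy sketch of meta-gradient predictive-feature discovery: the agent
# maintains GVF-style predictions of future observations, learns their
# estimates by TD, and meta-learns *what* to predict (each prediction's
# discount) by descending the error of a downstream value estimate.
import numpy as np

rng = np.random.default_rng(0)
n_preds = 4
gammas = rng.uniform(0.1, 0.9, n_preds)   # meta-learned: what to predict
w = np.zeros(n_preds)                      # value weights over predictions
preds = np.zeros(n_preds)                  # current GVF estimates
alpha, meta_lr = 0.1, 0.01

obs = 0.0
for _ in range(10_000):
    next_obs = 0.9 * obs + rng.normal()    # toy AR(1) observation stream
    reward = next_obs                      # toy reward signal
    # TD update of each prediction toward its discounted future observation
    targets = next_obs + gammas * preds
    preds += alpha * (targets - preds)
    # Downstream value estimate built from the predictions as features
    err = reward - w @ preds
    w += alpha * err * preds
    # Meta-step: a crude chain-rule approximation of the gradient of the
    # downstream error with respect to each discount.
    gammas = np.clip(gammas + meta_lr * err * w * preds, 0.0, 0.99)
    obs = next_obs

print("learned discounts:", gammas.round(2))
```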
arXiv Detail & Related papers (2022-06-13T21:31:06Z) - Inverse Online Learning: Understanding Non-Stationary and Reactionary
Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions. Treating the agents behind a set of observed trajectories as online learners who update their beliefs as outcomes arrive, we cast the policy inference problem as the inverse of this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z) - On the Fairness of Machine-Assisted Human Decisions [3.4069627091757178]
We show that the inclusion of a biased human decision-maker can revert common relationships between the structure of the algorithm and the qualities of resulting decisions.
In the lab experiment, we demonstrate how predictions informed by gender-specific information can reduce average gender disparities in decisions.
arXiv Detail & Related papers (2021-10-28T17:24:45Z) - Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate the experts' knowledge to the AI model.
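As a toy illustration of the idea (not the paper's method), an elicited rule can pseudo-label unlabeled target-domain data so that the model carries the expert's reasoning into the new domain; the rule, features, and data below are invented.

```python
# Toy sketch: use an elicited expert decision rule to pseudo-label
# unlabeled target-domain data alongside expert-labeled source data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def expert_rule(x):
    # Hypothetical elicited rule: "flag if feature 0 is high unless
    # feature 1 is also high" (stated by the expert alongside labels).
    return (x[:, 0] > 0.5) & (x[:, 1] < 0.5)

X_source = rng.normal(size=(200, 2))
y_source = expert_rule(X_source).astype(int)      # expert-labeled source data
X_target = rng.normal(loc=0.3, size=(200, 2))     # shifted, unlabeled domain

# Propagate the expert's knowledge: pseudo-label target data with the rule.
y_pseudo = expert_rule(X_target).astype(int)
X_train = np.vstack([X_source, X_target])
y_train = np.concatenate([y_source, y_pseudo])

clf = LogisticRegression().fit(X_train, y_train)
print("target-domain agreement with rule:",
      (clf.predict(X_target) == y_pseudo).mean())
```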
arXiv Detail & Related papers (2021-02-23T08:07:22Z) - Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z) - When Does Uncertainty Matter?: Understanding the Impact of Predictive
Uncertainty in ML Assisted Decision Making [68.19284302320146]
We carry out user studies to assess how people with differing levels of expertise respond to different types of predictive uncertainty.
We found that showing posterior predictive distributions led to smaller disagreements with the ML model's predictions.
This suggests that posterior predictive distributions can serve as useful decision aids, though they should be used with caution, taking into account the type of distribution shown and the expertise of the human.
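In the same spirit, here is a minimal example of displaying predictive uncertainty rather than a bare point estimate: the posterior over a success probability in a Beta-Bernoulli model, summarized by a credible interval. The counts are invented, and the study's actual models and tasks differ.

```python
# Minimal uncertainty display: a Beta posterior over a success
# probability, shown with an interval instead of a point prediction.
import numpy as np
from scipy import stats

successes, failures = 12, 8          # hypothetical observed outcomes
posterior = stats.beta(1 + successes, 1 + failures)  # Beta(1,1) prior

point_prediction = posterior.mean()  # what a bare ML prediction would show
interval = posterior.ppf([0.05, 0.95])

print(f"point prediction: P(success) = {point_prediction:.2f}")
print(f"90% credible interval: [{interval[0]:.2f}, {interval[1]:.2f}]")
# Showing the distribution (or interval) lets the decision-maker weigh
# the model's confidence, not just its most likely answer.
```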
arXiv Detail & Related papers (2020-11-12T02:23:53Z) - Local Post-Hoc Explanations for Predictive Process Monitoring in
Manufacturing [0.0]
This study proposes an innovative explainable predictive quality analytics solution to facilitate data-driven decision-making in manufacturing.
It combines process mining, machine learning, and explainable artificial intelligence (XAI) methods.
arXiv Detail & Related papers (2020-09-22T13:07:17Z) - Explainable Active Learning (XAL): An Empirical Study of How Local
Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as anchoring on the model's judgment and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)