From DDMs to DNNs: Using process data and models of decision-making to improve human-AI interactions
- URL: http://arxiv.org/abs/2308.15225v2
- Date: Thu, 7 Sep 2023 15:54:26 GMT
- Title: From DDMs to DNNs: Using process data and models of decision-making to improve human-AI interactions
- Authors: Mrugsen Nagsen Gopnarayan, Jaan Aru, Sebastian Gluth
- Abstract summary: We argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time.
First, we introduce a highly established computational framework that assumes decisions to emerge from the noisy accumulation of evidence.
Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making.
- Score: 1.1510009152620668
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the past decades, cognitive neuroscientists and behavioral economists
have recognized the value of describing the process of decision making in
detail and modeling the emergence of decisions over time. For example, the time
it takes to decide can reveal more about an agent's true hidden preferences
than only the decision itself. Similarly, data that track the ongoing decision
process such as eye movements or neural recordings contain critical information
that can be exploited, even if no decision is made. Here, we argue that
artificial intelligence (AI) research would benefit from a stronger focus on
insights about how decisions emerge over time and incorporate related process
data to improve AI predictions in general and human-AI interactions in
particular. First, we introduce a highly established computational framework
that assumes decisions to emerge from the noisy accumulation of evidence, and
we present related empirical work in psychology, neuroscience, and economics.
Next, we discuss to what extent current approaches in multi-agent AI do or do
not incorporate process data and models of decision making. Finally, we outline
how a more principled inclusion of the evidence-accumulation framework into the
training and use of AI can help to improve human-AI interactions in the future.
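The evidence-accumulation framework the abstract refers to is commonly formalized as a drift-diffusion model (DDM): evidence drifts toward one of two decision boundaries under Gaussian noise, and the first boundary crossing determines both the choice and the response time. As a minimal illustrative sketch (not code from the paper; the function name, numpy dependency, and parameter values are assumptions chosen for clarity), a single trial can be simulated as follows:

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001,
                 non_decision_time=0.3, max_time=5.0, rng=None):
    """Simulate one drift-diffusion trial.

    Evidence starts at 0 and accumulates with mean rate `drift`
    plus Gaussian noise until it crosses +boundary (choice A)
    or -boundary (choice B). Returns (choice, response_time).
    """
    rng = rng or np.random.default_rng()
    evidence, t = 0.0, 0.0
    while t < max_time:
        # Euler-Maruyama step: deterministic drift plus scaled noise
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(evidence) >= boundary:
            choice = "A" if evidence > 0 else "B"
            return choice, t + non_decision_time
    return None, max_time + non_decision_time  # no boundary reached

# Stronger preferences (larger drift) yield both more consistent choices
# and faster decisions, which is why response times carry information
# about an agent's hidden preferences beyond the choice itself.
rng = np.random.default_rng(0)
for drift in (0.5, 2.0):
    trials = [simulate_ddm(drift, rng=rng) for _ in range(1000)]
    rts = [rt for c, rt in trials if c == "A"]
    p_a = sum(c == "A" for c, _ in trials) / len(trials)
    print(f"drift={drift}: P(choose A)={p_a:.2f}, mean RT={np.mean(rts):.2f}s")
```

The toy comparison at the bottom illustrates the abstract's central point: because drift rate jointly shapes choice probabilities and response times, observing how long a decision takes constrains inferences about preferences that the choice alone leaves ambiguous.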
Related papers
- Explain To Decide: A Human-Centric Review on the Role of Explainable Artificial Intelligence in AI-assisted Decision Making [1.0878040851638]
Machine learning models are error-prone and cannot be used autonomously.
Explainable Artificial Intelligence (XAI) aids end-user understanding of the model.
This paper surveys recent empirical studies on XAI's impact on human-AI decision-making.
arXiv Detail & Related papers (2023-12-11T22:35:21Z)
- Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces human-AI coevolution as the cornerstone of a new field of study at the intersection of AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- Designing explainable artificial intelligence with active inference: A framework for transparent introspection and decision-making [0.0]
We discuss how active inference can be leveraged to design explainable AI systems and propose an architecture for such systems.
arXiv Detail & Related papers (2023-06-06T21:38:09Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables [7.3043497134309145]
We conduct an online experiment to understand whether and how explicitly communicating potentially relevant unobservables influences how people integrate model outputs and unobservables when making predictions.
Our findings indicate that presenting prompts about unobservables can change how humans integrate model outputs and unobservables, but does not necessarily lead to improved performance.
arXiv Detail & Related papers (2022-07-28T00:05:14Z)
- Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of the agents' online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Towards Explainable Artificial Intelligence in Banking and Financial Services [0.0]
We study and analyze the recent work done in Explainable Artificial Intelligence (XAI) methods and tools.
We introduce a novel XAI process, which facilitates producing explainable models while maintaining a high level of learning performance.
We develop a digital dashboard to facilitate interaction with the algorithm's results.
arXiv Detail & Related papers (2021-12-14T08:02:13Z)
- The human-AI relationship in decision-making: AI explanation to support people on justifying their decisions [4.169915659794568]
In decision-making scenarios, people need more awareness of how AI works and its outcomes to build a relationship with that system.
arXiv Detail & Related papers (2021-02-10T14:28:34Z)
- Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)