Toward Supporting Perceptual Complementarity in Human-AI Collaboration
via Reflection on Unobservables
- URL: http://arxiv.org/abs/2207.13834v1
- Date: Thu, 28 Jul 2022 00:05:14 GMT
- Title: Toward Supporting Perceptual Complementarity in Human-AI Collaboration
via Reflection on Unobservables
- Authors: Kenneth Holstein, Maria De-Arteaga, Lakshmi Tumati, Yanghuidi Cheng
- Abstract summary: We conduct an online experiment to understand whether and how explicitly communicating potentially relevant unobservables influences how people integrate model outputs and unobservables when making predictions.
Our findings indicate that presenting prompts about unobservables can change how humans integrate model outputs and unobservables, but does not necessarily lead to improved performance.
- Score: 7.3043497134309145
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many real world contexts, successful human-AI collaboration requires
humans to productively integrate complementary sources of information into
AI-informed decisions. However, in practice human decision-makers often lack
understanding of what information an AI model has access to in relation to
themselves. There are few available guidelines regarding how to effectively
communicate about unobservables: features that may influence the outcome, but
which are unavailable to the model. In this work, we conducted an online
experiment to understand whether and how explicitly communicating potentially
relevant unobservables influences how people integrate model outputs and
unobservables when making predictions. Our findings indicate that presenting
prompts about unobservables can change how humans integrate model outputs and
unobservables, but does not necessarily lead to improved performance.
Furthermore, the impacts of these prompts can vary depending on
decision-makers' prior domain expertise. We conclude by discussing implications
for future research and design of AI-based decision support tools.
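As a purely illustrative sketch (not taken from the paper), the snippet below shows one simple way the idea of "integrating" a model output with an unobservable could be formalized: a model scores a case from observable features only, and a decision-maker blends that score with a feature the model never sees. All names, weights, and numbers are hypothetical; the paper studies how prompts shift this kind of integration behaviorally, not any particular functional form.

```python
# Illustrative sketch only: a toy formalization of combining a model output
# with an unobservable feature. Names, weights, and data are hypothetical
# and are not taken from the paper's experiment.

def model_prediction(observables: dict[str, float]) -> float:
    """Stand-in for an AI model that scores a case from observable features only."""
    # Hypothetical linear score clipped to [0, 1].
    score = 0.4 * observables["feature_a"] + 0.6 * observables["feature_b"]
    return max(0.0, min(1.0, score))


def human_integrated_prediction(observables: dict[str, float],
                                unobservable: float,
                                reliance_on_model: float = 0.7,
                                unobservable_weight: float = 0.3) -> float:
    """Blend the model's output with a feature the model cannot see.

    reliance_on_model and unobservable_weight stand in for how much the
    decision-maker leans on each source of information.
    """
    ai_score = model_prediction(observables)
    adjusted = reliance_on_model * ai_score + unobservable_weight * unobservable
    return max(0.0, min(1.0, adjusted))


if __name__ == "__main__":
    case = {"feature_a": 0.8, "feature_b": 0.5}
    print("model only:        ", round(model_prediction(case), 3))
    print("with unobservable: ", round(human_integrated_prediction(case, unobservable=0.9), 3))
```

A linear blend is only one of many possible integration strategies; it is used here solely to make the notion of an unobservable concrete.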
Related papers
- Unexploited Information Value in Human-AI Collaboration [23.353778024330165]
How to improve the performance of a human-AI team is often unclear without knowing what information and strategies each agent employs.
We propose a model based on statistical decision theory to analyze human-AI collaboration.
arXiv Detail & Related papers (2024-11-03T01:34:45Z)
- Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review [6.013543974938446]
Research on leveraging Artificial Intelligence in decision support systems has disproportionately focused on technological advancements.
A human-centered perspective attempts to alleviate this concern by designing AI solutions for seamless integration with existing processes.
arXiv Detail & Related papers (2023-10-30T17:46:38Z)
- Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
In real-world settings such as healthcare, modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that is transparent by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z)
- From DDMs to DNNs: Using process data and models of decision-making to improve human-AI interactions [1.1510009152620668]
We argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time.
First, we introduce a well-established computational framework that assumes decisions emerge from the noisy accumulation of evidence.
Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making.
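The "noisy accumulation of evidence" framework referenced here is commonly instantiated as a drift-diffusion model (the DDM in the paper's title). Below is a minimal simulation sketch; the drift, noise, and boundary values are illustrative only and not taken from the paper. Evidence accumulates with a constant drift plus Gaussian noise until it crosses an upper or lower boundary, yielding both a choice and a response time.

```python
# Minimal drift-diffusion-style simulation: evidence accumulates noisily until
# it hits an upper or lower boundary. Parameter values are illustrative only.
import random


def simulate_decision(drift: float = 0.1,      # average evidence gained per step
                      noise_sd: float = 1.0,   # standard deviation of per-step noise
                      boundary: float = 10.0,  # +/- threshold that triggers a decision
                      dt: float = 1.0,
                      max_steps: int = 10_000) -> tuple[str, int]:
    """Return (choice, number_of_steps) for one simulated decision."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift * dt + random.gauss(0.0, noise_sd) * dt ** 0.5
        if evidence >= boundary:
            return "option_A", step   # upper boundary reached
        if evidence <= -boundary:
            return "option_B", step   # lower boundary reached
    return "no_decision", max_steps   # no boundary reached within the time limit


if __name__ == "__main__":
    random.seed(0)
    for choice, rt in (simulate_decision() for _ in range(5)):
        print(f"choice={choice:>11}  response_time={rt} steps")
```

The response-time distribution produced by such a process is the kind of "process data" the paper argues multi-agent AI research could make better use of.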
arXiv Detail & Related papers (2023-08-29T11:27:22Z)
- "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction [22.00514030715286]
We conducted a study of a real-world AI application via interviews with 20 end-users of Merlin, a bird-identification app.
We found that people express a need for practically useful information that can improve their collaboration with the AI system.
We also assessed end-users' perceptions of existing XAI approaches, finding that they prefer part-based explanations.
arXiv Detail & Related papers (2022-10-02T20:17:11Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
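To make the idea of a counterfactual explanation concrete, here is a small illustrative sketch. It is not the CEILS method (which generates counterfactuals via interventions in a latent space); it is only a brute-force search for the cheapest change to input features that flips a toy classifier's decision. The classifier, feature grid, and cost function are all hypothetical.

```python
# Illustrative brute-force counterfactual search for a toy classifier.
# This is NOT the CEILS algorithm; it only demonstrates what a counterfactual
# explanation is: a minimal feature change that flips the model's decision.
from itertools import product


def toy_classifier(income: float, debt: float) -> str:
    """Hypothetical loan decision rule."""
    return "approve" if income - 0.5 * debt >= 30.0 else "reject"


def find_counterfactual(income: float, debt: float,
                        desired: str = "approve") -> dict | None:
    """Search a small grid of feature changes and return the cheapest one
    that achieves the desired outcome (cost = sum of absolute changes)."""
    best = None
    for d_income, d_debt in product(range(0, 51, 5), range(-50, 1, 5)):
        if toy_classifier(income + d_income, debt + d_debt) == desired:
            cost = abs(d_income) + abs(d_debt)
            if best is None or cost < best["cost"]:
                best = {"raise_income_by": d_income,
                        "reduce_debt_by": -d_debt,
                        "cost": cost}
    return best


if __name__ == "__main__":
    applicant = {"income": 25.0, "debt": 20.0}
    print("current decision:", toy_classifier(**applicant))
    print("counterfactual:  ", find_counterfactual(**applicant))
```

The feasibility concern raised in the abstract corresponds to the cost function here: a search like this can propose changes that are cheap numerically but infeasible for the user to act on, which is what CEILS is designed to address.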
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions [46.87576410532481]
We show that, despite their current huge success, deep learning based AI systems can be easily fooled by subtle adversarial noise.
Based on a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions.
Our study highlights potential risks in the interaction loop with AI and humans, which need to be carefully addressed when deploying AI systems in safety-critical applications.
arXiv Detail & Related papers (2021-01-17T16:23:20Z)
- Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making [19.157591744997355]
We argue that the typical experimental setup limits the potential of human-AI teams.
We develop novel interfaces to support interactive explanations so that humans can actively engage with AI assistance.
arXiv Detail & Related papers (2021-01-13T19:01:32Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)