Information That Matters: Exploring Information Needs of People Affected by Algorithmic Decisions
- URL: http://arxiv.org/abs/2401.13324v6
- Date: Mon, 28 Oct 2024 12:45:52 GMT
- Title: Information That Matters: Exploring Information Needs of People Affected by Algorithmic Decisions
- Authors: Timothée Schmude, Laura Koesten, Torsten Möller, Sebastian Tschiatschek
- Abstract summary: "XAI Novice Question Bank" is an extension of the XAI Question Bank containing a catalog of information needs from AI novices.
"XAI Novice Question Bank" contains a catalog of information needs from AI novices in two use cases: employment prediction and health monitoring.
Our work aims to support the inclusion of AI novices in explainability efforts by highlighting their information needs, aims, and challenges.
- Score: 11.421963387588864
- Abstract: Every AI system that makes decisions about people has a group of stakeholders that are personally affected by these decisions. However, explanations of AI systems rarely address the information needs of this stakeholder group, who often are AI novices. This creates a gap between conveyed information and information that matters to those who are impacted by the system's decisions, such as domain experts and decision subjects. To address this, we present the "XAI Novice Question Bank," an extension of the XAI Question Bank containing a catalog of information needs from AI novices in two use cases: employment prediction and health monitoring. The catalog covers the categories of data, system context, system usage, and system specifications. We gathered information needs through task-based interviews where participants asked questions about two AI systems to decide on their adoption and received verbal explanations in response. Our analysis showed that participants' confidence increased after receiving explanations but that their understanding faced challenges. These included difficulties in locating information and in assessing their own understanding, as well as attempts to outsource understanding. Additionally, participants' prior perceptions of the systems' risks and benefits influenced their information needs. Participants who perceived high risks sought explanations about the intentions behind a system's deployment, while those who perceived low risks asked instead about the system's operation. Our work aims to support the inclusion of AI novices in explainability efforts by highlighting their information needs, aims, and challenges. We summarize our findings as five key implications that can inform the design of future explanations for lay stakeholder audiences.
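The abstract describes the catalog only at the level of its four categories (data, system context, system usage, system specifications) and its two use cases; the paper does not prescribe a machine-readable format. As a minimal illustrative sketch only (the class names, field names, and example question below are assumptions, not taken from the paper), such a catalog could be organized like this:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: categories and use cases come from the abstract above;
# everything else (names, example question) is assumed for illustration.
CATEGORIES = ("data", "system context", "system usage", "system specifications")
USE_CASES = ("employment prediction", "health monitoring")


@dataclass
class InformationNeed:
    question: str   # e.g. "What data does the system use?" (illustrative only)
    category: str   # one of CATEGORIES
    use_case: str   # one of USE_CASES


@dataclass
class QuestionBank:
    needs: list[InformationNeed] = field(default_factory=list)

    def add(self, question: str, category: str, use_case: str) -> None:
        # Reject entries outside the four categories / two use cases named above.
        if category not in CATEGORIES or use_case not in USE_CASES:
            raise ValueError(f"unknown category or use case: {category!r}, {use_case!r}")
        self.needs.append(InformationNeed(question, category, use_case))

    def by_category(self, category: str) -> list[InformationNeed]:
        # Collect all needs in one catalog category, e.g. to render it as a section.
        return [n for n in self.needs if n.category == category]
```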
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Sociotechnical Implications of Generative Artificial Intelligence for Information Access [4.3867169221012645]
Generative AI technologies may enable new ways to access information and improve the effectiveness of existing information retrieval systems.
We present an overview of some of the systemic consequences and risks of employing generative AI in the context of information access.
arXiv Detail & Related papers (2024-05-19T17:04:39Z) - Notion of Explainable Artificial Intelligence -- An Empirical
Investigation from A Users Perspective [0.3069335774032178]
This study investigates user-centric explainable AI, using recommendation systems as the study context.
We conducted focus group interviews to collect qualitative data on the recommendation system.
Our findings reveal that end users want a non-technical and tailor-made explanation with on-demand supplementary information.
arXiv Detail & Related papers (2023-11-01T22:20:14Z) - ChoiceMates: Supporting Unfamiliar Online Decision-Making with
Multi-Agent Conversational Interactions [58.71970923420007]
We present ChoiceMates, a system that enables conversations with a dynamic set of LLM-powered agents.
Agents, as opinionated personas, flexibly join the conversation, not only providing responses but also conversing among themselves to elicit each agent's preferences.
Our study (n=36) comparing ChoiceMates to conventional web search and a single-agent condition showed that ChoiceMates was more helpful for discovering, diving deeper into, and managing information than web search, with higher user confidence.
arXiv Detail & Related papers (2023-10-02T16:49:39Z) - Requirements for Explainability and Acceptance of Artificial
Intelligence in Collaborative Work [0.0]
The present structured literature analysis examines the requirements for the explainability and acceptance of AI.
Results indicate that developers, one of the two main user groups, require information about the internal operations of the model.
The acceptance of AI systems depends on information about the system's functions and performance, as well as privacy and ethical considerations.
arXiv Detail & Related papers (2023-06-27T11:36:07Z) - Video Surveillance System Incorporating Expert Decision-making Process:
A Case Study on Detecting Calving Signs in Cattle [5.80793470875286]
In this study, we examine the framework of a video surveillance AI system that presents the reasoning behind predictions by incorporating experts' decision-making processes with rich domain knowledge of the notification target.
In our case study, we designed a system for detecting signs of calving in cattle based on the proposed framework and evaluated the system through a user study with people involved in livestock farming.
arXiv Detail & Related papers (2023-01-10T12:06:49Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - "There Is Not Enough Information": On the Effects of Explanations on
Perceptions of Informational Fairness and Trustworthiness in Automated
Decision-Making [0.0]
Automated decision systems (ADS) are increasingly used for consequential decision-making.
We conduct a human subject study to assess people's perceptions of informational fairness.
A comprehensive analysis of qualitative feedback sheds light on people's desiderata for explanations.
arXiv Detail & Related papers (2022-05-11T20:06:03Z) - Adaptive cognitive fit: Artificial intelligence augmented management of
information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z) - Inverse Online Learning: Understanding Non-Stationary and Reactionary
Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of the agents' online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)