Information That Matters: Exploring Information Needs of People Affected by Algorithmic Decisions
- URL: http://arxiv.org/abs/2401.13324v4
- Date: Mon, 29 Jan 2024 08:52:18 GMT
- Title: Information That Matters: Exploring Information Needs of People Affected by Algorithmic Decisions
- Authors: Timothée Schmude, Laura Koesten, Torsten Möller, Sebastian Tschiatschek
- Abstract summary: Explanations of AI systems rarely address the information needs of people affected by algorithmic decision-making (ADM). We present the "XAI Novice Question Bank": a catalog of affected stakeholders' information needs in two ADM use cases.
- Score: 9.15830544182374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explanations of AI systems rarely address the information needs of people
affected by algorithmic decision-making (ADM). This gap between conveyed
information and information that matters to affected stakeholders can impede
understanding and adherence to regulatory frameworks such as the AI Act. To
address this gap, we present the "XAI Novice Question Bank": a catalog of
affected stakeholders' information needs in two ADM use cases (employment
prediction and health monitoring), covering the categories data, system
context, system usage, and system specifications. Information needs were
gathered in an interview study where participants received explanations in
response to their inquiries. Participants further reported their understanding
and decision confidence, showing that while confidence tended to increase after
receiving explanations, participants also encountered challenges to understanding, such as being unable to tell why their understanding felt incomplete. Explanations
further influenced participants' perceptions of the systems' risks and
benefits, which they confirmed or changed depending on the use case. When risks
were perceived as high, participants expressed particular interest in
explanations about intention, such as why and to what end a system was put in
place. With this work, we aim to support the inclusion of affected stakeholders in explainability by contributing an overview of the information and challenges
relevant to them when deciding on the adoption of ADM systems. We close by
summarizing our findings in a list of six key implications that inform the
design of future explanations for affected stakeholder audiences.
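The catalog's structure, four information-need categories crossed with two use cases, lends itself to a simple data representation. Below is a minimal, hypothetical sketch in Python of how question-bank entries might be organized; the category and use-case names come from the abstract, while the example questions, field names, and class design are illustrative assumptions, not the paper's actual catalog schema.

```python
from dataclasses import dataclass

# The four information-need categories named in the abstract.
CATEGORIES = ("data", "system context", "system usage", "system specifications")

# The two ADM use cases studied in the paper.
USE_CASES = ("employment prediction", "health monitoring")

@dataclass
class InformationNeed:
    """One entry of a question-bank-style catalog (illustrative, not the paper's schema)."""
    question: str  # an affected stakeholder's question
    category: str  # one of CATEGORIES
    use_case: str  # one of USE_CASES

    def __post_init__(self) -> None:
        # Guard against entries outside the catalog's structure.
        assert self.category in CATEGORIES
        assert self.use_case in USE_CASES

# Hypothetical example entries; the real catalog's questions differ.
question_bank = [
    InformationNeed("What data about me does the system use?",
                    "data", "employment prediction"),
    InformationNeed("Why and to what end was this system put in place?",
                    "system context", "health monitoring"),
]

# Filtering by category, e.g. to design explanations per category.
data_needs = [n for n in question_bank if n.category == "data"]
```

A flat list of typed entries like this makes it easy to slice the catalog by category or use case when tailoring explanations to a given audience.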
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and its explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Sociotechnical Implications of Generative Artificial Intelligence for Information Access [4.3867169221012645]
Generative AI technologies may enable new ways to access information and improve the effectiveness of existing information retrieval systems.
We present an overview of some of the systemic consequences and risks of employing generative AI in the context of information access.
arXiv Detail & Related papers (2024-05-19T17:04:39Z)
- Notion of Explainable Artificial Intelligence -- An Empirical Investigation from A Users Perspective [0.3069335774032178]
This study investigates user-centric explainable AI, using recommendation systems as the study context.
We conducted focus group interviews to collect qualitative data on the recommendation system.
Our findings reveal that end users want a non-technical and tailor-made explanation with on-demand supplementary information.
arXiv Detail & Related papers (2023-11-01T22:20:14Z)
- ChoiceMates: Supporting Unfamiliar Online Decision-Making with Multi-Agent Conversational Interactions [58.71970923420007]
We present ChoiceMates, a system that enables conversations with a dynamic set of LLM-powered agents.
Agents, as opinionated personas, flexibly join the conversation, not only providing responses but also conversing among themselves to elicit each agent's preferences.
Our study (n=36) comparing ChoiceMates to conventional web search and a single-agent baseline showed that ChoiceMates was more helpful than web search for discovering, diving deeper into, and managing information, and yielded higher confidence.
arXiv Detail & Related papers (2023-10-02T16:49:39Z)
- Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work [0.0]
The present structured literature analysis examines the requirements for the explainability and acceptance of AI.
Results indicate that developers, one of the two main user groups, require information about the internal operations of the model.
The acceptance of AI systems depends on information about the system's functions and performance, as well as on privacy and ethical considerations.
arXiv Detail & Related papers (2023-06-27T11:36:07Z)
- Video Surveillance System Incorporating Expert Decision-making Process: A Case Study on Detecting Calving Signs in Cattle [5.80793470875286]
In this study, we examine the framework of a video surveillance AI system that presents the reasoning behind predictions by incorporating experts' decision-making processes with rich domain knowledge of the notification target.
In our case study, we designed a system for detecting signs of calving in cattle based on the proposed framework and evaluated the system through a user study with people involved in livestock farming.
arXiv Detail & Related papers (2023-01-10T12:06:49Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Are Akpans Trick or Treat: Unveiling Helpful Biases in Assistant Systems [55.09907990139756]
Information-seeking AI assistant systems aim to answer users' queries about knowledge in a timely manner.
In this paper, we study computational measurements of helpfulness.
Experiments with state-of-the-art dialogue systems reveal that existing systems tend to be more helpful for questions regarding concepts from highly developed countries.
arXiv Detail & Related papers (2022-05-25T07:58:38Z)
- "There Is Not Enough Information": On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making [0.0]
Automated decision systems (ADS) are increasingly used for consequential decision-making.
We conduct a human subject study to assess people's perceptions of informational fairness.
A comprehensive analysis of qualitative feedback sheds light on people's desiderata for explanations.
arXiv Detail & Related papers (2022-05-11T20:06:03Z)
- Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
To understand the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)