"There Is Not Enough Information": On the Effects of Explanations on
Perceptions of Informational Fairness and Trustworthiness in Automated
Decision-Making
- URL: http://arxiv.org/abs/2205.05758v1
- Date: Wed, 11 May 2022 20:06:03 GMT
- Title: "There Is Not Enough Information": On the Effects of Explanations on
Perceptions of Informational Fairness and Trustworthiness in Automated
Decision-Making
- Authors: Jakob Schoeffer, Niklas Kuehl, Yvette Machowski
- Abstract summary: Automated decision systems (ADS) are increasingly used for consequential decision-making.
We conduct a human subject study to assess people's perceptions of informational fairness.
A comprehensive analysis of qualitative feedback sheds light on people's desiderata for explanations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automated decision systems (ADS) are increasingly used for consequential
decision-making. These systems often rely on sophisticated yet opaque machine
learning models, which do not allow for understanding how a given decision was
arrived at. In this work, we conduct a human subject study to assess people's
perceptions of informational fairness (i.e., whether people think they are
given adequate information on and explanation of the process and its outcomes)
and trustworthiness of an underlying ADS when provided with varying types of
information about the system. More specifically, we instantiate an ADS in the
area of automated loan approval and generate different explanations that are
commonly used in the literature. We randomize the amount of information that
study participants get to see by providing certain groups of people with the
same explanations as others plus additional explanations. From our quantitative
analyses, we observe that different amounts of information as well as people's
(self-assessed) AI literacy significantly influence the perceived informational
fairness, which, in turn, positively relates to perceived trustworthiness of
the ADS. A comprehensive analysis of qualitative feedback sheds light on
people's desiderata for explanations, among which are (i) consistency (both
with people's expectations and across different explanations), (ii) disclosure
of monotonic relationships between features and outcome, and (iii)
actionability of recommendations.
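To make the quantitative setup above concrete, here is a minimal sketch of the kind of analysis the abstract describes: an OLS regression of perceived informational fairness on the explanation condition and self-assessed AI literacy, followed by a regression of perceived trustworthiness on informational fairness. The data frame, column names, and values are illustrative assumptions, not the authors' actual study data, measures, or modeling choices.
```python
# Hypothetical sketch of the analysis described in the abstract (illustrative data only).
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative survey responses, one row per participant (e.g., 7-point Likert scores).
df = pd.DataFrame({
    "condition":    ["baseline", "baseline", "baseline",
                     "more_info", "more_info", "more_info",
                     "full_info", "full_info", "full_info"],
    "ai_literacy":  [2.0, 3.5, 4.0, 2.5, 5.0, 3.0, 4.5, 3.5, 5.5],   # self-assessed AI literacy
    "inf_fairness": [3.1, 3.8, 3.4, 4.5, 4.9, 3.9, 5.2, 4.8, 5.6],   # perceived informational fairness
    "trust":        [3.0, 3.6, 3.3, 4.2, 4.7, 4.0, 5.0, 4.7, 5.4],   # perceived trustworthiness
})

# Do the amount of information and AI literacy predict perceived informational fairness?
fairness_model = smf.ols(
    "inf_fairness ~ C(condition, Treatment(reference='baseline')) + ai_literacy",
    data=df,
).fit()
print(fairness_model.params)

# Does perceived informational fairness in turn relate to perceived trustworthiness?
trust_model = smf.ols("trust ~ inf_fairness", data=df).fit()
print(trust_model.params)
```
A faithful re-analysis would use the study's actual scales and, where appropriate, ordinal or mediation models; this sketch only mirrors the direction of the reported relationships.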
Related papers
- Information That Matters: Exploring Information Needs of People Affected by Algorithmic Decisions [11.421963387588864]
"XAI Novice Question Bank" is an extension of the XAI Question Bank containing a catalog of information needs from AI novices.
"XAI Novice Question Bank" contains a catalog of information needs from AI novices in two use cases: employment prediction and health monitoring.
Our work aims to support the inclusion of AI novices in explainability efforts by highlighting their information needs, aims, and challenges.
arXiv Detail & Related papers (2024-01-24T09:39:39Z)
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look at existing self-supervised speech representation methods from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work [0.0]
The present structured literature analysis examines the requirements for the explainability and acceptance of AI.
Results indicate two main user groups, one being developers, who require information about the internal operations of the model.
The acceptance of AI systems depends on information about the system's functions and performance, privacy and ethical considerations.
arXiv Detail & Related papers (2023-06-27T11:36:07Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Explainable Predictive Process Monitoring: A User Evaluation [62.41400549499849]
Explainability is motivated by the lack of transparency of black-box Machine Learning approaches.
We carry out a user evaluation of explanation approaches for Predictive Process Monitoring.
arXiv Detail & Related papers (2022-02-15T22:24:21Z)
- Uncertainty Quantification of Surrogate Explanations: an Ordinal Consensus Approach [1.3750624267664155]
We produce estimates of the uncertainty of a given explanation by measuring the consensus amongst a set of diverse bootstrapped surrogate explainers (see the sketch after this list).
We empirically illustrate the properties of this approach through experiments on state-of-the-art Convolutional Neural Network ensembles.
arXiv Detail & Related papers (2021-11-17T13:55:58Z)
- Perceptions of Fairness and Trustworthiness Based on Explanations in Human vs. Automated Decision-Making [0.0]
Automated decision systems (ADS) have become ubiquitous in many high-stakes domains.
We conduct an online study with 200 participants to examine people's perceptions of fairness and trustworthiness towards ADS.
We find that people perceive ADS as fairer than human decision-makers.
arXiv Detail & Related papers (2021-09-13T09:14:15Z)
- Appropriate Fairness Perceptions? On the Effectiveness of Explanations in Enabling People to Assess the Fairness of Automated Decision Systems [0.0]
We argue that for an effective explanation, perceptions of fairness should increase if and only if the underlying ADS is fair.
In this in-progress work, we introduce the desideratum of appropriate fairness perceptions, propose a novel study design for evaluating it, and outline next steps towards a comprehensive experiment.
arXiv Detail & Related papers (2021-08-14T09:39:59Z)
- Conditional Contrastive Learning: Removing Undesirable Information in Self-Supervised Representations [108.29288034509305]
We develop conditional contrastive learning to remove undesirable information in self-supervised representations.
We demonstrate empirically that our methods can successfully learn self-supervised representations for downstream tasks.
arXiv Detail & Related papers (2021-06-05T10:51:26Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them because their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations for machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
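As a rough illustration of the consensus idea in the "Uncertainty Quantification of Surrogate Explanations" entry above, the sketch below fits several surrogate explainers on bootstrapped local perturbations around a single instance and measures how strongly their feature importance rankings agree. The black-box function, perturbation scheme, and Kendall's-tau agreement measure are assumptions for illustration, not the authors' exact ordinal consensus procedure.
```python
# Hypothetical sketch: consensus among bootstrapped surrogate explainers (illustrative only).
import numpy as np
from itertools import combinations
from scipy.stats import kendalltau
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model, e.g. a loan-approval score.
    return 1 / (1 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * X[:, 2])))

x0 = np.array([0.2, -0.4, 1.0])        # instance whose prediction we want to explain
importances = []
for _ in range(20):                    # 20 bootstrapped surrogate explainers
    X_local = x0 + rng.normal(scale=0.5, size=(200, 3))   # local perturbations around x0
    y_local = black_box(X_local)
    surrogate = Ridge(alpha=1.0).fit(X_local, y_local)    # simple linear surrogate
    importances.append(np.abs(surrogate.coef_))           # per-feature importance scores

# Ordinal consensus proxy: average pairwise rank correlation of the importance vectors.
taus = [kendalltau(a, b)[0] for a, b in combinations(importances, 2)]
print(f"mean pairwise Kendall's tau: {np.mean(taus):.3f}")  # close to 1 = high consensus
```
Low average agreement would signal that any single surrogate explanation is unstable and should be presented with appropriate caveats.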