The Duet of Representations and How Explanations Exacerbate It
- URL: http://arxiv.org/abs/2402.08379v1
- Date: Tue, 13 Feb 2024 11:18:27 GMT
- Title: The Duet of Representations and How Explanations Exacerbate It
- Authors: Charles Wan, Rodrigo Belo, Leid Zejnilović, Susana Lavado
- Abstract summary: An algorithm effects a causal representation of relations between features and labels in the human's perception.
Explanations can direct the human's attention to the conflicting feature and away from other relevant features.
This leads to causal overattribution and may adversely affect the human's information processing.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An algorithm effects a causal representation of relations between features
and labels in the human's perception. Such a representation might conflict with
the human's prior belief. Explanations can direct the human's attention to the
conflicting feature and away from other relevant features. This leads to causal
overattribution and may adversely affect the human's information processing. In
a field experiment we implemented an XGBoost-trained model as a decision-making
aid for counselors at a public employment service to predict candidates' risk
of long-term unemployment. The treatment group of counselors was also provided
with SHAP explanations of the model's predictions. The results show that the
quality of the human's decision-making is
worse when a feature on which the human holds a conflicting prior belief is
displayed as part of the explanation.
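The setup described above can be sketched with standard tooling. Below is a minimal, hypothetical reconstruction, assuming synthetic data and made-up feature names rather than the study's actual administrative records: an XGBoost classifier predicts long-term unemployment risk, and SHAP produces the per-candidate feature attributions that the treatment group of counselors would have seen.

```python
# Minimal sketch of the paper's setup (hypothetical features, synthetic data).
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "age": rng.integers(18, 65, n),               # hypothetical features,
    "months_unemployed": rng.integers(0, 48, n),  # not the study's data
    "education_years": rng.integers(8, 20, n),
    "prior_jobs": rng.integers(0, 15, n),
})
# Synthetic binary label: risk of long-term unemployment.
y = (0.05 * X["months_unemployed"] - 0.01 * X["age"]
     + rng.normal(0, 1, n) > 0.5).astype(int)

model = xgb.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# SHAP attributions: roughly what a treatment-group counselor would be shown.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The feature an explanation draws attention to for the first candidate.
top = X.columns[np.abs(shap_values[0]).argmax()]
print(f"Most influential feature for candidate 0: {top}")
```

If a counselor holds a conflicting prior belief about, say, `months_unemployed`, the paper's result suggests that displaying it prominently in such an attribution can degrade, rather than improve, the decision.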
Related papers
- The Odyssey of Commonsense Causality: From Foundational Benchmarks to Cutting-Edge Reasoning [70.16523526957162]
Understanding commonsense causality helps people understand the principles of the real world better.
Despite its significance, a systematic exploration of this topic is notably lacking.
Our work aims to provide a systematic overview, update scholars on recent advancements, and offer a pragmatic guide for beginners.
arXiv Detail & Related papers (2024-06-27T16:30:50Z)
- Mitigating Biases in Collective Decision-Making: Enhancing Performance in the Face of Fake News [4.413331329339185]
We study the influence these biases can have on the pervasive problem of fake news by evaluating human participants' capacity to identify false headlines.
By focusing on headlines involving sensitive characteristics, we gather a comprehensive dataset to explore how human responses are shaped by their biases.
We show that demographic factors, headline categories, and the manner in which information is presented significantly influence errors in human judgment.
arXiv Detail & Related papers (2024-03-11T12:08:08Z)
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, a quantity that is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- The Impact of Explanations on Fairness in Human-AI Decision-Making: Protected vs Proxy Features [25.752072910748716]
Explanations may help human-AI teams address biases for fairer decision-making.
We study the effect of the presence of protected and proxy features on participants' perception of model fairness.
We find that explanations help people detect direct but not indirect biases (a toy illustration of an indirect, proxy-driven bias follows this entry).
arXiv Detail & Related papers (2023-10-12T16:00:16Z)
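To make the direct-versus-indirect distinction above concrete, here is a toy sketch with synthetic data and hypothetical variable names (not taken from the paper): the protected attribute is excluded from the model, yet a correlated proxy carries its signal, so a feature-based explanation would list only the proxy and never the protected attribute itself.

```python
# Toy illustration of an indirect (proxy-driven) bias; all data synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
protected = rng.integers(0, 2, n)              # protected attribute (binary)
proxy = protected + rng.normal(0, 0.3, n)      # correlated stand-in feature
other = rng.normal(0, 1, n)
y = (0.8 * protected + other + rng.normal(0, 1, n) > 0.5).astype(int)

# Train only on (proxy, other): the protected attribute is never a feature,
# so no explanation over model features can display it directly.
X = np.column_stack([proxy, other])
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

# Predictions still differ systematically by protected group.
print("mean score, group 0:", round(scores[protected == 0].mean(), 3))
print("mean score, group 1:", round(scores[protected == 1].mean(), 3))
```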
- VISPUR: Visual Aids for Identifying and Interpreting Spurious Associations in Data-Driven Decisions [8.594140167290098]
Simpson's paradox is a phenomenon where aggregated and subgroup-level associations contradict each other (a toy numeric reproduction follows this entry).
Existing tools provide little insight for humans to locate, reason about, and prevent pitfalls of spurious associations in practice.
We propose VISPUR, a visual analytic system that provides a causal analysis framework and a human-centric workflow for tackling spurious associations.
arXiv Detail & Related papers (2023-07-26T18:40:07Z)
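Simpson's paradox, the phenomenon VISPUR targets, is easy to reproduce numerically. The toy example below uses the classic kidney-stone treatment counts purely as an illustration (these numbers are not from the paper):

```python
# Classic Simpson's paradox: treatment A wins in every subgroup,
# yet treatment B wins in the aggregate. Illustrative numbers only.
groups = {
    "small_stones": {"A": (81, 87),   "B": (234, 270)},
    "large_stones": {"A": (192, 263), "B": (55, 80)},
}

totals = {"A": [0, 0], "B": [0, 0]}
for group, arms in groups.items():
    for arm, (success, total) in arms.items():
        totals[arm][0] += success
        totals[arm][1] += total
        print(f"{group:13s} {arm}: {success / total:.0%} ({success}/{total})")

for arm, (success, total) in totals.items():
    print(f"{'aggregate':13s} {arm}: {success / total:.0%} ({success}/{total})")
# Output shows A >= B within each subgroup, but B > A overall:
# the aggregated and subgroup-level associations contradict each other.
```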
- Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making [10.049226270783562]
We study the effects of feature-based explanations on distributive fairness of AI-assisted decisions.
Our findings show that explanations influence fairness perceptions, which, in turn, relate to humans' tendency to adhere to AI recommendations.
arXiv Detail & Related papers (2022-09-23T19:10:59Z)
- Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables [7.3043497134309145]
We conduct an online experiment to understand whether and how explicitly communicating potentially relevant unobservables influences how people integrate model outputs and unobservables when making predictions.
Our findings indicate that presenting prompts about unobservables can change how humans integrate model outputs and unobservables, but does not necessarily lead to improved performance.
arXiv Detail & Related papers (2022-07-28T00:05:14Z)
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals [53.484562601127195]
We point out that behavioral conclusions cannot be inferred from probing results.
We offer an alternative method that focuses on how information is being used, rather than on what information is encoded (a simplified sketch follows this entry).
arXiv Detail & Related papers (2020-06-01T15:00:11Z)
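The core move in amnesic probing, removing a property from a representation and then checking how the model's behavior changes, can be approximated with a single linear nullspace projection. The sketch below is a simplified stand-in for the paper's iterative procedure, using random vectors in place of real model representations:

```python
# Simplified amnesic-counterfactual sketch: project a property's linear
# direction out of representations, then compare downstream behavior.
# One projection step only; the paper iterates this removal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 500, 64
reps = rng.normal(size=(n, d))           # stand-in for model representations
prop = (reps[:, 0] > 0).astype(int)      # the property to "forget"

# Fit a linear probe for the property, then project out its direction.
probe = LogisticRegression(max_iter=1000).fit(reps, prop)
w = probe.coef_[0]
w = w / np.linalg.norm(w)
amnesic_reps = reps - np.outer(reps @ w, w)   # nullspace projection

print("probe accuracy before:", probe.score(reps, prop))
print("probe accuracy after: ",
      LogisticRegression(max_iter=1000)
      .fit(amnesic_reps, prop).score(amnesic_reps, prop))
# Any drop in the downstream model's behavior on amnesic_reps relative to
# reps indicates the property's information was actually being used.
```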
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences arising from their use.