On the Relationship Between Explanations, Fairness Perceptions, and
Decisions
- URL: http://arxiv.org/abs/2204.13156v2
- Date: Fri, 29 Apr 2022 14:29:16 GMT
- Title: On the Relationship Between Explanations, Fairness Perceptions, and
Decisions
- Authors: Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl
- Abstract summary: It is known that recommendations of AI-based systems can be incorrect or unfair.
It is often proposed that a human be the final decision-maker.
Prior work has argued that explanations are an essential pathway to help human decision-makers enhance decision quality.
- Score: 2.5372245630249632
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: It is known that recommendations of AI-based systems can be incorrect or
unfair. Hence, it is often proposed that a human be the final decision-maker.
Prior work has argued that explanations are an essential pathway to help human
decision-makers enhance decision quality and mitigate bias, i.e., facilitate
human-AI complementarity. For these benefits to materialize, explanations
should enable humans to appropriately rely on AI recommendations and override
the algorithmic recommendation when necessary to increase distributive fairness
of decisions. The literature, however, does not provide conclusive empirical
evidence as to whether explanations enable such complementarity in practice. In
this work, we (a) provide a conceptual framework to articulate the
relationships between explanations, fairness perceptions, reliance, and
distributive fairness, (b) apply it to understand (seemingly) contradictory
research findings at the intersection of explanations and fairness, and (c)
derive cohesive implications for the formulation of research questions and the
design of experiments.
Related papers
- Explainable and Human-Grounded AI for Decision Support Systems: The Theory of Epistemic Quasi-Partnerships [0.0]
We argue that meeting the demands of ethical and explainable AI (XAI) requires developing AI-DSS that provide human decision-makers with three types of human-grounded explanations.
We demonstrate how current theories about what constitutes good human-grounded reasons either fail to account for the available evidence or do not offer sound ethical guidance for development.
arXiv Detail & Related papers (2024-09-23T09:14:25Z)
- The Odyssey of Commonsense Causality: From Foundational Benchmarks to Cutting-Edge Reasoning [70.16523526957162]
Understanding commonsense causality helps people better grasp how the real world works.
Despite its significance, a systematic exploration of this topic is notably lacking.
Our work aims to provide a systematic overview, update scholars on recent advances, and offer a pragmatic guide for beginners.
arXiv Detail & Related papers (2024-06-27T16:30:50Z)
- A Decision Theoretic Framework for Measuring AI Reliance [23.353778024330165]
Humans frequently make decisions with the aid of artificially intelligent (AI) systems.
Researchers have identified appropriate human reliance on AI as a critical component of achieving complementary performance.
We propose a formal definition of reliance, based on statistical decision theory, which frames reliance as the probability that the decision-maker follows the AI's recommendation.
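As a minimal illustration of that definition (the symbols d and r are ours, not taken from the paper), reliance can be written as
\[ \mathrm{Reliance} = \Pr(d = r), \]
where d denotes the human's final decision and r the AI's recommendation; empirically, this is estimated as the fraction of cases in which the decision-maker adopts the recommendation.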
arXiv Detail & Related papers (2024-01-27T09:13:09Z)
- From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning [66.98861219674039]
Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions.
Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.
arXiv Detail & Related papers (2023-10-24T19:46:04Z)
- In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making [25.18203172421461]
We argue that explanations are only useful to the extent that they allow a human decision-maker to verify the correctness of an AI's prediction.
We also compare the objective of complementary performance with that of appropriate reliance, decomposing the latter into the notions of outcome-graded and strategy-graded reliance.
arXiv Detail & Related papers (2023-05-12T18:28:04Z)
- AI Reliance and Decision Quality: Fundamentals, Interdependence, and the Effects of Interventions [6.356355538824237]
We argue that reliance and decision quality are often inappropriately conflated in the current literature on AI-assisted decision-making.
Our research highlights the importance of distinguishing between reliance behavior and decision quality in AI-assisted decision-making.
arXiv Detail & Related papers (2023-04-18T08:08:05Z)
- Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making [10.049226270783562]
We study the effects of feature-based explanations on distributive fairness of AI-assisted decisions.
Our findings show that explanations influence fairness perceptions, which, in turn, relate to humans' tendency to adhere to AI recommendations.
arXiv Detail & Related papers (2022-09-23T19:10:59Z)
- Doubting AI Predictions: Influence-Driven Second Opinion Recommendation [92.30805227803688]
We propose a way to augment human-AI collaboration by building on a common organizational practice: identifying experts who are likely to provide complementary opinions.
The proposed approach aims to leverage productive disagreement by identifying whether some experts are likely to disagree with an algorithmic assessment.
arXiv Detail & Related papers (2022-04-29T20:35:07Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Prompting Contrastive Explanations for Commonsense Reasoning Tasks [74.7346558082693]
Large pretrained language models (PLMs) can achieve near-human performance on commonsense reasoning tasks.
We show how to use these same models to generate human-interpretable evidence.
arXiv Detail & Related papers (2021-06-12T17:06:13Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.