A Critical Survey on Fairness Benefits of Explainable AI
- URL: http://arxiv.org/abs/2310.13007v6
- Date: Tue, 7 May 2024 15:50:27 GMT
- Title: A Critical Survey on Fairness Benefits of Explainable AI
- Authors: Luca Deck, Jakob Schoeffer, Maria De-Arteaga, Niklas Kühl
- Abstract summary: We identify seven archetypal claims from 175 scientific articles on the alleged fairness benefits of XAI.
We notice that claims are often vague and simplistic, lacking normative grounding, or poorly aligned with the actual capabilities of XAI.
We suggest conceiving of XAI not as an ethical panacea but as one of many tools to approach the multidimensional, sociotechnical challenge of algorithmic fairness.
- Score: 10.81142163495028
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this critical survey, we analyze typical claims on the relationship between explainable AI (XAI) and fairness to disentangle the multidimensional relationship between these two concepts. Based on a systematic literature review and a subsequent qualitative content analysis, we identify seven archetypal claims from 175 scientific articles on the alleged fairness benefits of XAI. We present crucial caveats with respect to these claims and provide an entry point for future discussions around the potentials and limitations of XAI for specific fairness desiderata. Importantly, we notice that claims are often (i) vague and simplistic, (ii) lacking normative grounding, or (iii) poorly aligned with the actual capabilities of XAI. We suggest conceiving of XAI not as an ethical panacea but as one of many tools to approach the multidimensional, sociotechnical challenge of algorithmic fairness. Moreover, when making a claim about XAI and fairness, we emphasize the need to be more specific about what kind of XAI method is used, which fairness desideratum it refers to, how exactly it enables fairness, and who the stakeholder is that benefits from XAI.
Related papers
- Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction [5.417632175667161]
Explainable Artificial Intelligence (XAI) addresses challenges by providing explanations for how these models make decisions and predictions.
Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques.
This paper provides a comprehensive literature review encompassing common terminologies and definitions, the need for XAI, beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods in different application areas.
arXiv Detail & Related papers (2024-08-30T21:42:17Z) - Dataset | Mindset = Explainable AI | Interpretable AI [36.001670039529586]
The terms "explainable AI (XAI)" and "interpretable AI (IAI)" are often used interchangeably when various XAI tools are applied to a given dataset to explain the reasons that underpin machine learning (ML) outputs.
We argue that XAI is a subset of IAI. The concept of IAI is beyond the sphere of a dataset. It includes the domain of a mindset.
We aim to clarify these notions and lay the foundation of XAI, IAI, EAI, and TAI for many practitioners and policymakers in future AI applications and research.
arXiv Detail & Related papers (2024-08-22T14:12:53Z) - How much informative is your XAI? A decision-making assessment task to
objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
It emerged that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z) - Does Explainable AI Have Moral Value? [0.0]
Explainable AI (XAI) aims to bridge the gap between complex algorithmic systems and human stakeholders.
Current discourse often examines XAI in isolation as either a technological tool, user interface, or policy mechanism.
This paper proposes a unifying ethical framework grounded in moral duties and the concept of reciprocity.
arXiv Detail & Related papers (2023-11-05T15:59:27Z) - Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z) - An Experimental Investigation into the Evaluation of Explainability
Methods [60.54170260771932]
This work compares 14 different metrics when applied to nine state-of-the-art XAI methods and three dummy methods (e.g., random saliency maps) used as references.
Experimental results show which of these metrics produces highly correlated results, indicating potential redundancy.
arXiv Detail & Related papers (2023-05-25T08:07:07Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of
Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - What Do We Want From Explainable Artificial Intelligence (XAI)? -- A
Stakeholder Perspective on XAI and a Conceptual Model Guiding
Interdisciplinary XAI Research [0.8707090176854576]
The main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems.
It often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata.
arXiv Detail & Related papers (2021-02-15T19:54:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.