The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR
- URL: http://arxiv.org/abs/2501.05325v1
- Date: Thu, 09 Jan 2025 15:50:02 GMT
- Title: The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR
- Authors: Laura State, Alejandra Bringas Colmenarejo, Andrea Beretta, Salvatore Ruggieri, Franco Turini, Stephanie Law
- Abstract summary: We present the Explanation Dialogues, an expert focus study to uncover the expectations, reasoning, and understanding of legal experts and practitioners towards XAI.
The study consists of an online questionnaire and follow-up interviews, and is centered around a use-case in the credit domain.
We find that the presented explanations are hard to understand and lack information, and discuss issues that can arise from the different interests of the data controller and subject.
- Score: 47.06917254695738
- License:
- Abstract: Explainable AI (XAI) provides methods to understand non-interpretable machine learning models. However, we have little knowledge about what legal experts expect from these explanations, including their legal compliance with, and value against European Union legislation. To close this gap, we present the Explanation Dialogues, an expert focus study to uncover the expectations, reasoning, and understanding of legal experts and practitioners towards XAI, with a specific focus on the European General Data Protection Regulation. The study consists of an online questionnaire and follow-up interviews, and is centered around a use-case in the credit domain. We extract both a set of hierarchical and interconnected codes using grounded theory, and present the standpoints of the participating experts towards XAI. We find that the presented explanations are hard to understand and lack information, and discuss issues that can arise from the different interests of the data controller and subject. Finally, we present a set of recommendations for developers of XAI methods, and indications of legal areas of discussion. Among others, recommendations address the presentation, choice, and content of an explanation, technical risks as well as the end-user, while we provide legal pointers to the contestability of explanations, transparency thresholds, intellectual property rights as well as the relationship between involved parties.
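The paper does not include code, but the credit use-case can be illustrated. The sketch below shows the kind of feature-attribution explanation for a single credit decision that such a study might present to legal experts; the model, feature names, and data are hypothetical and not taken from the paper.
```python
# Hypothetical sketch (not from the paper): a feature-attribution style
# explanation for one credit decision, of the kind experts might assess.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "num_late_payments", "employment_years"]

# Synthetic data standing in for a real credit dataset.
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] - 0.5 * X[:, 2] + 0.3 * X[:, 3] > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = X[0]
proba = model.predict_proba(applicant.reshape(1, -1))[0, 1]

# For a linear model, coefficient * (value - mean) is a simple local attribution.
contributions = model.coef_[0] * (applicant - X.mean(axis=0))
print(f"P(positive credit decision) = {proba:.2f}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:>20}: {c:+.3f}")
```
Whether an explanation of this form is understandable, sufficient, and legally adequate is precisely what the study asks its experts.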
Related papers
- EXAGREE: Towards Explanation Agreement in Explainable Machine Learning [0.0]
Explanations in machine learning are critical for trust, transparency, and fairness.
We introduce a novel framework, EXplanation AGREEment, to bridge diverse interpretations in explainable machine learning.
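The EXAGREE framework's internals are not reproduced here; as a loose illustration of what "explanation agreement" can mean in practice, one can compare how two attribution methods rank the same features, e.g. via rank correlation. The attributions below are invented for the example.
```python
# Illustrative only: quantify (dis)agreement between two hypothetical
# feature attributions for the same prediction via Spearman rank correlation.
from scipy.stats import spearmanr

features = ["income", "debt_ratio", "num_late_payments", "employment_years"]
attribution_a = [0.42, -0.31, -0.18, 0.09]   # e.g. from a gradient-based explainer
attribution_b = [0.35, -0.05, -0.40, 0.12]   # e.g. from a perturbation-based explainer

rho, _ = spearmanr(attribution_a, attribution_b)
print(f"Rank agreement between the two explanations: {rho:.2f}")
```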
arXiv Detail & Related papers (2024-11-04T10:28:38Z)
- How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law [0.20971479389679337]
This paper focuses on European (and part German) law, although with international concepts and regulations.
Based on XAI-taxonomies, requirements for XAI (methods) are derived from each of the legal bases.
arXiv Detail & Related papers (2024-04-19T10:08:28Z)
- Advancing Explainable Autonomous Vehicle Systems: A Comprehensive Review and Research Roadmap [4.2330023661329355]
This study presents a review to discuss the complexities associated with explanation generation and presentation.
Our roadmap is underpinned by principles of responsible research and innovation.
By exploring these research directions, the study aims to guide the development and deployment of explainable AVs.
arXiv Detail & Related papers (2024-03-19T11:43:41Z)
- Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK [1.5039745292757671]
We perform the first thematic and gap analysis of policies and standards on explainability in the EU, US, and UK.
We find that policies are often informed by coarse notions and requirements for explanations.
We propose recommendations on how to address explainability in regulations for AI systems.
arXiv Detail & Related papers (2023-04-20T07:53:07Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
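The exact prompting setup is not reproduced here; as a rough sketch of what "contextualizing a post-hoc explanation" could look like, one might assemble the model's risk estimate, its top attributed features, and the patient's clinical context into a prompt for an LLM. All names and values below are hypothetical.
```python
# Hypothetical sketch: combine a risk model's output, its top attributions,
# and clinical context into a prompt that an LLM could contextualize.
risk_output = {"predicted_risk": 0.27, "horizon": "5-year comorbidity risk"}
top_attributions = [("HbA1c", +0.11), ("BMI", +0.06), ("age", +0.04)]
clinical_context = "Type-2 diabetes diagnosed 3 years ago; currently on metformin."

prompt = (
    f"A risk model estimates a {risk_output['horizon']} of "
    f"{risk_output['predicted_risk']:.0%} for this patient.\n"
    "Most influential features (post-hoc attribution):\n"
    + "\n".join(f"- {name}: {weight:+.2f}" for name, weight in top_attributions)
    + f"\nClinical context: {clinical_context}\n"
    "Explain, for a clinician, how this context qualifies the model's estimate."
)
print(prompt)  # this prompt would then be passed to an LLM of choice
```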
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Harnessing value from data science in business: ensuring explainability and fairness of solutions [0.0]
The paper introduces concepts of fairness and explainability (XAI) in artificial intelligence, oriented towards solving sophisticated business problems.
For fairness, the authors discuss the specifics that induce bias, as well as relevant mitigation methods, concluding with a set of recipes for introducing fairness in data-driven organizations.
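The paper's recipes are organizational rather than code, but one concrete ingredient can be sketched: a basic group-fairness check such as the demographic-parity difference. The data below is synthetic and purely illustrative.
```python
# Illustrative fairness check: demographic parity difference between two groups.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)                  # protected attribute (0/1)
decision = rng.random(1000) < (0.55 + 0.10 * group)    # synthetic, biased decisions

rate_0 = decision[group == 0].mean()
rate_1 = decision[group == 1].mean()
print(f"Positive-decision rates: group 0 = {rate_0:.2f}, group 1 = {rate_1:.2f}")
print(f"Demographic parity difference = {abs(rate_0 - rate_1):.2f}")
```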
arXiv Detail & Related papers (2021-08-10T11:59:38Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
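CEILS itself intervenes in a latent space that encodes causal relations among features; as a much simpler stand-in that conveys the basic idea of a counterfactual explanation, the sketch below searches for the smallest single-feature change that flips a model's decision. The model and data are hypothetical, and the brute-force search is not the CEILS method.
```python
# Minimal counterfactual search (not CEILS): find the smallest single-feature
# change that flips the model's decision for one instance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))
y = (X @ np.array([1.0, -1.0, 0.5]) > 0).astype(int)
model = LogisticRegression().fit(X, y)

x = X[0]
original = int(model.predict(x.reshape(1, -1))[0])

best = None  # (feature index, delta) of the smallest flip found so far
for j in range(x.size):
    for delta in sorted(np.linspace(-3, 3, 121), key=abs):  # try small changes first
        if delta == 0:
            continue
        x_cf = x.copy()
        x_cf[j] += delta
        if model.predict(x_cf.reshape(1, -1))[0] != original:
            if best is None or abs(delta) < abs(best[1]):
                best = (j, delta)
            break  # smallest flip for this feature found; move on

if best is not None:
    j, delta = best
    print(f"Counterfactual: change feature {j} by {delta:+.2f} "
          f"to flip the decision from {original} to {1 - original}.")
```
Feasibility of the suggested change, which CEILS addresses through causal constraints, is exactly what this naive search ignores.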
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate the experts' knowledge to the AI model.
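The elicitation protocol itself is human-facing; a rough sketch of how an expert's stated rule might be propagated to a model is to encode the rule as an extra binary feature used during training. The rule, features, and data below are invented for illustration.
```python
# Illustrative sketch: encode an expert-elicited decision rule as an extra
# feature so the model can absorb the expert's knowledge during training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
income = rng.normal(50, 15, size=400)        # income in thousands (synthetic)
debt_ratio = rng.uniform(0, 1, size=400)
y = ((income > 45) & (debt_ratio < 0.4)).astype(int)

# Expert rule elicited alongside the labels: "reject if debt_ratio exceeds 0.5".
expert_rule = (debt_ratio > 0.5).astype(float)

X = np.column_stack([income, debt_ratio, expert_rule])
model = LogisticRegression(max_iter=1000).fit(X, y)
print("Weight learned for the expert-rule feature:", round(model.coef_[0][2], 3))
```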
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.