Explainable and Human-Grounded AI for Decision Support Systems: The Theory of Epistemic Quasi-Partnerships
- URL: http://arxiv.org/abs/2409.14839v1
- Date: Mon, 23 Sep 2024 09:14:25 GMT
- Title: Explainable and Human-Grounded AI for Decision Support Systems: The Theory of Epistemic Quasi-Partnerships
- Authors: John Dorsch, Maximilian Moll
- Abstract summary: We argue that meeting the demands of ethical and explainable AI (XAI) is about developing AI-DSS to provide human decision-makers with three types of human-grounded explanations.
We demonstrate how current theories about what constitutes good human-grounded reasons either do not adequately explain this evidence or do not offer sound ethical advice for development.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the context of AI decision support systems (AI-DSS), we argue that meeting the demands of ethical and explainable AI (XAI) is about developing AI-DSS to provide human decision-makers with three types of human-grounded explanations: reasons, counterfactuals, and confidence, an approach we refer to as the RCC approach. We begin by reviewing current empirical XAI literature that investigates the relationship between various methods for generating model explanations (e.g., LIME, SHAP, Anchors), the perceived trustworthiness of the model, and end-user accuracy. We demonstrate how current theories about what constitutes good human-grounded reasons either do not adequately explain this evidence or do not offer sound ethical advice for development. Thus, we offer a novel theory of human-machine interaction: the theory of epistemic quasi-partnerships (EQP). Finally, we motivate adopting EQP and demonstrate how it explains the empirical evidence, offers sound ethical advice, and entails adopting the RCC approach.
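To make the RCC approach concrete, the following is a minimal, hypothetical sketch of what serving reasons, counterfactuals, and confidence alongside a single prediction could look like. It is not the authors' implementation: the classifier, the single-feature ablation (a stand-in for attribution methods such as LIME, SHAP, or Anchors), and the nearest differently-classified training point (a stand-in for a proper counterfactual search) are all illustrative assumptions.

```python
# A minimal RCC sketch: reasons, counterfactuals, and confidence for one
# prediction. Hypothetical and simplified; any scikit-learn-style classifier
# would do in place of the logistic regression used here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
x = X[0]

# Reasons: per-feature attributions via single-feature ablation to the mean
# (a stand-in for LIME/SHAP/Anchors-style attributions, not a reproduction).
base = model.predict_proba(x.reshape(1, -1))[0, 1]
reasons = {}
for j in range(x.size):
    x_ablated = x.copy()
    x_ablated[j] = X[:, j].mean()
    reasons[f"feature_{j}"] = base - model.predict_proba(x_ablated.reshape(1, -1))[0, 1]

# Counterfactual: the nearest training point the model classifies differently,
# standing in for a proper counterfactual search.
pred = model.predict(x.reshape(1, -1))[0]
candidates = X[model.predict(X) != pred]
counterfactual = candidates[np.argmin(np.linalg.norm(candidates - x, axis=1))]

# Confidence: the model's probability for its predicted class.
confidence = model.predict_proba(x.reshape(1, -1))[0].max()

print("reasons:", {k: round(v, 3) for k, v in reasons.items()})
print("counterfactual change:", np.round(counterfactual - x, 3))
print("confidence:", round(confidence, 3))
```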
Related papers
- Advancing Interactive Explainable AI via Belief Change Theory [5.842480645870251]
We first define a novel, logic-based formalism to represent explanatory information shared between humans and machines.
We argue that this type of formalisation provides a framework and a methodology to develop interactive explanations.
We then consider real-world scenarios for interactive XAI, with different prioritisations of new and existing knowledge, where our formalism may be instantiated.
arXiv Detail & Related papers (2024-08-13T13:11:56Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- Towards the New XAI: A Hypothesis-Driven Approach to Decision Support Using Evidence [9.916507773707917]
We describe and evaluate an approach for hypothesis-driven AI based on the Weight of Evidence (WoE) framework.
We show that our hypothesis-driven approach increases decision accuracy and reduces reliance compared to a recommendation-driven approach; a minimal sketch of the underlying WoE computation appears after this list.
arXiv Detail & Related papers (2024-02-02T10:28:24Z)
- Emergent Explainability: Adding a causal chain to neural network inference [0.0]
This position paper presents a theoretical framework for enhancing explainable artificial intelligence (xAI) through emergent communication (EmCom).
We explore the novel integration of EmCom into AI systems, offering a paradigm shift from conventional associative relationships between inputs and outputs to a more nuanced, causal interpretation.
The paper discusses the theoretical underpinnings of this approach, its potential broad applications, and its alignment with the growing need for responsible and transparent AI systems.
arXiv Detail & Related papers (2024-01-29T02:28:39Z)
- Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone of a new field of study at the intersection of AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- Categorical Foundations of Explainable AI: A Unifying Theory [8.637435154170916]
This paper presents the first mathematically rigorous definitions of key XAI notions and processes, using the well-founded formalism of category theory.
We show that our categorical framework allows us to: (i) model existing learning schemes and architectures, (ii) formally define the term "explanation", (iii) establish a theoretical basis for XAI, and (iv) analyze commonly overlooked aspects of explanation methods.
arXiv Detail & Related papers (2023-04-27T11:10:16Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic approach to explanation, aimed at better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework for experimenting with the generation and evaluation of explanations at different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- On the Relationship Between Explanations, Fairness Perceptions, and Decisions [2.5372245630249632]
It is known that recommendations of AI-based systems can be incorrect or unfair.
It is often proposed that a human be the final decision-maker.
Prior work has argued that explanations are an essential pathway to help human decision-makers enhance decision quality.
arXiv Detail & Related papers (2022-04-27T19:33:36Z)
- On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human-Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z)
- CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models [84.32751938563426]
We propose a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN).
In contrast to current XAI methods, which generate explanations as a single-shot response, we pose explanation as an iterative communication process.
Our framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user; a minimal sketch of such a dialog loop appears after this list.
arXiv Detail & Related papers (2021-09-03T09:46:20Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
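For the hypothesis-driven entry above, the Weight of Evidence is a standard quantity: WoE(h : e) = log[ P(e | h) / P(e | not-h) ], the amount by which evidence e shifts the log-odds of hypothesis h. Below is a minimal sketch with hypothetical numbers, assuming the pieces of evidence are conditionally independent given h and not-h; how the WoE paper itself operationalizes this may differ.

```python
import math

def weight_of_evidence(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """WoE(h : e) = log[ P(e|h) / P(e|not h) ], in natural-log units.
    Positive values favour the hypothesis; negative values count against it."""
    return math.log(p_e_given_h / p_e_given_not_h)

def posterior_log_odds(prior_log_odds: float, woes: list[float]) -> float:
    """Bayes' rule in log-odds form: the evidence weights simply add
    (given the conditional-independence assumption above), which is what
    makes WoE a readable, decomposable explanation."""
    return prior_log_odds + sum(woes)

# Hypothetical numbers: two pieces of evidence bearing on a hypothesis h.
prior = math.log(0.2 / 0.8)               # P(h) = 0.2
woes = [weight_of_evidence(0.9, 0.3),     # e1 strongly favours h
        weight_of_evidence(0.4, 0.5)]     # e2 weakly counts against h
odds = math.exp(posterior_log_odds(prior, woes))
print(f"posterior P(h | e1, e2) = {odds / (1 + odds):.3f}")
```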
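And for the CX-ToM entry, a minimal, generic sketch of explanation as an iterative dialog rather than a single-shot response. The candidate explanations and the satisfaction predicate below are hypothetical placeholders, not components of the CX-ToM framework itself.

```python
from typing import Callable, Iterable

def explanation_dialog(candidates: Iterable[str],
                       satisfied: Callable[[str], bool],
                       max_turns: int = 5) -> list[str]:
    """Present one explanation per turn and stop once the user reports
    understanding: a generic illustration of 'explanation as an iterative
    communication process', not the CX-ToM framework itself."""
    transcript: list[str] = []
    for turn, explanation in enumerate(candidates):
        transcript.append(explanation)
        if satisfied(explanation) or turn + 1 >= max_turns:
            break
    return transcript

# Hypothetical usage: coarse-to-fine explanations for an image classifier.
explanations = [
    "predicted 'cat': pointed ears and whiskers detected",
    "counterfactual: masking the whisker region drops the 'cat' score by 0.4",
    "confidence: predicted-class probability 0.92",
]
print(explanation_dialog(explanations, satisfied=lambda e: "counterfactual" in e))
```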