Integrating Evidence into the Design of XAI and AI-based Decision Support Systems: A Means-End Framework for End-users in Construction
- URL: http://arxiv.org/abs/2412.14209v1
- Date: Tue, 17 Dec 2024 13:02:05 GMT
- Title: Integrating Evidence into the Design of XAI and AI-based Decision Support Systems: A Means-End Framework for End-users in Construction
- Authors: Peter E. D. Love, Jane Matthews, Weili Fang, Hadi Mahamivanan
- Abstract summary: This paper develops a theoretical evidence-based means-end framework to uphold explainable artificial intelligence instruments.
It is suggested that it can be used in construction and other engineering domains.
- Score: 0.1999925939110439
- Abstract: A narrative review is used to develop a theoretical evidence-based means-end framework to build an epistemic foundation to uphold explainable artificial intelligence instruments so that the reliability of outcomes generated from decision support systems can be assured and better explained to end-users. The implications of adopting an evidence-based approach to designing decision support systems in construction are discussed, with emphasis placed on evaluating the strength, value, and utility of evidence needed to develop meaningful human explanations for end-users. While the developed means-end framework is focused on end-users, stakeholders can also utilize it to create meaningful human explanations, though these explanations will vary with stakeholders' differing epistemic goals. Including evidence in the design and development of explainable artificial intelligence and decision support systems will improve decision-making effectiveness, enabling end-users' epistemic goals to be achieved. Because the proposed means-end framework is developed from a broad spectrum of literature, it is suggested that it can be used in construction and other engineering domains where there is a need to integrate evidence into the design of explainable artificial intelligence and decision support systems.
Related papers
- The AI-DEC: A Card-based Design Method for User-centered AI Explanations [20.658833770179903]
We develop a design method, called AI-DEC, that defines four dimensions of AI explanations.
We evaluate this method through co-design sessions with workers in healthcare, finance, and management industries.
We discuss the implications of using the AI-DEC for the user-centered design of AI explanations in real-world systems.
arXiv Detail & Related papers (2024-05-26T22:18:38Z)
- What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks [16.795332276080888]
We propose a fine-grained validation framework for explainable artificial intelligence systems.
We recognise their inherent modular structure: technical building blocks, user-facing explanatory artefacts and social communication protocols.
arXiv Detail & Related papers (2024-03-19T13:45:34Z)
- A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, thus opening up new research directions in the area of formal argumentation and human-machine dialogues.
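The paper's own contribution is the ILP-based learning procedure; purely as background on what an acceptability semantics computes, here is a minimal Python sketch (not the authors' method) that derives the grounded extension of a small, hypothetical abstract argumentation framework.

```python
# Minimal sketch: computing the grounded extension of a Dung-style
# abstract argumentation framework. This illustrates what an
# "acceptability semantics" assigns; it is NOT the paper's ILP approach.

def grounded_extension(arguments, attacks):
    """arguments: set of labels; attacks: set of (attacker, target) pairs."""
    extension = set()
    changed = True
    while changed:
        changed = False
        for a in arguments - extension:
            attackers = {x for (x, y) in attacks if y == a}
            # a is acceptable if every attacker is itself attacked
            # by some argument already in the extension
            if all(any((d, x) in attacks for d in extension) for x in attackers):
                extension.add(a)
                changed = True
    return extension

# Hypothetical framework: a attacks b, b attacks c
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(grounded_extension(args, atts))  # {'a', 'c'}
```

Because the characteristic function is monotone, this incremental fixed-point iteration converges to the (unique) grounded extension.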
arXiv Detail & Related papers (2023-10-18T20:18:05Z)
- Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- A Semantic Approach to Decidability in Epistemic Planning (Extended Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
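For orientation, the LaTeX fragment below lists the standard S5$_n$ axioms together with one plausible reading of a commutativity-style interaction axiom; the final schema is an illustrative assumption, since the abstract does not spell out the paper's exact formulation.

```latex
% Standard S5_n axioms (knowledge operator K_i for each agent i):
\begin{align*}
  &\text{(K)}\; K_i(\varphi \to \psi) \to (K_i\varphi \to K_i\psi)
  &&\text{(T)}\; K_i\varphi \to \varphi \\
  &\text{(4)}\; K_i\varphi \to K_i K_i\varphi
  &&\text{(5)}\; \lnot K_i\varphi \to K_i \lnot K_i\varphi
\end{align*}
% Illustrative "commutativity"-style interaction axiom (an assumption
% here; the paper's exact schema may differ):
\[
  K_i K_j \varphi \to K_j K_i \varphi \qquad (i \neq j)
\]
```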
arXiv Detail & Related papers (2023-07-28T11:26:26Z)
- Need-driven decision-making and prototyping for DLT: Framework and web-based tool [0.0]
Multiple groups have attempted to disentangle the technology from the associated hype and controversy.
We develop a holistic analytical framework and open-source web tool for making evidence-based decisions.
arXiv Detail & Related papers (2023-07-18T12:19:47Z)
- Designing explainable artificial intelligence with active inference: A framework for transparent introspection and decision-making [0.0]
We discuss how active inference can be leveraged to design explainable AI systems.
We propose an architecture for explainable AI systems using active inference.
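As a loose illustration of the belief-updating core of active inference (not the paper's proposed architecture), the sketch below performs exact Bayesian updating over two hidden states with hypothetical likelihoods; in this exact-inference case the variational free energy reduces to the negative log evidence.

```python
# Tiny illustration of the active-inference idea (not the paper's
# architecture): an agent updates its beliefs over hidden states,
# here via exact Bayesian updating with hypothetical likelihoods.
import numpy as np

prior = np.array([0.5, 0.5])          # belief over two hidden states
likelihood = np.array([[0.9, 0.2],    # p(observation | state)
                       [0.1, 0.8]])

def update(belief, obs):
    post = likelihood[obs] * belief
    return post / post.sum()

belief = update(prior, obs=0)
# For exact posteriors, free energy F = -log p(o) (negative log evidence)
evidence = (likelihood[0] * prior).sum()
print(belief, -np.log(evidence))
```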
arXiv Detail & Related papers (2023-06-06T21:38:09Z)
- Design Space Exploration and Explanation via Conditional Variational Autoencoders in Meta-model-based Conceptual Design of Pedestrian Bridges [52.77024349608834]
This paper provides a performance-driven design exploration framework to augment the human designer through a Conditional Variational Autoencoder (CVAE).
The CVAE is trained on 18'000 synthetically generated instances of a pedestrian bridge in Switzerland.
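As a rough sketch of this class of model, the snippet below defines a small conditional variational autoencoder in PyTorch; the dimensions, the meaning of the condition vector (e.g. span or performance targets), and the sampling step are hypothetical stand-ins rather than the paper's actual meta-model.

```python
# Minimal CVAE sketch, assuming PyTorch. Dimensions and the condition
# vector (e.g. bridge span / performance targets) are hypothetical.
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, x_dim=32, c_dim=4, z_dim=8, h_dim=64):
        super().__init__()
        # Encoder q(z | x, c): design parameters x conditioned on c
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder p(x | z, c): reconstruct a design from latent code + condition
        self.dec = nn.Sequential(
            nn.Linear(z_dim + c_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar

def loss_fn(x_hat, x, mu, logvar):
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

# Design exploration: sample candidate designs for a fixed condition vector
model = CVAE()
c = torch.zeros(16, 4)   # hypothetical performance targets
z = torch.randn(16, 8)
candidates = model.dec(torch.cat([z, c], dim=-1))
```

Sampling different latent codes z for a fixed condition c is what turns the generative model into a design-exploration tool.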
arXiv Detail & Related papers (2022-11-29T17:28:31Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
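The snippet below gives an illustrative latent-space counterfactual search under stated assumptions (a pretrained autoencoder `encoder`/`decoder` and a differentiable classifier `clf`, all hypothetical); it conveys the general idea of intervening in latent space so that changes stay near the data manifold, and is not the CEILS implementation.

```python
# Illustrative sketch (not the CEILS implementation): search for a
# counterfactual in the latent space of a pretrained autoencoder.
# `encoder`, `decoder`, and `clf` are hypothetical PyTorch modules.
import torch

def latent_counterfactual(x, encoder, decoder, clf, target=1.0,
                          steps=200, lr=0.05, dist_weight=0.1):
    z = encoder(x).detach().clone().requires_grad_(True)
    z0 = z.detach().clone()
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_cf = decoder(z)
        pred = clf(x_cf)
        # Push the prediction toward the desired outcome while staying
        # close to the original latent code (a proximity/feasibility proxy)
        loss = (pred - target).pow(2).mean() + dist_weight * (z - z0).pow(2).sum()
        loss.backward()
        opt.step()
    return decoder(z).detach()
```

Searching in latent rather than input space is what keeps the suggested feature changes plausible with respect to the training data.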
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- A Computational Model of the Institutional Analysis and Development Framework [0.0]
This work presents the first attempt to turn the IAD framework into a computational model.
We define the Action Situation Language -- or ASL -- whose syntax is tailored to the components of the IAD framework and that we use to write descriptions of social interactions.
These models can be analyzed with the standard tools of game theory to predict which outcomes are being most incentivized, and evaluated according to their socially relevant properties.
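As a toy example of the "standard tools of game theory" step (separate from the ASL machinery itself), the sketch below enumerates the pure-strategy Nash equilibria of a two-player normal-form game with hypothetical payoffs.

```python
# Toy illustration of analysing an action situation with standard game
# theory (not the ASL machinery): enumerate pure-strategy Nash equilibria
# of a 2-player normal-form game with hypothetical payoffs.
from itertools import product

# payoffs[(row, col)] = (payoff_player1, payoff_player2)
payoffs = {  # a prisoner's-dilemma-style matrix
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def is_nash(row, col):
    u1, u2 = payoffs[(row, col)]
    best1 = all(payoffs[(r, col)][0] <= u1 for r in strategies)
    best2 = all(payoffs[(row, c)][1] <= u2 for c in strategies)
    return best1 and best2

equilibria = [s for s in product(strategies, strategies) if is_nash(*s)]
print(equilibria)  # [('defect', 'defect')]
```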
arXiv Detail & Related papers (2021-05-27T13:53:56Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.