Explainable Artificial Intelligence in Construction: The Content,
Context, Process, Outcome Evaluation Framework
- URL: http://arxiv.org/abs/2211.06561v1
- Date: Sat, 12 Nov 2022 03:50:14 GMT
- Title: Explainable Artificial Intelligence in Construction: The Content,
Context, Process, Outcome Evaluation Framework
- Authors: Peter ED Love, Jane Matthews, Weili Fang, Stuart Porter, Hanbin Luo
and Lieyun Ding
- Abstract summary: We develop a content, context, process, and outcome evaluation framework that can be used to justify the adoption and effective management of XAI.
While our novel framework is conceptual, it provides a frame of reference for construction organisations to make headway toward realising XAI business value and benefits.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable artificial intelligence is an emerging and evolving concept. Its
impact on construction, though yet to be realised, will be profound in the
foreseeable future. Still, XAI has received limited attention in construction.
As a result, no evaluation frameworks have been propagated to enable
construction organisations to understand the what, why, how, and when of XAI.
Our paper aims to fill this void by developing a content, context, process, and
outcome evaluation framework that can be used to justify the adoption and
effective management of XAI. After introducing and describing this novel
framework, we discuss its implications for future research. While our novel
framework is conceptual, it provides a frame of reference for construction
organisations to make headway toward realising XAI business value and benefits.
Related papers
- What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks [16.795332276080888]
We propose a fine-grained validation framework for explainable artificial intelligence systems.
We recognise their inherent modular structure: technical building blocks, user-facing explanatory artefacts and social communication protocols.
arXiv Detail & Related papers (2024-03-19T13:45:34Z)
- Semantic Communications for Artificial Intelligence Generated Content (AIGC) Toward Effective Content Creation [75.73229320559996]
This paper develops a conceptual model for the integration of AIGC and SemCom.
A novel framework that employs AIGC technology is proposed as an encoder and decoder for semantic information.
The framework can adapt to different types of content generated, the required quality, and the semantic information utilized.
arXiv Detail & Related papers (2023-08-09T13:17:21Z)
- Unfinished Architectures: A Perspective from Artificial Intelligence [73.52315464582637]
Development of Artificial Intelligence (AI) opens new avenues for the proposal of possibilities for the completion of unfinished architectures.
Tools such as DALL-E, capable of completing images guided by a textual description, have recently appeared.
In this article we explore the use of these new AI tools to complete the unfinished facades of historical temples and analyse the still-nascent state of the field of architectural graphic composition.
arXiv Detail & Related papers (2023-03-03T13:05:10Z)
- Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI [29.33534897830558]
We argue that charting the gap improves our problem understanding, which can reflexively provide actionable insights to improve explainability.
We empirically derive a framework that facilitates systematic charting of the sociotechnical gap.
By making conceptual and practical contributions to understanding the sociotechnical gap in XAI, the framework expands the XAI design space.
arXiv Detail & Related papers (2023-02-01T23:21:45Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Explainable Artificial Intelligence: Precepts, Methods, and Opportunities for Research in Construction [1.2622634782102324]
We provide a narrative review of XAI to raise awareness about its potential in construction.
Opportunities for future XAI research focusing on stakeholder desiderata and data and information fusion are identified and discussed.
arXiv Detail & Related papers (2022-11-12T05:47:42Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- XAI Handbook: Towards a Unified Framework for Explainable AI [5.716475756970092]
The field of explainable AI (XAI) has quickly become a thriving and prolific community.
Each new contribution seems to rely on its own (and often intuitive) version of terms like "explanation" and "interpretation".
We propose a theoretical framework that not only provides concrete definitions for these terms but also outlines all the steps necessary to produce explanations and interpretations.
arXiv Detail & Related papers (2021-05-14T07:28:21Z)
- Expanding Explainability: Towards Social Transparency in AI systems [20.41177660318785]
Social Transparency (ST) is a socio-technically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making.
Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.
arXiv Detail & Related papers (2021-01-12T19:44:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.