Explainable Artificial Intelligence: Precepts, Methods, and
Opportunities for Research in Construction
- URL: http://arxiv.org/abs/2211.06579v1
- Date: Sat, 12 Nov 2022 05:47:42 GMT
- Authors: Peter ED Love, Weili Fang, Jane Matthews, Stuart Porter, Hanbin Luo,
and Lieyun Ding
- Abstract summary: We provide a narrative review of XAI to raise awareness about its potential in construction.
Opportunities for future XAI research focusing on stakeholder desiderata and data and information fusion are identified and discussed.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable artificial intelligence has received limited attention in
construction despite its growing importance in various other industrial
sectors. In this paper, we provide a narrative review of XAI to raise awareness
about its potential in construction. Our review develops a taxonomy of the XAI
literature comprising its precepts and approaches. Opportunities for future XAI
research focusing on stakeholder desiderata and data and information fusion are
identified and discussed. We hope the opportunities we suggest stimulate new
lines of inquiry to help alleviate the scepticism and hesitancy toward AI
adoption and integration in construction.
Related papers
- Cutting Through the Confusion and Hype: Understanding the True Potential of Generative AI (2024-10-22)
  This paper explores the nuanced landscape of generative AI (genAI), focusing on neural network-based models such as Large Language Models (LLMs).
- Explainable Generative AI (GenXAI): A Survey, Conceptualization, and Research Agenda (2024-04-15)
  We elaborate on why XAI has gained importance with the rise of GenAI and on its challenges for explainability research. We also unveil novel and emerging desiderata that explanations should fulfill, covering aspects such as verifiability, interactivity, security, and cost.
- Position Paper: Agent AI Towards a Holistic Intelligence (2024-02-28)
  We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions. In this paper, we propose a novel large action model, the Agent Foundation Model, to achieve embodied intelligent behavior.
- Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing (2024-02-21)
  Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing. We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches. We also give a detailed outlook on the challenges and promising research directions.
- Generative AI in the Construction Industry: Opportunities & Challenges (2023-09-19)
  Despite the current surge of interest, no study has investigated the opportunities and challenges of implementing Generative AI (GenAI) in the construction sector. This study examines how GenAI is perceived in the literature and analyzes industry perception using programming-based word-cloud and frequency analysis. It recommends a conceptual GenAI implementation framework, provides practical recommendations, summarizes future research questions, and builds foundational literature to foster subsequent research on GenAI.
- Explainable Artificial Intelligence in Construction: The Content, Context, Process, Outcome Evaluation Framework (2022-11-12)
  We develop a content, context, process, and outcome evaluation framework that can be used to justify the adoption and effective management of XAI. While our framework is conceptual, it provides a frame of reference for construction organisations to make headway toward realising the business value and benefits of XAI.
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) (2022-10-31)
  The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding. Evaluating user knowledge and feedback is an extensive area where cognitive science research may substantially influence XAI advancements. We propose a framework for generating and evaluating explanations on the grounds of different cognitive levels of understanding.
- Artificial Intelligence in Concrete Materials: A Scientometric View (2022-09-17)
  This chapter aims to uncover the main research interests and knowledge structure of the existing literature on AI for concrete materials. A total of 389 journal articles published from 1990 to 2020 were retrieved from the Web of Science. Scientometric tools such as keyword co-occurrence analysis and document co-citation analysis were adopted to quantify the features and characteristics of the research field.
- Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir" (2021-11-01)
  This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing the existing literature on participation and through empirical analysis of its current practices. Based on this synthesis and empirical research, the paper presents a conceptual framework for analyzing participatory approaches to AI design.
- Counterfactual Explanations as Interventions in Latent Space (2021-06-14)
  Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome. Current approaches rarely take into account the feasibility of the actions required to achieve the proposed explanations. We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology for generating counterfactual explanations.
- What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research (2021-02-15)
  The main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems. It often remains unclear, however, how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata.
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.