LeanAI: A method for AEC practitioners to effectively plan AI
implementations
- URL: http://arxiv.org/abs/2306.16799v1
- Date: Thu, 29 Jun 2023 09:18:11 GMT
- Title: LeanAI: A method for AEC practitioners to effectively plan AI
implementations
- Authors: Ashwin Agrawal, Vishal Singh, and Martin Fischer
- Abstract summary: Despite the enthusiasm regarding the use of AI, 85% of current big data projects fail.
One of the main reasons for AI project failures in the AEC industry is the disconnect between those who plan or decide to use AI and those who implement it.
This work introduces the LeanAI method, which delineates what AI should solve, what it can solve, and what it will solve.
- Score: 1.213096549055645
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent developments in Artificial Intelligence (AI) provide unprecedented
automation opportunities in the Architecture, Engineering, and Construction
(AEC) industry. However, despite the enthusiasm regarding the use of AI, 85% of
current big data projects fail. One of the main reasons for AI project failures
in the AEC industry is the disconnect between those who plan or decide to use
AI and those who implement it. AEC practitioners often lack a clear
understanding of the capabilities and limitations of AI, leading to a failure
to distinguish between what AI should solve, what it can solve, and what it
will solve, treating these categories as if they are interchangeable. This lack
of understanding results in the disconnect between AI planning and
implementation because the planning is based on a vision of what AI should
solve without considering if it can or will solve it. To address this
challenge, this work introduces the LeanAI method. The method has been
developed using data from several ongoing longitudinal studies analyzing AI
implementations in the AEC industry, which involved 50+ hours of interview
data. The LeanAI method delineates what AI should solve, what it can solve, and
what it will solve, forcing practitioners to clearly articulate these
components early in the planning process itself by involving the relevant
stakeholders. By utilizing the method, practitioners can effectively plan AI
implementations, thus increasing the likelihood of success and ultimately
speeding up the adoption of AI. A case example illustrates the usefulness of
the method.
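The core of the method is the distinction between what AI should solve (business value), can solve (technical feasibility), and will solve (actual commitment to implement). As an illustration only, the sketch below shows one hypothetical way a planning team might record and reconcile these three categories for a candidate use case; the class, field names, and example use case are assumptions and are not prescribed by the paper.

```python
from dataclasses import dataclass


# Hypothetical encoding of the should/can/will distinction described in the
# abstract; the LeanAI paper does not prescribe this data structure.
@dataclass
class AIUseCase:
    name: str
    should_solve: bool = False  # business value: stakeholders want AI to solve it
    can_solve: bool = False     # technical feasibility: data, methods, and skills exist
    will_solve: bool = False    # commitment: the implementation is planned and resourced

    def planning_gaps(self) -> list[str]:
        """Return the mismatches that early should/can/will planning would surface."""
        gaps = []
        if self.should_solve and not self.can_solve:
            gaps.append("wanted but not currently feasible")
        if self.can_solve and not self.should_solve:
            gaps.append("feasible but without clear business value")
        if self.should_solve and self.can_solve and not self.will_solve:
            gaps.append("valuable and feasible but not committed to")
        if self.will_solve and not (self.should_solve and self.can_solve):
            gaps.append("committed to without confirming value and feasibility")
        return gaps


if __name__ == "__main__":
    # Example (hypothetical): a use case planners want and could build, but have not resourced.
    use_case = AIUseCase(
        "Automated progress monitoring from site photos",
        should_solve=True, can_solve=True, will_solve=False,
    )
    for gap in use_case.planning_gaps():
        print(f"{use_case.name}: {gap}")
```

In this sketch, articulating the three flags separately makes the planning gap explicit before implementation starts, which is the kind of early stakeholder conversation the abstract describes.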
Related papers
- AI Thinking: A framework for rethinking artificial intelligence in practice [2.9805831933488127]
A growing range of disciplines are now involved in studying, developing, and assessing the use of AI in practice.
New, interdisciplinary approaches are needed to bridge competing conceptualisations of AI in practice.
I propose a novel conceptual framework called AI Thinking, which models key decisions and considerations involved in AI use across disciplinary perspectives.
arXiv Detail & Related papers (2024-08-26T04:41:21Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- AI-in-the-Loop -- The impact of HMI in AI-based Application [0.0]
We introduce the AI-in-the-loop concept, which combines the strengths of AI and humans by enabling HMI during an AI system's inference.
arXiv Detail & Related papers (2023-03-21T00:04:33Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A Deployment Model to Extend Ethically Aligned AI Implementation Method ECCOLA [5.28595286827031]
This study aims to extend ECCOLA with a deployment model to drive the adoption of ECCOLA.
The model includes simple metrics to facilitate the communication of ethical gaps or outcomes of ethical AI development.
arXiv Detail & Related papers (2021-10-12T12:22:34Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions intended for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- ECCOLA -- a Method for Implementing Ethically Aligned AI Systems [11.31664099885664]
We present a method for implementing AI ethics into practice.
The method, ECCOLA, has been iteratively developed using a cyclical action design research approach.
arXiv Detail & Related papers (2020-04-17T17:57:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.