Trustworthy AI and Robotics and the Implications for the AEC Industry: A
Systematic Literature Review and Future Potentials
- URL: http://arxiv.org/abs/2109.13373v1
- Date: Mon, 27 Sep 2021 22:33:56 GMT
- Title: Trustworthy AI and Robotics and the Implications for the AEC Industry: A
Systematic Literature Review and Future Potentials
- Authors: Newsha Emaminejad and Reza Akhavian
- Abstract summary: The study focuses on trust in AI and robotics (AIR) applications in the architecture, engineering, and construction (AEC) industry.
The connections of the identified dimensions to the existing and potential AEC applications are determined and discussed.
Finally, major future directions on trustworthy AI and robotics in AEC research and practice are outlined.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human-technology interaction deals with trust as an inevitable requirement
for user acceptance. As the applications of artificial intelligence (AI) and
robotics emerge and with their ever-growing socio-economic influence in various
fields of research and practice, there is an imminent need to study trust in
such systems. With the opaque work mechanism of AI-based systems and the
prospect of intelligent robots as workers' companions, context-specific
interdisciplinary studies on trust are key in increasing their adoption.
Through a thorough systematic literature review on (1) trust in AI and robotics
(AIR) and (2) AIR applications in the architecture, engineering, and
construction (AEC) industry, this study identifies common trust dimensions in
the literature and uses them to organize the paper. Furthermore, the
connections of the identified dimensions to the existing and potential AEC
applications are determined and discussed. Finally, major future directions on
trustworthy AI and robotics in AEC research and practice are outlined.
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Trust in Construction AI-Powered Collaborative Robots: A Qualitative Empirical Analysis [0.0]
Intelligent cobots are expected to be the dominant type of robots in the future of work in construction.
The black-box nature of AI-powered cobots and unknown technical and psychological aspects of introducing them to job sites are precursors to trust challenges.
The study found that while the key trust factors resonated with field experts and end users, other factors, such as financial considerations and the uncertainty associated with change, were also significant barriers to trusting AI-powered cobots in construction.
arXiv Detail & Related papers (2023-08-28T19:07:14Z)
- Understanding the Application of Utility Theory in Robotics and Artificial Intelligence: A Survey [5.168741399695988]
Utility is a unifying concept in economics, game theory, and operations research, and increasingly in the robotics and AI field.
This paper introduces a utility-oriented needs paradigm to describe and evaluate the internal and external relationships that arise from agents' interactions.
arXiv Detail & Related papers (2023-06-15T18:55:48Z)
- Artificial intelligence in government: Concepts, standards, and a unified framework [0.0]
Recent advances in artificial intelligence (AI) hold the promise of transforming government.
It is critical that new AI systems behave in alignment with the normative expectations of society.
arXiv Detail & Related papers (2022-10-31T10:57:20Z)
- Trust in AI and Implications for the AEC Research: A Literature Analysis [0.0]
The architecture, engineering, and construction (AEC) research community has been harnessing advanced solutions offered by artificial intelligence (AI) to improve project performance.
Despite the unique characteristics of work, workers, and workplaces in the AEC industry, the concept of trust in AI has received very little attention in the literature.
This paper presents a comprehensive analysis of the academic literature in two main areas, trust in AI and AI in the AEC industry, to explore the interplay between the unique aspects of AEC projects and the sociotechnical concepts that lead to trust in AI.
arXiv Detail & Related papers (2022-03-08T04:38:34Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- AAAI FSS-19: Human-Centered AI: Trustworthiness of AI Models and Data Proceedings [8.445274192818825]
It is crucial for predictive models to be uncertainty-aware and yield trustworthy predictions.
The focus of this symposium was on AI systems that improve data quality, technical robustness, and safety.
Submissions from broadly defined areas also discussed approaches addressing requirements such as explainable models, human trust, and ethical aspects of AI.
arXiv Detail & Related papers (2020-01-15T15:30:29Z)