A Systematic Literature Review of User Trust in AI-Enabled Systems: An
HCI Perspective
- URL: http://arxiv.org/abs/2304.08795v1
- Date: Tue, 18 Apr 2023 07:58:09 GMT
- Title: A Systematic Literature Review of User Trust in AI-Enabled Systems: An
HCI Perspective
- Authors: Tita Alissa Bach, Amna Khan, Harry Hallock, Gabriela Beltrão, Sonia Sousa
- Abstract summary: User trust in Artificial Intelligence (AI) enabled systems has been increasingly recognized and proven to be a key element in fostering adoption.
This review aims to provide an overview of the user trust definitions, influencing factors, and measurement methods from 23 empirical studies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: User trust in Artificial Intelligence (AI) enabled systems has been
increasingly recognized and proven to be a key element in fostering adoption. It
has been suggested that AI-enabled systems must go beyond technical-centric
approaches and embrace a more human-centric approach, a core
principle of the human-computer interaction (HCI) field. This review aims to
provide an overview of the user trust definitions, influencing factors, and
measurement methods from 23 empirical studies to gather insight for future
technical and design strategies, research, and initiatives to calibrate the
user-AI relationship. The findings confirm that there is more than one way to
define trust. Selecting the most appropriate trust definition to depict user
trust in a specific context should be the focus instead of comparing
definitions. User trust in AI-enabled systems is found to be influenced by
three main themes, namely socio-ethical considerations, technical and design
features, and user characteristics. User characteristics dominate the findings,
reinforcing the importance of user involvement from development through to
monitoring of AI-enabled systems. In conclusion, user trust needs to be
addressed directly in every context where AI-enabled systems are being used or
discussed. In addition, calibrating the user-AI relationship requires finding
the optimal balance that works not only for the user but also for the system.
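As a concrete illustration of the measurement and calibration theme above, the following is a minimal sketch (not from the paper) of how self-reported trust could be compared against a system's observed reliability to flag over- or under-trust. The questionnaire items, rating scale, and tolerance threshold are hypothetical and chosen only for illustration.

    # Illustrative sketch: comparing self-reported user trust with observed
    # system reliability to flag potential over- or under-trust.
    # Items, scale, and thresholds are hypothetical, not from the reviewed studies.

    def mean_trust(ratings, scale_max=7):
        """Normalize Likert-style trust ratings (1..scale_max) to [0, 1]."""
        return (sum(ratings) / len(ratings) - 1) / (scale_max - 1)

    def calibration_gap(trust_ratings, correct_outcomes, tolerance=0.15):
        """Compare normalized trust with the system's observed accuracy."""
        trust = mean_trust(trust_ratings)
        reliability = sum(correct_outcomes) / len(correct_outcomes)
        gap = trust - reliability
        if gap > tolerance:
            label = "over-trust"   # user trusts more than performance warrants
        elif gap < -tolerance:
            label = "under-trust"  # user trusts less than performance warrants
        else:
            label = "calibrated"
        return gap, label

    # Example: five 7-point questionnaire items and ten observed system outcomes.
    gap, label = calibration_gap([6, 7, 6, 5, 7], [1, 1, 0, 1, 1, 1, 0, 1, 1, 1])
    print(f"gap={gap:+.2f} -> {label}")

Under these assumptions, a positive gap beyond the tolerance suggests the user trusts the system more than its performance warrants; the thresholds would need to be grounded in an actual measurement instrument.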
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization (ERM; see the sketch after this list).
We hope to provide actionable guidance for building AI systems that meet emerging trustworthiness standards.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- The impact of labeling automotive AI as "trustworthy" or "reliable" on user evaluation and technology acceptance [0.0]
This study explores whether labeling AI as "trustworthy" or "reliable" influences user perceptions and acceptance of automotive AI technologies.
Using a one-way between-subjects design, the research involved 478 online participants who were presented with guidelines for either trustworthy or reliable AI.
Although labeling AI as "trustworthy" did not significantly influence judgments on specific scenarios, it increased perceived ease of use and human-like trust, particularly benevolence.
arXiv Detail & Related papers (2024-08-20T14:48:24Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and its explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust [0.0]
We argue for the need to distinguish these concepts more clearly.
We argue that trust in AI involves not only reliance on the system itself but also trust in the developers of the AI system.
arXiv Detail & Related papers (2023-09-19T05:00:34Z)
- Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work [0.0]
The present structured literature analysis examines the requirements for the explainability and acceptance of AI.
Results indicate that developers, one of the two main user groups, require information about the internal operations of the model.
The acceptance of AI systems depends on information about the system's functions and performance, as well as privacy and ethical considerations.
arXiv Detail & Related papers (2023-06-27T11:36:07Z)
- Human-centered trust framework: An HCI perspective [1.6344851071810074]
The rationale of this work is based on the current user trust discourse around Artificial Intelligence (AI).
We propose a framework to guide non-experts to unlock the full potential of user trust in AI design.
arXiv Detail & Related papers (2023-05-05T06:15:32Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
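As referenced in the "Engineering Trustworthy AI" entry above, here is a brief sketch of the empirical risk minimization (ERM) objective onto which that paper maps trustworthiness requirements; the notation is the standard textbook form, not taken from the paper itself. Given training pairs (x_i, y_i), a hypothesis class \mathcal{F}, and a loss \ell, ERM selects

    \hat{f} = \arg\min_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i), y_i)

Components of this objective, such as the hypothesis class \mathcal{F}, the loss \ell, the training data, and any regularization term added to the sum, are the design choices through which requirements like robustness or fairness could plausibly be expressed.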