Trust in AI and Its Role in the Acceptance of AI Technologies
- URL: http://arxiv.org/abs/2203.12687v1
- Date: Wed, 23 Mar 2022 19:18:19 GMT
- Title: Trust in AI and Its Role in the Acceptance of AI Technologies
- Authors: Hyesun Choung, Prabu David, Arun Ross
- Abstract summary: This paper explains the role of trust in the intention to use AI technologies.
Study 1 examined the role of trust in the use of AI voice assistants based on survey responses from college students.
In Study 2, using data from a representative sample of the U.S. population, different dimensions of trust were examined.
- Score: 12.175031903660972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As AI-enhanced technologies become common in a variety of domains, there is
an increasing need to define and examine the trust that users have in such
technologies. Given the progress in the development of AI, a correspondingly
sophisticated understanding of trust in the technology is required. This paper
addresses this need by explaining the role of trust in the intention to use AI
technologies. Study 1 examined the role of trust in the use of AI voice
assistants based on survey responses from college students. A path analysis
confirmed that trust had a significant effect on the intention to use AI, which
operated through perceived usefulness and participants' attitude toward voice
assistants. In Study 2, using data from a representative sample of the U.S.
population, different dimensions of trust were examined using exploratory
factor analysis, which yielded two dimensions: human-like trust and
functionality trust. The results of the path analyses from Study 1 were
replicated in Study 2, confirming the indirect effect of trust and the effects
of perceived usefulness, ease of use, and attitude on intention to use.
Further, both dimensions of trust shared a similar pattern of effects within
the model, with functionality-related trust exhibiting a greater total impact
on usage intention than human-like trust. Overall, the role of trust in the
acceptance of AI technologies was significant across both studies. This
research contributes to the advancement and application of the TAM in
AI-related applications and offers a multidimensional measure of trust that can
be utilized in the future study of trustworthy AI.
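The mediation structure described in the abstract maps naturally onto a structural equation model. Below is a minimal sketch of such a path analysis using the semopy library; the variable names (trust, pu, peou, attitude, intention) and the data file are illustrative placeholders under stated assumptions, not the authors' actual measures.

```python
# Minimal path-analysis sketch of the TAM-style model described in the
# abstract, using the semopy library. The CSV file and column names are
# hypothetical stand-ins for the study's composite scores.
import pandas as pd
import semopy

df = pd.read_csv("survey_scores.csv")  # assumed: one column per construct

# Structural part only: trust is hypothesized to operate on intention
# indirectly, through perceived usefulness (pu) and attitude;
# peou = perceived ease of use.
model_desc = """
pu ~ trust + peou
attitude ~ trust + pu + peou
intention ~ attitude + pu
"""

model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```

The indirect effect of trust reported in both studies would correspond here to the product of the trust-to-mediator and mediator-to-intention paths.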
Related papers
- The impact of labeling automotive AI as "trustworthy" or "reliable" on user evaluation and technology acceptance [0.0]
This study explores whether labeling AI as "trustworthy" or "reliable" influences user perceptions and acceptance of automotive AI technologies.
Using a one-way between-subjects design, the research involved 478 online participants who were presented with guidelines for either trustworthy or reliable AI.
Although labeling AI as "trustworthy" did not significantly influence judgments on specific scenarios, it increased perceived ease of use and human-like trust, particularly benevolence.
arXiv Detail & Related papers (2024-08-20T14:48:24Z)
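As a hedged illustration of the two-condition contrast this between-subjects design implies, the snippet below compares an outcome rating (e.g., perceived ease of use) across the "trustworthy" and "reliable" label groups; the file and column names are assumptions, not the study's materials.

```python
# Sketch of a between-subjects contrast for the labeling study above.
# File and column names ('condition', 'ease_of_use') are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("label_study.csv")
trustworthy = df.loc[df["condition"] == "trustworthy", "ease_of_use"]
reliable = df.loc[df["condition"] == "reliable", "ease_of_use"]

# Welch's t-test: does not assume equal variances across the two groups.
t, p = stats.ttest_ind(trustworthy, reliable, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```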
- Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust [16.140485357046707]
We developed and validated a 27-item set of semantic differential scales for affective and cognitive trust.
Our empirical findings showed how the emotional and cognitive aspects of trust interact with each other and collectively shape a person's overall trust in AI agents.
arXiv Detail & Related papers (2024-07-25T18:55:33Z)
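To make the scale idea concrete, here is a small scoring sketch that averages items into affective and cognitive subscales; the split of the 27 items used below is an assumption for illustration, not the validated assignment.

```python
# Illustrative scoring of a semantic differential trust scale: average
# the items belonging to each subscale. The split of 27 items into
# affective (1-13) and cognitive (14-27) sets is assumed, not the
# authors' validated assignment.
import pandas as pd

df = pd.read_csv("trust_items.csv")  # hypothetical: item01..item27, bipolar ratings
affective = [f"item{i:02d}" for i in range(1, 14)]
cognitive = [f"item{i:02d}" for i in range(14, 28)]

df["affective_trust"] = df[affective].mean(axis=1)
df["cognitive_trust"] = df[cognitive].mean(axis=1)
print(df[["affective_trust", "cognitive_trust"]].describe())
```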
- Trust in AI: Progress, Challenges, and Future Directions [6.724854390957174]
The increasing use of artificial intelligence (AI) systems in our daily lives underscores the significance of trust and distrust in AI from a user perspective.
Trust and distrust in AI act as a regulator and could significantly control the extent of AI's diffusion.
arXiv Detail & Related papers (2024-03-12T20:26:49Z)
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
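The betting-game idea can be sketched as a toy simulation in which the user's stake tracks current trust and trust is revised after each observed outcome; the update rule and constants below are assumptions, not the paper's protocol.

```python
# Toy simulation of a betting game as a behavioral trust measure: the
# user's stake is proportional to trust, and trust is revised after each
# outcome. The error-driven update rule and constants are assumptions.
import random

random.seed(0)
trust = 0.7   # initial trust in the AI, in [0, 1]
rate = 0.1    # how strongly each outcome moves trust

for rnd in range(10):
    bet = trust                         # stake tracks current trust
    ai_correct = random.random() < 0.6  # assumed AI accuracy of 60%
    outcome = 1.0 if ai_correct else 0.0
    trust += rate * (outcome - trust)   # trust-eroding events pull trust down
    print(f"round {rnd:2d}: bet={bet:.2f}, correct={ai_correct}, trust={trust:.2f}")
```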
- A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective [0.0]
User trust in Artificial Intelligence (AI) enabled systems has been increasingly recognized and proven to be a key element in fostering adoption.
This review aims to provide an overview of the user trust definitions, influencing factors, and measurement methods from 23 empirical studies.
arXiv Detail & Related papers (2023-04-18T07:58:09Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow users to examine and test AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Will We Trust What We Don't Understand? Impact of Model Interpretability and Outcome Feedback on Trust in AI [0.0]
We analyze the impact of interpretability and outcome feedback on trust in AI and on human performance in AI-assisted prediction tasks.
We find that interpretability led to no robust improvements in trust, while outcome feedback had a significantly greater and more reliable effect.
arXiv Detail & Related papers (2021-11-16T04:35:34Z)
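A hedged sketch of the kind of two-factor comparison this finding implies (interpretability x outcome feedback on trust) is shown below, using statsmodels; the file and column names are hypothetical.

```python
# Sketch of a two-factor analysis for the interpretability/feedback study
# above: trust modeled as a function of interpretability and outcome
# feedback. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("trust_experiment.csv")  # assumed: interpretable, feedback, trust
model = smf.ols("trust ~ C(interpretable) * C(feedback)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```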
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
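Trust calibration in this sense can be illustrated by comparing a model's stated confidence against its empirical accuracy within confidence bins (the data behind a reliability diagram); the synthetic data below assumes an overconfident model purely for illustration.

```python
# Illustration of trust calibration: bin predictions by stated confidence
# and compare against empirical accuracy. Synthetic data; the model here
# is assumed to be overconfident by roughly 0.1.
import numpy as np

rng = np.random.default_rng(0)
confidence = rng.uniform(0.5, 1.0, size=1000)
correct = rng.random(1000) < (confidence - 0.1)  # accuracy lags confidence

bins = np.linspace(0.5, 1.0, 6)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (confidence >= lo) & (confidence < hi)
    if mask.any():
        print(f"confidence {lo:.2f}-{hi:.2f}: accuracy {correct[mask].mean():.2f}")
```

A well-calibrated model would show per-bin accuracy close to the stated confidence, giving users a sound basis for deciding when to rely on it.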