What drives the acceptance of AI technology?: the role of expectations
and experiences
- URL: http://arxiv.org/abs/2306.13670v1
- Date: Sat, 17 Jun 2023 02:47:48 GMT
- Title: What drives the acceptance of AI technology?: the role of expectations
and experiences
- Authors: Minsang Yi and Hanbyul Choi
- Abstract summary: The acceptance intention towards artificial intelligence is greatly influenced by the experience with current AI products and services, expectations for AI, and past experiences with ICT technology.
The analysis results of this study reveal that AI experience and past ICT experience are associated with a greater intention to accept AI.
It is essential to provide potential AI users with specific information about the features and benefits of AI products and services.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, artificial intelligence products and services have been
offered to potential users as pilots. The acceptance intention towards artificial
intelligence is greatly influenced by the experience with current AI products
and services, expectations for AI, and past experiences with ICT technology.
This study aims to explore the factors that impact AI acceptance intention and
understand the process of its formation. The analysis results of this study
reveal that AI experience and past ICT experience affect AI acceptance
intention in two ways. Through the direct path, higher AI experience and ICT
experience are associated with a greater intention to accept AI. Additionally,
there is an indirect path where AI experience and ICT experience contribute to
increased expectations for AI, and these expectations, in turn, elevate
acceptance intention. Based on the findings, several recommendations are
suggested for companies and public organizations planning to implement
artificial intelligence in the future. It is crucial to manage the user
experience of ICT services and pilot AI products and services to deliver
positive experiences. It is essential to provide potential AI users with
specific information about the features and benefits of AI products and
services. This will enable them to develop realistic expectations regarding AI
technology.
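
The abstract describes a direct path (experience -> acceptance intention) and an indirect path mediated by expectations (experience -> expectations -> acceptance intention). As a minimal sketch of how such direct and indirect effects can be separated, the snippet below fits a simple product-of-coefficients mediation model on simulated data; the variable names (ai_exp, ict_exp, expectation, acceptance), the simulated values, and the OLS-based estimation are illustrative assumptions, not the authors' actual measures, data, or structural model.

```python
# Illustrative mediation sketch (assumed variable names and simulated data,
# not the paper's survey measures or structural model).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
ai_exp = rng.normal(size=n)    # experience with pilot AI products/services
ict_exp = rng.normal(size=n)   # past experience with ICT services
# Indirect path: both kinds of experience raise expectations for AI.
expectation = 0.4 * ai_exp + 0.3 * ict_exp + rng.normal(scale=0.5, size=n)
# Direct + indirect paths: acceptance depends on experience and on expectations.
acceptance = (0.3 * ai_exp + 0.2 * ict_exp + 0.5 * expectation
              + rng.normal(scale=0.5, size=n))

df = pd.DataFrame({"ai_exp": ai_exp, "ict_exp": ict_exp,
                   "expectation": expectation, "acceptance": acceptance})

# Mediator model (path a): experience -> expectations.
m_med = smf.ols("expectation ~ ai_exp + ict_exp", data=df).fit()
# Outcome model (paths b and c'): expectations and experience -> acceptance.
m_out = smf.ols("acceptance ~ ai_exp + ict_exp + expectation", data=df).fit()

direct_ai = m_out.params["ai_exp"]                                   # c'
indirect_ai = m_med.params["ai_exp"] * m_out.params["expectation"]   # a * b
print(f"AI experience -> acceptance: direct {direct_ai:.2f}, "
      f"indirect via expectations {indirect_ai:.2f}")
```

With actual survey data, the simulated frame would be replaced by the measured indicators, and a structural equation model with bootstrapped indirect effects would typically be used instead of plain OLS.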
Related papers
- The Ethics of Advanced AI Assistants (arXiv, 2024-04-24)
  This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
  We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
  We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment, and how best to evaluate advanced AI assistants.
- Agency and legibility for artists through Experiential AI (arXiv, 2023-06-04)
  Experiential AI is an emerging research field that addresses the challenge of making AI tangible and explicit.
  We report on an empirical case study of an experiential AI system designed for creative data exploration.
  We discuss how experiential AI can increase legibility and agency for artists.
- End-User Development for Artificial Intelligence: A Systematic Literature Review (arXiv, 2023-04-14)
  End-User Development (EUD) can allow people to create, customize, or adapt AI-based systems to their own needs.
  This paper presents a literature review that aims to shed light on the current landscape of EUD for AI systems.
- Seamful XAI: Operationalizing Seamful Design in Explainable AI (arXiv, 2022-11-12)
  Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
  We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
  We explore this process with 43 AI practitioners and real end-users.
- Do We Need Explainable AI in Companies? Investigation of Challenges, Expectations, and Chances from Employees' Perspective (arXiv, 2022-10-07)
  Using AI poses new requirements for companies and their employees, including transparency and comprehensibility of AI systems.
  The field of Explainable AI (XAI) aims to address these issues.
  This project report paper provides insights into employees' needs and attitudes towards (X)AI.
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) (arXiv, 2022-01-26)
  Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
  It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
- Measuring Ethics in AI with AI: A Methodology and Dataset Construction (arXiv, 2021-07-26)
  We propose to use the newfound capabilities of AI technologies to augment our AI measuring capabilities.
  We do so by training a model to classify publications related to ethical issues and concerns.
  We highlight the implications of AI metrics, in particular their contribution towards developing trustful and fair AI-based tools and technologies.
- Trustworthy AI: A Computational Perspective (arXiv, 2021-07-12)
  We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
  For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
- Building Bridges: Generative Artworks to Explore AI Ethics (arXiv, 2021-06-25)
  In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
  A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
  This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
- Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things (arXiv, 2020-11-17)
  We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
  First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
  Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.