Revisiting UTAUT for the Age of AI: Understanding Employees AI Adoption and Usage Patterns Through an Extended UTAUT Framework
- URL: http://arxiv.org/abs/2510.15142v1
- Date: Thu, 16 Oct 2025 21:01:41 GMT
- Title: Revisiting UTAUT for the Age of AI: Understanding Employees AI Adoption and Usage Patterns Through an Extended UTAUT Framework
- Authors: Diana Wolfe, Matt Price, Alice Choe, Fergus Kidd, Hannah Wagner
- Abstract summary: This study investigates whether demographic factors shape adoption and attitudes among employees toward artificial intelligence (AI) technologies at work. We surveyed 2,257 professionals across global regions and organizational levels within a multinational consulting firm.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study investigates whether demographic factors shape adoption and attitudes among employees toward artificial intelligence (AI) technologies at work. Building on an extended Unified Theory of Acceptance and Use of Technology (UTAUT), which reintroduces affective dimensions such as attitude, self-efficacy, and anxiety, we surveyed 2,257 professionals across global regions and organizational levels within a multinational consulting firm. Non-parametric tests examined whether three demographic factors (i.e., years of experience, hierarchical level in the organization, and geographic region) were associated with AI adoption, usage intensity, and eight UTAUT constructs. Organizational level significantly predicted AI adoption, with senior employees showing higher usage rates, while experience and region were unrelated to adoption. Among AI users (n = 1,256), frequency and duration of use showed minimal demographic variation. However, omnibus tests revealed small but consistent group differences across several UTAUT constructs, particularly anxiety, performance expectancy, and behavioral intention, suggesting that emotional and cognitive responses to AI vary modestly across contexts. These findings highlight that demographic factors explain limited variance in AI acceptance but remain relevant for understanding contextual nuances in technology-related attitudes. The results underscore the need to integrate affective and organizational factors into models of technology acceptance to support equitable, confident, and sustainable engagement with AI in modern workplaces.
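The abstract's analysis pattern can be sketched in code. This is a hypothetical illustration, not the authors' analysis: the abstract says only "non-parametric tests" and "omnibus tests", so the specific choices here (a chi-square test of adoption against organizational level, a Kruskal-Wallis omnibus test on an ordinal UTAUT construct) are assumptions, and all data below are invented.

```python
# Hypothetical sketch (not the authors' code) of the abstract's analysis
# pattern, with invented data.
import numpy as np
from scipy import stats

# Invented adoption counts by hierarchical level (rows: junior, mid, senior;
# columns: non-adopter, adopter) -- senior levels adopt more, as in the paper.
adoption = np.array([[300, 350],
                     [250, 400],
                     [150, 500]])
chi2, p_adopt, dof, expected = stats.chi2_contingency(adoption)

# Invented 1-5 Likert "anxiety" scores for three regions; Kruskal-Wallis is a
# common non-parametric omnibus test for such ordinal group comparisons.
rng = np.random.default_rng(0)
anxiety_by_region = [rng.integers(1, 6, size=40) for _ in range(3)]
h_stat, p_anxiety = stats.kruskal(*anxiety_by_region)

print(f"adoption vs. level: chi2={chi2:.2f}, p={p_adopt:.4g}")
print(f"anxiety across regions: H={h_stat:.2f}, p={p_anxiety:.4g}")
```

A significant chi-square p-value would mirror the paper's finding that organizational level predicts adoption, while non-significant omnibus results would mirror the minimal demographic variation reported among users.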
Related papers
- Work Design and Multidimensional AI Threat as Predictors of Workplace AI Adoption and Depth of Use [0.36944296923226316]
This research examines whether motivational job characteristics and multidimensional AI threat perceptions jointly predict workplace AI adoption and depth of use. Using cross-sectional survey data from 2,257 employees, we tested group differences across role level, years of experience, and region, along with multivariable predictors of AI adoption and use depth.
arXiv Detail & Related papers (2026-02-26T17:52:29Z) - The Impact of Artificial Intelligence on Enterprise Decision-Making Process [0.0]
93 percent of firms use AI, primarily in customer service, data forecasting, and decision support. The most frequent barriers include employee resistance, high costs, and regulatory ambiguity. The study highlights the importance of integrating AI with human judgment and communication practices.
arXiv Detail & Related papers (2025-11-26T14:45:16Z) - AI as Cognitive Amplifier: Rethinking Human Judgment in the Age of Generative AI [0.65268245109828]
I propose a three-level model of AI engagement. I argue that the transition between levels requires not technical training but the development of domain expertise and metacognitive skills.
arXiv Detail & Related papers (2025-10-30T11:55:34Z) - Cultural Dimensions of Artificial Intelligence Adoption: Empirical Insights for Wave 1 from a Multinational Longitudinal Pilot Study [0.0]
The swift diffusion of artificial intelligence (AI) raises critical questions about how cultural contexts shape adoption patterns and their consequences for human daily life. This study investigates the cultural dimensions of AI adoption and their influence on cognitive strategies across nine national contexts in Europe, Africa, Asia, and South America. Results reveal two key findings. First, cultural factors, particularly language and age, significantly affect AI adoption and perceptions of reliability, with older participants reporting higher engagement with AI for educational purposes. Second, ethical judgment about AI use varied across domains, with professional contexts normalizing its role as a pragmatic collaborator while academic settings emphasized risks of plagiarism.
arXiv Detail & Related papers (2025-10-22T16:31:28Z) - (AI peers) are people learning from the same standpoint: Perception of AI characters in a Collaborative Science Investigation [0.0]
Scenario-based assessment (SBA) introduces simulated agents to provide an authentic social-interactional context. Recent advancements in multimodal AI, such as text-to-video technology, allow these agents to be enhanced into AI-generated characters. This study investigates how learners perceive AI characters taking on the roles of mentor and teammate in an SBA mirroring the context of a collaborative science investigation.
arXiv Detail & Related papers (2025-06-06T15:29:11Z) - When Models Know More Than They Can Explain: Quantifying Knowledge Transfer in Human-AI Collaboration [79.69935257008467]
We introduce Knowledge Integration and Transfer Evaluation (KITE), a conceptual and experimental framework for Human-AI knowledge transfer capabilities. We conduct the first large-scale human study (N=118) explicitly designed to measure it. In our two-phase setup, humans first ideate with an AI on problem-solving strategies, then independently implement solutions, isolating the influence of model explanations on human understanding.
arXiv Detail & Related papers (2025-06-05T20:48:16Z) - AI in Software Engineering: Perceived Roles and Their Impact on Adoption [0.0]
This paper investigates how developers conceptualize AI-powered development tools. We identify two primary mental models: AI as an inanimate tool and AI as a human-like teammate.
arXiv Detail & Related papers (2025-04-29T00:37:49Z) - General Scales Unlock AI Evaluation with Explanatory and Predictive Power [57.7995945974989]
Benchmarking has guided progress in AI, but it has offered limited explanatory and predictive power for general-purpose AI systems. We introduce general scales for AI evaluation that can explain what common AI benchmarks really measure. Our fully automated methodology builds on 18 newly crafted rubrics that place instance demands on general scales that do not saturate.
arXiv Detail & Related papers (2025-03-09T01:13:56Z) - On Benchmarking Human-Like Intelligence in Machines [77.55118048492021]
We argue that current AI evaluation paradigms are insufficient for assessing human-like cognitive capabilities. We identify a set of key shortcomings: a lack of human-validated labels, inadequate representation of human response variability and uncertainty, and reliance on simplified and ecologically invalid tasks.
arXiv Detail & Related papers (2025-02-27T20:21:36Z) - Making Sense of AI Limitations: How Individual Perceptions Shape Organizational Readiness for AI Adoption [0.0]
This study investigates how individuals' perceptions of artificial intelligence (AI) limitations influence organizational readiness for AI adoption. The research reveals that organizational readiness emerges through dynamic interactions between individual sensemaking, social learning, and formal integration processes.
arXiv Detail & Related papers (2025-02-21T18:31:08Z) - Why (not) use AI? Analyzing People's Reasoning and Conditions for AI Acceptability [17.420096756296896]
We investigate the demographic and reasoning factors that influence people's judgments about AI's development. We find lower acceptance of labor-replacement uses than of personal-health uses. We observe that a unified reasoning type (e.g., cost-benefit reasoning) leads to higher agreement.
arXiv Detail & Related papers (2025-02-11T06:06:47Z) - How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Perceptions of Discriminatory Decisions of Artificial Intelligence: Unpacking the Role of Individual Characteristics [0.0]
Personal differences (digital self-efficacy, technical knowledge, belief in equality, political ideology) are associated with perceptions of AI outcomes.
Digital self-efficacy and technical knowledge are positively associated with attitudes toward AI.
Liberal ideologies are associated with lower outcome trust, higher negative emotion, and greater skepticism.
arXiv Detail & Related papers (2024-10-17T06:18:26Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.