What Drives Students' Use of AI Chatbots? Technology Acceptance in Conversational AI
- URL: http://arxiv.org/abs/2602.20547v1
- Date: Tue, 24 Feb 2026 05:00:41 GMT
- Authors: Griffin Pitts, Sanaz Motamedi
- Abstract summary: We examine how perceived usefulness and perceived ease of use relate to students' behavioral intention to use conversational AI. We find perceived usefulness remains the strongest predictor of students' intention to use conversational AI. Trust and subjective norms significantly influence perceptions of usefulness, while perceived enjoyment exerts both a direct and indirect effect on usage intentions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conversational AI tools have been rapidly adopted by students and are becoming part of their learning routines. To understand what drives this adoption, we draw on the Technology Acceptance Model (TAM) and examine how perceived usefulness and perceived ease of use relate to students' behavioral intention to use conversational AI that generates responses for learning tasks. We extend TAM by incorporating trust, perceived enjoyment, and subjective norms as additional factors that capture social and affective influences and uncertainty around AI outputs. Using partial least squares structural equation modeling, we find perceived usefulness remains the strongest predictor of students' intention to use conversational AI. However, perceived ease of use does not exert a significant direct effect on behavioral intention once other factors are considered, operating instead indirectly through perceived usefulness. Trust and subjective norms significantly influence perceptions of usefulness, while perceived enjoyment exerts both a direct and indirect effect on usage intentions. These findings suggest that adoption decisions for conversational AI systems are influenced less by effort-related considerations and more by confidence in system outputs, affective engagement, and social context. Future research is needed to further examine how these acceptance relationships generalize across different conversational systems and usage contexts.
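The mediation pattern the abstract describes (perceived ease of use influencing intention only indirectly, through perceived usefulness) can be sketched with ordinary least squares on simulated construct scores. This is a minimal illustrative toy, not the paper's PLS-SEM analysis: the effect sizes, sample size, and variable names below are assumptions chosen to reproduce the reported pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated standardized construct scores; coefficients are illustrative
# assumptions, not estimates from the paper.
peou = rng.normal(size=n)                            # perceived ease of use
pu = 0.5 * peou + rng.normal(size=n)                 # usefulness depends on ease of use
bi = 0.6 * pu + 0.0 * peou + rng.normal(size=n)      # intention driven by PU, not PEOU

# Center so we can omit the intercept in the regressions below.
peou = peou - peou.mean()
pu = pu - pu.mean()
bi = bi - bi.mean()

def ols_coefs(X, y):
    """Least-squares coefficients for y ~ X (centered data, no intercept)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Path a: PEOU -> PU
a = ols_coefs(peou[:, None], pu)[0]
# Path b (PU -> BI) and direct path c' (PEOU -> BI), estimated jointly
b, c_prime = ols_coefs(np.column_stack([pu, peou]), bi)

indirect = a * b  # PEOU's effect on BI routed through PU
print(f"direct c' = {c_prime:.2f}, indirect a*b = {indirect:.2f}")
```

With these assumed coefficients, the recovered direct effect of ease of use on intention is near zero while the indirect effect through usefulness is substantial, mirroring the mediation result the abstract reports. A full PLS-SEM analysis would additionally model the latent constructs from their survey indicators.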
Related papers
- Learning Through Dialogue: Unpacking the Dynamics of Human-LLM Conversations on Political Issues [17.05441917302334]
Large language models (LLMs) are increasingly used as conversational partners for learning, yet the interactional dynamics supporting users' learning and engagement are understudied. We analyze the linguistic and interactional features from both LLM and participant chats across 397 human-LLM conversations about socio-political issues.
arXiv Detail & Related papers (2026-01-12T18:10:21Z) - Human Cognitive Biases in Explanation-Based Interaction: The Case of Within and Between Session Order Effect [46.80756527630539]
Explanatory Interactive Learning (XIL) is a powerful interactive learning framework designed to enable users to customize and correct AI models by interacting with their explanations. Recent studies have raised concerns that explanatory interaction may trigger order effects, a well-known cognitive bias in which the sequence of presented items influences users' trust and, critically, the quality of their feedback. To clarify the interplay between order effects and explanatory interaction, we ran two larger-scale user studies designed to mimic common XIL tasks.
arXiv Detail & Related papers (2025-12-04T12:59:54Z) - AI Literacy as a Key Driver of User Experience in AI-Powered Assessment: Insights from Socratic Mind [2.0272430076690027]
This study examines how students' AI literacy and prior exposure to AI technologies shape their perceptions of Socratic Mind. Data from 309 undergraduates in Computer Science and Business courses were collected.
arXiv Detail & Related papers (2025-07-29T10:11:24Z) - When Models Know More Than They Can Explain: Quantifying Knowledge Transfer in Human-AI Collaboration [79.69935257008467]
We introduce Knowledge Integration and Transfer Evaluation (KITE), a conceptual and experimental framework for evaluating human-AI knowledge transfer. We conduct the first large-scale human study (N=118) explicitly designed to measure it. In our two-phase setup, humans first ideate with an AI on problem-solving strategies, then independently implement solutions, isolating model explanations' influence on human understanding.
arXiv Detail & Related papers (2025-06-05T20:48:16Z) - Must Read: A Systematic Survey of Computational Persuasion [60.83151988635103]
AI-driven persuasion can be leveraged for beneficial applications, but also poses threats through manipulation and unethical influence. Our survey outlines future research directions to enhance the safety, fairness, and effectiveness of AI-powered persuasion.
arXiv Detail & Related papers (2025-05-12T17:26:31Z) - Understanding Learner-LLM Chatbot Interactions and the Impact of Prompting Guidelines [9.834055425277874]
This study investigates learner-AI interactions through an educational experiment in which participants receive structured guidance on effective prompting. To assess user behavior and prompting efficacy, we analyze a dataset of 642 interactions from 107 users. Our findings provide a deeper understanding of how users engage with Large Language Models and the role of structured prompting guidance in enhancing AI-assisted communication.
arXiv Detail & Related papers (2025-04-10T15:20:43Z) - How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - I would love this to be like an assistant, not the teacher: a voice of the customer perspective of what distance learning students want from an Artificial Intelligence Digital Assistant [0.0]
This study examined the perceptions of ten online and distance learning students regarding the design of a hypothetical AI Digital Assistant (AIDA).
All participants agreed on the usefulness of such an AI tool while studying and reported benefits from using it for real-time assistance and query resolution, support for academic tasks, personalisation and accessibility, together with emotional and social support.
Students' concerns related to the ethical and social implications of implementing AIDA, data privacy and data use, operational challenges, academic integrity and misuse, and the future of education.
arXiv Detail & Related papers (2024-02-16T08:10:41Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - Analysis of the User Perception of Chatbots in Education Using A Partial Least Squares Structural Equation Modeling Approach [0.0]
Key behavior-related aspects, such as Optimism, Innovativeness, Discomfort, Insecurity, Transparency, Ethics, Interaction, Engagement, and Accuracy, were studied.
Results showed that Optimism and Innovativeness are positively associated with Perceived Ease of Use (PEOU) and Perceived Usefulness (PU).
arXiv Detail & Related papers (2023-11-07T00:44:56Z) - From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning [66.98861219674039]
Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions.
Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.
arXiv Detail & Related papers (2023-10-24T19:46:04Z) - You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.