Psychological Factors Influencing University Students' Trust in AI-Based Learning Assistants
- URL: http://arxiv.org/abs/2512.17390v1
- Date: Fri, 19 Dec 2025 09:44:43 GMT
- Title: Psychological Factors Influencing University Students' Trust in AI-Based Learning Assistants
- Authors: Ezgi Dağtekin, Ercan Erkalkan
- Abstract summary: This paper adopts a psychology-oriented perspective to examine how university students form trust in AI-based learning assistants. We propose a conceptual framework that organizes psychological predictors of trust into four groups: cognitive appraisals, affective reactions, social-relational factors, and contextual moderators. The paper highlights that trust in AI is a psychological process shaped by individual differences and learning environments.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI) based learning assistants and chatbots are increasingly integrated into higher education. While these tools are often evaluated in terms of technical performance, their successful and ethical use also depends on psychological factors such as trust, perceived risk, technology anxiety, and students' general attitudes toward AI. This paper adopts a psychology-oriented perspective to examine how university students form trust in AI-based learning assistants. Drawing on recent literature in mental health, human-AI interaction, and trust in automation, we propose a conceptual framework that organizes psychological predictors of trust into four groups: cognitive appraisals, affective reactions, social-relational factors, and contextual moderators. A narrative review approach synthesizes empirical findings and derives research questions and hypotheses for future studies. The paper highlights that trust in AI is a psychological process shaped by individual differences and learning environments, with practical implications for instructors, administrators, and designers of educational AI systems.
Related papers
- The Psychology of Learning from Machines: Anthropomorphic AI and the Paradox of Automation in Education [1.8217623940980625]
This work synthesizes four research traditions to establish a comprehensive framework for understanding how learners psychologically relate to anthropomorphic AI tutors. We identify three persistent challenges intensified by Generative AI's conversational fluency. We ground this theoretical synthesis through comparative analysis of over 104,984 YouTube comments across AI-generated philosophical debates and human-created engineering tutorials.
arXiv Detail & Related papers (2026-01-07T08:28:33Z) - Bridging Minds and Machines: Toward an Integration of AI and Cognitive Science [48.38628297686686]
Cognitive Science has profoundly shaped disciplines such as Artificial Intelligence (AI), Philosophy, Psychology, Neuroscience, Linguistics, and Culture. Many breakthroughs in AI trace their roots to cognitive theories, while AI itself has become an indispensable tool for advancing cognitive research. We argue that the future of AI within Cognitive Science lies not only in improving performance but also in constructing systems that deepen our understanding of the human mind.
arXiv Detail & Related papers (2025-08-28T11:26:17Z) - Evaluating AI-Powered Learning Assistants in Engineering Higher Education: Student Engagement, Ethical Challenges, and Policy Implications [0.2099922236065961]
This study evaluates the Educational AI Hub, an AI-powered learning framework, implemented in undergraduate civil and environmental engineering courses at a large R1 public university. Students valued the AI assistant for its accessibility and comfort, with nearly half reporting greater ease using it than seeking help from instructors or teaching assistants. Ethical uncertainty, particularly around institutional policy and academic integrity, emerged as a key barrier to full engagement.
arXiv Detail & Related papers (2025-06-06T03:02:49Z) - Measurement of LLM's Philosophies of Human Nature [113.47929131143766]
We design a standardized psychological scale specifically targeting large language models (LLMs). We show that current LLMs exhibit a systemic lack of trust in humans. We propose a mental-loop learning framework that enables LLMs to continuously optimize their value system.
arXiv Detail & Related papers (2025-04-03T06:22:19Z) - AI Identity, Empowerment, and Mindfulness in Mitigating Unethical AI Use [0.0]
This study examines how AI identity influences psychological empowerment and unethical AI behavior among college students. Findings show that a strong AI identity enhances psychological empowerment and academic engagement but can also lead to increased unethical AI practices. IT mindfulness acts as an ethical safeguard, promoting sensitivity to ethical concerns and reducing misuse of AI.
arXiv Detail & Related papers (2025-03-25T22:36:21Z) - Aligning Generalisation Between Humans and Machines [74.120848518198]
AI technology can support humans in scientific discovery and forming decisions, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly shows the need for AI alignment. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z) - How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Perceptions of Discriminatory Decisions of Artificial Intelligence: Unpacking the Role of Individual Characteristics [0.0]
Personal differences (digital self-efficacy, technical knowledge, belief in equality, political ideology) are associated with perceptions of AI outcomes.
Digital self-efficacy and technical knowledge are positively associated with attitudes toward AI.
Liberal ideologies are associated with lower outcome trust, higher negative emotion, and greater skepticism.
arXiv Detail & Related papers (2024-10-17T06:18:26Z) - Untangling Critical Interaction with AI in Students' Written Assessment [2.8078480738404]
A key challenge lies in ensuring that humans are equipped with the required critical thinking and AI literacy skills.
This paper provides a first step toward conceptualizing the notion of critical learner interaction with AI.
Using both theoretical models and empirical data, our preliminary findings suggest a general lack of deep interaction with AI during the writing process.
arXiv Detail & Related papers (2024-04-10T12:12:50Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - MAILS -- Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change- and Meta-Competencies [6.368014180870025]
The questionnaire should be modular (i.e., including different facets that can be used independently of each other) to be flexibly applicable in professional life.
We derived 60 items representing different facets of AI literacy according to Ng and colleagues' conceptualisation of AI literacy. An additional 12 items represent psychological competencies such as problem solving, learning, and emotion regulation in regard to AI.
arXiv Detail & Related papers (2023-02-18T12:35:55Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.