Do Students Rely on AI? Analysis of Student-ChatGPT Conversations from a Field Study
- URL: http://arxiv.org/abs/2508.20244v1
- Date: Wed, 27 Aug 2025 20:00:27 GMT
- Title: Do Students Rely on AI? Analysis of Student-ChatGPT Conversations from a Field Study
- Authors: Jiayu Zheng, Lingxin Hao, Kelun Lu, Ashi Garg, Mike Reese, Melo-Jean Yap, I-Jeng Wang, Xingyun Wu, Wenrui Huang, Jenna Hoffman, Ariane Kelly, My Le, Ryan Zhang, Yanyu Lin, Muhammad Faayez, Anqi Liu
- Abstract summary: This study analyzed 315 student-AI conversations during a brief, quiz-based scenario across various STEM courses. Students exhibited overall low reliance on AI, and many could not effectively use AI for learning. Certain behavioral metrics strongly predicted AI reliance, highlighting potential behavioral mechanisms to explain AI adoption.
- Score: 10.71612026319996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study explores how college students interact with generative AI (ChatGPT-4) during educational quizzes, focusing on reliance and predictors of AI adoption. Conducted at the early stages of ChatGPT implementation, when students had limited familiarity with the tool, this field study analyzed 315 student-AI conversations during a brief, quiz-based scenario across various STEM courses. A novel four-stage reliance taxonomy was introduced to capture students' reliance patterns, distinguishing AI competence, relevance, adoption, and students' final answer correctness. Three findings emerged. First, students exhibited overall low reliance on AI, and many could not effectively use AI for learning. Second, negative reliance patterns often persisted across interactions, highlighting students' difficulty in shifting strategies after unsuccessful initial experiences. Third, certain behavioral metrics strongly predicted AI reliance, highlighting potential behavioral mechanisms to explain AI adoption. The study's findings underline critical implications for ethical AI integration in education and the broader field. It emphasizes the need for enhanced onboarding processes to improve students' familiarity with and effective use of AI tools. Furthermore, AI interfaces should be designed with reliance-calibration mechanisms to encourage appropriate reliance. Ultimately, this research advances understanding of AI reliance dynamics, providing foundational insights for ethically sound and cognitively enriching AI practices.
Related papers
- AI Literacy, Safety Awareness, and STEM Career Aspirations of Australian Secondary Students: Evaluating the Impact of Workshop Interventions [38.350232667249095]
Deepfakes and other forms of synthetic media pose growing safety risks for adolescents. This study evaluates the impact of Day of AI Australia's workshop-based intervention on Australian secondary students.
arXiv Detail & Related papers (2026-01-30T02:55:53Z)
- Attachment Styles and AI Chatbot Interactions Among College Students [1.334956439319062]
This study explored how college students with different attachment styles describe their interactions with ChatGPT. We identified three main themes: (1) AI as a low-risk emotional space, (2) attachment-congruent patterns of AI engagement, and (3) the paradox of AI intimacy.
arXiv Detail & Related papers (2025-12-20T18:49:07Z)
- AI Literacy as a Key Driver of User Experience in AI-Powered Assessment: Insights from Socratic Mind [2.0272430076690027]
This study examines how students' AI literacy and prior exposure to AI technologies shape their perceptions of Socratic Mind. Data from 309 undergraduates in Computer Science and Business courses were collected.
arXiv Detail & Related papers (2025-07-29T10:11:24Z)
- ChatGPT produces more "lazy" thinkers: Evidence of cognitive engagement decline [0.0]
This study investigates the impact of generative artificial intelligence (AI) tools on the cognitive engagement of students during academic writing tasks. The results revealed significantly lower cognitive engagement scores in the ChatGPT group compared to the control group. These findings suggest that AI assistance may lead to cognitive offloading.
arXiv Detail & Related papers (2025-06-30T18:41:50Z)
- Evaluating AI-Powered Learning Assistants in Engineering Higher Education: Student Engagement, Ethical Challenges, and Policy Implications [0.2812395851874055]
This study evaluates the use of the Educational AI Hub, an AI-powered learning framework, in undergraduate civil and environmental engineering courses at a large R1 public university. Students appreciated the AI assistant for its convenience and comfort, with nearly half reporting greater ease in using the AI tool. While most students viewed AI use as ethically acceptable, many expressed uncertainty about institutional policies and apprehension about potential academic misconduct.
arXiv Detail & Related papers (2025-06-06T03:02:49Z)
- Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks [45.23431596135002]
This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task.
Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agents involved.
arXiv Detail & Related papers (2024-11-15T13:22:04Z)
- How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Learning to Prompt in the Classroom to Understand AI Limits: A pilot study [35.06607166918901]
Large Language Models (LLMs) and the chatbots derived from them, like ChatGPT, have greatly improved the natural language processing capabilities of AI systems.
However, the excitement has also given rise to negative sentiments, even as AI methods demonstrate remarkable contributions.
A pilot educational intervention was performed in a high school with 21 students.
arXiv Detail & Related papers (2023-07-04T07:51:37Z)
- Assigning AI: Seven Approaches for Students, with Prompts [0.0]
This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools.
The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student.
arXiv Detail & Related papers (2023-06-13T03:36:36Z)
- Is AI Changing the Rules of Academic Misconduct? An In-depth Look at Students' Perceptions of 'AI-giarism' [0.0]
This study explores students' perceptions of AI-giarism, an emergent form of academic dishonesty involving AI and plagiarism.
The findings portray a complex landscape of understanding, with clear disapproval for direct AI content generation.
The study provides pivotal insights for academia, policy-making, and the broader integration of AI technology in education.
arXiv Detail & Related papers (2023-06-06T02:22:08Z)
- Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations [44.01143305912054]
We study how decision-makers' intuition affects their use of AI predictions and explanations.
Our results identify three types of intuition involved in reasoning about AI predictions and explanations.
We use these pathways to explain why feature-based explanations did not improve participants' decision outcomes and increased their overreliance on AI.
arXiv Detail & Related papers (2023-01-18T01:33:50Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.