ChatGPT-5 in Secondary Education: A Mixed-Methods Analysis of Student Attitudes, AI Anxiety, and Hallucination-Aware Use
- URL: http://arxiv.org/abs/2512.04109v1
- Date: Sun, 30 Nov 2025 19:28:48 GMT
- Title: ChatGPT-5 in Secondary Education: A Mixed-Methods Analysis of Student Attitudes, AI Anxiety, and Hallucination-Aware Use
- Authors: Tryfon Sivenas
- Abstract summary: Students used ChatGPT-5 during an eight-hour intervention in the course "Technology." Students engaged in information seeking, CV generation, document and video summarization, image generation, quiz creation, and age-appropriate explanations. After encountering hallucinations, many students reported restricting AI use to domains where they already possessed knowledge.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This mixed-methods study examined secondary students' interactions with the generative AI chatbot ChatGPT-5 in a formal classroom setting, focusing on attitudes, anxiety, and responses to hallucinated outputs. Participants were 109 16-year-old students from three Greek high schools who used ChatGPT-5 during an eight-hour intervention in the course "Technology." Students engaged in information seeking, CV generation, document and video summarization, image generation, quiz creation, and age-appropriate explanations, including tasks deliberately designed to elicit hallucinations. Quantitative data were collected with the Student Attitudes Toward Artificial Intelligence scale (SATAI) and the Artificial Intelligence Anxiety Scale (AIAS); qualitative data came from semi-structured interviews with 36 students. SATAI results showed moderately positive attitudes toward AI, with stronger cognitive evaluations than behavioral intentions, whereas AIAS scores indicated moderate learning-related anxiety and higher concern about AI-driven job replacement. Gender differences in AI anxiety were small and non-significant, while female students reported more positive cognitive attitudes than males. AI attitudes and AI anxiety were essentially uncorrelated. Thematic analysis identified four pedagogical affordances (knowledge expansion, immediate feedback, familiar interface, perceived skill development) and three constraints (uncertainty about accuracy, anxiety about AI feedback, privacy concerns). After encountering hallucinations, many students reported restricting AI use to domains where they already possessed knowledge and could verify answers, a strategy termed "epistemic safeguarding." The study discusses implications for critical AI literacy in secondary education.
Related papers
- AI Literacy, Safety Awareness, and STEM Career Aspirations of Australian Secondary Students: Evaluating the Impact of Workshop Interventions
Deepfakes and other forms of synthetic media pose growing safety risks for adolescents. This study evaluates the impact of Day of AI Australia's workshop-based intervention on Australian secondary students.
arXiv Detail & Related papers (2026-01-30T02:55:53Z) - Attachment Styles and AI Chatbot Interactions Among College Students
This study explored how college students with different attachment styles describe their interactions with ChatGPT. We identified three main themes: (1) AI as a low-risk emotional space, (2) attachment-congruent patterns of AI engagement, and (3) the paradox of AI intimacy.
arXiv Detail & Related papers (2025-12-20T18:49:07Z) - Human or AI? Comparing Design Thinking Assessments by Teaching Assistants and Bots
This study investigates the reliability and perceived accuracy of AI-assisted assessment compared to TA-assisted assessment in evaluating student posters in design thinking education. Results showed low statistical agreement between instructor and AI scores for empathy and pain points, with slightly higher alignment for visual communication. The study underscores the need for hybrid assessment models that integrate computational efficiency with human insights.
arXiv Detail & Related papers (2025-10-17T07:09:21Z) - Embedding Generative AI into Systems Analysis and Design Curriculum: Framework, Case Study, and Cross-Campus Empirical Evidence
Students risk accepting AI suggestions blindly or uncritically without assessing alignment with user needs or contextual appropriateness. SAGE addresses this gap by embedding GenAI into curriculum design, training students when to accept, modify, or reject AI contributions.
arXiv Detail & Related papers (2025-10-07T12:31:15Z) - Do Students Rely on AI? Analysis of Student-ChatGPT Conversations from a Field Study
This study analyzed 315 student-AI conversations during a brief, quiz-based scenario across various STEM courses. Students exhibited overall low reliance on AI, and many of them could not effectively use AI for learning. Certain behavioral metrics strongly predicted AI reliance, highlighting potential behavioral mechanisms to explain AI adoption.
arXiv Detail & Related papers (2025-08-27T20:00:27Z) - ChatGPT produces more "lazy" thinkers: Evidence of cognitive engagement decline
This study investigates the impact of generative artificial intelligence (AI) tools on the cognitive engagement of students during academic writing tasks. The results revealed significantly lower cognitive engagement scores in the ChatGPT group compared to the control group. These findings suggest that AI assistance may lead to cognitive offloading.
arXiv Detail & Related papers (2025-06-30T18:41:50Z) - Investigating Middle School Students' Question-Asking and Answer-Evaluation Skills When Using ChatGPT for Science Investigation
Generative AI (GenAI) tools such as ChatGPT allow users to explore and address a wide range of tasks. This study examines middle school students' ability to ask effective questions and critically evaluate ChatGPT responses.
arXiv Detail & Related papers (2025-05-02T08:38:17Z) - Almost AI, Almost Human: The Challenge of Detecting AI-Polished Writing
This study systematically evaluates twelve state-of-the-art AI-text detectors using our AI-Polished-Text Evaluation dataset. Our findings reveal that detectors frequently flag even minimally polished text as AI-generated, struggle to differentiate between degrees of AI involvement, and exhibit biases against older and smaller models.
arXiv Detail & Related papers (2025-02-21T18:45:37Z) - Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks
This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task.
Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agents involved.
arXiv Detail & Related papers (2024-11-15T13:22:04Z) - Do great minds think alike? Investigating Human-AI Complementarity in Question Answering with CAIMIRA
Humans outperform AI systems in knowledge-grounded abductive and conceptual reasoning.
State-of-the-art LLMs like GPT-4 and LLaMA show superior performance on targeted information retrieval.
arXiv Detail & Related papers (2024-10-09T03:53:26Z) - Human Bias in the Face of AI: Examining Human Judgment Against Text Labeled as AI Generated
This study explores how bias shapes the perception of AI versus human generated content. We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - Exploring Parents' Needs for Children-Centered AI to Support Preschoolers' Interactive Storytelling and Reading Activities
AI-based storytelling and reading technologies are becoming increasingly ubiquitous in preschoolers' lives.
This paper investigates how they function in practical storytelling and reading scenarios and how parents, the most critical stakeholders, experience and perceive them.
Our findings suggest that even though AI-based storytelling and reading technologies provide more immersive and engaging interaction, they still cannot meet parents' expectations due to a series of interactive and algorithmic challenges.
arXiv Detail & Related papers (2024-01-24T20:55:40Z) - Experimental Evidence on Negative Impact of Generative AI on Scientific Learning Outcomes
Using AI for summarization significantly improved both quality and output.
Individuals with a robust background in the reading topic and superior reading/writing skills benefitted the most.
arXiv Detail & Related papers (2023-09-23T21:59:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.