What Students Ask, How a Generative AI Assistant Responds: Exploring Higher Education Students' Dialogues on Learning Analytics Feedback
- URL: http://arxiv.org/abs/2601.04919v1
- Date: Thu, 08 Jan 2026 13:17:44 GMT
- Title: What Students Ask, How a Generative AI Assistant Responds: Exploring Higher Education Students' Dialogues on Learning Analytics Feedback
- Authors: Yildiz Uzun, Andrea Gauthier, Mutlu Cukurova
- Abstract summary: Learning analytics dashboards (LADs) aim to support students' regulation of learning by translating complex data into feedback. Students, especially those with lower self-regulated learning (SRL) competence, often struggle to engage with and interpret analytics feedback. We explored authentic dialogues between students and a GenAI assistant integrated into a LAD during a 10-week semester.
- Score: 0.3562673545689596
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Learning analytics dashboards (LADs) aim to support students' regulation of learning by translating complex data into feedback. Yet students, especially those with lower self-regulated learning (SRL) competence, often struggle to engage with and interpret analytics feedback. Conversational generative artificial intelligence (GenAI) assistants have shown potential to scaffold this process through real-time, personalised, dialogue-based support. To further advance this potential, we explored authentic dialogues between students and a GenAI assistant integrated into a LAD during a 10-week semester. The analysis focused on the questions students with different SRL levels posed, the relevance and quality of the assistant's answers, and how students perceived the assistant's role in their learning. Findings revealed distinct query patterns. While low SRL students sought clarification and reassurance, high SRL students queried technical aspects and requested personalised strategies. The assistant provided clear and reliable explanations but was limited in personalisation, in handling emotionally charged queries, and in integrating multiple data points for tailored responses. Findings further suggest that GenAI interventions can be especially valuable for low SRL students, offering scaffolding that supports engagement with feedback and narrows gaps with their higher SRL peers. At the same time, students' reflections underscored the importance of trust and the need for greater adaptivity, context-awareness, and technical refinement in future systems.
Related papers
- IntentRL: Training Proactive User-intent Agents for Open-ended Deep Research via Reinforcement Learning [54.21689544323704]
Deep Research (DR) agents extend Large Language Models (LLMs) beyond parametric knowledge. Unlike real-time conversational assistants, DR is computationally expensive and time-consuming. We propose IntentRL, a framework that trains proactive agents to clarify latent user intents before starting long-horizon research.
arXiv Detail & Related papers (2026-02-03T12:43:09Z) - What Can Student-AI Dialogues Tell Us About Students' Self-Regulated Learning? An exploratory framework [2.7367574061168747]
The rise of Human-AI Collaborative Learning (HAICL) is shifting education toward dialogue-centric paradigms. This study investigates whether student-AI dialogue can serve as a valid, non-interrupting data source for Self-Regulated Learning assessment.
arXiv Detail & Related papers (2025-12-30T15:50:27Z) - Mining the Gold: Student-AI Chat Logs as Rich Sources for Automated Knowledge Gap Detection [2.6808104662419097]
In large lectures, instructors face challenges in identifying students' knowledge gaps in a timely manner. We propose QueryQuilt, a multi-agent LLM framework that automatically detects common knowledge gaps in large-scale lectures by analyzing students' chat logs with AI assistants. Our evaluation demonstrates promising results, with QueryQuilt achieving 100% accuracy in identifying knowledge gaps among simulated students and 95% completeness when tested on real student-AI dialogue data.
arXiv Detail & Related papers (2025-12-26T23:04:04Z) - An Experience Report on a Pedagogically Controlled, Curriculum-Constrained AI Tutor for SE Education [4.976713294177978]
This paper presents the design and pilot evaluation of RockStartIT Tutor, an AI-powered assistant developed for a digital programming and computational thinking course within the RockStartIT initiative. Powered by GPT-4 via OpenAI's Assistant API, the tutor employs a novel prompting strategy and a modular, semantically tagged knowledge base to deliver context-aware, personalized, and curriculum-constrained support for secondary school students.
arXiv Detail & Related papers (2025-12-08T12:54:37Z) - Teaching Language Models To Gather Information Proactively [53.85419549904644]
Large language models (LLMs) are increasingly expected to function as collaborative partners. In this work, we introduce a new task paradigm: proactive information gathering. We design a scalable framework that generates partially specified, real-world tasks, masking key information. Within this setup, our core innovation is a reinforcement finetuning strategy that rewards questions that elicit genuinely new, implicit user information.
arXiv Detail & Related papers (2025-07-28T23:50:09Z) - Short-Term Gains, Long-Term Gaps: The Impact of GenAI and Search Technologies on Retention [1.534667887016089]
This study investigates how GenAI (ChatGPT), search engines (Google), and e-textbooks influence student performance across tasks of varying cognitive complexity. ChatGPT and Google groups outperformed the control group in immediate assessments for lower-order cognitive tasks. While AI-driven tools facilitate immediate performance, they do not inherently reinforce long-term retention unless supported by structured learning strategies.
arXiv Detail & Related papers (2025-07-10T00:44:50Z) - LLM Agents for Education: Advances and Applications [49.3663528354802]
Large Language Model (LLM) agents have demonstrated remarkable capabilities in automating tasks and driving innovation across diverse educational applications. This survey aims to provide a comprehensive technological overview of LLM agents for education, fostering further research and collaboration to enhance their impact for the greater good of learners and educators alike.
arXiv Detail & Related papers (2025-03-14T11:53:44Z) - How Good is ChatGPT in Giving Adaptive Guidance Using Knowledge Graphs in E-Learning Environments? [0.8999666725996978]
This study introduces an approach that integrates dynamic knowledge graphs with large language models (LLMs) to offer nuanced student assistance. Central to this method is the knowledge graph's role in assessing a student's comprehension of topic prerequisites. Preliminary findings suggest students could benefit from this tiered support, achieving enhanced comprehension and improved task outcomes.
arXiv Detail & Related papers (2024-12-05T04:05:43Z) - Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs [49.18567856499736]
We investigate whether large language models (LLMs) can be supportive of open-ended dialogue tutoring. We apply a range of knowledge tracing (KT) methods on the resulting labeled data to track student knowledge levels over an entire dialogue. We conduct experiments on two tutoring dialogue datasets, and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues.
arXiv Detail & Related papers (2024-09-24T22:31:39Z) - BoilerTAI: A Platform for Enhancing Instruction Using Generative AI in Educational Forums [0.0]
This paper describes a practical, scalable platform that seamlessly integrates Generative AI (GenAI) with online educational forums.
The platform empowers instructional staff to efficiently manage, refine, and approve responses by facilitating interaction between student posts and a Large Language Model (LLM).
arXiv Detail & Related papers (2024-09-20T04:00:30Z) - YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT with significant performance gain.
arXiv Detail & Related papers (2024-01-28T14:32:15Z) - UKP-SQuARE: An Interactive Tool for Teaching Question Answering [61.93372227117229]
The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course.
We introduce UKP-SQuARE as a platform for QA education.
Students can run, compare, and analyze various QA models from different perspectives.
arXiv Detail & Related papers (2023-05-31T11:29:04Z) - Stylistic Dialogue Generation via Information-Guided Reinforcement Learning Strategy [65.98002918470544]
We introduce a new training strategy, known as Information-Guided Reinforcement Learning (IG-RL).
In IG-RL, a training model is encouraged to explore stylistic expressions while being constrained to maintain its content quality.
This is achieved by adopting a reinforcement learning strategy with statistical style information guidance for quality-preserving exploration.
arXiv Detail & Related papers (2020-04-05T13:58:14Z)