Personalizing Student-Agent Interactions Using Log-Contextualized Retrieval Augmented Generation (RAG)
- URL: http://arxiv.org/abs/2505.17238v2
- Date: Tue, 17 Jun 2025 02:57:43 GMT
- Title: Personalizing Student-Agent Interactions Using Log-Contextualized Retrieval Augmented Generation (RAG)
- Authors: Clayton Cohn, Surya Rayala, Caitlin Snyder, Joyce Fonteles, Shruti Jain, Naveeduddin Mohammed, Umesh Timalsina, Sarah K. Burriss, Ashwin T S, Namrata Srivastava, Menton Deweese, Angela Eeds, Gautam Biswas
- Abstract summary: Large language models (LLMs) facilitate dynamic pedagogical interactions, but hallucinations undermine confidence, trust, and instructional value. Retrieval-augmented generation (RAG) grounds LLM outputs in curated knowledge but requires a clear semantic link between user input and a knowledge base. We propose log-contextualized RAG (LC-RAG), which enhances RAG retrieval by using environment logs to contextualize collaborative discourse.
- Score: 3.5262811753348715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative dialogue offers rich insights into students' learning and critical thinking, which is essential for personalizing pedagogical agent interactions in STEM+C settings. While large language models (LLMs) facilitate dynamic pedagogical interactions, hallucinations undermine confidence, trust, and instructional value. Retrieval-augmented generation (RAG) grounds LLM outputs in curated knowledge but requires a clear semantic link between user input and a knowledge base, which is often weak in student dialogue. We propose log-contextualized RAG (LC-RAG), which enhances RAG retrieval by using environment logs to contextualize collaborative discourse. Our findings show that LC-RAG improves retrieval over a discourse-only baseline and allows our collaborative peer agent, Copa, to deliver relevant, personalized guidance that supports students' critical thinking and epistemic decision-making in a collaborative computational modeling environment, C2STEM.
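The abstract describes LC-RAG only at a high level: environment logs are used to contextualize student discourse before retrieval. As a minimal illustrative sketch of that idea (the function names, the toy character-count embedding, and the query-concatenation strategy are all assumptions for illustration, not details from the paper):

```python
from dataclasses import dataclass


@dataclass
class Document:
    """A knowledge-base entry with a precomputed embedding."""
    text: str
    embedding: list[float]


def embed(text: str) -> list[float]:
    # Toy embedding: normalized letter counts. A real system would use
    # a sentence encoder; this just makes the sketch self-contained.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))


def lc_rag_retrieve(discourse: str, env_logs: list[str],
                    knowledge_base: list[Document], k: int = 2) -> list[Document]:
    """Contextualize the retrieval query with environment logs, then
    rank knowledge-base entries by similarity to the combined query."""
    query = discourse + " " + " ".join(env_logs)  # log-contextualized query
    q_vec = embed(query)
    ranked = sorted(knowledge_base,
                    key=lambda doc: cosine(q_vec, doc.embedding),
                    reverse=True)
    return ranked[:k]
```

The point of the sketch is the retrieval signal: student dialogue alone ("why does it move?") may be semantically distant from the knowledge base, but appending environment log events (e.g., which model variables the students just edited) can supply the missing link.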
Related papers
- Conversational Search: From Fundamentals to Frontiers in the LLM Era [30.68590015959931]
This tutorial aims to introduce the connection between fundamentals and the emerging topics revolutionized by large language models. It is designed for students, researchers, and practitioners from both academia and industry.
arXiv Detail & Related papers (2025-06-12T12:19:57Z)
- Improving Multilingual Retrieval-Augmented Language Models through Dialectic Reasoning Argumentations [65.11348389219887]
We introduce Dialectic-RAG (DRAG), a modular approach that evaluates retrieved information by comparing, contrasting, and resolving conflicting perspectives. We show the impact of our framework both as an in-context learning strategy and for constructing demonstrations to instruct smaller models.
arXiv Detail & Related papers (2025-04-07T06:55:15Z)
- Graph Retrieval-Augmented LLM for Conversational Recommendation Systems [52.35491420330534]
G-CRS (Graph Retrieval-Augmented Large Language Model for Conversational Recommender Systems) is a training-free framework that combines graph retrieval-augmented generation and in-context learning. G-CRS achieves superior recommendation performance compared to existing methods without requiring task-specific training.
arXiv Detail & Related papers (2025-03-09T03:56:22Z)
- Enhanced Classroom Dialogue Sequences Analysis with a Hybrid AI Agent: Merging Expert Rule-Base with Large Language Models [7.439914834067705]
This study develops a comprehensive rule base of dialogue sequences and an Artificial Intelligence (AI) agent.
The agent applies expert knowledge while adapting to the complexities of natural language, enabling accurate and flexible categorisation of classroom dialogue sequences.
arXiv Detail & Related papers (2024-11-13T08:13:41Z)
- GIVE: Structured Reasoning of Large Language Models with Knowledge Graph Inspired Veracity Extrapolation [108.2008975785364]
Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning method that merges parametric and non-parametric memories to improve accurate reasoning with minimal external input. GIVE guides the LLM agent to select the most pertinent expert data (observe), engage in query-specific divergent thinking (reflect), and then synthesize this information to produce the final output (speak).
arXiv Detail & Related papers (2024-10-11T03:05:06Z)
- Scalable Frame-based Construction of Sociocultural NormBases for Socially-Aware Dialogues [66.69453609603875]
Sociocultural norms serve as guiding principles for personal conduct in social interactions.
We propose a scalable approach for constructing a Sociocultural Norm (SCN) Base using Large Language Models (LLMs).
We construct a comprehensive and publicly accessible Chinese Sociocultural NormBase.
arXiv Detail & Related papers (2024-10-04T00:08:46Z)
- Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs [49.18567856499736]
We investigate whether large language models (LLMs) can be supportive of open-ended dialogue tutoring. We apply a range of knowledge tracing (KT) methods on the resulting labeled data to track student knowledge levels over an entire dialogue. We conduct experiments on two tutoring dialogue datasets, and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues.
arXiv Detail & Related papers (2024-09-24T22:31:39Z)
- Generating Situated Reflection Triggers about Alternative Solution Paths: A Case Study of Generative AI for Computer-Supported Collaborative Learning [3.2721068185888127]
We present a proof-of-concept application to offer students dynamic and contextualized feedback.
Specifically, we augment an Online Programming Exercise bot for a college-level Cloud Computing course with ChatGPT.
We demonstrate that LLMs can be used to generate highly situated reflection triggers that incorporate details of the collaborative discussion happening in context.
arXiv Detail & Related papers (2024-04-28T17:56:14Z)
- Knowledge Graphs and Pre-trained Language Models enhanced Representation Learning for Conversational Recommender Systems [58.561904356651276]
We introduce the Knowledge-Enhanced Entity Representation Learning (KERL) framework to improve the semantic understanding of entities for conversational recommender systems.
KERL uses a knowledge graph and a pre-trained language model to improve the semantic understanding of entities.
KERL achieves state-of-the-art results in both recommendation and response generation tasks.
arXiv Detail & Related papers (2023-12-18T06:41:23Z)
- Enhancing Student Performance Prediction on Learnersourced Questions with SGNN-LLM Synergy [11.735587384038753]
We introduce an innovative strategy that synergizes the potential of integrating Signed Graph Neural Networks (SGNNs) and Large Language Model (LLM) embeddings.
Our methodology employs a signed bipartite graph to comprehensively model student answers, complemented by a contrastive learning framework that enhances noise resilience.
arXiv Detail & Related papers (2023-09-23T23:37:55Z)
- Understanding Idea Creation in Collaborative Discourse through Networks: The Joint Attention-Interaction-Creation (AIC) Framework [0.42303492200814446]
The Joint Attention-Interaction-Creation (AIC) framework captures important dynamics in collaborative discourse.
The framework was developed from the networked lens, informed by natural language processing techniques, and inspired by socio-semantic network analysis.
arXiv Detail & Related papers (2023-05-25T17:18:19Z)
- Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning [89.64215566478931]
Conversational recommender systems (CRS) aim to proactively elicit user preference and recommend high-quality items through natural language conversations.
To develop an effective CRS, it is essential to seamlessly integrate the recommendation and conversation modules.
We propose a unified CRS model named UniCRS based on knowledge-enhanced prompt learning.
arXiv Detail & Related papers (2022-06-19T09:21:27Z)
- Graph Based Network with Contextualized Representations of Turns in Dialogue [0.0]
Dialogue-based relation extraction (RE) aims to extract relation(s) between two arguments that appear in a dialogue.
We propose the TUrn COntext awaRE Graph Convolutional Network (TUCORE-GCN) modeled by paying attention to the way people understand dialogues.
arXiv Detail & Related papers (2021-09-09T03:09:08Z)
- Towards Explainable Student Group Collaboration Assessment Models Using Temporal Representations of Individual Student Roles [12.945344702592557]
We propose using simple temporal-CNN deep-learning models to assess student group collaboration.
We check the applicability of dynamically changing feature representations for student group collaboration assessment.
We also use Grad-CAM visualizations to better understand and interpret the important temporal indices that led to the deep-learning model's decision.
arXiv Detail & Related papers (2021-06-17T16:00:08Z)
- CR-Walker: Tree-Structured Graph Reasoning and Dialog Acts for Conversational Recommendation [62.13413129518165]
CR-Walker is a model that performs tree-structured reasoning on a knowledge graph.
It generates informative dialog acts to guide language generation.
Automatic and human evaluations show that CR-Walker can arrive at more accurate recommendations.
arXiv Detail & Related papers (2020-10-20T14:53:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.