PedagoSense: A Pedology Grounded LLM System for Pedagogical Strategy Detection and Contextual Response Generation in Learning Dialogues
- URL: http://arxiv.org/abs/2602.01169v1
- Date: Sun, 01 Feb 2026 11:38:18 GMT
- Title: PedagoSense: A Pedology Grounded LLM System for Pedagogical Strategy Detection and Contextual Response Generation in Learning Dialogues
- Authors: Shahem Sultan, Shahem Fadi, Yousef Melhim, Ibrahim Alsarraj, Besher Hassan
- Abstract summary: We introduce PedagoSense, a pedology-grounded system that combines a two-stage strategy classifier with large language model generation. The system first detects whether a pedagogical strategy is present using a binary classifier, then performs fine-grained classification to identify the specific strategy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses the challenge of improving interaction quality in dialogue-based learning by detecting and recommending effective pedagogical strategies in tutor-student conversations. We introduce PedagoSense, a pedology-grounded system that combines a two-stage strategy classifier with large language model generation. The system first detects whether a pedagogical strategy is present using a binary classifier, then performs fine-grained classification to identify the specific strategy. In parallel, it recommends an appropriate strategy from the dialogue context and uses an LLM to generate a response aligned with that strategy. We evaluate on human-annotated tutor-student dialogues, augmented with additional non-pedagogical conversations for the binary task. Results show high performance for pedagogical strategy detection and consistent gains when using data augmentation, while analysis highlights where fine-grained classes remain challenging. Overall, PedagoSense bridges pedagogical theory and practical LLM-based response generation for more adaptive educational technologies.
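The abstract describes a pipeline of binary detection, fine-grained classification, strategy recommendation, and strategy-conditioned generation. A minimal Python sketch of that control flow follows; all function names and the keyword-based stand-in classifiers are assumptions for illustration, not the paper's trained models or strategy inventory.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
# The trivial keyword rules below stand in for the paper's trained classifiers.

def detect_strategy_present(utterance: str) -> bool:
    # Stage 1: binary detection of whether any pedagogical strategy is used.
    cues = ("why", "hint", "try", "think", "example")
    return any(cue in utterance.lower() for cue in cues)

def classify_strategy(utterance: str) -> str:
    # Stage 2: fine-grained classification over a fixed strategy inventory
    # (labels here are illustrative, not the paper's taxonomy).
    text = utterance.lower()
    if "hint" in text:
        return "hinting"
    if "why" in text:
        return "probing_question"
    return "other"

def respond(dialogue_context: list[str]) -> dict:
    last = dialogue_context[-1]
    result = {"strategy_detected": detect_strategy_present(last)}
    if result["strategy_detected"]:
        result["strategy"] = classify_strategy(last)
    # In parallel, a strategy is recommended from context and an LLM generates
    # a response aligned with it; the string below is a placeholder for that step.
    result["recommended"] = "probing_question"
    result["response"] = f"[LLM response conditioned on {result['recommended']}]"
    return result

print(respond(["Tutor: Can you give me a hint?"]))
```

The separation into detection and classification mirrors the paper's design: the binary stage filters non-pedagogical turns (aided by the augmented negative data) before the harder fine-grained stage runs.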
Related papers
- PATS: Personality-Aware Teaching Strategies with Large Language Model Tutors [66.56586559631516]
Large language models (LLMs) have potential as educational tutors. But different tutoring strategies benefit different student personalities. Despite this, current LLM tutoring systems do not take into account student personality traits.
arXiv Detail & Related papers (2026-01-13T10:17:26Z) - Towards Mining Effective Pedagogical Strategies from Learner-LLM Educational Dialogues [0.40724544758045295]
This paper presents an ongoing study employing a dialogue analysis approach to identify effective pedagogical strategies from learner-LLM dialogues. The proposed approach involves dialogue data collection, dialogue act (DA) annotation, DA pattern mining, and predictive model building.
arXiv Detail & Related papers (2025-10-20T16:11:34Z) - EduDial: Constructing a Large-scale Multi-turn Teacher-Student Dialogue Corpus [59.693733170193944]
We present EduDial, a comprehensive multi-turn teacher-student dialogue dataset. EduDial covers 345 core knowledge points and consists of 34,250 dialogue sessions generated through interactions between teacher and student agents.
arXiv Detail & Related papers (2025-10-14T18:18:43Z) - Exploring LLMs for Predicting Tutor Strategy and Student Outcomes in Dialogues [48.99818550820575]
Recent studies have shown that strategies used by tutors can have significant effects on student outcomes. Few works have studied predicting tutor strategy in dialogues. We investigate the ability of modern LLMs, particularly Llama 3 and GPT-4o, to predict both future tutor moves and student outcomes in dialogues.
arXiv Detail & Related papers (2025-07-09T14:47:35Z) - Dialogic Pedagogy for Large Language Models: Aligning Conversational AI with Proven Theories of Learning [1.2691047660244332]
Large Language Models (LLMs) are transforming education by enabling rich conversational learning experiences. This article provides a review of how LLM-based conversational agents are being used in higher education.
arXiv Detail & Related papers (2025-06-24T10:19:09Z) - From Problem-Solving to Teaching Problem-Solving: Aligning LLMs with Pedagogy using Reinforcement Learning [82.50157695987558]
Large language models (LLMs) can transform education, but their optimization for direct question-answering often undermines effective pedagogy. We propose an online reinforcement learning (RL)-based alignment framework that can quickly adapt LLMs into effective tutors.
arXiv Detail & Related papers (2025-05-21T15:00:07Z) - Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs [49.18567856499736]
We investigate whether large language models (LLMs) can be supportive of open-ended dialogue tutoring. We apply a range of knowledge tracing (KT) methods on the resulting labeled data to track student knowledge levels over an entire dialogue. We conduct experiments on two tutoring dialogue datasets, and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues.
arXiv Detail & Related papers (2024-09-24T22:31:39Z) - BIPED: Pedagogically Informed Tutoring System for ESL Education [11.209992106075788]
Large Language Models (LLMs) have great potential to serve as readily available and cost-efficient Conversational Intelligent Tutoring Systems (CITS).
Existing CITS are designed to teach only simple concepts or lack the pedagogical depth necessary to address diverse learning strategies.
We construct a BIlingual PEDagogically-informed Tutoring dataset of one-on-one, human-to-human English tutoring interactions.
arXiv Detail & Related papers (2024-06-05T17:49:24Z) - Strategize Before Teaching: A Conversational Tutoring System with Pedagogy Self-Distillation [35.11534904787774]
We propose a unified framework that combines teaching response generation and pedagogical strategy prediction.
Our experiments and analyses shed light on how teaching strategies affect dialog tutoring.
arXiv Detail & Related papers (2023-02-27T03:43:25Z) - Opportunities and Challenges in Neural Dialog Tutoring [54.07241332881601]
We rigorously analyze various generative language models on two dialog tutoring datasets for language learning.
We find that although current approaches can model tutoring in constrained learning scenarios, they perform poorly in less constrained scenarios.
Our human quality evaluation shows that both models and ground-truth annotations exhibit low performance in terms of equitable tutoring.
arXiv Detail & Related papers (2023-01-24T11:00:17Z) - Rethinking Supervised Learning and Reinforcement Learning in Task-Oriented Dialogue Systems [58.724629408229205]
We demonstrate how traditional supervised learning and a simulator-free adversarial learning method can be used to achieve performance comparable to state-of-the-art RL-based methods.
Our main goal is not to beat reinforcement learning with supervised learning, but to demonstrate the value of rethinking the role of reinforcement learning and supervised learning in optimizing task-oriented dialogue systems.
arXiv Detail & Related papers (2020-09-21T12:04:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.