Instructor-Aligned Knowledge Graphs for Personalized Learning
- URL: http://arxiv.org/abs/2602.17111v1
- Date: Thu, 19 Feb 2026 06:15:10 GMT
- Title: Instructor-Aligned Knowledge Graphs for Personalized Learning
- Authors: Abdulrahman AlRabah, Priyanka Kargupta, Jiawei Han, Abdussalam Alawini
- Abstract summary: We propose InstructKG, a framework for automatically constructing instructor-aligned knowledge graphs. InstructKG extracts significant concepts as nodes and infers learning dependencies as directed edges. We demonstrate that InstructKG captures rich, instructor-aligned learning progressions.
- Score: 18.405486267157247
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Mastering educational concepts requires understanding both their prerequisites (e.g., recursion before merge sort) and sub-concepts (e.g., merge sort as part of sorting algorithms). Capturing these dependencies is critical for identifying students' knowledge gaps and enabling targeted intervention for personalized learning. This is especially challenging in large-scale courses, where instructors cannot feasibly diagnose individual misunderstanding or determine which concepts need reinforcement. While knowledge graphs offer a natural representation for capturing these conceptual relationships at scale, existing approaches are either surface-level (focusing on course-level concepts like "Algorithms" or logistical relationships such as course enrollment), or disregard the rich pedagogical signals embedded in instructional materials. We propose InstructKG, a framework for automatically constructing instructor-aligned knowledge graphs that capture a course's intended learning progression. Given a course's lecture materials (slides, notes, etc.), InstructKG extracts significant concepts as nodes and infers learning dependencies as directed edges (e.g., "part-of" or "depends-on" relationships). The framework synergizes the rich temporal and semantic signals unique to educational materials (e.g., "recursion" is taught before "mergesort"; "recursion" is mentioned in the definition of "merge sort") with the generalizability of large language models. Through experiments on real-world, diverse lecture materials across multiple courses and human-based evaluation, we demonstrate that InstructKG captures rich, instructor-aligned learning progressions.
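The abstract's core data structure can be pictured with a minimal sketch (not the authors' implementation): concepts as nodes and typed, directed edges such as "depends-on" and "part-of", with prerequisites recovered by transitive traversal. The class and concept names here are illustrative, taken from the abstract's own examples.

```python
from collections import defaultdict

class ConceptGraph:
    """Toy instructor-aligned concept graph: nodes are concepts,
    edges are typed learning dependencies."""

    def __init__(self):
        # node -> list of (relation, target) pairs
        self.edges = defaultdict(list)

    def add_edge(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def prerequisites(self, concept):
        """All concepts reachable via 'depends-on' edges (transitive)."""
        seen, stack = set(), [concept]
        while stack:
            node = stack.pop()
            for rel, dst in self.edges[node]:
                if rel == "depends-on" and dst not in seen:
                    seen.add(dst)
                    stack.append(dst)
        return seen

g = ConceptGraph()
g.add_edge("merge sort", "depends-on", "recursion")
g.add_edge("merge sort", "part-of", "sorting algorithms")
g.add_edge("recursion", "depends-on", "functions")

print(g.prerequisites("merge sort"))  # {'recursion', 'functions'} (set order may vary)
```

A diagnosis step for personalized learning would then compare a student's mastered set against `prerequisites(target)` to find the gap.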
Related papers
- Forgetting is Everywhere [19.22572725623779]
We propose an algorithm- and task-agnostic theory that characterises forgetting as a lack of self-consistency in a learner's predictive distribution over future experiences. Our theory naturally yields a general measure of an algorithm's propensity to forget. We empirically demonstrate how forgetting is present across all learning settings and plays a significant role in determining learning efficiency.
arXiv Detail & Related papers (2025-11-06T18:52:57Z)
- Inferring Prerequisite Knowledge Concepts in Educational Knowledge Graphs: A Multi-criteria Approach [1.4081145216286748]
We propose an unsupervised method for automatically inferring concept prerequisite relations (PRs) without relying on labeled data. We define ten criteria based on document-based, hyperlink-based, and text-based features, and combine them using a voting algorithm to robustly capture PRs in educational content.
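The voting idea can be sketched in a few lines. This is a hypothetical illustration of criterion voting for prerequisite inference, not the paper's ten criteria: each criterion is a boolean function over a concept pair, and a majority vote decides.

```python
# Illustrative stand-in criteria (not the paper's actual features).
def link_criterion(a, b, links):
    # b's material links to a => a may be a prerequisite of b
    return a in links.get(b, set())

def order_criterion(a, b, first_mention):
    # a is introduced before b in the course material
    return first_mention[a] < first_mention[b]

def is_prerequisite(a, b, criteria, threshold=0.5):
    """Majority vote over boolean criteria."""
    votes = [criterion(a, b) for criterion in criteria]
    return sum(votes) / len(votes) > threshold

links = {"merge sort": {"recursion"}}
first_mention = {"recursion": 3, "merge sort": 12}
criteria = [
    lambda a, b: link_criterion(a, b, links),
    lambda a, b: order_criterion(a, b, first_mention),
]
print(is_prerequisite("recursion", "merge sort", criteria))  # True
```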
arXiv Detail & Related papers (2025-09-05T10:37:58Z)
- Machine Learning: a Lecture Note [51.31735291774885]
This lecture note is intended to prepare early-year master's and PhD students in data science or a related discipline with foundational ideas in machine learning. It starts with basic ideas in modern machine learning with classification as a main target task. Based on these basic ideas, the lecture note explores in depth the probabilistic approach to unsupervised learning.
arXiv Detail & Related papers (2025-05-06T16:03:41Z)
- Learning Representations for Reasoning: Generalizing Across Diverse Structures [5.031093893882575]
We aim to push the boundary of reasoning models by devising algorithms that generalize across knowledge and query structures.
Our library treats structured data as first-class citizens and removes the barrier for developing algorithms on structured data.
arXiv Detail & Related papers (2024-10-16T20:23:37Z)
- Hierarchical Insights: Exploiting Structural Similarities for Reliable 3D Semantic Segmentation [4.480310276450028]
We propose a training strategy for a 3D LiDAR semantic segmentation model that learns structural relationships between classes through abstraction.
This is achieved by implicitly modeling these relationships using a learning rule for hierarchical multi-label classification (HMC)
Our detailed analysis demonstrates that this training strategy not only improves the model's confidence calibration but also retains additional information useful for downstream tasks such as fusion, prediction, and planning.
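The hierarchical consistency behind HMC can be illustrated with a toy sketch (class names are hypothetical, not the paper's taxonomy): a positive prediction for a class implies positives for all of its ancestors in the class hierarchy.

```python
# Hypothetical class hierarchy: child -> parent.
parent = {
    "car": "vehicle",
    "truck": "vehicle",
    "vehicle": "dynamic",
    "pedestrian": "dynamic",
}

def with_ancestors(labels):
    """Close a label set upward so predictions respect the hierarchy."""
    out = set(labels)
    for label in labels:
        node = label
        while node in parent:
            node = parent[node]
            out.add(node)
    return out

print(with_ancestors({"car"}))  # adds 'vehicle' and 'dynamic'
```

An upward-closed label set is also what lets a model fall back to a coarser class (e.g. "vehicle") when it is uncertain about the fine-grained one, which is the calibration benefit the summary mentions.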
arXiv Detail & Related papers (2024-04-09T08:49:01Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Joint Language Semantic and Structure Embedding for Knowledge Graph Completion [66.15933600765835]
We propose to jointly embed the semantics in the natural language description of the knowledge triplets with their structure information.
Our method embeds knowledge graphs for the completion task via fine-tuning pre-trained language models.
Our experiments on a variety of knowledge graph benchmarks have demonstrated the state-of-the-art performance of our method.
arXiv Detail & Related papers (2022-09-19T02:41:02Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
- R-VGAE: Relational-variational Graph Autoencoder for Unsupervised Prerequisite Chain Learning [83.13634692459486]
We propose a model called Relational-Variational Graph AutoEncoder (R-VGAE) to predict concept relations within a graph consisting of concept and resource nodes.
Results show that our unsupervised approach outperforms graph-based semi-supervised methods and other baseline methods by up to 9.77% and 10.47% in terms of prerequisite relation prediction accuracy and F1 score.
Our method is notably the first graph-based model that attempts to make use of deep learning representations for the task of unsupervised prerequisite learning.
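The metrics reported above can be made concrete with a small sketch (toy labels, not the paper's data): each candidate concept pair is labeled 1 (prerequisite) or 0, and predictions are scored with accuracy and F1.

```python
def accuracy_and_f1(gold, pred):
    """Binary classification metrics for prerequisite-relation prediction."""
    tp = sum(g == 1 and p == 1 for g, p in zip(gold, pred))
    fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
    fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))
    acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, f1

gold = [1, 1, 0, 0, 1]  # true prerequisite labels per concept pair
pred = [1, 0, 0, 1, 1]  # model predictions
print(accuracy_and_f1(gold, pred))  # (0.6, ~0.67)
```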
arXiv Detail & Related papers (2020-04-22T14:48:03Z)
- Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs [96.73259297063619]
We consider a novel formulation, zero-shot learning, to avoid this cumbersome curation.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
arXiv Detail & Related papers (2020-01-08T01:19:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.