Inferring Prerequisite Knowledge Concepts in Educational Knowledge Graphs: A Multi-criteria Approach
- URL: http://arxiv.org/abs/2509.05393v1
- Date: Fri, 05 Sep 2025 10:37:58 GMT
- Title: Inferring Prerequisite Knowledge Concepts in Educational Knowledge Graphs: A Multi-criteria Approach
- Authors: Rawaa Alatrash, Mohamed Amine Chatti, Nasha Wibowo, Qurat Ul Ain
- Abstract summary: We propose an unsupervised method for automatically inferring concept PRs without relying on labeled data. We define ten criteria based on document-based, Wikipedia hyperlink-based, graph-based, and text-based features, and combine them using a voting algorithm to robustly capture PRs in educational content.
- Score: 1.4081145216286748
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Educational Knowledge Graphs (EduKGs) organize various learning entities and their relationships to support structured and adaptive learning. Prerequisite relationships (PRs) are critical in EduKGs for defining the logical order in which concepts should be learned. However, the current EduKG in the MOOC platform CourseMapper lacks explicit PR links, and manually annotating them is time-consuming and inconsistent. To address this, we propose an unsupervised method for automatically inferring concept PRs without relying on labeled data. We define ten criteria based on document-based, Wikipedia hyperlink-based, graph-based, and text-based features, and combine them using a voting algorithm to robustly capture PRs in educational content. Experiments on benchmark datasets show that our approach achieves higher precision than existing methods while maintaining scalability and adaptability, thus providing reliable support for sequence-aware learning in CourseMapper.
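The abstract describes combining ten per-criterion judgments through a voting algorithm. A minimal sketch of such a majority-vote combiner, assuming binary criterion votes and a simple 0.5 threshold (the actual criteria and decision rule in the paper may differ):

```python
# Hypothetical sketch: combine per-criterion prerequisite votes by
# majority voting. The vote values and the 0.5 threshold are
# illustrative assumptions, not the paper's exact decision rule.

def infer_prerequisite(votes, threshold=0.5):
    """Return True if the fraction of criteria voting
    'A is a prerequisite of B' meets the threshold."""
    if not votes:
        return False
    return sum(votes) / len(votes) >= threshold

# Example: 7 of 10 criteria (e.g. document order, Wikipedia link
# direction, text similarity) vote "prerequisite".
votes = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
result = infer_prerequisite(votes)
print(result)  # True
```

Voting over heterogeneous criteria avoids needing labeled training data, at the cost of treating each criterion as equally reliable.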
Related papers
- Instructor-Aligned Knowledge Graphs for Personalized Learning [18.405486267157247]
We propose InstructKG, a framework for automatically constructing instructor-aligned knowledge graphs. InstructKG extracts significant concepts as nodes and infers learning dependencies as directed edges. We demonstrate that InstructKG captures rich, instructor-aligned learning progressions.
arXiv Detail & Related papers (2026-02-19T06:15:10Z) - Visual Programmability: A Guide for Code-as-Thought in Chart Understanding [37.44645754630439]
We propose a Code-as-Thought (CaT) approach to represent the visual information of a chart in a verifiable, symbolic format. Visual Programmability is a learnable property that determines if a chart-question pair is better solved with code or direct visual analysis. We implement this concept in an adaptive framework where a Vision-Language Model (VLM) learns to choose between the CaT pathway and a direct visual reasoning pathway.
arXiv Detail & Related papers (2025-09-11T09:22:16Z) - Hierarchical Insights: Exploiting Structural Similarities for Reliable 3D Semantic Segmentation [4.480310276450028]
We propose a training strategy for a 3D LiDAR semantic segmentation model that learns structural relationships between classes through abstraction.
This is achieved by implicitly modeling these relationships using a learning rule for hierarchical multi-label classification (HMC).
Our detailed analysis demonstrates that this training strategy not only improves the model's confidence calibration but also retains additional information useful for downstream tasks such as fusion, prediction, and planning.
arXiv Detail & Related papers (2024-04-09T08:49:01Z) - Does Pre-trained Language Model Actually Infer Unseen Links in Knowledge Graph Completion? [32.645448509968226]
Knowledge graphs (KGs) consist of links that describe relationships between entities.
Knowledge Graph Completion (KGC) is a task that infers unseen relationships between entities in a KG.
Traditional embedding-based KGC methods, such as RESCAL, infer missing links using only the knowledge from training data.
Recent Pre-trained Language Model (PLM)-based KGC utilizes knowledge obtained during pre-training.
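Embedding-based KGC methods such as RESCAL score a candidate triple (head, relation, tail) with a bilinear form: each entity is a vector, each relation a matrix, and the score is head^T W_r tail. A minimal NumPy sketch with toy, untrained embeddings (entity names and values are illustrative assumptions):

```python
import numpy as np

# RESCAL-style link scoring: each entity is a vector, each relation a
# matrix; the triple (head, relation, tail) is scored as
# head^T @ W_r @ tail. The embeddings below are random toy values,
# not trained parameters.
rng = np.random.default_rng(0)
dim = 4
entities = {
    "Paris": rng.normal(size=dim),
    "France": rng.normal(size=dim),
}
W_capital_of = rng.normal(size=(dim, dim))  # relation matrix

def rescal_score(head, relation_matrix, tail):
    """Bilinear plausibility score: higher means the link is
    considered more likely by the model."""
    return float(head @ relation_matrix @ tail)

score = rescal_score(entities["Paris"], W_capital_of, entities["France"])
print(round(score, 3))
```

In practice the embeddings are learned by maximizing scores of observed triples over corrupted ones, which is where the "knowledge from training data" limitation noted above comes from.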
arXiv Detail & Related papers (2023-11-15T16:56:49Z) - Set-to-Sequence Ranking-based Concept-aware Learning Path Recommendation [49.85548436111153]
We propose a novel framework named Set-to-Sequence Ranking-based Concept-aware Learning Path Recommendation (SRC)
SRC formulates the recommendation task under a set-to-sequence paradigm.
We conduct extensive experiments on two real-world public datasets and one industrial dataset.
arXiv Detail & Related papers (2023-06-07T08:24:44Z) - UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph(KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z) - Supporting Vision-Language Model Inference with Confounder-pruning Knowledge Prompt [71.77504700496004]
Vision-language models are pre-trained by aligning image-text pairs in a common space to deal with open-set visual concepts.
To boost the transferability of the pre-trained models, recent works adopt fixed or learnable prompts.
However, how and what prompts can improve inference performance remains unclear.
arXiv Detail & Related papers (2022-05-23T07:51:15Z) - SLADE: A Self-Training Framework For Distance Metric Learning [75.54078592084217]
We present a self-training framework, SLADE, to improve retrieval performance by leveraging additional unlabeled data.
We first train a teacher model on the labeled data and use it to generate pseudo labels for the unlabeled data.
We then train a student model on both labels and pseudo labels to generate final feature embeddings.
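The teacher-student self-training loop above can be sketched end to end. This is an illustrative stand-in, not SLADE's actual models: "training" here fits a nearest-centroid classifier on 1-D toy data, purely to show the pseudo-labeling flow.

```python
# Sketch of a self-training loop (teacher -> pseudo labels -> student).
# The nearest-centroid "model" and the toy data are assumptions for
# illustration; SLADE uses deep embedding models.
from collections import defaultdict

def train_centroids(xs, ys):
    """Fit one centroid per class (the 'model')."""
    sums, counts = defaultdict(float), defaultdict(int)
    for x, y in zip(xs, ys):
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    """Assign the class of the nearest centroid."""
    return min(model, key=lambda y: abs(model[y] - x))

# 1) Train a teacher model on the labeled data.
labeled_x, labeled_y = [0.1, 0.2, 0.9, 1.1], ["a", "a", "b", "b"]
teacher = train_centroids(labeled_x, labeled_y)

# 2) Use the teacher to generate pseudo labels for unlabeled data.
unlabeled_x = [0.15, 1.0, 0.95]
pseudo_y = [predict(teacher, x) for x in unlabeled_x]

# 3) Train a student on labels + pseudo labels.
student = train_centroids(labeled_x + unlabeled_x, labeled_y + pseudo_y)
print(pseudo_y)  # ['a', 'b', 'b']
```

The student sees more data than the teacher, which is the mechanism by which unlabeled data improves the final embeddings.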
arXiv Detail & Related papers (2020-11-20T08:26:10Z) - R-VGAE: Relational-variational Graph Autoencoder for Unsupervised Prerequisite Chain Learning [83.13634692459486]
We propose a model called Relational-Variational Graph AutoEncoder (R-VGAE) to predict concept relations within a graph consisting of concept and resource nodes.
Results show that our unsupervised approach outperforms graph-based semi-supervised methods and other baseline methods by up to 9.77% and 10.47% in terms of prerequisite relation prediction accuracy and F1 score.
Our method is notably the first graph-based model that attempts to make use of deep learning representations for the task of unsupervised prerequisite learning.
arXiv Detail & Related papers (2020-04-22T14:48:03Z) - Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs [96.73259297063619]
We consider a novel formulation, zero-shot learning, to free this cumbersome curation.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
arXiv Detail & Related papers (2020-01-08T01:19:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.