Top-Down vs. Bottom-Up Approaches for Automatic Educational Knowledge Graph Construction in CourseMapper
- URL: http://arxiv.org/abs/2505.10069v1
- Date: Thu, 15 May 2025 08:11:48 GMT
- Title: Top-Down vs. Bottom-Up Approaches for Automatic Educational Knowledge Graph Construction in CourseMapper
- Authors: Qurat Ul Ain, Mohamed Amine Chatti, Amr Shakhshir, Jean Qussa, Rawaa Alatrash, Shoeb Joarder,
- Abstract summary: This study compares Top-down and Bottom-up approaches for automatic EduKG construction. Results indicate that the Bottom-up approach outperforms the Top-down approach in accurately identifying and mapping key knowledge concepts.
- Score: 0.5937476291232802
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The automatic construction of Educational Knowledge Graphs (EduKGs) is crucial for modeling domain knowledge in digital learning environments, particularly in Massive Open Online Courses (MOOCs). However, identifying the most effective approach for constructing accurate EduKGs remains a challenge. This study compares Top-down and Bottom-up approaches for automatic EduKG construction, evaluating their effectiveness in capturing and structuring knowledge concepts from learning materials in our MOOC platform CourseMapper. Through a user study and expert validation using Simple Random Sampling (SRS), results indicate that the Bottom-up approach outperforms the Top-down approach in accurately identifying and mapping key knowledge concepts. To further enhance EduKG accuracy, we integrate a Human-in-the-Loop approach, allowing course moderators to review and refine the EduKG before publication. This structured comparison provides a scalable framework for improving knowledge representation in MOOCs, ultimately supporting more personalized and adaptive learning experiences.
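As a rough illustration of the comparison described above, the sketch below contrasts a Top-down pass, which only admits concepts already present in a predefined domain taxonomy, with a Bottom-up pass, which surfaces candidate concepts from the learning material itself, and then draws a simple random sample of the result for expert review. This is a minimal, hypothetical Python sketch, not the CourseMapper implementation: the tokenizer, the frequency-based concept scoring, and all names are placeholder assumptions.

```python
# Hypothetical sketch, not the CourseMapper implementation: contrast a Top-down
# pass (map slide text onto a predefined concept taxonomy) with a Bottom-up pass
# (surface candidate concepts from the text itself), then take a Simple Random
# Sample (SRS) of the extracted concepts for expert validation.
import random
import re
from collections import Counter

STOPWORDS = {"the", "and", "for", "with", "that", "this", "are", "from", "such"}

def tokenize(text: str) -> list[str]:
    """Lowercase and split into alphabetic tokens (placeholder tokenizer)."""
    return re.findall(r"[a-z][a-z-]+", text.lower())

def top_down_concepts(slide_text: str, taxonomy: set[str]) -> set[str]:
    """Top-down: keep only tokens that already appear in a predefined taxonomy."""
    return {tok for tok in tokenize(slide_text) if tok in taxonomy}

def bottom_up_concepts(slide_text: str, top_k: int = 5) -> set[str]:
    """Bottom-up: surface the most frequent non-trivial terms as candidate concepts."""
    counts = Counter(
        tok for tok in tokenize(slide_text)
        if tok not in STOPWORDS and len(tok) > 3
    )
    return {term for term, _ in counts.most_common(top_k)}

def srs_sample(concepts: set[str], n: int, seed: int = 42) -> list[str]:
    """Simple Random Sampling of concepts to hand to domain experts for review."""
    pool = sorted(concepts)
    return random.Random(seed).sample(pool, min(n, len(pool)))

if __name__ == "__main__":
    slide = ("Knowledge graphs model entities and relations; embedding methods "
             "such as TransE learn vector representations of entities and relations.")
    taxonomy = {"knowledge", "graph", "entity", "relation", "embedding"}  # assumed, tiny
    print("Top-down:  ", top_down_concepts(slide, taxonomy))
    print("Bottom-up: ", bottom_up_concepts(slide))
    print("SRS review:", srs_sample(bottom_up_concepts(slide), n=3))
```

In this toy setting, the Top-down pass can never recover terms missing from the taxonomy, while the Bottom-up pass picks up whatever the material itself emphasizes, which is one intuition for why the Bottom-up approach identified key knowledge concepts more accurately in the study. The sampled subset stands in for the SRS-based expert validation, and the subsequent Human-in-the-Loop step would let course moderators accept or correct the surviving concepts before the EduKG is published.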
Related papers
- Path Pooling: Training-Free Structure Enhancement for Efficient Knowledge Graph Retrieval-Augmented Generation [19.239478003379478]
Large Language Models suffer from hallucinations and knowledge deficiencies in real-world applications. We propose path pooling, a training-free strategy that introduces structure information through a novel path-centric pooling operation. It seamlessly integrates into existing KG-RAG methods in a plug-and-play manner, enabling richer structure information utilization.
arXiv Detail & Related papers (2025-03-07T07:48:30Z)
- Graph Foundation Models for Recommendation: A Comprehensive Survey [55.70529188101446]
Graph neural networks (GNNs) leverage the graph-based structure of user-item relationships, while large language models (LLMs) are designed to process and comprehend natural language, making both approaches highly effective and widely adopted. Recent research has focused on graph foundation models (GFMs). GFMs integrate the strengths of GNNs and LLMs to model complex RS problems more efficiently by leveraging the graph-based structure of user-item relationships alongside textual understanding.
arXiv Detail & Related papers (2025-02-12T12:13:51Z)
- LLM-Assisted Knowledge Graph Completion for Curriculum and Domain Modelling in Personalized Higher Education Recommendations [0.0]
This paper introduces an innovative approach to higher education curriculum modelling. Our research focuses on modelling university subjects and linking their topics to corresponding domain models. We develop domain, curriculum, and user models for university modules and stakeholders.
arXiv Detail & Related papers (2025-01-21T17:13:13Z)
- KBAlign: Efficient Self Adaptation on Specific Knowledge Bases [73.34893326181046]
We present KBAlign, a self-supervised framework that enhances RAG systems through efficient model adaptation. Our key insight is to leverage the model's intrinsic capabilities for knowledge alignment through two innovative mechanisms. Experiments demonstrate that KBAlign can achieve 90% of the performance gain obtained through GPT-4-supervised adaptation.
arXiv Detail & Related papers (2024-11-22T08:21:03Z)
- Learn To Learn More Precisely [30.825058308218047]
"Learn to learn more precisely" aims to make the model learn precise target knowledge from data.
We propose a simple and effective meta-learning framework named Meta Self-Distillation (MSD) to maximize the consistency of learned knowledge.
MSD exhibits remarkable performance in few-shot classification tasks in both standard and augmented scenarios.
arXiv Detail & Related papers (2024-08-08T17:01:26Z)
- Structure-aware Domain Knowledge Injection for Large Language Models [38.08691252042949]
StructTuning is a methodology to transform Large Language Models (LLMs) into domain specialists. It reduces the required training corpus to a mere 5% while achieving 100% of traditional knowledge injection performance.
arXiv Detail & Related papers (2024-07-23T12:38:48Z)
- Subgraph-Aware Training of Language Models for Knowledge Graph Completion Using Structure-Aware Contrastive Learning [4.741342276627672]
Fine-tuning pre-trained language models (PLMs) has recently shown potential to improve knowledge graph completion (KGC). We propose a Subgraph-Aware Training framework for KGC (SATKGC) with two ideas: (i) subgraph-aware mini-batching to encourage hard negative sampling and to mitigate an imbalance in the frequency of entity occurrences during training, and (ii) new contrastive learning to focus more on harder in-batch negative triples and harder positive triples in terms of the structural properties of the knowledge graph.
arXiv Detail & Related papers (2024-07-17T16:25:37Z)
- Hierarchical and Decoupled BEV Perception Learning Framework for Autonomous Driving [52.808273563372126]
This paper proposes a novel hierarchical BEV perception paradigm, aiming to provide a library of fundamental perception modules and a user-friendly graphical interface.
We adopt a Pretrain-Finetune strategy to effectively utilize large-scale public datasets and streamline development processes.
We also present a Multi-Module Learning (MML) approach, enhancing performance through synergistic and iterative training of multiple models.
arXiv Detail & Related papers (2024-07-17T11:17:20Z)
- Finding Paths for Explainable MOOC Recommendation: A Learner Perspective [2.4775868218890484]
We propose an explainable recommendation system for Massive Open Online Courses (MOOCs) that uses graph reasoning.
To validate the practical implications of our approach, we conducted a user study examining user perceptions.
We demonstrate the generalizability of our approach by conducting experiments on two educational datasets.
arXiv Detail & Related papers (2023-12-11T15:27:22Z)
- Set-to-Sequence Ranking-based Concept-aware Learning Path Recommendation [49.85548436111153]
We propose a novel framework named Set-to-Sequence Ranking-based Concept-aware Learning Path Recommendation (SRC).
SRC formulates the recommendation task under a set-to-sequence paradigm.
We conduct extensive experiments on two real-world public datasets and one industrial dataset.
arXiv Detail & Related papers (2023-06-07T08:24:44Z)
- Learning Large-scale Neural Fields via Context Pruned Meta-Learning [60.93679437452872]
We introduce an efficient optimization-based meta-learning technique for large-scale neural field training.
We show how gradient re-scaling at meta-test time allows the learning of extremely high-quality neural fields.
Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals.
arXiv Detail & Related papers (2023-02-01T17:32:16Z)
- Great Truths are Always Simple: A Rather Simple Knowledge Encoder for Enhancing the Commonsense Reasoning Capacity of Pre-Trained Models [89.98762327725112]
Commonsense reasoning in natural language is a desired ability of artificial intelligent systems.
For solving complex commonsense reasoning tasks, a typical solution is to enhance pre-trained language models (PTMs) with a knowledge-aware graph neural network (GNN) encoder.
Despite their effectiveness, these approaches are built on heavy architectures and cannot clearly explain how external knowledge resources improve the reasoning capacity of PTMs.
arXiv Detail & Related papers (2022-05-04T01:27:36Z)
- Knowledge Distillation Meets Self-Supervision [109.6400639148393]
Knowledge distillation involves extracting "dark knowledge" from a teacher network to guide the learning of a student network.
We show that the seemingly different self-supervision task can serve as a simple yet powerful solution.
By exploiting the similarity between those self-supervision signals as an auxiliary task, one can effectively transfer the hidden information from the teacher to the student.
arXiv Detail & Related papers (2020-06-12T12:18:52Z)