Generative Large Language Models for Knowledge Representation: A Systematic Review of Concept Map Generation
- URL: http://arxiv.org/abs/2509.14554v1
- Date: Thu, 18 Sep 2025 02:36:54 GMT
- Title: Generative Large Language Models for Knowledge Representation: A Systematic Review of Concept Map Generation
- Authors: Xiaoming Zhai,
- Abstract summary: The rise of generative large language models (LLMs) has opened new opportunities for automating knowledge representation through concept maps. This review systematically synthesizes the emerging body of research on LLM-enabled concept map generation. Findings reveal six major methodological categories: human-in-the-loop systems, weakly supervised learning models, fine-tuned domain-specific LLMs, pre-trained LLMs with prompt engineering, hybrid systems integrating knowledge bases, and modular frameworks combining symbolic and statistical tools.
- Score: 1.163826615891678
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The rise of generative large language models (LLMs) has opened new opportunities for automating knowledge representation through concept maps, a long-standing pedagogical tool valued for fostering meaningful learning and higher-order thinking. Traditional construction of concept maps is labor-intensive, requiring significant expertise and time, limiting their scalability in education. This review systematically synthesizes the emerging body of research on LLM-enabled concept map generation, focusing on two guiding questions: (a) What methods and technical features of LLMs are employed to construct concept maps? (b) What empirical evidence exists to validate their educational utility? Through a comprehensive search across major databases and AI-in-education conference proceedings, 28 studies meeting rigorous inclusion criteria were analyzed using thematic synthesis. Findings reveal six major methodological categories: human-in-the-loop systems, weakly supervised learning models, fine-tuned domain-specific LLMs, pre-trained LLMs with prompt engineering, hybrid systems integrating knowledge bases, and modular frameworks combining symbolic and statistical tools. Validation strategies ranged from quantitative metrics (precision, recall, F1-score, semantic similarity) to qualitative evaluations (expert review, learner feedback). Results indicate LLM-generated maps hold promise for scalable, adaptive, and pedagogically relevant knowledge visualization, though challenges remain regarding validity, interpretability, multilingual adaptability, and classroom integration. Future research should prioritize interdisciplinary co-design, empirical classroom trials, and alignment with instructional practices to realize their full educational potential.
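To make the quantitative validation strategies named in the abstract concrete, the sketch below scores a hypothetical LLM-generated concept map against a hypothetical expert reference map using precision, recall, and F1 over (concept, relation, concept) triples. It is a minimal illustration, not a method taken from any reviewed study; the passage topic, triples, and function names are assumptions introduced here.

```python
# Minimal sketch: triple-level validation of an LLM-generated concept map.
# Both maps are represented as sets of (concept, relation, concept) triples,
# and agreement is scored with precision, recall, and F1.
# All triples below are hypothetical illustrations.

def f1_scores(predicted: set, reference: set) -> dict:
    """Exact-match precision, recall, and F1 between two sets of triples."""
    true_positives = len(predicted & reference)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical LLM output for a short biology passage.
llm_map = {
    ("photosynthesis", "occurs_in", "chloroplast"),
    ("photosynthesis", "produces", "glucose"),
    ("photosynthesis", "requires", "sunlight"),
}

# Hypothetical expert-constructed reference map.
expert_map = {
    ("photosynthesis", "occurs_in", "chloroplast"),
    ("photosynthesis", "produces", "oxygen"),
    ("photosynthesis", "requires", "sunlight"),
}

print(f1_scores(llm_map, expert_map))
# Exact matching is strict; several reviewed studies instead align triples by
# embedding-based semantic similarity, which is omitted in this sketch.
```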
Related papers
- Multi-Agent Learning Path Planning via LLMs [10.288666777827578]
This study proposes a novel Multi-Agent Learning Path Planning (MALPP) framework powered by large language models (LLMs). The framework includes three task-specific agents: a learner analytics agent, a path planning agent, and a reflection agent. Experiments conducted on the MOOCX dataset using seven LLMs show that MALPP significantly outperforms baseline models in path quality, knowledge sequence consistency, and cognitive load alignment.
arXiv Detail & Related papers (2026-01-24T07:13:08Z) - CLLMRec: LLM-powered Cognitive-Aware Concept Recommendation via Semantic Alignment and Prerequisite Knowledge Distillation [3.200298153814017]
The growth of Massive Open Online Courses (MOOCs) presents significant challenges for personalized learning, where concept recommendation is crucial. Existing approaches typically rely on heterogeneous information networks or knowledge graphs to capture conceptual relationships, combined with knowledge tracing models to assess learners' cognitive states. This paper proposes CLLMRec, a novel framework that leverages Large Language Models to generate personalized concept recommendations.
arXiv Detail & Related papers (2025-11-21T08:37:39Z) - Simulating Students with Large Language Models: A Review of Architecture, Mechanisms, and Role Modelling in Education with Generative AI [0.8703455323398351]
A review of studies using large language models (LLMs) to simulate student behaviour across educational environments. We review current evidence on the capacity of LLM-based agents to emulate learner archetypes, respond to instructional inputs, and interact within multi-agent classroom scenarios. We examine the implications of such systems for curriculum development, instructional evaluation, and teacher training.
arXiv Detail & Related papers (2025-11-08T17:23:13Z) - A Survey on Generative Recommendation: Data, Model, and Tasks [55.36322811257545]
Generative recommendation reconceptualizes recommendation as a generation task rather than discriminative scoring. This survey provides a comprehensive examination through a unified tripartite framework spanning data, model, and task dimensions. We identify five key advantages: world knowledge integration, natural language understanding, reasoning capabilities, scaling laws, and creative generation.
arXiv Detail & Related papers (2025-10-31T04:02:58Z) - LLM-empowered knowledge graph construction: A survey [0.0]
Knowledge Graphs (KGs) have long served as a fundamental infrastructure for structured knowledge representation and reasoning. With the advent of Large Language Models (LLMs), the construction of KGs has entered a new paradigm, shifting from rule-based and statistical pipelines to language-driven and generative frameworks.
arXiv Detail & Related papers (2025-10-23T08:43:28Z) - Explain Before You Answer: A Survey on Compositional Visual Reasoning [74.27548620675748]
Compositional visual reasoning has emerged as a key research frontier in multimodal AI. This survey systematically reviews 260+ papers from top venues (CVPR, ICCV, NeurIPS, ICML, ACL, etc.). We then catalog 60+ benchmarks and corresponding metrics that probe compositional visual reasoning along dimensions such as grounding accuracy, chain-of-thought faithfulness, and high-resolution perception.
arXiv Detail & Related papers (2025-08-24T11:01:51Z) - ELMES: An Automated Framework for Evaluating Large Language Models in Educational Scenarios [23.549720214649476]
Large Language Models (LLMs) present transformative opportunities for education, generating numerous novel application scenarios. Current benchmarks predominantly measure general intelligence rather than pedagogical capabilities. We introduce ELMES, an open-source automated evaluation framework specifically designed for assessing LLMs in educational settings.
arXiv Detail & Related papers (2025-07-27T15:20:19Z) - Chain of Methodologies: Scaling Test Time Computation without Training [77.85633949575046]
Large Language Models (LLMs) often struggle with complex reasoning tasks due to insufficient in-depth insights in their training data. This paper introduces the Chain of Methodologies (CoM) framework, which enhances structured thinking by integrating human methodological insights.
arXiv Detail & Related papers (2025-06-08T03:46:50Z) - Applications of Large Language Model Reasoning in Feature Generation [0.0]
Large Language Models (LLMs) have revolutionized natural language processing through their state-of-the-art reasoning capabilities. This paper explores the convergence of LLM reasoning techniques and feature generation for machine learning tasks. The paper categorizes LLM-based feature generation methods across various domains including finance, healthcare, and text analytics.
arXiv Detail & Related papers (2025-03-15T04:18:01Z) - Coding for Intelligence from the Perspective of Category [66.14012258680992]
Coding targets compressing and reconstructing data, while intelligence centers on model learning and prediction.
Recent trends demonstrate the potential homogeneity of these two fields.
We propose a novel problem of Coding for Intelligence from the category theory view.
arXiv Detail & Related papers (2024-07-01T07:05:44Z) - Enhancing LLM-Based Feedback: Insights from Intelligent Tutoring Systems and the Learning Sciences [0.0]
This work advocates careful and caring AIED research by reviewing previous research on feedback generation in intelligent tutoring systems (ITSs).
The main contributions of this paper include advocacy for applying more cautious, theoretically grounded methods to feedback generation in the era of generative AI.
arXiv Detail & Related papers (2024-05-07T20:09:18Z) - Semi-Supervised and Unsupervised Deep Visual Learning: A Survey [76.2650734930974]
Semi-supervised learning and unsupervised learning offer promising paradigms to learn from an abundance of unlabeled visual data.
We review the recent advanced deep learning algorithms on semi-supervised learning (SSL) and unsupervised learning (UL) for visual recognition from a unified perspective.
arXiv Detail & Related papers (2022-08-24T04:26:21Z) - Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.