PALM: PAnoramic Learning Map Integrating Learning Analytics and Curriculum Map for Scalable Insights Across Courses
- URL: http://arxiv.org/abs/2507.18393v1
- Date: Thu, 24 Jul 2025 13:17:47 GMT
- Title: PALM: PAnoramic Learning Map Integrating Learning Analytics and Curriculum Map for Scalable Insights Across Courses
- Authors: Mahiro Ozaki, Li Chen, Shotaro Naganuma, Valdemar Švábenský, Fumiya Okubo, Atsushi Shimada
- Abstract summary: The PAnoramic Learning Map (PALM) is a learning analytics (LA) dashboard designed to address the scalability challenges of LA. We conducted a system evaluation to assess PALM's effectiveness in two key areas: (1) its impact on students' awareness of their learning behaviors, and (2) its comparative performance against existing systems.
- Score: 5.750960656720476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study proposes and evaluates the PAnoramic Learning Map (PALM), a learning analytics (LA) dashboard designed to address the scalability challenges of LA by integrating curriculum-level information. Traditional LA research has predominantly focused on individual courses or learners and often lacks a framework that considers the relationships between courses and the long-term trajectory of learning. To bridge this gap, PALM was developed to integrate multilayered educational data into a curriculum map, enabling learners to intuitively understand their learning records and academic progression. We conducted a system evaluation to assess PALM's effectiveness in two key areas: (1) its impact on students' awareness of their learning behaviors, and (2) its comparative performance against existing systems. The results indicate that PALM enhances learners' awareness of study planning and reflection, particularly by improving perceived behavioral control through the visual presentation of individual learning histories and statistical trends, which clarify the links between learning actions and outcomes. Although PALM requires ongoing refinement as a system, it received significantly higher evaluations than existing systems in terms of visual appeal and usability. By serving as an information resource with previously inaccessible insights, PALM enhances self-regulated learning and engagement, representing a significant step beyond conventional LA toward a comprehensive and scalable approach.
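The paper's own data model is not reproduced here, so the following is only a minimal, hypothetical Python sketch of the kind of integration the abstract describes: joining a curriculum map (courses with prerequisite links) to per-course learning records so that one learner's history can be read along their academic progression. All class, field, and function names are invented for illustration.

```python
# Minimal sketch (not the authors' implementation): join a curriculum map with
# per-course learning records to build a "panoramic" view for one learner.
from dataclasses import dataclass, field

@dataclass
class Course:
    course_id: str
    title: str
    term: str                          # e.g., "2024-Spring"
    prerequisites: list[str] = field(default_factory=list)

@dataclass
class LearningRecord:
    course_id: str
    student_id: str
    minutes_studied: int
    score: float | None = None         # None until the course is graded

def panoramic_view(courses: dict[str, Course],
                   records: list[LearningRecord],
                   student_id: str) -> list[dict]:
    """Order one student's records along the curriculum (here simply by term
    string) and attach prerequisite links so course-to-course relations stay visible."""
    own = [r for r in records if r.student_id == student_id]
    view = []
    for rec in sorted(own, key=lambda r: courses[r.course_id].term):
        course = courses[rec.course_id]
        view.append({
            "term": course.term,
            "course": course.title,
            "prerequisites": course.prerequisites,
            "minutes_studied": rec.minutes_studied,
            "score": rec.score,
        })
    return view
```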
Related papers
- Adaptive Learning Systems: Personalized Curriculum Design Using LLM-Powered Analytics [14.157213827899342]
Large language models (LLMs) are revolutionizing the field of education by enabling personalized learning experiences tailored to individual student needs. This paper introduces a framework for Adaptive Learning Systems that leverages LLM-powered analytics for personalized curriculum design.
arXiv Detail & Related papers (2025-07-25T04:36:17Z)
- Education-Oriented Graph Retrieval-Augmented Generation for Learning Path Recommendation [56.37740554448673]
We propose Discrimination Learning Enhances Learning Path Recommendation (DLELP) to improve learning path recommendations. We introduce a knowledge concept structure graph generation module that adaptively constructs knowledge concept structure graphs for different educational datasets. We then propose a Discrimination Learning-driven Reinforcement Learning framework, which mitigates the issue of blocked learning paths.
arXiv Detail & Related papers (2025-06-27T15:15:42Z)
- Unveiling the Learning Mind of Language Models: A Cognitive Framework and Empirical Study [50.065744358362345]
Large language models (LLMs) have shown impressive capabilities across tasks such as mathematics, coding, and reasoning. Yet their learning ability, which is crucial for adapting to dynamic environments and acquiring new knowledge, remains underexplored.
arXiv Detail & Related papers (2025-06-16T13:24:50Z)
- Enhanced Bloom's Educational Taxonomy for Fostering Information Literacy in the Era of Large Language Models [16.31527042425208]
This paper proposes an LLM-driven Bloom's Educational Taxonomy that aims to recognize and evaluate students' information literacy (IL) with Large Language Models (LLMs). The framework delineates the IL corresponding to the cognitive abilities required to use LLMs into two distinct stages: Exploration & Action and Creation & Metacognition.
arXiv Detail & Related papers (2025-03-25T08:23:49Z)
- Enhancing LLM Knowledge Learning through Generalization [73.16975077770765]
We show that an LLM's ability to continually predict the same factual knowledge tokens given diverse paraphrased contexts is positively correlated with its capacity to extract that knowledge via question-answering. We propose two strategies to enhance LLMs' ability to predict the same knowledge tokens given varied contexts, thereby enhancing knowledge acquisition.
arXiv Detail & Related papers (2025-03-05T17:56:20Z)
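As a rough, hypothetical illustration of the idea in the entry above (not the paper's actual strategies): the same fact is embedded in several paraphrased contexts before continued pre-training, so the model must predict identical knowledge tokens under varied surroundings. The function and templates below are invented for illustration.

```python
# Hypothetical sketch of paraphrase-style augmentation: one (entity, attribute,
# value) fact rendered in several surface forms that all end in the same tokens.
def paraphrase_contexts(entity: str, attribute: str, value: str) -> list[str]:
    return [
        f"The {attribute} of {entity} is {value}.",
        f"{entity}'s {attribute} is {value}.",
        f"Q: What is the {attribute} of {entity}? A: {value}.",
        f"When asked about the {attribute} of {entity}, the correct answer is {value}.",
    ]

# Example: four contexts that all require predicting the knowledge tokens "Paris".
texts = paraphrase_contexts(entity="France", attribute="capital", value="Paris")
print("\n".join(texts))
```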
- LLMs as Educational Analysts: Transforming Multimodal Data Traces into Actionable Reading Assessment Reports [6.523137821124204]
This study investigates the use of multimodal data sources to derive meaningful reading insights. We employ unsupervised learning techniques to identify distinct reading behavior patterns. A large language model (LLM) synthesizes the derived information into actionable reports for educators.
arXiv Detail & Related papers (2025-03-03T22:34:08Z)
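A minimal sketch of the unsupervised step named in the entry above, under assumed feature names and with scikit-learn rather than the paper's pipeline: cluster per-session reading features into behavior patterns, then summarize each cluster in plain text that an LLM could expand into a report.

```python
# Minimal sketch (assumes numpy and scikit-learn): cluster reading sessions into
# behavior patterns and print a text profile per cluster for an LLM prompt.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical features: [pages_per_minute, highlights, back_jumps, idle_seconds]
sessions = np.array([
    [1.2, 5, 1,  30],
    [0.4, 0, 7, 600],
    [1.0, 3, 2,  60],
    [0.3, 1, 9, 540],
])

X = StandardScaler().fit_transform(sessions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

feature_names = ["pages/min", "highlights", "back-jumps", "idle seconds"]
for cluster in sorted(set(labels)):
    profile = sessions[labels == cluster].mean(axis=0)
    summary = ", ".join(f"{n}: {v:.1f}" for n, v in zip(feature_names, profile))
    print(f"Reading pattern {cluster}: {summary}")
```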
- Investigating the Zone of Proximal Development of Language Models for In-Context Learning [59.91708683601029]
We introduce a learning analytics framework to analyze the in-context learning (ICL) behavior of large language models (LLMs). We adapt the Zone of Proximal Development (ZPD) theory to ICL, measuring the ZPD of LLMs based on model performance on individual examples. Our findings reveal a series of intricate and multifaceted behaviors of ICL, providing new insights into understanding and leveraging this technique.
arXiv Detail & Related papers (2025-02-10T19:36:21Z)
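One way such a ZPD-style measurement could be computed, in the spirit of the entry above: an item falls in the model's ZPD if it is answered incorrectly zero-shot but correctly once in-context demonstrations are provided. The prediction functions are placeholders, not an API from the paper.

```python
# Hypothetical sketch: an item is in the model's ZPD if it fails zero-shot but
# succeeds with in-context demonstrations (i.e., solvable only "with help").
from typing import Callable

def zpd_items(
    items: list[dict],                                # each: {"question", "answer"}
    predict_zero_shot: Callable[[str], str],          # placeholder model call
    predict_with_icl: Callable[[str, list[dict]], str],
    demos: list[dict],
) -> list[dict]:
    in_zpd = []
    for item in items:
        solo = predict_zero_shot(item["question"]) == item["answer"]
        helped = predict_with_icl(item["question"], demos) == item["answer"]
        if not solo and helped:
            in_zpd.append(item)
    return in_zpd
```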
- Learning Beyond the Surface: How Far Can Continual Pre-Training with LoRA Enhance LLMs' Domain-Specific Insight Learning? [4.390998479503661]
Large Language Models (LLMs) have demonstrated remarkable performance on various tasks. However, their ability to extract and internalize deeper insights from domain-specific datasets remains underexplored. This study investigates how continual pre-training can enhance LLMs' capacity for insight learning.
arXiv Detail & Related papers (2025-01-29T18:40:32Z)
- Educational Personalized Learning Path Planning with Large Language Models [0.0]
This paper proposes a novel approach integrating Large Language Models (LLMs) with prompt engineering to address the challenges of personalized learning path planning.
By designing prompts that incorporate learner-specific information, our method guides LLMs like Llama-2-70B and GPT-4 to generate personalized, coherent, and pedagogically sound learning paths.
arXiv Detail & Related papers (2024-07-16T14:32:56Z)
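A toy sketch of a learner-aware prompt in the spirit of the entry above; the fields and wording are invented, not taken from the paper.

```python
# Hypothetical sketch: fold learner-specific information into a single prompt
# that an LLM could answer with an ordered learning path.
def learning_path_prompt(goal: str, known_topics: list[str], weekly_hours: int, level: str) -> str:
    return (
        "You are a curriculum planner.\n"
        f"Learner level: {level}. Available study time: {weekly_hours} hours/week.\n"
        f"Already mastered: {', '.join(known_topics) or 'nothing yet'}.\n"
        f"Goal: {goal}.\n"
        "Propose an ordered learning path of 5-8 steps. For each step give a topic, "
        "why it comes at that point, and a rough time estimate."
    )

print(learning_path_prompt(
    goal="pass an introductory statistics course",
    known_topics=["basic algebra"],
    weekly_hours=6,
    level="undergraduate beginner",
))
```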
- Exploring Federated Unlearning: Review, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems. This paper examines existing federated unlearning approaches, assessing their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy. We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- A Hierarchy-based Analysis Approach for Blended Learning: A Case Study with Chinese Students [12.533646830917213]
This paper proposes a hierarchy-based approach for evaluating blended learning.
The results show that cognitive engagement and emotional engagement play a more important role in blended learning evaluation.
arXiv Detail & Related papers (2023-09-19T00:09:00Z)
- A Domain-Agnostic Approach for Characterization of Lifelong Learning Systems [128.63953314853327]
"Lifelong Learning" systems are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability.
We show that the proposed suite of metrics can inform the development of varied and complex Lifelong Learning systems.
arXiv Detail & Related papers (2023-01-18T21:58:54Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
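The last entry's framing can be made concrete with a toy instantiation (not the authors' kernel-based construction): each task's support set is summarized into a feature vector, the parameters of a model fitted to that task serve as the label, and a meta-level regressor is trained on these (feature, label) pairs, then used to predict parameters for a new task. All statistics and model choices below are illustrative only; numpy is assumed.

```python
# Toy sketch of "meta-learning as supervised learning": (task dataset -> feature),
# (task-specific fitted model -> label), then an ordinary regressor at the meta level.
import numpy as np

rng = np.random.default_rng(0)

def task_to_sample(X: np.ndarray, y: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Feature = simple dataset statistics (input means and input-output correlations);
    label = least-squares weights of a linear model fitted to this task."""
    feature = np.concatenate([X.mean(axis=0), (X * y[:, None]).mean(axis=0)])
    label, *_ = np.linalg.lstsq(X, y, rcond=None)
    return feature, label

# Build a meta-dataset from many synthetic linear-regression tasks.
features, labels = [], []
for _ in range(200):
    w_true = rng.normal(size=3)                 # task-specific ground truth
    X = rng.normal(size=(20, 3))
    y = X @ w_true + 0.1 * rng.normal(size=20)
    f, l = task_to_sample(X, y)
    features.append(f)
    labels.append(l)
F, L = np.array(features), np.array(labels)

# Meta-learner: least squares from task features to task parameters.
W_meta, *_ = np.linalg.lstsq(F, L, rcond=None)

# New task: predict its parameters directly from its support-set features.
w_new = rng.normal(size=3)
X_new = rng.normal(size=(20, 3))
y_new = X_new @ w_new + 0.1 * rng.normal(size=20)
f_new, _ = task_to_sample(X_new, y_new)
print("predicted task weights:", np.round(f_new @ W_meta, 2))
```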
This list is automatically generated from the titles and abstracts of the papers on this site.