PRISM: A Personalized, Rapid, and Immersive Skill Mastery framework for personalizing experiential learning through Generative AI
- URL: http://arxiv.org/abs/2411.14433v2
- Date: Sat, 26 Jul 2025 04:50:41 GMT
- Title: PRISM: A Personalized, Rapid, and Immersive Skill Mastery framework for personalizing experiential learning through Generative AI
- Authors: Yu-Zheng Lin, Karan Patel, Ahmed Hussain J Alhamadah, Bono Po-Jen Shih, Matthew William Redondo, David Rafael Vidal Corona, Banafsheh Saber Latibari, Jesus Pacheco, Soheil Salehi, Pratik Satam
- Abstract summary: PRISM is a scalable framework leveraging gen-AI and Digital Twins (DTs) to deliver adaptive, experiential learning. PRISM integrates sentiment analysis and Retrieval-Augmented Generation (RAG) to monitor learner comprehension and dynamically adjust content to meet course objectives. We show that GPT-4 achieves a 91 percent F1 score in zero-shot sentiment analysis of teacher-student dialogues, while GPT-3.5 performs robustly in informal language contexts.
- Score: 0.4022583182501593
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The rise of generative AI (gen-AI) is transforming industries, particularly in education and workforce training. This chapter introduces PRISM (Personalized, Rapid, and Immersive Skill Mastery), a scalable framework leveraging gen-AI and Digital Twins (DTs) to deliver adaptive, experiential learning. PRISM integrates sentiment analysis and Retrieval-Augmented Generation (RAG) to monitor learner comprehension and dynamically adjust content to meet course objectives. We further present the Multi-Fidelity Digital Twin for Education (MFDT-E) framework, aligning DT fidelity levels with Bloom's Taxonomy and the Kirkpatrick evaluation model to support undergraduate, master's, and doctoral training. Experimental validation shows that GPT-4 achieves 91 percent F1 in zero-shot sentiment analysis of teacher-student dialogues, while GPT-3.5 performs robustly in informal language contexts. Additionally, the system's effectiveness and scalability for immersive Industry 4.0 training are demonstrated through four VR modules: Home Scene, Factory Floor Tour, Capping Station DT, and PPE Inspection Training. These results highlight the potential of integrating generative AI with digital twins to enable personalized, efficient, and scalable education.
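The abstract's zero-shot sentiment analysis of teacher-student dialogues can be sketched as a prompt-and-parse pair. This is a minimal illustration, not the paper's actual prompt: the function names, prompt wording, and label set are assumptions, and the LLM call itself is left out so the sketch stays self-contained.

```python
# Hypothetical sketch of zero-shot sentiment classification of a
# student utterance, in the spirit of the GPT-4 experiment described
# above. Prompt wording and helper names are illustrative only.

def build_sentiment_prompt(utterance: str) -> str:
    """Compose a zero-shot prompt asking an LLM for a one-word label."""
    return (
        "Classify the sentiment of the following student utterance "
        "as exactly one word, 'positive' or 'negative'.\n\n"
        f"Utterance: {utterance}\nSentiment:"
    )

def parse_label(completion: str) -> str:
    """Normalize a raw model completion to 'positive'/'negative', else 'unknown'."""
    text = completion.strip().lower()
    for label in ("positive", "negative"):
        if text.startswith(label):
            return label
    return "unknown"
```

In practice `build_sentiment_prompt` would be sent to the chosen model and the completion passed through `parse_label` before scoring F1 against human annotations.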
Related papers
- Cultivating Helpful, Personalized, and Creative AI Tutors: A Framework for Pedagogical Alignment using Reinforcement Learning [17.558663729465692]
EduAlign is a framework designed to guide large language models (LLMs) toward becoming more effective and responsible educational assistants. In the first stage, we curate a dataset of 8k educational interactions and annotate them, both manually and automatically, along three key educational dimensions: Helpfulness, Personalization, and Creativity. In the second stage, we leverage HPC-RM as a reward signal to fine-tune a pre-trained LLM using Group Relative Policy Optimization (GRPO) on a set of 2k diverse prompts.
arXiv Detail & Related papers (2025-07-27T15:56:29Z) - Evolution of AI in Education: Agentic Workflows [2.1681971652284857]
Artificial intelligence (AI) has transformed various aspects of education.
Large language models (LLMs) are driving advancements in automated tutoring, assessment, and content generation.
To address these limitations and foster more sustainable technological practices, AI agents have emerged as a promising new avenue for educational innovation.
arXiv Detail & Related papers (2025-04-25T13:44:57Z) - A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems [93.8285345915925]
Reasoning is a fundamental cognitive process that enables logical inference, problem-solving, and decision-making.
With the rapid advancement of large language models (LLMs), reasoning has emerged as a key capability that distinguishes advanced AI systems.
We categorize existing methods along two dimensions: (1) Regimes, which define the stage at which reasoning is achieved; and (2) Architectures, which determine the components involved in the reasoning process.
arXiv Detail & Related papers (2025-04-12T01:27:49Z) - Personalized Education with Generative AI and Digital Twins: VR, RAG, and Zero-Shot Sentiment Analysis for Industry 4.0 Workforce Development [0.609125634461969]
gAI-PT4I4 employs sentiment analysis to assess student comprehension, leveraging generative AI and a finite automaton to tailor learning experiences. The framework integrates low-fidelity Digital Twins for VR-based training, featuring an Interactive Tutor. It uses zero-shot sentiment analysis with LLMs and prompt engineering, achieving 86% accuracy in classifying student-teacher interactions as positive or negative.
arXiv Detail & Related papers (2025-02-19T20:11:19Z) - LLM-powered Multi-agent Framework for Goal-oriented Learning in Intelligent Tutoring System [54.71619734800526]
GenMentor is a multi-agent framework designed to deliver goal-oriented, personalized learning within ITS. It maps learners' goals to required skills using a fine-tuned LLM trained on a custom goal-to-skill dataset. GenMentor tailors learning content with an exploration-drafting-integration mechanism to align with individual learner needs.
arXiv Detail & Related papers (2025-01-27T03:29:44Z) - Multi-Stage Knowledge Integration of Vision-Language Models for Continual Learning [79.46570165281084]
We propose a Multi-Stage Knowledge Integration network (MulKI) to emulate the human learning process in distillation methods.
MulKI achieves this through four stages, including Eliciting Ideas, Adding New Ideas, Distinguishing Ideas, and Making Connections.
Our method demonstrates significant improvements in maintaining zero-shot capabilities while supporting continual learning across diverse downstream tasks.
arXiv Detail & Related papers (2024-11-11T07:36:19Z) - When LLMs Learn to be Students: The SOEI Framework for Modeling and Evaluating Virtual Student Agents in Educational Interaction [12.070907646464537]
We propose the SOEI framework for constructing and evaluating personality-aligned Virtual Student Agents (LVSAs) in classroom scenarios. We generate five LVSAs based on Big Five traits through LoRA fine-tuning and expert-informed prompt design. Our results provide: (1) an educationally and psychologically grounded generation pipeline for LLM-based student agents; (2) a hybrid, scalable evaluation framework for behavioral realism; and (3) empirical insights into the pedagogical utility of LVSAs in shaping instructional adaptation.
arXiv Detail & Related papers (2024-10-21T07:18:24Z) - Learning Paradigms and Modelling Methodologies for Digital Twins in Process Industry [1.1060425537315088]
Digital Twins (DTs) are virtual replicas of physical manufacturing systems that combine sensor data with sophisticated data-based or physics-based models, or a combination thereof, to tackle a variety of industrial-relevant tasks like process monitoring, predictive control or decision support.
The backbone of a DT, i.e. the concrete modelling methodologies and architectural frameworks supporting these models, is complex, diverse, and fast-evolving, necessitating a thorough understanding of the latest state-of-the-art methods and trends to stay on top of a highly competitive market.
arXiv Detail & Related papers (2024-07-02T14:05:10Z) - Enhancing Deep Knowledge Tracing via Diffusion Models for Personalized Adaptive Learning [1.2248793682283963]
This study aims to tackle data shortage issues in student learning records to enhance Deep Knowledge Tracing (DKT) performance for personalized adaptive learning (PAL).
It employs TabDDPM, a diffusion model, to generate synthetic educational records to augment training data for enhancing DKT.
The experimental results demonstrate that the AI-generated data by TabDDPM significantly improves DKT performance.
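The augmentation step described above can be sketched generically: synthetic records are mixed into the real training set before fitting the DKT model. The function below is an illustrative placeholder, not TabDDPM itself; the record format and the `mix_ratio` parameter are assumptions.

```python
# Generic sketch of training-data augmentation with synthetic records.
# In the paper the synthetic rows come from TabDDPM; here they are
# simply an input list, since the diffusion step is out of scope.

def augment_records(real, synthetic, mix_ratio: float = 1.0):
    """Return real records plus up to mix_ratio * len(real) synthetic ones."""
    n_syn = min(len(synthetic), int(mix_ratio * len(real)))
    return list(real) + list(synthetic[:n_syn])
```

Capping the synthetic share via `mix_ratio` is one simple way to keep generated data from dominating the scarce real records.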
arXiv Detail & Related papers (2024-04-25T00:23:20Z) - Learning to Maximize Mutual Information for Chain-of-Thought Distillation [13.660167848386806]
Distilling Step-by-Step (DSS) has demonstrated promise by imbuing smaller models with the superior reasoning capabilities of their larger counterparts.
However, DSS overlooks the intrinsic relationship between the two training tasks, leading to ineffective integration of CoT knowledge with the task of label prediction.
We propose a variational approach to solve this problem using a learning-based method.
arXiv Detail & Related papers (2024-03-05T22:21:45Z) - Predicting Learning Performance with Large Language Models: A Study in Adult Literacy [18.48602704139462]
This study investigates the application of advanced AI models, including Large Language Models (LLMs), for predicting learning performance in adult literacy programs in ITSs.
We evaluate the predictive capabilities of GPT-4 versus traditional machine learning methods in predicting learning performance through five-fold cross-validation techniques.
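The five-fold cross-validation protocol used for that comparison can be sketched with a plain index splitter. This is a minimal, dataset-agnostic illustration; the function name and the handling of uneven fold sizes are my own choices, not the study's code.

```python
# Minimal sketch of k-fold cross-validation index generation: split n
# samples into k roughly equal, disjoint test folds, training on the rest.

def k_fold_indices(n: int, k: int = 5):
    """Yield (train_idx, test_idx) pairs covering all n samples once as test."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size
```

Each model (GPT-4-based or classical) would then be fit on `train` and scored on `test`, with the five scores averaged.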
arXiv Detail & Related papers (2024-03-04T08:14:07Z) - 3DG: A Framework for Using Generative AI for Handling Sparse Learner Performance Data From Intelligent Tutoring Systems [22.70004627901319]
We introduce the 3DG framework (3-Dimensional tensor for Densification and Generation), a novel approach combining tensor factorization with advanced generative models.
The framework effectively generated scalable, personalized simulations of learning performance.
arXiv Detail & Related papers (2024-01-29T22:34:01Z) - Forging Vision Foundation Models for Autonomous Driving: Challenges, Methodologies, and Opportunities [59.02391344178202]
Vision foundation models (VFMs) serve as potent building blocks for a wide range of AI applications.
The scarcity of comprehensive training data, the need for multi-sensor integration, and the diverse task-specific architectures pose significant obstacles to the development of VFMs.
This paper delves into the critical challenge of forging VFMs tailored specifically for autonomous driving, while also outlining future directions.
arXiv Detail & Related papers (2024-01-16T01:57:24Z) - Integrating AI and Learning Analytics for Data-Driven Pedagogical Decisions and Personalized Interventions in Education [0.2812395851874055]
This research study explores the conceptualization, development, and deployment of an innovative learning analytics tool.
By analyzing critical data points such as students' stress levels, curiosity, confusion, agitation, topic preferences, and study methods, the tool provides a comprehensive view of the learning environment.
This research underscores AI's role in shaping personalized, data-driven education.
arXiv Detail & Related papers (2023-12-15T06:00:26Z) - Current Trends in Digital Twin Development, Maintenance, and Operation: An Interview Study [0.2871849986181679]
Digital twins (DTs) are often defined as a pairing of a physical entity and a corresponding virtual entity (VE).
We performed a semi-structured interview study with 19 professionals from industry and academia who are closely associated with different lifecycle stages of digital twins.
arXiv Detail & Related papers (2023-06-16T12:19:28Z) - GPT-4 Technical Report [116.90398195245983]
GPT-4 is a large-scale, multimodal model which can accept image and text inputs and produce text outputs.
It exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers.
arXiv Detail & Related papers (2023-03-15T17:15:04Z) - Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
It is theoretically analyzed that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z) - A Unified Continuous Learning Framework for Multi-modal Knowledge Discovery and Pre-training [73.7507857547549]
We propose to unify knowledge discovery and multi-modal pre-training in a continuous learning framework.
For knowledge discovery, a pre-trained model is used to identify cross-modal links on a graph.
For model pre-training, the knowledge graph is used as the external knowledge to guide the model updating.
arXiv Detail & Related papers (2022-06-11T16:05:06Z) - Unifying Language Learning Paradigms [96.35981503087567]
We present a unified framework for pre-training models that are universally effective across datasets and setups.
We show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective.
Our model also achieves strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization.
arXiv Detail & Related papers (2022-05-10T19:32:20Z) - Digital Twin: From Concept to Practice [1.3633989508250934]
This paper proposes a framework to help practitioners select an appropriate level of sophistication in a Digital Twin.
Three real-life case studies illustrate the application and usefulness of the framework.
arXiv Detail & Related papers (2022-01-14T17:41:26Z) - SLIP: Self-supervision meets Language-Image Pre-training [79.53764315471543]
We study whether self-supervised learning can aid in the use of language supervision for visual representation learning.
We introduce SLIP, a multi-task learning framework for combining self-supervised learning and CLIP pre-training.
We find that SLIP enjoys the best of both worlds: better performance than either self-supervision or language supervision alone.
arXiv Detail & Related papers (2021-12-23T18:07:13Z) - Motivating Learners in Multi-Orchestrator Mobile Edge Learning: A Stackelberg Game Approach [54.28419430315478]
Mobile Edge Learning (MEL) enables distributed training of Machine Learning models over heterogeneous edge devices.
In MEL, the training performance deteriorates without the availability of sufficient training data or computing resources.
We propose an incentive mechanism, where we formulate the orchestrators-learners interactions as a 2-round Stackelberg game.
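The leader-follower structure of such a Stackelberg incentive mechanism can be sketched by backward induction: the orchestrator announces a reward, each learner best-responds with an effort level, and the orchestrator picks the reward anticipating those responses. The quadratic cost, linear value, and all constants below are invented for illustration; the paper's actual game is more involved.

```python
# Toy one-leader (orchestrator), many-follower (learners) Stackelberg
# game solved by backward induction. Utility forms are illustrative.

def follower_effort(reward: float, cost: float) -> float:
    """Learner best response: maximize reward*e - cost*e^2, giving e = reward/(2*cost)."""
    return reward / (2.0 * cost)

def leader_utility(reward: float, value: float, costs) -> float:
    """Orchestrator gains `value` per unit effort and pays `reward` per unit."""
    total_effort = sum(follower_effort(reward, c) for c in costs)
    return (value - reward) * total_effort

def best_reward(value: float, costs, step: float = 0.01) -> float:
    """Leader moves first: choose the reward maximizing its utility by grid search."""
    grid = [i * step for i in range(int(value / step) + 1)]
    return max(grid, key=lambda r: leader_utility(r, value, costs))
```

With these utilities the optimum lands at half the per-unit value regardless of learner costs, a standard feature of linear-quadratic Stackelberg toys.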
arXiv Detail & Related papers (2021-09-25T17:27:48Z) - Pre-Trained Models: Past, Present and Future [126.21572378910746]
Large-scale pre-trained models (PTMs) have recently achieved great success and become a milestone in the field of artificial intelligence (AI)
By storing knowledge into huge parameters and fine-tuning on specific tasks, the rich knowledge implicitly encoded in huge parameters can benefit a variety of downstream tasks.
It is now the consensus of the AI community to adopt PTMs as backbone for downstream tasks rather than learning models from scratch.
arXiv Detail & Related papers (2021-06-14T02:40:32Z) - A Survey of Knowledge Tracing: Models, Variants, and Applications [70.69281873057619]
Knowledge Tracing is one of the fundamental tasks for student behavioral data analysis.
We present three types of fundamental KT models with distinct technical routes.
We discuss potential directions for future research in this rapidly growing field.
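One classic KT model the survey's taxonomy builds on is Bayesian Knowledge Tracing (BKT), whose per-response update is short enough to sketch. The slip, guess, and learn parameter values below are illustrative defaults, not fitted ones.

```python
# Bayesian Knowledge Tracing update: revise the probability that a
# student knows a skill after observing one correct/incorrect response,
# then account for the chance of learning at this opportunity.

def bkt_update(p_know: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2,
               learn: float = 0.3) -> float:
    """Return the updated mastery probability after one observed response."""
    if correct:
        evidence = p_know * (1 - slip)
        posterior = evidence / (evidence + (1 - p_know) * guess)
    else:
        evidence = p_know * slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - slip))
    # The student may also acquire the skill on this practice opportunity.
    return posterior + (1 - posterior) * learn
```

Deep KT models replace this hand-specified update with a learned recurrent or attention-based one, but the input/output contract (response in, mastery estimate out) is the same.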
arXiv Detail & Related papers (2021-05-06T13:05:55Z) - Dual Policy Distillation [58.43610940026261]
Policy distillation, which transfers a teacher policy to a student policy, has achieved great success in challenging tasks of deep reinforcement learning.
In this work, we introduce dual policy distillation (DPD), a student-student framework in which two learners operate on the same environment to explore different perspectives of the environment.
The key challenge in developing this dual learning framework is to identify the beneficial knowledge from the peer learner for contemporary learning-based reinforcement learning algorithms.
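The core quantity in such peer-to-peer distillation is a divergence that pulls each learner's policy toward its peer's on shared states. The sketch below is an assumed simplification (discrete actions, KL divergence averaged over a state batch), not DPD's exact objective.

```python
import math

# Sketch of a peer-distillation loss: average KL divergence from the
# peer's action distribution to the student's across a batch of states.
# Distributions are plain lists of action probabilities.

def kl_divergence(p, q, eps: float = 1e-12):
    """KL(p || q) between two discrete action distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def peer_distillation_loss(student_probs, peer_probs):
    """Mean KL from the peer policy to the student policy over a state batch."""
    return sum(kl_divergence(peer, stu)
               for stu, peer in zip(student_probs, peer_probs)) / len(student_probs)
```

The "beneficial knowledge" question the abstract raises then becomes one of weighting this loss, e.g. only distilling on states where the peer demonstrably outperforms.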
arXiv Detail & Related papers (2020-06-07T06:49:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.