3DG: A Framework for Using Generative AI for Handling Sparse Learner
Performance Data From Intelligent Tutoring Systems
- URL: http://arxiv.org/abs/2402.01746v1
- Date: Mon, 29 Jan 2024 22:34:01 GMT
- Title: 3DG: A Framework for Using Generative AI for Handling Sparse Learner
Performance Data From Intelligent Tutoring Systems
- Authors: Liang Zhang, Jionghao Lin, Conrad Borchers, Meng Cao, Xiangen Hu
- Abstract summary: We introduce the 3DG framework (3-Dimensional tensor for Densification and Generation), a novel approach combining tensor factorization with advanced generative models.
The framework effectively generated scalable, personalized simulations of learning performance.
- Score: 22.70004627901319
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning performance data (e.g., quiz scores and attempts) is significant for
understanding learner engagement and knowledge mastery level. However, the
learning performance data collected from Intelligent Tutoring Systems (ITSs)
often suffers from sparsity, impacting the accuracy of learner modeling and
knowledge assessments. To address this, we introduce the 3DG framework
(3-Dimensional tensor for Densification and Generation), a novel approach
combining tensor factorization with advanced generative models, including
Generative Adversarial Network (GAN) and Generative Pre-trained Transformer
(GPT), for enhanced data imputation and augmentation. The framework operates by
first representing the data as a three-dimensional tensor, capturing dimensions
of learners, questions, and attempts. It then densifies the data through tensor
factorization and augments it using Generative AI models, tailored to
individual learning patterns identified via clustering. Applied to data from an
AutoTutor lesson by the Center for the Study of Adult Literacy (CSAL), the 3DG
framework effectively generated scalable, personalized simulations of learning
performance. Comparative analysis revealed GAN's superior reliability over
GPT-4 in this context, underscoring its potential in addressing data sparsity
challenges in ITSs and contributing to the advancement of personalized
educational technology.
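As an illustration of the densification step, the sketch below builds a learner x question x attempt tensor with missing entries and fills them via a masked CP (PARAFAC) decomposition. The shapes, the rank, and the tensorly-based implementation are illustrative assumptions, not the paper's exact pipeline, and the GAN/GPT augmentation stage is omitted.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical sizes: 50 learners x 20 questions x 5 attempts (binary scores).
rng = np.random.default_rng(0)
full = rng.integers(0, 2, size=(50, 20, 5)).astype(float)

# Simulate ITS-style sparsity: roughly 85% of entries unobserved.
observed = rng.random(full.shape) > 0.85        # True where a score was recorded
sparse = np.where(observed, full, 0.0)

# Masked CP decomposition fits low-rank factors using only the observed
# entries (rank=4 is an arbitrary illustrative choice).
cp = parafac(tl.tensor(sparse), rank=4,
             mask=tl.tensor(observed.astype(float)), n_iter_max=200)

# Densify: reconstruct missing cells, keep observed values untouched.
reconstructed = tl.to_numpy(tl.cp_to_tensor(cp))
densified = np.where(observed, full, reconstructed)
```

In the paper's pipeline, the densified tensor is then clustered by learning pattern and handed to the generative models (GAN or GPT) for augmentation.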
Related papers
- Generative Fuzzy System for Sequence Generation [16.20988290308979]
We introduce the fuzzy system, a classical modeling method that combines data-driven and knowledge-driven mechanisms, into generative tasks.
We propose an end-to-end GenFS-based model for sequence generation, called FuzzyS2S.
A series of experimental studies were conducted on 12 datasets, covering three distinct categories of generative tasks.
arXiv Detail & Related papers (2024-11-21T06:03:25Z)
- Data Augmentation for Sparse Multidimensional Learning Performance Data Using Generative AI [17.242331892899543]
Learning performance data describe correct and incorrect answers or problem-solving attempts in adaptive learning.
Learning performance data tend to be highly sparse (80%-90% missing observations) in most real-world applications due to adaptive item selection.
This article proposes a systematic framework for augmenting learner data to address data sparsity in learning performance data.
arXiv Detail & Related papers (2024-09-24T00:25:07Z)
- Generative Adversarial Networks for Imputing Sparse Learning Performance [3.0350058108125646]
This paper proposes using the Generative Adversarial Imputation Networks (GAIN) framework to impute sparse learning performance data.
Our customized GAIN-based computational process imputes the sparse data within a 3D tensor space.
This capability enhances comprehensive modeling and analytics of learning data in AI-based education.
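A minimal GAIN-style training loop is sketched below on random stand-in data: the generator imputes missing entries, and the discriminator tries to tell observed from imputed entries given a partial hint of the mask. The dimensions, hyperparameters, and PyTorch implementation are our assumptions, not the paper's customized pipeline (which operates on slices of a 3D tensor).

```python
import torch
import torch.nn as nn

dim, batch = 20, 64  # hypothetical feature width and batch size

G = nn.Sequential(nn.Linear(dim * 2, 64), nn.ReLU(), nn.Linear(64, dim), nn.Sigmoid())
D = nn.Sequential(nn.Linear(dim * 2, 64), nn.ReLU(), nn.Linear(64, dim), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    x = torch.rand(batch, dim)                   # stand-in performance rows in [0, 1]
    m = (torch.rand(batch, dim) > 0.85).float()  # 1 = observed (~85% missing)
    z = torch.rand(batch, dim)
    x_tilde = m * x + (1 - m) * z                # noise placed in the missing slots

    # Generator imputes; observed values are kept unchanged.
    x_hat = G(torch.cat([x_tilde, m], dim=1))
    x_bar = m * x + (1 - m) * x_hat

    # Hint vector reveals part of the mask to the discriminator (GAIN trick).
    b = (torch.rand(batch, dim) < 0.9).float()
    hint = b * m + 0.5 * (1 - b)

    # Discriminator: predict which entries were actually observed.
    d_prob = D(torch.cat([x_bar.detach(), hint], dim=1))
    loss_d = bce(d_prob, m)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D on missing entries, reconstruct observed ones.
    d_prob = D(torch.cat([x_bar, hint], dim=1))
    loss_g = -torch.mean((1 - m) * torch.log(d_prob + 1e-8)) \
             + 10 * torch.mean(m * (x - x_hat) ** 2)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```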
arXiv Detail & Related papers (2024-07-26T17:09:48Z)
- Enhancing Deep Knowledge Tracing via Diffusion Models for Personalized Adaptive Learning [1.2248793682283963]
This study aims to tackle data shortage issues in student learning records to enhance DKT performance for personalized adaptive learning (PAL).
It employs TabDDPM, a diffusion model, to generate synthetic educational records to augment training data for enhancing DKT.
The experimental results demonstrate that the AI-generated data by TabDDPM significantly improves DKT performance.
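For intuition, here is a minimal denoising-diffusion training loop on tabular rows: noise each row with the closed-form forward process, then train a network to predict that noise. This is a generic DDPM sketch under our own assumptions (feature width, noise schedule, MLP denoiser), not TabDDPM's actual code; sampling, which reverses the chain to produce synthetic records, is omitted.

```python
import torch
import torch.nn as nn

T, dim = 100, 16                                  # hypothetical steps and feature width
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

eps_model = nn.Sequential(nn.Linear(dim + 1, 128), nn.ReLU(), nn.Linear(128, dim))
opt = torch.optim.Adam(eps_model.parameters(), lr=1e-3)

for step in range(1000):
    x0 = torch.rand(64, dim)                      # stand-in learning records
    t = torch.randint(0, T, (64,))
    eps = torch.randn_like(x0)
    a = alpha_bars[t].unsqueeze(1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * eps     # forward noising q(x_t | x_0)
    # Predict the added noise, conditioning on the normalized timestep.
    pred = eps_model(torch.cat([xt, t.unsqueeze(1) / T], dim=1))
    loss = ((pred - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```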
arXiv Detail & Related papers (2024-04-25T00:23:20Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Predicting Infant Brain Connectivity with Federated Multi-Trajectory GNNs using Scarce Data [54.55126643084341]
Existing deep learning solutions suffer from three major limitations.
We introduce FedGmTE-Net++, a federated graph-based multi-trajectory evolution network.
Using the power of federation, we aggregate locally learned models across diverse hospitals with limited datasets.
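The federated aggregation step can be pictured as a weighted parameter average in the FedAvg style; the helper below is a generic sketch of that idea (our assumption), not the paper's FedGmTE-Net++ aggregation rule.

```python
import torch

def fed_avg(client_states, client_sizes):
    """Weighted average of client model state_dicts (generic FedAvg sketch)."""
    total = float(sum(client_sizes))
    return {
        key: sum(n * sd[key].float() for n, sd in zip(client_sizes, client_states)) / total
        for key in client_states[0]
    }

# Usage: global_model.load_state_dict(
#     fed_avg([m.state_dict() for m in clients], [len(d) for d in client_datasets]))
```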
arXiv Detail & Related papers (2024-01-01T10:20:01Z)
- GraphLearner: Graph Node Clustering with Fully Learnable Augmentation [76.63963385662426]
Contrastive deep graph clustering (CDGC) leverages the power of contrastive learning to group nodes into different clusters.
We propose a Graph Node Clustering with Fully Learnable Augmentation, termed GraphLearner.
It introduces learnable augmentors to generate high-quality and task-specific augmented samples for CDGC.
arXiv Detail & Related papers (2022-12-07T10:19:39Z)
- On-Device Domain Generalization [93.79736882489982]
Domain generalization is critical to on-device machine learning applications.
We find that knowledge distillation is a strong candidate for solving the problem.
We propose a simple idea called out-of-distribution knowledge distillation (OKD), which aims to teach the student how the teacher handles (synthetic) out-of-distribution data.
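The OKD idea reduces to a distillation loss computed on synthetic out-of-distribution inputs; a generic soft-label knowledge-distillation loss such as the sketch below could serve as that term (the temperature and exact form are our assumptions, not necessarily the paper's).

```python
import torch.nn.functional as F

def okd_loss(student_logits, teacher_logits, T=4.0):
    """Match the student to the teacher's softened predictions on
    synthetic OOD inputs (generic KD sketch, not the paper's exact loss)."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
```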
arXiv Detail & Related papers (2022-09-15T17:59:31Z)
- Data-Free Adversarial Knowledge Distillation for Graph Neural Networks [62.71646916191515]
We propose the first end-to-end framework for data-free adversarial knowledge distillation on graph structured data (DFAD-GNN).
Specifically, DFAD-GNN employs a generative adversarial network with three components: a pre-trained teacher model and a student model act as two discriminators, while a generator derives training graphs used to distill knowledge from the teacher into the student.
Our DFAD-GNN significantly surpasses state-of-the-art data-free baselines in the graph classification task.
arXiv Detail & Related papers (2022-05-08T08:19:40Z)
- Consistency and Monotonicity Regularization for Neural Knowledge Tracing [50.92661409499299]
Knowledge Tracing (KT), which tracks a human's knowledge acquisition, is a central component of online learning and AI in Education.
We propose three types of novel data augmentation, coined replacement, insertion, and deletion, along with corresponding regularization losses.
Extensive experiments on various KT benchmarks show that our regularization scheme consistently improves the model performances.
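The three augmentations might look like the sketch below on an interaction sequence of (question_id, correct) pairs; the edit probabilities, the question pool, and the random label for inserted items are our assumptions, and the paper's accompanying regularization losses are not shown.

```python
import random

def augment(seq, question_pool, p=0.1, seed=0):
    """Apply replacement / insertion / deletion edits to one KT sequence."""
    rng = random.Random(seed)
    out = []
    for q, correct in seq:
        r = rng.random()
        if r < p:                 # deletion: drop this interaction
            continue
        if r < 2 * p:             # replacement: swap in a different question id
            q = rng.choice(question_pool)
        out.append((q, correct))
        if rng.random() < p:      # insertion: add a synthetic interaction
            out.append((rng.choice(question_pool), rng.randint(0, 1)))
    return out

augmented = augment([(3, 1), (7, 0), (3, 1), (9, 1)], question_pool=list(range(20)))
```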
arXiv Detail & Related papers (2021-05-03T02:36:29Z)
- Data Augmentation for Enhancing EEG-based Emotion Recognition with Deep Generative Models [13.56090099952884]
We propose three methods for augmenting EEG training data to enhance the performance of emotion recognition models.
Under the full-usage strategy, all of the generated data are added to the training dataset without assessing their quality.
The experimental results demonstrate that the augmented training datasets produced by our methods enhance the performance of EEG-based emotion recognition models.
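The full-usage strategy amounts to concatenating every generated sample onto the training set, as in this sketch with hypothetical array shapes:

```python
import numpy as np

# Hypothetical arrays: real EEG features/labels plus generator outputs.
X_train, y_train = np.random.rand(500, 310), np.random.randint(0, 3, 500)
X_gen, y_gen = np.random.rand(500, 310), np.random.randint(0, 3, 500)

# Full usage: append every generated sample, with no quality filter.
X_full = np.concatenate([X_train, X_gen], axis=0)
y_full = np.concatenate([y_train, y_gen], axis=0)
```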
arXiv Detail & Related papers (2020-06-04T21:23:09Z)