Towards a General Pre-training Framework for Adaptive Learning in MOOCs
- URL: http://arxiv.org/abs/2208.04708v1
- Date: Mon, 18 Jul 2022 13:18:39 GMT
- Title: Towards a General Pre-training Framework for Adaptive Learning in MOOCs
- Authors: Qingyang Zhong, Jifan Yu, Zheyuan Zhang, Yiming Mao, Yuquan Wang,
Yankai Lin, Lei Hou, Juanzi Li, Jie Tang
- Abstract summary: We propose a unified framework based on data observation and learning-style analysis, properly leveraging heterogeneous learning elements.
We find that course structures, text, and knowledge are helpful for modeling and inherently coherent with students' non-sequential learning behaviors.
- Score: 37.570119583573955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adaptive learning aims to stimulate and meet the needs of individual
learners, which requires sophisticated system-level coordination of diverse
tasks, including modeling learning resources, estimating student states, and
making personalized recommendations. Existing deep learning methods have
achieved great success over statistical models; however, they still generalize
poorly across diverse tasks and suffer from insufficient capacity, since they
are built from highly coupled, task-specific architectures and rely on
small-scale, coarse-grained recommendation scenarios. To realize the idea of
general adaptive systems proposed in pedagogical theory, and drawing on the
emerging pre-training techniques in NLP, we conduct a practical exploration of
applying pre-training to adaptive learning, proposing a unified framework based
on data observation and learning-style analysis that properly leverages
heterogeneous learning elements. Through a series of downstream tasks
(Learning Recommendation, Learning Resource Evaluation, Knowledge Tracing, and
Dropout Prediction), we find that course structures, text, and knowledge are
helpful for modeling and inherently coherent with students' non-sequential
learning behaviors, and that indirectly relevant information included in the
pre-training foundation can be shared across downstream tasks to improve their
effectiveness. We finally build a simplified systematic application of adaptive
learning and reflect on the insights it brings back to pedagogy. The source
code and dataset will be released.
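As a concrete illustration of the shared-foundation idea (one encoder pre-trained over heterogeneous learning elements, with lightweight per-task heads for Knowledge Tracing and Dropout Prediction), a minimal sketch might look like the following. All module names, dimensions, and fusion choices are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a "shared pre-trained foundation + task heads" setup.
# Names (LearningEncoder, KnowledgeTracingHead, ...) are illustrative only.
import torch
import torch.nn as nn

class LearningEncoder(nn.Module):
    """Shared encoder over heterogeneous learning elements: each interaction
    carries a resource id, a text embedding, and a knowledge-concept id,
    fused additively and contextualized by a Transformer."""
    def __init__(self, n_resources, n_concepts, d_model=128, text_dim=300):
        super().__init__()
        self.resource_emb = nn.Embedding(n_resources, d_model)
        self.concept_emb = nn.Embedding(n_concepts, d_model)
        self.text_proj = nn.Linear(text_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, resource_ids, concept_ids, text_feats):
        x = (self.resource_emb(resource_ids)
             + self.concept_emb(concept_ids)
             + self.text_proj(text_feats))
        return self.encoder(x)  # (batch, seq_len, d_model)

class KnowledgeTracingHead(nn.Module):
    """Per-step probability of answering correctly."""
    def __init__(self, d_model=128):
        super().__init__()
        self.out = nn.Linear(d_model, 1)
    def forward(self, h):
        return torch.sigmoid(self.out(h)).squeeze(-1)

class DropoutPredictionHead(nn.Module):
    """Sequence-level dropout probability from mean-pooled states."""
    def __init__(self, d_model=128):
        super().__init__()
        self.out = nn.Linear(d_model, 1)
    def forward(self, h):
        return torch.sigmoid(self.out(h.mean(dim=1))).squeeze(-1)

# Usage: pre-train the encoder once, then fine-tune a light head per task.
enc = LearningEncoder(n_resources=1000, n_concepts=200)
h = enc(torch.randint(0, 1000, (8, 20)),
        torch.randint(0, 200, (8, 20)),
        torch.randn(8, 20, 300))
p_correct = KnowledgeTracingHead()(h)   # (8, 20)
p_dropout = DropoutPredictionHead()(h)  # (8,)
```

The design point is that the encoder is shared and pre-trained once, while each downstream task fine-tunes only a small head; that is the mechanism by which indirectly relevant information in the foundation can be reused across tasks.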
Related papers
- A Pre-Trained Graph-Based Model for Adaptive Sequencing of Educational Documents [8.986349423301863]
Massive Open Online Courses (MOOCs) have greatly contributed to making education more accessible.
However, many MOOCs maintain a rigid, one-size-fits-all structure that fails to address the diverse needs and backgrounds of individual learners.
This study introduces a novel data-efficient framework for learning path personalization that operates without expert annotation.
arXiv Detail & Related papers (2024-11-18T12:29:06Z)
- Advancing Deep Active Learning & Data Subset Selection: Unifying Principles with Information-Theory Intuitions [3.0539022029583953]
This thesis aims to enhance the practicality of deep learning by improving the label and training efficiency of deep learning models.
We investigate data subset selection techniques, specifically active learning and active sampling, grounded in information-theoretic principles.
arXiv Detail & Related papers (2024-01-09T01:41:36Z)
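For the active-learning entry above, one standard information-theoretic acquisition criterion is predictive entropy; the sketch below is a generic illustration of that idea, not the thesis's own method.

```python
# Illustrative information-theoretic acquisition step: label the unlabeled
# points whose predictive distribution has maximum entropy.
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """probs: (n_points, n_classes) predicted class probabilities."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_batch(probs, k):
    """Indices of the k most uncertain unlabeled points."""
    return np.argsort(-predictive_entropy(probs))[:k]

# Example: a 3-class model is most uncertain about the first point.
probs = np.array([[0.34, 0.33, 0.33],
                  [0.90, 0.05, 0.05],
                  [0.60, 0.30, 0.10]])
print(select_batch(probs, k=2))  # -> [0 2]
```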
- Ticketed Learning-Unlearning Schemes [57.89421552780526]
We propose a new ticketed model for learning-unlearning.
We provide space-efficient ticketed learning-unlearning schemes for a broad family of concept classes.
arXiv Detail & Related papers (2023-06-27T18:54:40Z)
- Towards Scalable Adaptive Learning with Graph Neural Networks and Reinforcement Learning [0.0]
We introduce a flexible and scalable approach to the problem of learning path personalization.
Our model is a sequential recommender system based on a graph neural network.
Our results demonstrate that it can learn to make good recommendations in the small-data regime.
arXiv Detail & Related papers (2023-05-10T18:16:04Z)
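A minimal sketch of the graph-neural-network recommender idea in the entry above: one round of message passing over a resource graph, a GRU over the student's history, and a score for every candidate next resource. The architecture details here are assumptions for illustration, not the authors' model.

```python
# Hypothetical GNN-based sequential recommender sketch.
import torch
import torch.nn as nn

class GNNRecommender(nn.Module):
    """Encode learning resources with one round of neighbor averaging over a
    prerequisite/similarity graph, summarize the student's history with a GRU,
    and score every resource as the next recommendation."""
    def __init__(self, n_items, d=64):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d)
        self.gnn_lin = nn.Linear(d, d)
        self.gru = nn.GRU(d, d, batch_first=True)

    def forward(self, adj, history):
        # adj: (n_items, n_items) row-normalized adjacency; history: (B, T) ids
        h_items = torch.relu(self.gnn_lin(adj @ self.item_emb.weight))
        seq = h_items[history]               # (B, T, d) visited-item embeddings
        _, state = self.gru(seq)             # student state from the sequence
        return state.squeeze(0) @ h_items.T  # (B, n_items) next-item scores

# Toy usage: 5 resources, one student who visited items 0 then 2.
n = 5
adj = torch.eye(n) * 0.5 + torch.full((n, n), 0.5 / n)  # stand-in graph
model = GNNRecommender(n)
scores = model(adj, torch.tensor([[0, 2]]))
print(scores.argmax(dim=1))  # index of the recommended next resource
```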
- Hierarchically Structured Task-Agnostic Continual Learning [0.0]
We take a task-agnostic view of continual learning and develop a hierarchical information-theoretic optimality principle.
We propose a neural network layer, called the Mixture-of-Variational-Experts layer, that alleviates forgetting by creating a set of information processing paths.
Our approach can operate in a task-agnostic way, i.e., it does not require task-specific knowledge, unlike many existing continual learning algorithms.
arXiv Detail & Related papers (2022-11-14T19:53:15Z)
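The Mixture-of-Variational-Experts layer in the entry above is variational; the following non-variational sketch only illustrates the underlying gated, multi-path structure and is not the paper's layer.

```python
# Rough sketch of a gated mixture-of-experts layer: several parallel
# "information processing paths" combined by a learned softmax gate.
import torch
import torch.nn as nn

class MixtureLayer(nn.Module):
    def __init__(self, d_in, d_out, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(d_in, d_out)
                                     for _ in range(n_experts))
        self.gate = nn.Linear(d_in, n_experts)

    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=-1)               # (B, n_experts)
        y = torch.stack([e(x) for e in self.experts], dim=1)  # (B, n_experts, d_out)
        return (w.unsqueeze(-1) * y).sum(dim=1)               # gated combination

layer = MixtureLayer(16, 8)
out = layer(torch.randn(4, 16))  # different inputs route through different paths
```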
- RLTutor: Reinforcement Learning Based Adaptive Tutoring System by Modeling Virtual Student with Fewer Interactions [10.34673089426247]
We propose a framework for optimizing teaching strategies by constructing a virtual model of the student.
Our results can serve as a bridge between theoretical instructional optimization and practical applications in e-learning systems.
arXiv Detail & Related papers (2021-07-31T15:42:03Z)
- Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms [91.3755431537592]
We analyze four broad meta-learning strategies which rely on plug-in estimation and pseudo-outcome regression.
We highlight how this theoretical reasoning can be used to guide principled algorithm design and translate our analyses into practice.
arXiv Detail & Related papers (2021-01-26T17:11:40Z)
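One of the plug-in strategies analyzed in the treatment-effect entry above is the so-called T-learner: fit one outcome model per treatment arm and difference the predictions. A toy sketch on synthetic data with off-the-shelf regressors:

```python
# Illustrative plug-in (T-learner) estimate of a conditional treatment effect.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
t = rng.integers(0, 2, size=1000)                            # binary treatment
y = X[:, 0] + t * (1 + X[:, 1]) + rng.normal(0, 0.1, 1000)   # true CATE = 1 + x1

# Plug-in estimation: fit one outcome model per treatment arm...
mu0 = RandomForestRegressor().fit(X[t == 0], y[t == 0])
mu1 = RandomForestRegressor().fit(X[t == 1], y[t == 1])

# ...and estimate the conditional average treatment effect by differencing.
cate_hat = mu1.predict(X) - mu0.predict(X)
print(np.corrcoef(cate_hat, 1 + X[:, 1])[0, 1])  # should be close to 1
```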
- Behavior Priors for Efficient Reinforcement Learning [97.81587970962232]
We consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors.
We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual information and curiosity based objectives.
We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains.
arXiv Detail & Related papers (2020-10-27T13:17:18Z)
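The behavior-prior line of work above typically regularizes the usual RL objective with a KL term toward a learned prior policy, roughly E_pi[r] - alpha * KL(pi || pi_0). The snippet below sketches that objective for a single state with discrete actions; all details are assumptions for illustration.

```python
# Sketch of a KL-regularized objective with a behavior prior.
import torch

def kl_regularized_objective(reward, pi_logits, prior_logits, alpha=0.1):
    """E_pi[r] - alpha * KL(pi || pi_0) for one state with per-action rewards."""
    pi = torch.softmax(pi_logits, dim=-1)
    log_pi = torch.log_softmax(pi_logits, dim=-1)
    log_prior = torch.log_softmax(prior_logits, dim=-1)
    expected_r = (pi * reward).sum()
    kl = (pi * (log_pi - log_prior)).sum()
    return expected_r - alpha * kl

# Uniform policy and prior: KL term vanishes, objective is the mean reward.
obj = kl_regularized_objective(torch.tensor([1.0, 0.0]),
                               torch.zeros(2), torch.zeros(2))
print(obj)  # -> 0.5
```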
- Importance Weighted Policy Learning and Adaptation [89.46467771037054]
We study a complementary approach which is conceptually simple, general, modular and built on top of recent improvements in off-policy learning.
The framework is inspired by ideas from the probabilistic inference literature and combines robust off-policy learning with a behavior prior.
Our approach achieves competitive adaptation performance on hold-out tasks compared to meta reinforcement learning baselines and can scale to complex sparse-reward scenarios.
arXiv Detail & Related papers (2020-09-10T14:16:58Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
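The reduction in the entry above can be made concrete: summarize each task's dataset into a feature vector, use the task's target model (here, a least-squares slope) as the label, and fit an ordinary supervised regressor. The feature and label choices below are illustrative assumptions, not the paper's construction.

```python
# Sketch of meta-learning reduced to supervised learning: (task dataset,
# target model) pairs become (feature, label) samples.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def make_task():
    """A 1-d regression task y = w*x with a task-specific slope w."""
    w = rng.normal()
    x = rng.normal(size=20)
    return x, w * x + rng.normal(0, 0.1, 20), w

features, labels = [], []
for _ in range(200):
    x, y, _ = make_task()
    # Feature: summary statistics of the task's dataset.
    features.append([x.mean(), y.mean(), (x * y).mean(), (x * x).mean()])
    # Label: the task's target model, here its least-squares slope.
    labels.append((x @ y) / (x @ x))

meta = Ridge().fit(features, labels)  # meta-learning as plain regression

x_new, y_new, w_true = make_task()
w_hat = meta.predict([[x_new.mean(), y_new.mean(),
                       (x_new * y_new).mean(), (x_new * x_new).mean()]])[0]
print(w_hat, w_true)  # predicted vs. true task model
```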