Towards Scalable Adaptive Learning with Graph Neural Networks and
Reinforcement Learning
- URL: http://arxiv.org/abs/2305.06398v1
- Date: Wed, 10 May 2023 18:16:04 GMT
- Title: Towards Scalable Adaptive Learning with Graph Neural Networks and
Reinforcement Learning
- Authors: Jean Vassoyan, Jill-Jênn Vie, Pirmin Lemberger
- Abstract summary: We introduce a flexible and scalable approach towards the problem of learning path personalization.
Our model is a sequential recommender system based on a graph neural network.
Our results demonstrate that it can learn to make good recommendations in the small-data regime.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adaptive learning is an area of educational technology that consists of
delivering personalized learning experiences to address the unique needs of
each learner. An important subfield of adaptive learning is learning path
personalization: it aims at designing systems that recommend sequences of
educational activities to maximize students' learning outcomes. Many machine
learning approaches have already demonstrated significant results in a variety
of contexts related to learning path personalization. However, most of them
were designed for very specific settings and are not easily reusable. This is
accentuated by the fact that they often rely on non-scalable models, which are
unable to integrate new elements after being trained on a specific set of
educational resources. In this paper, we introduce a flexible and scalable
approach towards the problem of learning path personalization, which we
formalize as a reinforcement learning problem. Our model is a sequential
recommender system based on a graph neural network, which we evaluate on a
population of simulated learners. Our results demonstrate that it can learn to
make good recommendations in the small-data regime.
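As a rough illustration of the pipeline the abstract describes (a GNN encoder over the corpus of educational resources, and a recommendation policy trained by reinforcement learning against simulated learners), here is a minimal sketch. This is not the authors' code: the toy graph, resource features, prerequisite-based reward, and the simplification of freezing the GNN and training only a bilinear scoring head with REINFORCE are all invented assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy corpus: 6 educational resources with prerequisite links.
edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
n, d = 6, 8
A = np.eye(n)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(deg, deg))      # symmetric GCN-style normalization

X = rng.normal(size=(n, d))                   # raw resource features
W = rng.normal(size=(d, d)) / np.sqrt(d)      # GNN weights, held fixed here
H = np.tanh(A_norm @ X @ W)                   # embeddings after one message pass
M = np.zeros((d, d))                          # trainable bilinear scoring head

prereq = {j: {i for i, k in edges if k == j} for j in range(n)}

def act(visited):
    """Score unvisited resources against a summary of the learner's history."""
    state = H[sorted(visited)].mean(axis=0)
    scores = H @ (M.T @ state)
    scores[sorted(visited)] = -1e9            # mask already-seen resources
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return rng.choice(n, p=p), p, state

for episode in range(2000):
    visited = {0}                             # every simulated learner starts at 0
    while len(visited) < n:
        a, p, state = act(visited)
        # Simulated learner: a resource is "learned" iff its prerequisites were seen.
        reward = 1.0 if prereq[a] <= visited else 0.0
        # REINFORCE update for the bilinear score s_i = state^T M h_i.
        M += 0.1 * reward * np.outer(state, H[a] - p @ H)
        visited.add(a)
```
Because recommendations are scores between node embeddings rather than entries of a fixed-size output layer, new resources added to the graph get scored by the same machinery, which is the scalability property the abstract emphasizes.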
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence demonstrating that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
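The core projection idea can be sketched in a few lines. This is a generic illustration of gradient projection, not the paper's exact PGU algorithm, and all tensors are synthetic stand-ins: the unlearning update is projected onto the subspace orthogonal to the retained data's gradients, so that (to first order) it cannot disturb retained knowledge.
```python
import numpy as np

rng = np.random.default_rng(1)
dim = 50

# Hypothetical gradients of the loss on the data we want to keep (one per row).
G_retain = rng.normal(size=(10, dim))
# Gradient of the unlearning objective on the "forget" data.
g_forget = rng.normal(size=dim)

# Orthonormal basis of the span of the retained-data gradients.
Q, _ = np.linalg.qr(G_retain.T)                  # Q has shape (dim, 10)

# Remove the component of the unlearning step lying in that span.
g_proj = g_forget - Q @ (Q.T @ g_forget)

print(np.abs(G_retain @ g_proj).max())           # ~0: retained loss unchanged to 1st order
```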
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- Ticketed Learning-Unlearning Schemes [57.89421552780526]
We propose a new ticketed model for learning-unlearning.
We provide space-efficient ticketed learning-unlearning schemes for a broad family of concept classes.
arXiv Detail & Related papers (2023-06-27T18:54:40Z)
- Activation Learning by Local Competitions [4.441866681085516]
We develop a biology-inspired learning rule that discovers features by local competitions among neurons.
It is demonstrated that the unsupervised features learned by this local learning rule can serve as a pre-training model.
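The abstract does not spell out the rule, so the following is only a generic winner-take-all competitive update in the same spirit: features emerge from local competition among neurons, with no labels and no backpropagation. The cluster data and all hyperparameters are invented.
```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hidden = 20, 5
W = rng.normal(size=(n_hidden, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)    # unit-norm feature vectors

def local_step(x, lr=0.05):
    """Winner-take-all competition: only the most activated neuron updates,
    pulling its weights toward the input. Purely local, no backpropagation."""
    winner = np.argmax(W @ x)
    W[winner] += lr * (x - W[winner])            # Hebbian-style local rule
    W[winner] /= np.linalg.norm(W[winner])       # renormalize to keep it bounded

# Unsupervised exposure to inputs drawn from a few synthetic clusters; rows of W
# drift toward the cluster centers and can then serve as pre-trained features.
centers = rng.normal(size=(n_hidden, n_in))
for _ in range(3000):
    x = centers[rng.integers(n_hidden)] + 0.1 * rng.normal(size=n_in)
    local_step(x)
```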
arXiv Detail & Related papers (2022-09-26T10:43:29Z)
- Towards a General Pre-training Framework for Adaptive Learning in MOOCs [37.570119583573955]
We propose a unified framework based on data observation and learning style analysis, properly leveraging heterogeneous learning elements.
We find that course structures, text, and knowledge are helpful for modeling and inherently coherent with students' non-sequential learning behaviors.
arXiv Detail & Related papers (2022-07-18T13:18:39Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
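A rough sketch of why the fixed-extractor design sidesteps forgetting. This is a deliberate simplification to class-conditional Gaussian heads over frozen features (the paper's model is a richer Bayesian generative treatment), with invented data: learning a new class only adds parameters for that class, so old classes are never overwritten.
```python
import numpy as np

rng = np.random.default_rng(3)
d = 16   # dimensionality of the fixed (frozen) feature extractor's output

class GenerativeHead:
    """Per-class generative model over frozen features. Adding a class only
    fits that class's parameters, so earlier classes cannot be forgotten."""
    def __init__(self):
        self.means = {}

    def add_class(self, label, feats):
        self.means[label] = feats.mean(axis=0)   # fit this class only

    def predict(self, feat):
        # Nearest class mean = max likelihood under shared isotropic Gaussians.
        return min(self.means, key=lambda c: np.sum((feat - self.means[c]) ** 2))

head = GenerativeHead()
# Classes arrive sequentially, as in continual learning.
for label, center in [("a", 0.0), ("b", 3.0), ("c", -3.0)]:
    head.add_class(label, center + rng.normal(size=(50, d)))
print(head.predict(3.0 + rng.normal(size=d)))    # -> "b", unaffected by adding "c"
```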
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- Learning where to learn: Gradient sparsity in meta and continual learning [4.845285139609619]
We show that meta-learning can be improved by letting the learning algorithm decide which weights to change.
We find that patterned sparsity emerges from this process, with the pattern of sparsity varying on a problem-by-problem basis.
Our results shed light on an ongoing debate on whether meta-learning can discover adaptable features and suggest that learning by sparse gradient descent is a powerful inductive bias for meta-learning systems.
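A minimal sketch of the mechanism of letting the algorithm decide which weights to change: a meta-learned per-weight gate multiplies the adaptation gradient, and gates driven toward zero produce the patterned sparsity the summary describes. The outer loop that actually trains the gates is omitted, and all values here are illustrative.
```python
import numpy as np

rng = np.random.default_rng(4)
dim = 10
theta = rng.normal(size=dim)        # model weights (meta-learned initialization)
gates = rng.normal(size=dim)        # meta-learned per-weight gate logits

def inner_update(theta, grad, lr=0.1):
    """Task-adaptation step in which a learned gate decides which weights may
    change; a gate driven toward 0 freezes its weight entirely."""
    mask = 1.0 / (1.0 + np.exp(-gates))          # sigmoid gate in [0, 1]
    return theta - lr * mask * grad

# In a full meta-learning loop, `gates` (and theta's initialization) would be
# trained in an outer loop so that this gated step adapts well to new tasks.
task_grad = rng.normal(size=dim)                 # stand-in for a task's gradient
theta_adapted = inner_update(theta, task_grad)
```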
arXiv Detail & Related papers (2021-10-27T12:54:36Z)
- Friendly Training: Neural Networks Can Adapt Data To Make Learning Easier [23.886422706697882]
We propose a novel training procedure named Friendly Training.
We show that Friendly Training yields improvements with respect to informed data sub-selection and random selection.
Results suggest that adapting the input data is a feasible way to stabilize learning and improve the generalization skills of the network.
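The mechanism, updating a learnable perturbation of the training inputs to lower the current loss and then annealing it away so training ends on the real data, can be shown in a linear-model toy. This is a reduction for illustration, not the paper's neural-network setup; data and step sizes are invented.
```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3))                       # synthetic training inputs
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
w = np.zeros(3)                                     # the "network" (linear here)
delta = np.zeros_like(X)                            # learnable input perturbation

for step in range(300):
    Xt = X + delta                  # the model trains on "friendlier" inputs
    err = Xt @ w - y
    w -= 0.05 * Xt.T @ err / len(y)                 # model step on adapted data
    delta -= 0.05 * err[:, None] * w[None, :] / len(y)  # data step: lower loss
    delta *= 0.98                   # anneal, so training ends on the real data
```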
arXiv Detail & Related papers (2021-06-21T10:50:34Z)
- Exploring Bayesian Deep Learning for Urgent Instructor Intervention Need in MOOC Forums [58.221459787471254]
Massive Open Online Courses (MOOCs) have become a popular choice for e-learning thanks to their great flexibility.
Due to large numbers of learners and their diverse backgrounds, it is taxing to offer real-time support.
With the large volume of posts and high workloads for MOOC instructors, it is unlikely that the instructors can identify all learners requiring intervention.
This paper is the first to explore Bayesian deep learning on learner-generated text posts, using two methods: Monte Carlo Dropout and Variational Inference.
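Monte Carlo Dropout, one of the two methods named, keeps dropout active at prediction time and treats the spread of repeated stochastic forward passes as a predictive uncertainty estimate. A minimal sketch with an untrained toy network (the architecture, sizes, and thresholding policy are illustrative assumptions):
```python
import numpy as np

rng = np.random.default_rng(6)
d_in, d_h = 8, 32
W1 = rng.normal(size=(d_in, d_h))     # toy network standing in for the post model
W2 = rng.normal(size=(d_h, 1))

def forward(x, mc_dropout=True, keep=0.9):
    """One stochastic forward pass; dropout stays ON at prediction time."""
    h = np.maximum(x @ W1, 0.0)
    if mc_dropout:
        h = h * (rng.random(d_h) < keep) / keep
    return (h @ W2).item()

post_embedding = rng.normal(size=d_in)            # stand-in for an encoded post
samples = [forward(post_embedding) for _ in range(100)]
mean, std = np.mean(samples), np.std(samples)
# A large std marks a post the model is unsure about: a candidate to surface
# to the instructor rather than classify automatically.
```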
arXiv Detail & Related papers (2021-04-26T15:12:13Z)
- Deep Reinforcement Learning for Adaptive Learning Systems [4.8685842576962095]
We formulate the problem of how to find an individualized learning plan based on the learner's latent traits.
We apply a model-free deep reinforcement learning algorithm that can effectively find the optimal learning policy.
We also develop a transition model estimator that emulates the learner's learning process using neural networks.
arXiv Detail & Related papers (2020-04-17T18:04:03Z)
- The large learning rate phase of deep learning: the catapult mechanism [50.23041928811575]
We present a class of neural networks with solvable training dynamics.
We find good agreement between our model's predictions and training dynamics in realistic deep learning settings.
We believe our results shed light on characteristics of models trained at different learning rates.
arXiv Detail & Related papers (2020-03-04T17:52:48Z)