Stimuli-Sensitive Hawkes Processes for Personalized Student Procrastination Modeling
- URL: http://arxiv.org/abs/2102.00089v1
- Date: Fri, 29 Jan 2021 22:07:07 GMT
- Title: Stimuli-Sensitive Hawkes Processes for Personalized Student Procrastination Modeling
- Authors: Mengfan Yao, Siqian Zhao, Shaghayegh Sahebi, Reza Feyzi Behnagh
- Abstract summary: Student procrastination and cramming for deadlines are major challenges in online learning environments.
Previous attempts at dynamically modeling student procrastination suffer from major issues.
We introduce a new personalized stimuli-sensitive Hawkes process model (SSHP) to predict students' next activity times.
- Score: 1.6822770693792826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Student procrastination and cramming for deadlines are major challenges in
online learning environments, with negative educational and well-being side
effects. Modeling student activities in continuous time and predicting their
next study time are important problems that can help in creating personalized
timely interventions to mitigate these challenges. However, previous attempts
at dynamic modeling of student procrastination suffer from major issues: they
are unable to predict the next activity times, cannot deal with missing
activity history, are not personalized, and disregard important course
properties, such as assignment deadlines, that are essential in explaining the
cramming behavior. To resolve these problems, we introduce a new personalized
stimuli-sensitive Hawkes process model (SSHP), by jointly modeling all
student-assignment pairs and utilizing their similarities, to predict students'
next activity times even when there are no historical observations. Unlike
regular point processes that assume a constant external triggering effect from
the environment, we model three dynamic types of external stimuli, according to
assignment availabilities, assignment deadlines, and each student's time
management habits. Our experiments on two synthetic datasets and two real-world
datasets show superior performance in future activity prediction compared with
state-of-the-art models. Moreover, we show that our model achieves a flexible
and accurate parameterization of students' activity intensities.
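The paper's SSHP model is not reproduced here, but the core idea it builds on can be illustrated with a minimal sketch: a univariate Hawkes process whose conditional intensity combines self-excitation from a student's past activities with an external stimulus that, instead of being constant, rises as an assignment deadline approaches. The function names, kernel form, and parameter values below are illustrative assumptions, not the paper's actual formulation.

```python
import math

def hawkes_intensity(t, history, mu, alpha, beta):
    """Conditional intensity lambda(t) = mu(t) + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    mu is a callable so the external (background) rate can vary over time."""
    excitation = sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)
    return mu(t) + excitation

def deadline_stimulus(t, deadline, base=0.1, peak=1.0, scale=2.0):
    """Toy external stimulus: a baseline rate that grows exponentially as the
    assignment deadline nears, then drops back to baseline after it passes.
    This is a hypothetical stand-in for SSHP's deadline-sensitive stimulus."""
    if t > deadline:
        return base
    return base + peak * math.exp(-(deadline - t) / scale)

# A student's past study times (in days) for one assignment due at t = 10.
history = [1.0, 2.5, 4.0]
mu = lambda t: deadline_stimulus(t, deadline=10.0)

rate_early = hawkes_intensity(5.0, history, mu, alpha=0.8, beta=1.5)
rate_near_deadline = hawkes_intensity(9.5, history, mu, alpha=0.8, beta=1.5)
```

Even though self-excitation from the old activities has largely decayed by t = 9.5, the intensity near the deadline exceeds the earlier one because the external stimulus dominates, which is one simple way cramming behavior can emerge from a time-varying background rate rather than from past events alone.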
Related papers
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Exploring Model Transferability through the Lens of Potential Energy [78.60851825944212]
Transfer learning has become crucial in computer vision tasks due to the vast availability of pre-trained deep learning models.
Existing methods for measuring the transferability of pre-trained models rely on statistical correlations between encoded static features and task labels.
We present an insightful physics-inspired approach named PED to address these challenges.
arXiv Detail & Related papers (2023-08-29T07:15:57Z)
- Learning Goal-Conditioned Policies Offline with Self-Supervised Reward Shaping [94.89128390954572]
We propose a novel self-supervised learning phase on the pre-collected dataset to understand the structure and the dynamics of the model.
We evaluate our method on three continuous control tasks, and show that our model significantly outperforms existing approaches.
arXiv Detail & Related papers (2023-01-05T15:07:10Z)
- Learning Self-Modulating Attention in Continuous Time Space with Applications to Sequential Recommendation [102.24108167002252]
We propose a novel attention network, named self-modulating attention, that models the complex and non-linearly evolving dynamic user preferences.
We empirically demonstrate the effectiveness of our method on top-N sequential recommendation tasks, and the results on three large-scale real-world datasets show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2022-03-30T03:54:11Z)
- Learning Neural Models for Continuous-Time Sequences [0.0]
We study the properties of continuous-time event sequences (CTES) and design robust yet scalable neural network-based models to overcome the aforementioned problems.
In this work, we model the underlying generative distribution of events using marked temporal point processes (MTPP) to address a wide range of real-world problems.
arXiv Detail & Related papers (2021-11-13T20:39:15Z)
- Jointly Modeling Heterogeneous Student Behaviors and Interactions Among Multiple Prediction Tasks [35.15654921278549]
Prediction tasks about students have practical significance for both student and college.
In this paper, we focus on modeling heterogeneous behaviors and making multiple predictions together.
We design three motivating behavior prediction tasks based on a real-world dataset collected from a college.
arXiv Detail & Related papers (2021-03-25T02:01:58Z)
- Relaxed Clustered Hawkes Process for Procrastination Modeling in MOOCs [1.6822770693792826]
We propose a novel personalized Hawkes process model (RCHawkes-Gamma) that discovers meaningful student behavior clusters.
Our experiments on both synthetic and real-world education datasets show that RCHawkes-Gamma can effectively recover student clusters.
arXiv Detail & Related papers (2021-01-29T22:20:38Z)
- Learning Temporal Dynamics from Cycles in Narrated Video [85.89096034281694]
We propose a self-supervised solution to the problem of learning to model how the world changes as time elapses.
Our model learns modality-agnostic functions to predict forward and backward in time, which must undo each other when composed.
We apply the learned dynamics model without further training to various tasks, such as predicting future action and temporally ordering sets of images.
arXiv Detail & Related papers (2021-01-07T02:41:32Z)
- Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
arXiv Detail & Related papers (2020-07-14T16:42:59Z)
- Data-driven modelling and characterisation of task completion sequences in online courses [0.0]
We show how data-driven analysis of temporal sequences of task completion in online courses can be used.
We identify critical junctures and differences among types of tasks within the course design.
We find that non-rote learning tasks, such as interactive tasks or discussion posts, are correlated with higher performance.
arXiv Detail & Related papers (2020-07-14T12:39:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.