Dropout Prediction over Weeks in MOOCs by Learning Representations of
Clicks and Videos
- URL: http://arxiv.org/abs/2002.01955v1
- Date: Wed, 5 Feb 2020 19:10:01 GMT
- Title: Dropout Prediction over Weeks in MOOCs by Learning Representations of
Clicks and Videos
- Authors: Byungsoo Jeon, Namyong Park
- Abstract summary: We develop a method to learn representations for videos and the correlation between videos and clicks.
The results indicate that modeling videos and their correlation with clicks brings statistically significant improvements in predicting dropout.
- Score: 6.030785848148107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses a key challenge in MOOC dropout prediction, namely to
build meaningful representations from clickstream data. While a variety of
feature extraction techniques have been explored extensively for such purposes,
to our knowledge, no prior work has explored modeling educational content
(e.g., videos) and its correlation with learner behavior (e.g., clickstreams)
in this context. We bridge this gap by devising a method to learn
representations for videos and the correlation between videos and clicks. The
results indicate that modeling videos and their correlation with clicks brings
statistically significant improvements in predicting dropout.
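The abstract does not spell out the architecture, so the following is only a minimal sketch of the general idea under assumed design choices: the embedding sizes, the GRU encoder, the click-video pairing scheme, and all names below are hypothetical, not the authors' actual model. Each click event is embedded together with the video it occurred on, the week's event sequence is summarized by a recurrent encoder, and a classifier outputs a dropout probability.

```python
# Hypothetical sketch only: joint click + video representations for dropout
# prediction. Architecture and dimensions are assumptions, not the paper's.
import torch
import torch.nn as nn

class ClickVideoDropoutModel(nn.Module):
    def __init__(self, n_click_types, n_videos, dim=32):
        super().__init__()
        self.click_emb = nn.Embedding(n_click_types, dim)  # one vector per click/event type
        self.video_emb = nn.Embedding(n_videos, dim)       # one vector per course video
        self.encoder = nn.GRU(2 * dim, dim, batch_first=True)
        self.classifier = nn.Linear(dim, 1)                # dropout score for the next week

    def forward(self, click_ids, video_ids):
        # click_ids, video_ids: (batch, seq_len); each click is paired with the
        # video it occurred on, so the model can learn their correlation.
        clicks = self.click_emb(click_ids)
        videos = self.video_emb(video_ids)
        x = torch.cat([clicks, videos], dim=-1)            # joint click-video representation
        _, h = self.encoder(x)                             # summarize the week's activity
        return torch.sigmoid(self.classifier(h[-1]))       # predicted dropout probability

# Toy usage: 8 learners, 50 events each, 10 click types, 100 videos.
model = ClickVideoDropoutModel(n_click_types=10, n_videos=100)
clicks = torch.randint(0, 10, (8, 50))
videos = torch.randint(0, 100, (8, 50))
print(model(clicks, videos).shape)  # torch.Size([8, 1])
```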
Related papers
- Video In-context Learning [46.40277880351059]
In this paper, we study video in-context learning, where the model starts from an existing video clip and generates diverse potential future sequences.
To achieve this, we provide a clear definition of the task, and train an autoregressive Transformer on video datasets.
We design various evaluation metrics, including both objective and subjective measures, to demonstrate the visual quality and semantic accuracy of generation results.
arXiv Detail & Related papers (2024-07-10T04:27:06Z) - AICL: Action In-Context Learning for Video Diffusion Model [124.39948693332552]
We propose AICL, which empowers the generative model with the ability to understand action information in reference videos.
Extensive experiments demonstrate that AICL effectively captures the action and achieves state-of-the-art generation performance.
arXiv Detail & Related papers (2024-03-18T07:41:19Z) - Any-point Trajectory Modeling for Policy Learning [64.23861308947852]
We introduce Any-point Trajectory Modeling (ATM) to predict future trajectories of arbitrary points within a video frame.
ATM outperforms strong video pre-training baselines by 80% on average.
We show effective transfer learning of manipulation skills from human videos and videos from a different robot morphology.
arXiv Detail & Related papers (2023-12-28T23:34:43Z) - Early Action Recognition with Action Prototypes [62.826125870298306]
We propose a novel model that learns a prototypical representation of the full action for each class.
We decompose the video into short clips, where a visual encoder extracts features from each clip independently.
A decoder then aggregates the features from all clips seen so far in an online fashion for the final class prediction (a generic sketch of this encode-and-aggregate pattern appears after this list).
arXiv Detail & Related papers (2023-12-11T18:31:13Z) - Causalainer: Causal Explainer for Automatic Video Summarization [77.36225634727221]
In many application scenarios, improper video summarization can have a large impact.
Modeling explainability is a key concern.
A Causal Explainer, dubbed Causalainer, is proposed to address this issue.
arXiv Detail & Related papers (2023-04-30T11:42:06Z) - Click-Based Student Performance Prediction: A Clustering Guided
Meta-Learning Approach [10.962724342736042]
We study the problem of predicting student knowledge acquisition in online courses from clickstream behavior.
Our methodology for predicting in-video quiz performance is based on three key ideas we develop.
arXiv Detail & Related papers (2021-10-28T14:03:29Z) - WeClick: Weakly-Supervised Video Semantic Segmentation with Click
Annotations [64.52412111417019]
We propose an effective weakly-supervised video semantic segmentation pipeline with click annotations, called WeClick.
Since detailed semantic information is not captured by clicks, directly training with click labels leads to poor segmentation predictions.
WeClick outperforms state-of-the-art methods, improves performance by 10.24% mIoU over the baseline, and achieves real-time execution.
arXiv Detail & Related papers (2021-07-07T09:12:46Z) - CoCon: Cooperative-Contrastive Learning [52.342936645996765]
Self-supervised visual representation learning is key for efficient video analysis.
Recent success in learning image representations suggests contrastive learning is a promising framework to tackle this challenge.
We introduce a cooperative variant of contrastive learning to utilize complementary information across views.
arXiv Detail & Related papers (2021-04-30T05:46:02Z) - Exploring Relations in Untrimmed Videos for Self-Supervised Learning [17.670226952829506]
Existing self-supervised learning methods mainly rely on trimmed videos for model training.
We propose a novel self-supervised method, referred to as Exploring Relations in Untrimmed Videos (ERUV).
ERUV learns richer representations and outperforms state-of-the-art self-supervised methods by significant margins.
arXiv Detail & Related papers (2020-08-06T15:29:25Z) - Dropout Prediction over Weeks in MOOCs via Interpretable Multi-Layer
Representation Learning [6.368257863961961]
This paper aims to predict if a learner is going to drop out within the next week, given clickstream data for the current week.
We present a multi-layer representation learning solution based on the branch and bound (BB) algorithm.
In experiments on Coursera data, we show that our model learns a representation that allows a simple model to perform similarly well to more complex, task-specific models.
arXiv Detail & Related papers (2020-02-05T01:15:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.