Predicting Engagement in Video Lectures
- URL: http://arxiv.org/abs/2006.00592v2
- Date: Wed, 10 Jun 2020 15:33:02 GMT
- Title: Predicting Engagement in Video Lectures
- Authors: Sahan Bulathwela, María Pérez-Ortiz, Aldo Lipani, Emine Yilmaz and John Shawe-Taylor
- Abstract summary: We introduce a novel, large dataset of video lectures for predicting context-agnostic engagement.
We propose both cross-modal and modality-specific feature sets to achieve this task.
We demonstrate the use of our approach in the case of data scarcity.
- Score: 24.415345855402624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The explosion of Open Educational Resources (OERs) in recent years
creates demand for scalable, automatic approaches to process and evaluate
OERs, with the end goal of identifying and recommending the most suitable
educational materials for learners. We focus on building models to find the
characteristics and features involved in context-agnostic engagement (i.e.
population-based), a seldom-researched topic compared to other contextualised
and personalised approaches that focus more on individual learner engagement.
Learner engagement is arguably a more reliable measure than popularity/number
of views, is more abundant than user ratings, and has also been shown to be a
crucial component in achieving learning outcomes. In this work, we explore the
idea of building a predictive model for population-based engagement in
education. We introduce a novel, large dataset of video lectures for predicting
context-agnostic engagement and propose both cross-modal and modality-specific
feature sets to achieve this task. We further test different strategies for
quantifying learner engagement signals. We demonstrate the use of our approach
in the case of data scarcity. Additionally, we perform a sensitivity analysis
of the best performing model, which shows promising performance and can be
easily integrated into an educational recommender system for OERs.
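To make the prediction task concrete, here is a minimal sketch of what a population-based engagement model of this kind could look like: a gradient-boosted regressor mapping a few content-derived lecture features to a watch-time-style engagement score. The feature names, the synthetic data, and the choice of regressor are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of a context-agnostic engagement predictor.
# Features and target are illustrative assumptions, not the paper's
# exact feature set or labels.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical modality-specific features per lecture: transcript word
# count, duration (s), speaking rate (words/s), and a cross-modal
# feature such as title/transcript topic overlap.
X = np.column_stack([
    rng.integers(500, 10000, n),
    rng.uniform(300, 5400, n),
    rng.uniform(1.0, 3.5, n),
    rng.uniform(0.0, 1.0, n),
])
# Population-based engagement label, e.g. median normalised watch time.
y = rng.uniform(0.0, 1.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("test RMSE:", mean_squared_error(y_te, model.predict(X_te)) ** 0.5)
```

In a real setting, the features would come from lecture transcripts and metadata, and the target from aggregated viewer logs rather than random draws.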
Related papers
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings.
arXiv Detail & Related papers (2024-10-24T17:56:08Z)
- A General Model for Detecting Learner Engagement: Implementation and Evaluation [0.0]
This paper proposes a general, lightweight model for selecting and processing features to detect learners' engagement levels.
We analyzed the videos from the publicly available DAiSEE dataset to capture the dynamic essence of learner engagement.
The suggested model achieves an accuracy of 68.57% in a specific implementation and outperforms the studied state-of-the-art models in detecting learners' engagement levels.
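A lightweight pipeline in the spirit of this entry might pool per-frame behavioural features over a clip and feed the result to a simple classifier; everything below (feature dimensionality, temporal average pooling, four engagement levels as in DAiSEE) is an illustrative assumption rather than the paper's actual model.

```python
# Hedged sketch: clip-level engagement classification from pooled
# per-frame features; dimensions and labels are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_clips, n_frames, feat_dim = 200, 30, 16

# Stand-in for per-frame features (e.g. head pose, gaze, action units).
frame_feats = rng.normal(size=(n_clips, n_frames, feat_dim))
clip_feats = frame_feats.mean(axis=1)        # temporal average pooling
labels = rng.integers(0, 4, n_clips)         # DAiSEE-style levels 0..3

clf = LogisticRegression(max_iter=1000).fit(clip_feats, labels)
print("train accuracy:", clf.score(clip_feats, labels))
```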
arXiv Detail & Related papers (2024-05-07T12:11:15Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Unveiling the Tapestry of Automated Essay Scoring: A Comprehensive Investigation of Accuracy, Fairness, and Generalizability [5.426458555881673]
This study aims to uncover the intricate relationship between an AES model's accuracy, fairness, and generalizability.
We evaluate nine prominent AES methods and assess their performance using seven metrics on an open-source dataset.
arXiv Detail & Related papers (2024-01-11T04:28:02Z)
- Advancing Deep Active Learning & Data Subset Selection: Unifying Principles with Information-Theory Intuitions [3.0539022029583953]
This thesis aims to enhance the practicality of deep learning by improving the label and training efficiency of deep learning models.
We investigate data subset selection techniques, specifically active learning and active sampling, grounded in information-theoretic principles.
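One classic acquisition score in this information-theoretic family is BALD, the mutual information between a pool point's label and the model parameters, estimated here from Monte-Carlo samples. The formula is standard; the random stand-in predictions below are placeholders for real MC-dropout outputs, and whether the thesis uses exactly this estimator is an assumption.

```python
# BALD score from Monte-Carlo predictions:
#   I(y; theta | x) = H[E_theta p(y|x,theta)] - E_theta H[p(y|x,theta)]
# The random "predictions" below stand in for real MC-dropout outputs.
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    return -(p * np.log(p + eps)).sum(axis=axis)

rng = np.random.default_rng(2)
n_mc, n_pool, n_classes = 20, 100, 10

logits = rng.normal(size=(n_mc, n_pool, n_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

mean_probs = probs.mean(axis=0)                 # E_theta p(y|x,theta)
bald = entropy(mean_probs) - entropy(probs).mean(axis=0)

query_idx = np.argsort(-bald)[:10]   # acquire the most informative points
print(query_idx)
```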
arXiv Detail & Related papers (2024-01-09T01:41:36Z)
- One-Shot Open Affordance Learning with Foundation Models [54.15857111929812]
We introduce One-shot Open Affordance Learning (OOAL), where a model is trained with just one example per base object category.
We propose a vision-language framework with simple and effective designs that boost the alignment between visual features and affordance text embeddings.
Experiments on two affordance segmentation benchmarks show that the proposed method outperforms state-of-the-art models with less than 1% of the full training data.
arXiv Detail & Related papers (2023-11-29T16:23:06Z)
- Towards a General Pre-training Framework for Adaptive Learning in MOOCs [37.570119583573955]
We propose a unified framework based on data observation and learning style analysis, properly leveraging heterogeneous learning elements.
We find that course structures, text, and knowledge are helpful for modeling and inherently coherent with students' non-sequential learning behaviors.
arXiv Detail & Related papers (2022-07-18T13:18:39Z)
- Can Population-based Engagement Improve Personalisation? A Novel Dataset and Experiments [21.12546768556595]
VLE is a novel dataset that consists of content- and video-based features extracted from publicly available scientific video lectures.
Our experimental results indicate that the newly proposed VLE dataset enables building context-agnostic engagement prediction models.
Experiments in combining the built model with a personalising algorithm show promising improvements in addressing the cold-start problem encountered in educational recommenders.
arXiv Detail & Related papers (2022-06-22T15:53:24Z)
- Self-supervised Co-training for Video Representation Learning [103.69904379356413]
We investigate the benefit of adding semantic-class positives to instance-based Info Noise Contrastive Estimation training.
We propose a novel self-supervised co-training scheme to improve the popular InfoNCE loss.
We evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval.
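For orientation, the generic InfoNCE objective that the co-training scheme builds on can be written in a few lines; this bare-bones version treats matching views as the only positives and omits the paper's semantic-class positives and cross-view co-training machinery.

```python
# Bare-bones InfoNCE over two augmented views of the same batch; the
# positives sit on the diagonal of the similarity matrix.  This omits
# the paper's co-training and semantic-class positives.
import torch
import torch.nn.functional as F

def info_nce(z_i, z_j, temperature=0.07):
    z_i, z_j = F.normalize(z_i, dim=1), F.normalize(z_j, dim=1)
    logits = z_i @ z_j.t() / temperature     # pairwise cosine similarities
    targets = torch.arange(z_i.size(0))      # positive pairs on the diagonal
    return F.cross_entropy(logits, targets)

z_i, z_j = torch.randn(32, 128), torch.randn(32, 128)
print(info_nce(z_i, z_j).item())
```

Adding semantic-class positives would amount to treating other same-class samples in the batch as additional correct targets rather than negatives.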
arXiv Detail & Related papers (2020-10-19T17:59:01Z)
- Enhancing Dialogue Generation via Multi-Level Contrastive Learning [57.005432249952406]
We propose a multi-level contrastive learning paradigm to model the fine-grained quality of the responses with respect to the query.
A Rank-aware Calibration (RC) network is designed to construct the multi-level contrastive optimization objectives.
We build a Knowledge Inference (KI) component to capture the keyword knowledge from the reference during training and exploit such information to encourage the generation of informative words.
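One minimal way to realise a rank-aware contrastive objective over responses of differing quality is a pairwise margin loss that pushes a better response's score above a worse one's; the scoring setup and margin below are illustrative and not the paper's RC network.

```python
# Illustrative pairwise margin loss over responses at different quality
# levels (higher level = better response); not the paper's RC network.
import torch
import torch.nn.functional as F

def rank_margin_loss(scores, levels, margin=0.2):
    """scores: (n,) model scores; levels: (n,) integer quality ranks."""
    loss, pairs = scores.new_zeros(()), 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if levels[i] > levels[j]:        # response i should outrank j
                loss = loss + F.relu(margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)

scores = torch.tensor([0.9, 0.4, 0.1])      # e.g. gold, retrieved, random
levels = torch.tensor([2, 1, 0])
print(rank_margin_loss(scores, levels).item())
```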
arXiv Detail & Related papers (2020-09-19T02:41:04Z)
- Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
The component models are referred to as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.
We conduct extensive experiments and demonstrate that our method is able to achieve superior performances compared to state-of-the-art methods.
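The aggregation step can be pictured as a weighted knowledge-distillation loss over several expert teachers plus the usual hard-label term; the uniform weights, temperature, and mixing coefficient below are illustrative assumptions, and LFME's self-paced scheduling of these weights is omitted.

```python
# Weighted multi-teacher distillation: soft KL terms from each 'Expert'
# plus the usual hard-label loss.  Uniform weights and temperature are
# illustrative; LFME's self-paced weighting is omitted.
import torch
import torch.nn.functional as F

def multi_teacher_kd(student_logits, teacher_logits_list, labels,
                     weights=None, T=2.0, alpha=0.5):
    if weights is None:
        weights = [1.0 / len(teacher_logits_list)] * len(teacher_logits_list)
    kd = sum(
        w * F.kl_div(F.log_softmax(student_logits / T, dim=1),
                     F.softmax(t / T, dim=1),
                     reduction="batchmean") * T * T
        for w, t in zip(weights, teacher_logits_list)
    )
    return alpha * kd + (1 - alpha) * F.cross_entropy(student_logits, labels)

student = torch.randn(8, 5)
teachers = [torch.randn(8, 5) for _ in range(3)]
labels = torch.randint(0, 5, (8,))
print(multi_teacher_kd(student, teachers, labels).item())
```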
arXiv Detail & Related papers (2020-01-06T12:57:36Z)