A Toolbox for Modelling Engagement with Educational Videos
- URL: http://arxiv.org/abs/2401.05424v1
- Date: Sat, 30 Dec 2023 21:10:55 GMT
- Title: A Toolbox for Modelling Engagement with Educational Videos
- Authors: Yuxiang Qiu, Karim Djemili, Denis Elezi, Aaneel Shalman, María
Pérez-Ortiz, Emine Yilmaz, John Shawe-Taylor and Sahan Bulathwela
- Abstract summary: This work presents the PEEKC dataset and the TrueLearn Python library, which together provide a dataset and a series of online learner state models.
The dataset contains a large number of AI-related educational videos, which are of interest for building and validating AI-specific educational recommenders.
- Score: 21.639063299289607
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: With the advancement and utility of Artificial Intelligence (AI),
personalising education to a global population could be a cornerstone of new
educational systems in the future. This work presents the PEEKC dataset and the
TrueLearn Python library, which together provide a dataset and a series of
online learner state models essential for facilitating research on learner
engagement modelling. The TrueLearn family of models was designed following the
"open learner" concept, using humanly intuitive user representations. This
family of scalable, online models also helps end-users visualise the learner
models, which may in the future facilitate user interaction with their
models/recommenders. The extensive documentation and coding examples make the
library highly accessible to both machine learning developers and educational
data mining and learning analytics practitioners. The experiments show the
utility of both the dataset and the library with predictive performance
significantly exceeding comparative baseline models. The dataset contains a
large number of AI-related educational videos, which are of interest for
building and validating AI-specific educational recommenders.
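The TrueLearn family maintains a per-topic estimate of each learner's state and updates it online from engagement signals. The sketch below illustrates that style of online Gaussian skill update in a heavily simplified form; the class name, parameters, and update rule are hypothetical and do not reflect the actual TrueLearn API.

```python
import math

class OnlineLearnerModel:
    """Toy online learner-state model in the spirit of TrueLearn:
    one Gaussian skill estimate per knowledge topic, updated after
    each engagement event. Illustrative sketch only."""

    def __init__(self, beta: float = 0.5):
        self.skills = {}          # topic -> (mean, variance)
        self.beta2 = beta * beta  # performance noise

    def predict(self, topic: str, difficulty: float) -> float:
        """Probability of engagement: chance skill exceeds difficulty."""
        mu, var = self.skills.get(topic, (0.0, 1.0))
        z = (mu - difficulty) / math.sqrt(var + self.beta2)
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def update(self, topic: str, difficulty: float, engaged: bool) -> None:
        """Kalman-style online update toward the observed outcome."""
        mu, var = self.skills.get(topic, (0.0, 1.0))
        p = self.predict(topic, difficulty)
        y = 1.0 if engaged else 0.0
        gain = var / (var + self.beta2)   # how much to trust the observation
        mu += gain * (y - p)              # move skill toward the evidence
        var *= (1.0 - 0.5 * gain)         # shrink uncertainty
        self.skills[topic] = (mu, var)
```

Because the state is just a mean and variance per topic, it can be rendered directly to the learner, which is the "open learner" idea the abstract describes.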
Related papers
- EduNLP: Towards a Unified and Modularized Library for Educational Resources [78.8523961816045]
We present a unified, modularized, and extensive library, EduNLP, focusing on educational resource understanding.
In the library, we decouple the whole workflow into four key modules with consistent interfaces, covering data configuration, processing, model implementation, and model evaluation.
For the current version, we primarily provide 10 typical models from four categories, and 5 common downstream evaluation tasks in the education domain across 8 subjects.
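The decoupled four-module workflow can be sketched as four small interfaces composed into one pipeline. All class names below are hypothetical stand-ins for illustration and do not reflect EduNLP's actual API.

```python
# Illustrative four-module workflow: data configuration, processing,
# model implementation, and evaluation, each behind its own interface.

class DataConfig:
    def __init__(self, items):
        self.items = items          # raw educational resources (text)

class Processor:
    def transform(self, items):
        # e.g. tokenise resource text; here: naive whitespace split
        return [item.lower().split() for item in items]

class Model:
    def predict(self, processed):
        # e.g. score each resource; here: token count as a stand-in
        return [len(tokens) for tokens in processed]

class Evaluator:
    def evaluate(self, predictions, targets):
        correct = sum(p == t for p, t in zip(predictions, targets))
        return correct / len(targets)

def run_pipeline(config, processor, model, evaluator, targets):
    """Chain the four modules; any module can be swapped independently."""
    processed = processor.transform(config.items)
    preds = model.predict(processed)
    return evaluator.evaluate(preds, targets)
```

The benefit of consistent interfaces is exactly this swappability: a different tokeniser or model slots in without touching the other three modules.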
arXiv Detail & Related papers (2024-06-03T12:45:40Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence demonstrating that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
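The core idea of gradient projection can be shown in a few lines: remove from the update direction any component that lies along the forget-data gradients, so the step is (to first order) neutral on the forget set. The plain-list implementation below is an illustrative sketch, not the paper's code.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_out(grad, forget_grads):
    """Return grad with its components along the forget gradients removed."""
    # Build an orthogonal basis of the forget-gradient subspace
    # via Gram-Schmidt, skipping near-zero (dependent) directions.
    basis = []
    for f in forget_grads:
        v = list(f)
        for b in basis:
            coef = dot(v, b) / dot(b, b)
            v = [x - coef * y for x, y in zip(v, b)]
        if dot(v, v) > 1e-12:
            basis.append(v)
    # Subtract the projection of grad onto each basis direction.
    g = list(grad)
    for b in basis:
        coef = dot(g, b) / dot(b, b)
        g = [x - coef * y for x, y in zip(g, b)]
    return g
```

After projection, the returned gradient is orthogonal to every forget-data gradient, so a step along it does not (locally) change the loss on the forget set.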
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - AI-Generated Images as Data Source: The Dawn of Synthetic Era [61.879821573066216]
generative AI has unlocked the potential to create synthetic images that closely resemble real-world photographs.
This paper explores the innovative concept of harnessing these AI-generated images as new data sources.
In contrast to real data, AI-generated data exhibit remarkable advantages, including unmatched abundance and scalability.
arXiv Detail & Related papers (2023-10-03T06:55:19Z) - TrueLearn: A Python Library for Personalised Informational
Recommendations with (Implicit) Feedback [4.575111313202425]
This work describes the TrueLearn Python library, which contains a family of online learning Bayesian models.
For the sake of interpretability and putting the user in control, the TrueLearn library also contains different representations to help end-users visualise the learner models.
arXiv Detail & Related papers (2023-09-20T07:21:50Z) - Synthetic Model Combination: An Instance-wise Approach to Unsupervised
Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
Instead, we are given access to a set of expert models and their predictions, alongside some limited information about the datasets used to train them.
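One simple way to combine experts instance-wise is to weight each expert's prediction by how close the test point lies to a summary of that expert's training data (here, its mean). This is only a rough sketch of the idea under strong assumptions; the paper's Synthetic Model Combination method is more sophisticated.

```python
import math

def instance_wise_ensemble(x, experts, train_means, tau=1.0):
    """Combine expert predictions with per-instance weights.

    experts:     list of callables mapping x -> prediction
    train_means: per-expert mean of its training inputs (the
                 'limited information' we have about each expert)
    tau:         temperature controlling how sharply weights decay
    """
    # Each expert is trusted more where the test point is close
    # to the region its training data covered.
    weights = [math.exp(-abs(x - m) / tau) for m in train_means]
    total = sum(weights)
    weights = [w / total for w in weights]
    return sum(w * f(x) for w, f in zip(weights, experts))
```

At a test point deep inside one expert's training region, that expert dominates the combination; between regions, the prediction blends smoothly.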
arXiv Detail & Related papers (2022-10-11T10:20:31Z) - Can Population-based Engagement Improve Personalisation? A Novel Dataset
and Experiments [21.12546768556595]
VLE is a novel dataset consisting of content- and video-based features extracted from publicly available scientific video lectures.
Our experimental results indicate that the newly proposed VLE dataset leads to building context-agnostic engagement prediction models.
Experiments in combining the built model with a personalising algorithm show promising improvements in addressing the cold-start problem encountered in educational recommenders.
arXiv Detail & Related papers (2022-06-22T15:53:24Z) - What Makes Good Contrastive Learning on Small-Scale Wearable-based
Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
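Contrastive objectives of the kind studied here pull an anchor embedding toward its augmented positive and away from negatives. The pure-Python InfoNCE-style loss below is a minimal illustration of that objective; CL-HAR itself is a PyTorch library and its actual implementation differs.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: -log softmax of the positive similarity among all pairs."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    # Numerically stable log-sum-exp over positive + negatives.
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)
```

The loss is near zero when the anchor aligns with its positive and is far from all negatives, and grows when a negative looks more similar than the positive.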
arXiv Detail & Related papers (2022-02-12T06:10:15Z) - PEEK: A Large Dataset of Learner Engagement with Educational Videos [20.49299110732228]
We release a large, novel dataset of learners engaging with educational videos in-the-wild.
The dataset, named Personalised Educational Engagement with Knowledge Topics (PEEK), is the first publicly available dataset of this nature.
We believe that granular learner engagement signals in unison with rich content representations will pave the way to building powerful personalization algorithms.
arXiv Detail & Related papers (2021-09-03T11:23:02Z) - Distill on the Go: Online knowledge distillation in self-supervised
learning [1.1470070927586016]
Recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models.
We propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation.
Our results show significant performance gain in the presence of noisy and limited labels.
arXiv Detail & Related papers (2021-04-20T09:59:23Z) - Do we need to go Deep? Knowledge Tracing with Big Data [5.218882272051637]
We use EdNet, the largest student interaction dataset publicly available in the education domain, to understand how accurately both deep and traditional models predict future student performances.
Through extensive experimentation, our work observes that logistic regression models with carefully engineered features outperform deep models.
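A feature-engineered logistic model for knowledge tracing can be tiny: predict the next answer's correctness from summaries of past behaviour, such as prior attempt count and prior success rate on the skill. The features and weights below are illustrative choices, not values fitted on EdNet.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_correct(prior_attempts, prior_success_rate,
                    w_attempts=0.1, w_success=2.0, bias=-1.0):
    """P(next answer correct) from two hand-engineered features.

    prior_attempts:     number of past attempts on this skill
    prior_success_rate: fraction of those attempts answered correctly
    In practice the weights would be fitted by logistic regression
    on historical interaction logs.
    """
    z = bias + w_attempts * prior_attempts + w_success * prior_success_rate
    return sigmoid(z)
```

The appeal of this family of models is exactly what the entry reports: the features carry the domain knowledge, the model stays linear, cheap, and interpretable.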
arXiv Detail & Related papers (2021-01-20T22:40:38Z) - Personalized Education in the AI Era: What to Expect Next? [76.37000521334585]
The objective of personalized learning is to design an effective knowledge acquisition track that matches the learner's strengths and bypasses her weaknesses to meet her desired goal.
In recent years, the boost of artificial intelligence (AI) and machine learning (ML) has unfolded novel perspectives to enhance personalized education.
arXiv Detail & Related papers (2021-01-19T12:23:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.