M2LADS: A System for Generating MultiModal Learning Analytics Dashboards
in Open Education
- URL: http://arxiv.org/abs/2305.12561v1
- Date: Sun, 21 May 2023 20:22:38 GMT
- Title: M2LADS: A System for Generating MultiModal Learning Analytics Dashboards
in Open Education
- Authors: Álvaro Becerra, Roberto Daza, Ruth Cobos, Aythami Morales, Mutlu
Cukurova, Julian Fierrez
- Abstract summary: M2LADS supports the integration and visualization of multimodal data recorded in MOOCs in the form of Web-based Dashboards.
Based on the edBB platform, the multimodal data gathered contains biometric and behavioral signals.
M2LADS provides opportunities to capture learners' holistic experience during their interactions with the MOOC.
- Score: 15.30924350440346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article, we present a Web-based System called M2LADS, which supports
the integration and visualization of multimodal data recorded in learning
sessions in a MOOC in the form of Web-based Dashboards. Based on the edBB
platform, the multimodal data gathered contains biometric and behavioral
signals, including electroencephalogram (EEG) data to measure learners'
cognitive attention, heart rate for affective measures, and visual attention
from the video recordings. Additionally, learners' static background data and their learning
performance measures are tracked using LOGCE and MOOC tracking logs
respectively, and both are included in the Web-based System. M2LADS provides
opportunities to capture learners' holistic experience during their
interactions with the MOOC, which can in turn be used to improve their learning
outcomes through feedback visualizations and interventions, as well as to
enhance learning analytics models and improve the open content of the MOOC.
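A system like this must align signals captured at different sampling rates (EEG, heart rate, video-based attention) onto a common timeline before they can be visualized together on a dashboard. A minimal sketch of such time-window alignment, where all stream names, sampling rates, and values are hypothetical and not taken from the M2LADS implementation:

```python
from statistics import mean

# Hypothetical sample streams as (timestamp_seconds, value) pairs.
eeg_attention = [(0, 0.62), (5, 0.71), (10, 0.55), (15, 0.80)]
heart_rate = [(0, 72), (5, 75), (10, 74), (15, 90)]

def bucket(samples, width):
    """Group (t, value) samples into fixed-width time windows and average them."""
    grouped = {}
    for t, v in samples:
        grouped.setdefault(t // width, []).append(v)
    return {w: mean(vs) for w, vs in grouped.items()}

def merge_streams(width=10, **streams):
    """Align several bucketed streams on their shared time windows."""
    bucketed = {name: bucket(s, width) for name, s in streams.items()}
    windows = sorted(set().union(*(b.keys() for b in bucketed.values())))
    return [
        {"window": w, **{name: b.get(w) for name, b in bucketed.items()}}
        for w in windows
    ]

# One row per 10-second window, ready for a dashboard time-series plot.
rows = merge_streams(eeg_attention=eeg_attention, heart_rate=heart_rate)
for row in rows:
    print(row)
```

In practice a dashboard backend would also handle missing windows and clock skew between recording devices; this sketch only shows the core alignment step.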
Related papers
- Investigating Memorization in Video Diffusion Models [58.70363256771246]
Diffusion models, widely used for image and video generation, face a significant limitation: the risk of memorizing and reproducing training data during inference.
We first formally define the two types of memorization in VDMs (content memorization and motion memorization) in a practical way.
We then introduce new metrics specifically designed to separately assess content and motion memorization in VDMs.
arXiv Detail & Related papers (2024-10-29T02:34:06Z) - Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs [56.391404083287235]
We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach.
Our study uses LLMs and visual instruction tuning as an interface to evaluate various visual representations.
We provide model weights, code, supporting tools, datasets, and detailed instruction-tuning and evaluation recipes.
arXiv Detail & Related papers (2024-06-24T17:59:42Z) - VAAD: Visual Attention Analysis Dashboard applied to e-Learning [12.849976246445646]
The tool is named VAAD, an acronym for Visual Attention Analysis Dashboard.
VAAD holds the potential to offer valuable insights into online learning behaviors from both descriptive and predictive perspectives.
arXiv Detail & Related papers (2024-05-30T14:27:40Z) - Informed Meta-Learning [55.2480439325792]
Meta-learning and informed ML stand out as two approaches for incorporating prior knowledge into ML pipelines.
We formalise a hybrid paradigm, informed meta-learning, facilitating the incorporation of priors from unstructured knowledge representations.
We demonstrate the potential benefits of informed meta-learning in improving data efficiency, robustness to observational noise and task distribution shifts.
arXiv Detail & Related papers (2024-02-25T15:08:37Z) - Personalized Federated Learning with Contextual Modulation and
Meta-Learning [2.7716102039510564]
Federated learning has emerged as a promising approach for training machine learning models on decentralized data sources.
We propose a novel framework that combines federated learning with meta-learning techniques to enhance both efficiency and generalization capabilities.
arXiv Detail & Related papers (2023-12-23T08:18:22Z) - Enhancing E-Learning System Through Learning Management System (LMS)
Technologies: Reshape The Learner Experience [0.0]
This e-learning system can be adapted to a range of educational needs, offering chat, virtual classes, supportive resources for students, individual and group monitoring, and assessment through the LMS for maximum efficiency.
arXiv Detail & Related papers (2023-09-01T02:19:08Z) - Multimodal Imbalance-Aware Gradient Modulation for Weakly-supervised
Audio-Visual Video Parsing [107.031903351176]
Weakly-supervised audio-visual video parsing (WS-AVVP) aims to localize the temporal extents of audio, visual, and audio-visual event instances and to identify their corresponding event categories using only video-level category labels for training.
arXiv Detail & Related papers (2023-07-05T05:55:10Z) - MATT: Multimodal Attention Level Estimation for e-learning Platforms [16.407885871027887]
This work presents a new multimodal system for remote attention level estimation based on multimodal face analysis.
Our multimodal approach uses different parameters and signals obtained from the behavior and physiological processes that have been related to modeling cognitive load.
The experimental framework uses the mEBAL database, a public multimodal database for attention level estimation collected in an e-learning environment.
arXiv Detail & Related papers (2023-01-22T18:18:20Z) - When CNN Meet with ViT: Towards Semi-Supervised Learning for Multi-Class
Medical Image Semantic Segmentation [13.911947592067678]
In this paper, an advanced consistency-aware pseudo-label-based self-ensembling approach is presented.
Our framework consists of a feature-learning module which is enhanced by ViT and CNN mutually, and a guidance module which is robust for consistency-aware purposes.
Experimental results show that the proposed method achieves state-of-the-art performance on a public benchmark data set.
arXiv Detail & Related papers (2022-08-12T18:21:22Z) - A Unified Continuous Learning Framework for Multi-modal Knowledge
Discovery and Pre-training [73.7507857547549]
We propose to unify knowledge discovery and multi-modal pre-training in a continuous learning framework.
For knowledge discovery, a pre-trained model is used to identify cross-modal links on a graph.
For model pre-training, the knowledge graph is used as the external knowledge to guide the model updating.
arXiv Detail & Related papers (2022-06-11T16:05:06Z) - Relational Graph Learning on Visual and Kinematics Embeddings for
Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online approach of multi-modal graph network (i.e., MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.