M2LADS Demo: A System for Generating Multimodal Learning Analytics Dashboards
- URL: http://arxiv.org/abs/2502.15363v2
- Date: Fri, 14 Mar 2025 11:33:44 GMT
- Title: M2LADS Demo: A System for Generating Multimodal Learning Analytics Dashboards
- Authors: Alvaro Becerra, Roberto Daza, Ruth Cobos, Aythami Morales, Julian Fierrez,
- Abstract summary: We present a web-based system called M2LADS ("System for Generating Multimodal Learning Analytics Dashboards"), designed to integrate, synchronize, visualize, and analyze multimodal data recorded during computer-based learning sessions with biosensors. This system presents a range of biometric and behavioral data on web-based dashboards, providing detailed insights into various physiological and activity-based metrics.
- Score: 13.616038134322435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a demonstration of a web-based system called M2LADS ("System for Generating Multimodal Learning Analytics Dashboards"), designed to integrate, synchronize, visualize, and analyze multimodal data recorded during computer-based learning sessions with biosensors. This system presents a range of biometric and behavioral data on web-based dashboards, providing detailed insights into various physiological and activity-based metrics. The multimodal data visualized include electroencephalogram (EEG) data for assessing attention and brain activity, heart rate metrics, eye-tracking data to measure visual attention, webcam video recordings, and activity logs of the monitored tasks. M2LADS aims to assist data scientists in two key ways: (1) by providing a comprehensive view of participants' experiences, displaying all data categorized by the activities in which participants are engaged, and (2) by synchronizing all biosignals and videos, facilitating easier data relabeling if any activity information contains errors.
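To make the synchronization and activity-based categorization described in the abstract concrete, the sketch below shows one way such a pipeline could look. It is an illustrative assumption, not M2LADS code: the column names, timestamp tolerances, and helper functions are invented for the example. It simply aligns biosignal streams on a shared timeline with pandas and labels each sample with the activity-log interval that contains it.

```python
# Illustrative sketch only (not the M2LADS implementation): align multimodal
# biosignal streams on a common timeline and label each sample with the
# activity recorded in the session's activity log. Column names and
# tolerances are assumptions made for this example.
import pandas as pd


def synchronize_streams(eeg: pd.DataFrame, heart: pd.DataFrame,
                        gaze: pd.DataFrame) -> pd.DataFrame:
    """Merge heart-rate and eye-tracking samples onto the EEG timeline
    using nearest-timestamp matching."""
    merged = eeg.sort_values("timestamp")
    for stream, tolerance in ((heart, "500ms"), (gaze, "100ms")):
        merged = pd.merge_asof(merged, stream.sort_values("timestamp"),
                               on="timestamp", direction="nearest",
                               tolerance=pd.Timedelta(tolerance))
    return merged


def label_by_activity(signals: pd.DataFrame,
                      activity_log: pd.DataFrame) -> pd.DataFrame:
    """Assign each synchronized sample the activity whose [start, end)
    interval contains its timestamp; activity_log is assumed to have
    columns 'start', 'end', and 'activity'."""
    log = activity_log.sort_values("start").rename(columns={"start": "timestamp"})
    labeled = pd.merge_asof(signals.sort_values("timestamp"), log,
                            on="timestamp", direction="backward")
    # Keep only samples that actually fall inside the matched interval.
    return labeled[labeled["timestamp"] < labeled["end"]].drop(columns="end")
```

Because every stream ends up on one shared timeline, correcting an erroneous activity boundary only means editing the corresponding interval in the activity log and re-running the labeling step, which is the kind of relabeling workflow the abstract highlights.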
Related papers
- WearableMil: An End-to-End Framework for Military Activity Recognition and Performance Monitoring [7.130450173185638]
This paper introduces an end-to-end framework for preprocessing, analyzing, and recognizing activities from wearable data in military training contexts.
We use data from 135 soldiers wearing Garmin-55 smartwatches over six months, comprising over 15 million minutes of recordings.
Our framework addresses missing data through physiologically-informed methods, reducing unknown sleep states from 40.38% to 3.66%.
arXiv Detail & Related papers (2024-10-07T19:35:15Z) - ViKL: A Mammography Interpretation Framework via Multimodal Aggregation of Visual-knowledge-linguistic Features [54.37042005469384]
We announce MVKL, the first multimodal mammography dataset encompassing multi-view images, detailed manifestations and reports.
Based on this dataset, we focus on the challenging task of unsupervised pretraining.
We propose ViKL, a framework that synergizes Visual, Knowledge, and Linguistic features.
arXiv Detail & Related papers (2024-09-24T05:01:23Z) - A Comprehensive Methodological Survey of Human Activity Recognition Across Diverse Data Modalities [2.916558661202724]
Human Activity Recognition (HAR) systems aim to understand human behaviour and assign a label to each action.
HAR can leverage various data modalities, such as RGB images and video, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, and radar signals.
This paper presents a comprehensive survey of the latest advancements in HAR from 2014 to 2024.
arXiv Detail & Related papers (2024-09-15T10:04:44Z) - Mosaic of Modalities: A Comprehensive Benchmark for Multimodal Graph Learning [36.75510196380185]
We introduce the Multimodal Graph Benchmark (MM-GRAPH), a pioneering benchmark that incorporates both visual and textual information into graph learning tasks.
MM-GRAPH extends beyond existing text-attributed graph benchmarks, offering a more comprehensive evaluation framework for multimodal graph learning.
This study offers valuable insights into the challenges and opportunities of integrating visual data into graph learning.
arXiv Detail & Related papers (2024-06-24T05:14:09Z) - Single-Shot and Multi-Shot Feature Learning for Multi-Object Tracking [55.13878429987136]
We propose a simple yet effective two-stage feature learning paradigm to jointly learn single-shot and multi-shot features for different targets.
Our method achieves significant improvements on the MOT17 and MOT20 datasets while reaching state-of-the-art performance on the DanceTrack dataset.
arXiv Detail & Related papers (2023-11-17T08:17:49Z) - Two-stream Multi-level Dynamic Point Transformer for Two-person Interaction Recognition [45.0131792009999]
We propose a point cloud-based network named Two-stream Multi-level Dynamic Point Transformer for two-person interaction recognition.
Our model addresses the challenge of recognizing two-person interactions by incorporating local-region spatial information, appearance information, and motion information.
Our network outperforms state-of-the-art approaches in most standard evaluation settings.
arXiv Detail & Related papers (2023-07-22T03:51:32Z) - M2LADS: A System for Generating MultiModal Learning Analytics Dashboards
in Open Education [15.30924350440346]
M2LADS supports the integration and visualization of multimodal data recorded in MOOCs in the form of Web-based Dashboards.
Based on the edBB platform, the multimodal data gathered contains biometric and behavioral signals.
M2LADS provides opportunities to capture learners' holistic experience during their interactions with the MOOC.
arXiv Detail & Related papers (2023-05-21T20:22:38Z) - MATT: Multimodal Attention Level Estimation for e-learning Platforms [16.407885871027887]
This work presents a new multimodal system for remote attention level estimation based on multimodal face analysis.
Our multimodal approach uses various parameters and signals obtained from behavioral and physiological processes that have been associated with cognitive load.
The experimental framework uses the mEBAL database, a public multimodal database for attention level estimation collected in an e-learning environment.
arXiv Detail & Related papers (2023-01-22T18:18:20Z) - A Deep Learning Approach for the Segmentation of Electroencephalography
Data in Eye Tracking Applications [56.458448869572294]
We introduce DETRtime, a novel framework for time-series segmentation of EEG data.
Our end-to-end deep learning-based framework brings advances in Computer Vision to the forefront.
Our model generalizes well in the task of EEG sleep stage segmentation.
arXiv Detail & Related papers (2022-06-17T10:17:24Z) - Unsupervised Person Re-Identification with Wireless Positioning under
Weak Scene Labeling [131.18390399368997]
We propose to explore unsupervised person re-identification with both visual data and wireless positioning trajectories under weak scene labeling.
Specifically, we propose a novel unsupervised multimodal training framework (UMTF), which models the complementarity of visual data and wireless information.
Our UMTF contains a multimodal data association strategy (MMDA) and a multimodal graph neural network (MMGN).
arXiv Detail & Related papers (2021-10-29T08:25:44Z) - Relational Graph Learning on Visual and Kinematics Embeddings for
Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online multi-modal relational graph network (MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z) - End-to-End Models for the Analysis of System 1 and System 2 Interactions
based on Eye-Tracking Data [99.00520068425759]
We propose a computational method, within a modified visual version of the well-known Stroop test, for the identification of different tasks and potential conflict events.
A statistical analysis shows that the selected variables can characterize the variation of attentive load within different scenarios.
We show that Machine Learning techniques allow us to distinguish between different tasks with good classification accuracy.
arXiv Detail & Related papers (2020-02-03T17:46:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.