A First Step in Using Machine Learning Methods to Enhance Interaction Analysis for Embodied Learning Environments
- URL: http://arxiv.org/abs/2405.06203v1
- Date: Fri, 10 May 2024 02:40:24 GMT
- Title: A First Step in Using Machine Learning Methods to Enhance Interaction Analysis for Embodied Learning Environments
- Authors: Joyce Fonteles, Eduardo Davalos, Ashwin T. S., Yike Zhang, Mengxi Zhou, Efrat Ayalon, Alicia Lane, Selena Steinberg, Gabriella Anton, Joshua Danish, Noel Enyedy, Gautam Biswas
- Abstract summary: This study aims to simplify researchers' tasks, using Machine Learning and Multimodal Learning Analytics.
We present an initial case study to determine the feasibility of visually representing students' states, actions, gaze, affect, and movement on a timeline.
The timeline allows us to investigate the alignment of critical learning moments identified by multimodal and interaction analysis.
- Score: 4.349901731099916
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Investigating children's embodied learning in mixed-reality environments, where they collaboratively simulate scientific processes, requires analyzing complex multimodal data to interpret their learning and coordination behaviors. Learning scientists have developed Interaction Analysis (IA) methodologies for analyzing such data, but this requires researchers to watch hours of videos to extract and interpret students' learning patterns. Our study aims to simplify researchers' tasks, using Machine Learning and Multimodal Learning Analytics to support the IA processes. Our study combines machine learning algorithms and multimodal analyses to support and streamline researcher efforts in developing a comprehensive understanding of students' scientific engagement through their movements, gaze, and affective responses in a simulated scenario. To facilitate an effective researcher-AI partnership, we present an initial case study to determine the feasibility of visually representing students' states, actions, gaze, affect, and movement on a timeline. Our case study focuses on a specific science scenario where students learn about photosynthesis. The timeline allows us to investigate the alignment of critical learning moments identified by multimodal and interaction analysis, and uncover insights into students' temporal learning progressions.
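The abstract describes merging students' states, actions, gaze, affect, and movement onto a shared timeline and locating critical learning moments where modalities align. A minimal sketch of that idea, assuming simple timestamped event streams and a 5-second co-occurrence window (the stream names, labels, and window size are illustrative assumptions, not the authors' actual pipeline):

```python
from bisect import insort

def merge_streams(streams):
    """Flatten {modality: [(t, label), ...]} into one time-sorted timeline."""
    timeline = []
    for modality, events in streams.items():
        for t, label in events:
            insort(timeline, (t, modality, label))
    return timeline

def critical_moments(timeline, window=5.0, min_modalities=3):
    """Return start times where at least `min_modalities` distinct
    modalities fire within `window` seconds of each other."""
    moments = []
    for t, _, _ in timeline:
        near = {m for (u, m, _) in timeline if t <= u < t + window}
        if len(near) >= min_modalities:
            moments.append(t)
    return moments

# Toy multimodal streams for one student (hypothetical labels).
streams = {
    "gaze":   [(2.0, "screen"), (12.0, "peer")],
    "affect": [(3.5, "engaged"), (13.0, "confused")],
    "action": [(4.0, "move_avatar"), (20.0, "idle")],
}
timeline = merge_streams(streams)
print(critical_moments(timeline))  # → [2.0]: gaze, affect, and action co-occur near t=2-4s
```

A researcher would then inspect the flagged windows against the video record, rather than watching the full recording end to end.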
Related papers
- Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval-enhancement can be extended to a broader spectrum of machine learning (ML)
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across various ML domains with consistent notation, which the current literature lacks.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
arXiv Detail & Related papers (2024-07-17T20:01:21Z)
- RIGL: A Unified Reciprocal Approach for Tracing the Independent and Group Learning Processes [22.379764500005503]
We propose RIGL, a unified Reciprocal model to trace knowledge states at both the individual and group levels.
In this paper, we introduce a time frame-aware reciprocal embedding module to concurrently model both student and group response interactions.
We design a relation-guided temporal attentive network, comprised of dynamic graph modeling coupled with a temporal self-attention mechanism.
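The temporal self-attention mechanism mentioned above can be sketched in a few lines of NumPy, under assumed shapes: T time frames of d-dimensional student/group-state embeddings, with random stand-in weight matrices (this is a generic scaled dot-product attention illustration, not the RIGL authors' implementation):

```python
import numpy as np

def temporal_self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over the time axis.
    X: (T, d) sequence of per-frame state embeddings."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])            # (T, T) frame-to-frame affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over time steps
    return weights @ V                                # (T, d) temporally mixed states

rng = np.random.default_rng(0)
T, d = 6, 8
X = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = temporal_self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (6, 8)
```

Each output frame is a weighted mixture of all frames, letting the model relate a student's current knowledge state to earlier interactions.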
arXiv Detail & Related papers (2024-06-18T10:16:18Z)
- Harnessing Transparent Learning Analytics for Individualized Support through Auto-detection of Engagement in Face-to-Face Collaborative Learning [3.0184625301151833]
This paper proposes a transparent approach to automatically detect student's individual engagement in the process of collaboration.
The proposed approach can reflect student's individual engagement and can be used as an indicator to distinguish students with different collaborative learning challenges.
arXiv Detail & Related papers (2024-01-03T12:20:28Z)
- Predicting the long-term collective behaviour of fish pairs with deep learning [52.83927369492564]
This study introduces a deep learning model to assess social interactions in the fish species Hemigrammus rhodostomus.
We compare the results of our deep learning approach to experiments and to the results of a state-of-the-art analytical model.
We demonstrate that machine learning models of social interactions can directly compete with their analytical counterparts on subtle experimental observables.
arXiv Detail & Related papers (2023-02-14T05:25:03Z)
- Vision+X: A Survey on Multimodal Learning in the Light of Data [64.03266872103835]
Multimodal machine learning, which incorporates data from various sources, has become an increasingly popular research area.
We analyze the commonness and uniqueness of each data format mainly ranging from vision, audio, text, and motions.
We investigate the existing literature on multimodal learning from both the representation learning and downstream application levels.
arXiv Detail & Related papers (2022-10-05T13:14:57Z)
- A Deep Learning Approach to Analyzing Continuous-Time Systems [20.89961728689037]
We show that deep learning can be used to analyze complex processes.
Our approach relaxes standard assumptions that are implausible for many natural systems.
We demonstrate substantial improvements on behavioral and neuroimaging data.
arXiv Detail & Related papers (2022-09-25T03:02:31Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations: the most frequently used nonverbal cue is speaking activity, the most common computational method is the support vector machine, and the typical interaction setting is a meeting of 3-4 persons sensed with microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
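Contrastive learning pipelines of the kind benchmarked here typically train with an NT-Xent (normalized temperature-scaled cross-entropy) objective. A minimal NumPy sketch of that loss, with illustrative shapes and temperature (this is the generic objective, not CL-HAR's own code):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss. z1, z2: (N, d) embeddings of two augmented
    views of the same N sensor windows."""
    z = np.concatenate([z1, z2])                      # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity via dot product
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # each sample's positive view
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

rng = np.random.default_rng(1)
z1 = rng.standard_normal((4, 16))
loss_random = nt_xent(z1, rng.standard_normal((4, 16)))
loss_aligned = nt_xent(z1, z1 + 0.01 * rng.standard_normal((4, 16)))
print(loss_aligned < loss_random)  # aligned views yield a lower loss
```

Minimizing this loss pulls the two views of each window together while pushing apart embeddings of different windows.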
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- Sharing to learn and learning to share; Fitting together Meta-Learning, Multi-Task Learning, and Transfer Learning: A meta review [4.462334751640166]
This article reviews research studies that combine (two of) these learning algorithms.
Based on the knowledge accumulated from the literature, we hypothesize a generic task-agnostic and model-agnostic learning network.
arXiv Detail & Related papers (2021-11-23T20:41:06Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is the approach that explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.