Exploring Interactions and Regulations in Collaborative Learning: An
Interdisciplinary Multimodal Dataset
- URL: http://arxiv.org/abs/2210.05419v1
- Date: Tue, 11 Oct 2022 12:56:36 GMT
- Title: Exploring Interactions and Regulations in Collaborative Learning: An
Interdisciplinary Multimodal Dataset
- Authors: Yante Li, Yang Liu, Khánh Nguyen, Henglin Shi, Eija Vuorenmaa, Sanna
Järvelä, and Guoying Zhao
- Abstract summary: This paper introduces a new multimodal dataset with cognitive and emotional triggers to explore how regulations affect interactions during the collaborative process.
A learning task with intentional interventions is designed and assigned to high school students (N=81, aged 15 years on average).
Analysis of annotated emotions, body gestures, and their interactions indicates that our dataset with designed treatments could effectively examine moments of regulation in collaborative learning.
- Score: 40.193998859310156
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Collaborative learning is an educational approach that enhances learning
through shared goals and working together. Interaction and regulation are two
essential factors related to the success of collaborative learning. Since the
information from various modalities can reflect the quality of collaboration, a
new multimodal dataset with cognitive and emotional triggers is introduced in
this paper to explore how regulations affect interactions during the
collaborative process. Specifically, a learning task with intentional
interventions is designed and assigned to high school students (N=81, aged 15
years on average). Multimodal signals, including video, Kinect, audio, and
physiological data, are collected and exploited to study regulations in
collaborative learning in terms of individual-participant-single-modality,
individual-participant-multiple-modality, and
multiple-participant-multiple-modality. Analysis of annotated emotions, body
gestures, and their interactions indicates that our multimodal dataset with
designed treatments could effectively examine moments of regulation in
collaborative learning. In addition, preliminary experiments based on baseline
models suggest that the dataset provides a challenging in-the-wild scenario,
which could further contribute to the fields of education and affective
computing.
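The three analysis granularities named in the abstract suggest a natural data layout. Below is a minimal, hypothetical sketch of how one session could be organized for such analyses; the stream names, feature shapes, and fusion-by-concatenation are illustrative assumptions, not the authors' released data format.

```python
# Hypothetical layout for the three analysis granularities; all names and
# shapes are assumptions, not the dataset's actual release format.
from dataclasses import dataclass, field
from typing import Dict
import numpy as np

@dataclass
class ParticipantStreams:
    """Per-participant recordings, assumed time-aligned as (time, features)."""
    video: np.ndarray          # e.g., frame-level face/gesture features
    kinect: np.ndarray         # e.g., 3D joint positions
    audio: np.ndarray          # e.g., prosodic features
    physiological: np.ndarray  # e.g., EDA / heart-rate signals

@dataclass
class Session:
    """One collaborative-learning group session."""
    participants: Dict[str, ParticipantStreams] = field(default_factory=dict)

def single_participant_single_modality(s: Session, pid: str, modality: str):
    """Level 1: one participant, one signal (e.g., participant A's audio)."""
    return getattr(s.participants[pid], modality)

def single_participant_multi_modality(s: Session, pid: str):
    """Level 2: fuse all signals of one participant along the feature axis."""
    p = s.participants[pid]
    return np.concatenate([p.video, p.kinect, p.audio, p.physiological], axis=1)

def multi_participant_multi_modality(s: Session):
    """Level 3: fused signals for every participant in the group."""
    return {pid: single_participant_multi_modality(s, pid) for pid in s.participants}
```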
Related papers
- On the Comparison between Multi-modal and Single-modal Contrastive Learning [50.74988548106031]
We introduce a theoretical foundation for understanding the differences between multi-modal and single-modal contrastive learning.
We identify the signal-to-noise ratio (SNR) as the critical factor that impacts generalizability in downstream tasks for both multi-modal and single-modal contrastive learning.
Our analysis provides a unified framework that can characterize the optimization and generalization of both single-modal and multi-modal contrastive learning.
arXiv Detail & Related papers (2024-11-05T06:21:17Z)
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning.
We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
- What to align in multimodal contrastive learning? [7.7439394183358745]
We introduce CoMM, a Contrastive MultiModal learning strategy that enables communication between modalities in a single multimodal space.
Our theoretical analysis shows that shared, synergistic and unique terms of information naturally emerge from this formulation, allowing us to estimate multimodal interactions beyond redundancy.
Empirically, CoMM learns complex multimodal interactions and achieves state-of-the-art results on six multimodal benchmarks.
arXiv Detail & Related papers (2024-09-11T16:42:22Z)
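For context on what aligning modalities "in a single multimodal space" can look like in practice, here is a minimal, generic sketch of an InfoNCE-style contrastive objective over paired embeddings from two modalities. It illustrates the general technique only; it is not CoMM's actual objective.

```python
# Generic InfoNCE-style alignment of two modalities in a shared embedding
# space; an illustration of the broad idea, not CoMM's exact loss.
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: (batch, dim) embeddings of paired samples from two modalities."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature       # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0))        # the i-th pair is the positive
    # Symmetric cross-entropy: align a -> b and b -> a
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```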
- Distributed Continual Learning [12.18012293738896]
We introduce a mathematical framework capturing the essential aspects of distributed continual learning.
We identify three modes of information exchange: data instances, full model parameters, and modular (partial) model parameters.
Among our key findings: sharing parameters is more efficient than sharing data as tasks become more complex.
arXiv Detail & Related papers (2024-05-23T21:24:26Z)
- Beyond Unimodal Learning: The Importance of Integrating Multiple Modalities for Lifelong Learning [23.035725779568587]
We study the role and interactions of multiple modalities in mitigating forgetting in deep neural networks (DNNs).
Our findings demonstrate that leveraging multiple views and complementary information from multiple modalities enables the model to learn more accurate and robust representations.
We propose a method for integrating and aligning the information from different modalities by utilizing the relational structural similarities between the data points in each modality.
arXiv Detail & Related papers (2024-05-04T22:02:58Z)
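The "relational structural similarities" idea above can be sketched as matching within-batch similarity matrices across modalities, in the spirit of relational distillation. The toy version below is an assumption based on the abstract's wording, not the paper's actual method.

```python
# Align modalities via relational structure: penalize differences between the
# within-batch pairwise-similarity matrices of each modality's embeddings.
# A toy sketch inferred from the abstract, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def relational_alignment_loss(z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
    """z_a, z_b: (batch, dim) embeddings of the same batch in two modalities."""
    sim_a = F.normalize(z_a, dim=1) @ F.normalize(z_a, dim=1).t()
    sim_b = F.normalize(z_b, dim=1) @ F.normalize(z_b, dim=1).t()
    return F.mse_loss(sim_a, sim_b)  # match the two similarity structures
```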
- Learning Unseen Modality Interaction [54.23533023883659]
Multimodal learning assumes all modality combinations of interest are available during training to learn cross-modal correspondences.
We pose the problem of unseen modality interaction and introduce a first solution.
It exploits a module that projects the multidimensional features of different modalities into a common space with rich information preserved.
arXiv Detail & Related papers (2023-06-22T10:53:10Z)
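A minimal sketch of the kind of projection module this entry describes, mapping features of different dimensionalities into one common space, might look as follows; the modality names and layer sizes are placeholders, not the paper's architecture.

```python
# Project variably-sized modality features into one common space.
# Modality names, dimensions, and head design are placeholder assumptions.
import torch
import torch.nn as nn

class ModalityProjector(nn.Module):
    def __init__(self, modality_dims: dict, common_dim: int = 256):
        super().__init__()
        # One projection head per modality, all mapping into common_dim
        self.heads = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d, common_dim), nn.ReLU(),
                                nn.Linear(common_dim, common_dim))
            for name, d in modality_dims.items()
        })

    def forward(self, features: dict) -> dict:
        """features: {modality: (batch, dim) tensor} -> common-space tensors."""
        return {name: self.heads[name](x) for name, x in features.items()}

# Usage: project audio and video features into a shared 256-d space
proj = ModalityProjector({"audio": 128, "video": 512})
out = proj({"audio": torch.randn(4, 128), "video": torch.randn(4, 512)})
```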
- Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications [90.6849884683226]
We study the challenge of interaction quantification in a semi-supervised setting with only labeled unimodal data.
Our key contribution is the derivation of lower and upper bounds based on a precise information-theoretic definition of interactions.
We show how these theoretical results can be used to estimate multimodal model performance, guide data collection, and select appropriate multimodal models for various tasks.
arXiv Detail & Related papers (2023-06-07T15:44:53Z)
- Behavior Priors for Efficient Reinforcement Learning [97.81587970962232]
We consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors.
We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual information and curiosity based objectives.
We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains.
arXiv Detail & Related papers (2020-10-27T13:17:18Z)
- Recurrent Interaction Network for Jointly Extracting Entities and Classifying Relations [45.79634026256055]
We design a multi-task learning model that allows interactions to be learned dynamically.
Empirical studies on two real-world datasets confirm the superiority of the proposed model.
arXiv Detail & Related papers (2020-05-01T01:03:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.