Leveraging Affect Transfer Learning for Behavior Prediction in an
Intelligent Tutoring System
- URL: http://arxiv.org/abs/2002.05242v2
- Date: Fri, 8 Apr 2022 20:58:23 GMT
- Title: Leveraging Affect Transfer Learning for Behavior Prediction in an
Intelligent Tutoring System
- Authors: Nataniel Ruiz, Hao Yu, Danielle A. Allessio, Mona Jalal, Ajjen Joshi,
Thomas Murray, John J. Magee, Jacob R. Whitehill, Vitaly Ablavsky, Ivon
Arroyo, Beverly P. Woolf, Stan Sclaroff, Margrit Betke
- Abstract summary: We propose a video-based transfer learning approach for predicting problem outcomes of students working with an intelligent tutoring system (ITS).
By analyzing a student's face and gestures, our method predicts the outcome of a student answering a problem in an ITS from a video feed.
- Score: 32.63911260416332
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose a video-based transfer learning approach for
predicting problem outcomes of students working with an intelligent tutoring
system (ITS). By analyzing a student's face and gestures, our method predicts
the outcome of a student answering a problem in an ITS from a video feed. Our
work is motivated by the reasoning that the ability to predict such outcomes
enables tutoring systems to adjust interventions, such as hints and
encouragement, and to ultimately yield improved student learning. We collected
a large labeled dataset of student interactions with an intelligent online math
tutor consisting of 68 sessions, where 54 individual students solved 2,749
problems. The dataset is public and available at
https://www.cs.bu.edu/faculty/betke/research/learning/ . Working with this
dataset, our transfer-learning challenge was to design a representation in the
source domain of pictures obtained "in the wild" for the task of facial
expression analysis, and to transfer this learned representation to the task
of human behavior prediction in the target domain of webcam videos of students in a
classroom environment. We developed a novel facial affect representation and a
user-personalized training scheme that unlocks the potential of this
representation. We designed several variants of a recurrent neural network that
models the temporal structure of video sequences of students solving math
problems. Our final model, named ATL-BP for Affect Transfer Learning for
Behavior Prediction, achieves a relative increase in mean F-score of 50% over
the state-of-the-art method on this new dataset.
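The abstract outlines a two-stage pipeline: a facial affect representation learned from in-the-wild images supplies per-frame features, and a recurrent network over those features predicts the outcome of the problem being solved. The following is a minimal PyTorch sketch of that pipeline under stated assumptions, not the authors' implementation; the ImageNet-pretrained ResNet-18 backbone (standing in for a facial-expression model), the GRU head, the feature dimensions, and the four-way outcome label are all illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

class AffectTransferBehaviorPredictor(nn.Module):
    """Sketch: per-frame affect features from a pretrained backbone,
    followed by a recurrent head over the video sequence."""

    def __init__(self, num_outcomes=4, hidden_size=128):
        super().__init__()
        # Source-domain representation: an ImageNet-pretrained ResNet-18 stands in
        # for a network fine-tuned on in-the-wild facial-expression data.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        feat_dim = backbone.fc.in_features            # 512 for ResNet-18
        backbone.fc = nn.Identity()                    # keep penultimate features
        self.frame_encoder = backbone
        # Target-domain head: a GRU models the temporal structure of the clip.
        self.rnn = nn.GRU(feat_dim, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_outcomes)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) face crops from the webcam feed
        b, t, c, h, w = frames.shape
        feats = self.frame_encoder(frames.reshape(b * t, c, h, w))  # (b*t, feat_dim)
        _, last_hidden = self.rnn(feats.reshape(b, t, -1))           # (1, b, hidden)
        return self.classifier(last_hidden.squeeze(0))               # outcome logits

# Toy usage: a batch of two 16-frame clips at 112x112 resolution.
model = AffectTransferBehaviorPredictor()
print(model(torch.randn(2, 16, 3, 112, 112)).shape)  # torch.Size([2, 4])
```

The user-personalized training scheme described in the abstract (adapting the model to individual students) is not reflected in this sketch.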
Related papers
- Detecting Unsuccessful Students in Cybersecurity Exercises in Two Different Learning Environments [0.37729165787434493]
This paper develops automated tools to predict when a student is having difficulty.
In a potential application, such models can aid instructors in detecting struggling students and providing targeted help.
arXiv Detail & Related papers (2024-08-16T04:57:54Z)
- ClickTree: A Tree-based Method for Predicting Math Students' Performance Based on Clickstream Data [0.0]
We developed ClickTree, a tree-based methodology, to predict student performance in mathematical assignments based on students' clickstream data.
The developed method achieved an AUC of 0.78844 in the Educational Data Mining Cup 2023 and ranked second in the competition.
Students who performed well in answering end-unit assignment problems engaged more with in-unit assignments and answered more problems correctly, while those who struggled had a higher tutoring request rate.
arXiv Detail & Related papers (2024-03-01T23:39:03Z)
- What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z)
- Generalisable Methods for Early Prediction in Interactive Simulations for Education [5.725477071353353]
Classifying students' interaction data in the simulations based on their expected performance has the potential to enable adaptive guidance.
We first measure the students' conceptual understanding through their in-task performance.
Then, we suggest a novel type of features that, starting from clickstream data, encodes both the state of the simulation and the action performed by the student.
arXiv Detail & Related papers (2022-07-04T14:46:56Z)
- A Few-shot Learning Graph Multi-Trajectory Evolution Network for Forecasting Multimodal Baby Connectivity Development from a Baseline Timepoint [53.73316520733503]
We propose a Graph Multi-Trajectory Evolution Network (GmTE-Net), which adopts a teacher-student paradigm.
This is the first teacher-student architecture tailored for brain graph multi-trajectory growth prediction.
arXiv Detail & Related papers (2021-10-06T08:26:57Z)
- Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment to learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z)
- Jointly Modeling Heterogeneous Student Behaviors and Interactions Among Multiple Prediction Tasks [35.15654921278549]
Prediction tasks about students have practical significance for both students and colleges.
In this paper, we focus on modeling heterogeneous behaviors and making multiple predictions together.
We design three motivating behavior prediction tasks based on a real-world dataset collected from a college.
arXiv Detail & Related papers (2021-03-25T02:01:58Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate our approach on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
- Peer-inspired Student Performance Prediction in Interactive Online Question Pools with Graph Neural Network [56.62345811216183]
We propose a novel approach using Graph Neural Networks (GNNs) to achieve better student performance prediction in interactive online question pools.
Specifically, we model the relationship between students and questions using student interactions to construct the student-interaction-question network; a minimal construction sketch appears after this list.
We evaluate the effectiveness of our approach on a real-world dataset consisting of 104,113 mouse trajectories generated in the problem-solving process of over 4000 students on 1631 questions.
arXiv Detail & Related papers (2020-08-04T14:55:32Z)
- Emotion Recognition on large video dataset based on Convolutional Feature Extractor and Recurrent Neural Network [0.2855485723554975]
Our model combines a convolutional neural network (CNN) with a recurrent neural network (RNN) to predict dimensional emotions on video data.
Experiments are performed on publicly available datasets including the largest modern Aff-Wild2 database.
arXiv Detail & Related papers (2020-06-19T14:54:13Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
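Several of the related papers above build structured inputs from raw interaction logs; for example, the peer-inspired GNN work models students and questions as nodes of a student-interaction-question network. The sketch below is a minimal, library-free illustration of assembling such a bipartite graph from interaction records; the record fields, the outcome-averaging edge feature, and the return format are assumptions for illustration only, not that paper's actual data schema.

```python
from collections import defaultdict

def build_student_question_graph(interactions):
    """Aggregate raw (student_id, question_id, outcome) records into a
    bipartite student-question graph with one feature per edge."""
    students, questions = set(), set()
    attempts = defaultdict(list)          # (student, question) -> outcomes
    for student_id, question_id, outcome in interactions:
        students.add(student_id)
        questions.add(question_id)
        attempts[(student_id, question_id)].append(outcome)
    # Edge feature: mean outcome over a student's attempts on one question,
    # which a GNN could consume alongside node features.
    edge_features = {edge: sum(v) / len(v) for edge, v in attempts.items()}
    return students, questions, edge_features

# Toy usage: two students, two questions, binary correctness outcomes.
logs = [("s1", "q1", 1), ("s1", "q2", 0), ("s2", "q1", 1), ("s1", "q2", 1)]
print(build_student_question_graph(logs))
```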