Evaluating Temporal Patterns in Applied Infant Affect Recognition
- URL: http://arxiv.org/abs/2209.03496v1
- Date: Wed, 7 Sep 2022 23:29:15 GMT
- Title: Evaluating Temporal Patterns in Applied Infant Affect Recognition
- Authors: Allen Chang, Lauren Klein, Marcelo R. Rosales, Weiyang Deng, Beth A.
Smith, Maja J. Matarić
- Abstract summary: This paper addresses temporal patterns in affect classification performance in the context of an infant-robot interaction.
We trained infant affect recognition classifiers using both facial and body features.
We conducted an in-depth analysis of our best-performing models to evaluate how performance changed over time.
- Score: 5.312541762281102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Agents must monitor their partners' affective states continuously in order to
understand and engage in social interactions. However, methods for evaluating
affect recognition do not account for changes in classification performance
that may occur during occlusions or transitions between affective states. This
paper addresses temporal patterns in affect classification performance in the
context of an infant-robot interaction, where infants' affective states
contribute to their ability to participate in a therapeutic leg movement
activity. To support robustness to facial occlusions in video recordings, we
trained infant affect recognition classifiers using both facial and body
features. Next, we conducted an in-depth analysis of our best-performing models
to evaluate how performance changed over time as the models encountered missing
data and changing infant affect. During time windows when features were
extracted with high confidence, a unimodal model trained on facial features
achieved the same optimal performance as multimodal models trained on both
facial and body features. However, multimodal models outperformed unimodal
models when evaluated on the entire dataset. Additionally, model performance
was weakest when predicting an affective state transition and improved after
multiple predictions of the same affective state. These findings emphasize the
benefits of incorporating body features in continuous affect recognition for
infants. Our work highlights the importance of evaluating variability in model
performance both over time and in the presence of missing data when applying
affect recognition to social interactions.
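The evaluation strategy described in the abstract can be illustrated with a minimal sketch: a per-frame predictor that falls back to a face-only model when body features are missing (e.g. occluded), and an accuracy metric computed per time window so that dips around missing data or state transitions become visible. The function names and the window size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def predict_affect(face_feats, body_feats, face_model, multi_model):
    """Predict one affect label per frame, falling back to the
    face-only model whenever body features are missing."""
    preds = []
    for face, body in zip(face_feats, body_feats):
        if body is None:            # body features could not be extracted
            preds.append(face_model(face))
        else:                       # both modalities available
            preds.append(multi_model(face, body))
    return preds

def windowed_accuracy(preds, labels, window=30):
    """Accuracy per fixed-size time window; a drop in one window can
    expose occlusions or affective-state transitions that an overall
    accuracy score would hide."""
    accs = []
    for start in range(0, len(preds), window):
        p = np.asarray(preds[start:start + window])
        y = np.asarray(labels[start:start + window])
        accs.append(float((p == y).mean()))
    return accs
```

With stand-in models, `windowed_accuracy` returns one score per window, so performance can be compared across occluded and fully observed segments rather than averaged into a single number.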
Related papers
- Dataset Bias in Human Activity Recognition [57.91018542715725]
This contribution statistically curates the training data to assess to what degree the physical characteristics of humans influence HAR performance.
We evaluate the performance of a state-of-the-art convolutional neural network on two HAR datasets that vary in their sensors, activities, and recording settings for time-series HAR.
arXiv Detail & Related papers (2023-01-19T12:33:50Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO presents an improvement in facial expression recognition performance over six datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Vision-Based Activity Recognition in Children with Autism-Related Behaviors [15.915410623440874]
We demonstrate the use of a region-based computer vision system to help clinicians and parents analyze a child's behavior.
The data is pre-processed by detecting the target child in the video to reduce the impact of background noise.
Motivated by the effectiveness of temporal convolutional models, we propose both light-weight and conventional models capable of extracting action features from video frames.
arXiv Detail & Related papers (2022-08-08T15:12:27Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Temporal Effects on Pre-trained Models for Language Processing Tasks [9.819970078135343]
We present a set of experiments with systems powered by large neural pretrained representations for English to demonstrate that temporal model deterioration is not as big a concern.
It is, however, the case that temporal domain adaptation is beneficial, with better performance for a given time period possible when the system is trained on temporally more recent data.
arXiv Detail & Related papers (2021-11-24T20:44:12Z)
- Efficient Modelling Across Time of Human Actions and Interactions [92.39082696657874]
We argue that the current fixed-sized temporal kernels in 3D convolutional neural networks (CNNs) can be improved to better deal with temporal variations in the input.
We study how we can better distinguish between classes of actions by enhancing their feature differences over different layers of the architecture.
The proposed approaches are evaluated on several benchmark action recognition datasets and show competitive results.
arXiv Detail & Related papers (2021-10-05T15:39:11Z)
- Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting is vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z)
- Multi-modal Affect Analysis using standardized data within subjects in the Wild [8.05417723395965]
We introduce an affective recognition method focusing on facial expression (EXP) and valence-arousal calculation.
Our proposed framework can improve estimation accuracy and robustness effectively.
arXiv Detail & Related papers (2021-07-07T04:18:28Z)
- A Multi-term and Multi-task Analyzing Framework for Affective Analysis in-the-wild [0.2216657815393579]
We introduce an affective recognition method that was submitted to the Affective Behavior Analysis in-the-wild (ABAW) 2020 Contest.
Since affective behaviors have many observable features that have their own time frames, we introduced multiple optimized time windows.
We generated affective recognition models for each time window and ensembled these models together.
arXiv Detail & Related papers (2020-09-29T09:24:29Z)
- Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z)
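Several entries above (the patch attacks on segmentation and crowd counting, and the stereo perturbations used for adversarial data augmentation) build on the idea of small, loss-increasing additive perturbations. A minimal FGSM-style sketch of that idea is below; it is a generic illustration, not any of these papers' specific methods, and the epsilon value and gradient source are assumptions.

```python
import numpy as np

def fgsm_perturbation(grad, epsilon=0.01):
    """Fast-gradient-sign-style additive perturbation: a step of size
    epsilon in the sign direction of the loss gradient w.r.t. the input."""
    return epsilon * np.sign(grad)

def augment_batch(images, grads, epsilon=0.01):
    """Adversarial data augmentation: produce perturbed copies of the
    inputs so a model trained on them becomes more robust to small,
    imperceptible input changes."""
    return images + fgsm_perturbation(grads, epsilon)
```

Training on the output of `augment_batch` alongside the clean images is the augmentation strategy the stereo-perturbation entry reports as improving robustness.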
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.