Introducing CALMED: Multimodal Annotated Dataset for Emotion Detection
in Children with Autism
- URL: http://arxiv.org/abs/2307.13706v1
- Date: Mon, 24 Jul 2023 11:52:05 GMT
- Title: Introducing CALMED: Multimodal Annotated Dataset for Emotion Detection
in Children with Autism
- Authors: Annanda Sousa (NUI Galway), Karen Young (NUI Galway), Mathieu d'Aquin
(Data Science, Knowledge, Reasoning and Engineering, LORIA - NLPKD),
Manel Zarrouk (LIPN), Jennifer Holloway (ASK)
- Abstract summary: Automatic Emotion Detection (ED) aims to build systems to identify users' emotions automatically.
ED systems tend to perform poorly on people with Autism Spectrum Disorder (ASD).
Previous works have created ED systems tailored for children with ASD but did not share the resulting dataset.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic Emotion Detection (ED) aims to build systems to identify users'
emotions automatically. This field has the potential to enhance HCI, creating
an individualised experience for the user. However, ED systems tend to perform
poorly on people with Autism Spectrum Disorder (ASD); hence, there is a need to
create ED systems tailored to how people with autism express emotions. Previous works
have created ED systems tailored for children with ASD but did not share the
resulting dataset. Sharing annotated datasets is essential to enable the
development of more advanced computer models for ED within the research
community. In this paper, we describe our experience establishing a process to
create a multimodal annotated dataset featuring children with a level 1
diagnosis of autism. In addition, we introduce CALMED (Children, Autism,
Multimodal, Emotion, Detection), the resulting multimodal emotion detection
dataset featuring children with autism aged 8-12. CALMED includes audio and
video features extracted from recording files of study sessions with
participants, together with annotations of four target classes provided by
their parents. The generated dataset includes a total of 57,012 examples, with
each example representing a time window of 200ms (0.2s). Our experience and
methods described here, together with the dataset shared, aim to contribute to
future research applications of affective computing in ASD, which has the
potential to create systems to improve the lives of people with ASD.
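To make the example layout concrete, below is a minimal Python sketch of how a session recording might be segmented into 200ms windows and paired with parent-provided annotations. The container names, feature shapes, and class labels are hypothetical illustrations, not the published CALMED schema.

```python
# Hypothetical sketch: segmenting an annotated session into 200 ms examples.
# Names, feature containers, and labels are assumptions, not CALMED's schema.
from dataclasses import dataclass

WINDOW_MS = 200  # each example covers a 200 ms time window, as in the paper

@dataclass
class Example:
    start_ms: int        # window start within the session recording
    audio_feats: list    # acoustic features for this window (assumed shape)
    video_feats: list    # visual features for this window (assumed shape)
    label: str           # one of the four target classes (names assumed)

def windows(session_len_ms, annotations, audio, video):
    """Pair each 200 ms window with its parent-provided annotation.

    `annotations` maps (start_ms, end_ms) intervals to a class label;
    `audio` and `video` map a window start time to its feature vector.
    """
    for start in range(0, session_len_ms - WINDOW_MS + 1, WINDOW_MS):
        label = next(
            (lab for (s, e), lab in annotations.items() if s <= start < e),
            None,
        )
        if label is None:    # skip stretches without an annotation
            continue
        yield Example(start, audio.get(start, []), video.get(start, []), label)

# Toy usage: a 1-second session with two annotated intervals.
ann = {(0, 400): "class_a", (600, 1000): "class_b"}    # hypothetical labels
feats = {t: [0.0] for t in range(0, 1000, WINDOW_MS)}
print(len(list(windows(1000, ann, feats, feats))))     # -> 4 labeled windows
```

As a sanity check on scale, 57,012 examples at 200ms each correspond to roughly 3.2 hours of annotated material.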
Related papers
- A Novel Dataset for Video-Based Autism Classification Leveraging Extra-Stimulatory Behavior
The Video ASD dataset contains video frame convolutional and attention map feature data.
This dataset contains the features of the frames spanning 2,467 videos, for a total of approximately 1.4 million frames.
In addition to providing features, we also test foundation models on this data to showcase how movement noise affects performance.
arXiv Detail & Related papers (2024-09-06T20:11:02Z)
- Hear Me, See Me, Understand Me: Audio-Visual Autism Behavior Recognition
We introduce a novel problem of audio-visual autism behavior recognition.
Social behavior recognition is an essential aspect previously omitted in AI-assisted autism screening research.
We will release our dataset, code, and pre-trained models.
arXiv Detail & Related papers (2024-03-22T22:52:35Z) - Video-Based Autism Detection with Deep Learning [0.0]
We develop a deep learning model that analyzes video clips of children reacting to sensory stimuli.
Results show that our model effectively generalizes and understands key differences in the distinct movements of the children.
arXiv Detail & Related papers (2024-02-26T17:45:00Z)
- Introducing SSBD+ Dataset with a Convolutional Pipeline for detecting Self-Stimulatory Behaviours in Children using raw videos
The authors propose a novel pipelined deep learning architecture to detect certain self-stimulatory behaviors that help in the diagnosis of autism spectrum disorder (ASD).
The proposed pipeline model, targeted at real-time and hands-free automated diagnosis, achieved an overall accuracy of around 81%.
arXiv Detail & Related papers (2023-11-25T16:57:24Z)
- Exploiting the Brain's Network Structure for Automatic Identification of ADHD Subjects
We show that the brain can be modeled as a functional network, and certain properties of these networks differ between ADHD and control subjects.
We train our classifier with 776 subjects and test on 171 subjects provided by The Neuro Bureau for the ADHD-200 challenge (a toy sketch of the graph-feature idea follows this entry).
arXiv Detail & Related papers (2023-06-15T16:22:57Z)
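As a toy illustration of the general idea in the entry above (represent each subject as a graph, compute network-level properties, train a classifier), the sketch below uses synthetic networkx graphs and a scikit-learn classifier. It is not the paper's pipeline; the graph construction and feature choices are assumptions.

```python
# Toy sketch: classify subjects from functional-network properties.
# Graphs and labels here are synthetic; the real work derives networks
# from brain imaging data and uses its own feature set.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def graph_features(G):
    # A few global network properties that could plausibly differ
    # between groups (this feature choice is an assumption).
    return [
        nx.density(G),
        nx.average_clustering(G),
        np.mean([d for _, d in G.degree()]),
    ]

rng = np.random.default_rng(0)
# Synthetic stand-ins: "control" graphs slightly denser than "ADHD" ones.
graphs = [nx.erdos_renyi_graph(60, p, seed=int(s)) for p, s in
          [(0.20, s) for s in rng.integers(0, 1_000, 50)] +
          [(0.12, s) for s in rng.integers(0, 1_000, 50)]]
labels = [0] * 50 + [1] * 50

X = np.array([graph_features(G) for G in graphs])
clf = LogisticRegression().fit(X[::2], labels[::2])   # train on half
print("toy accuracy:", clf.score(X[1::2], labels[1::2]))
```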
- MMASD: A Multimodal Dataset for Autism Intervention Analysis
This work presents MMASD, a novel privacy-preserving, open-source multimodal ASD benchmark dataset.
MMASD includes data from 32 children with ASD, and 1,315 data samples segmented from over 100 hours of intervention recordings.
MMASD aims to assist researchers and therapists in understanding children's cognitive status, monitoring their progress during therapy, and customizing the treatment plan accordingly.
arXiv Detail & Related papers (2023-06-14T05:04:11Z)
- Vision-Based Activity Recognition in Children with Autism-Related Behaviors
We demonstrate the effectiveness of a region-based computer vision system in helping clinicians and parents analyze a child's behavior.
The data is pre-processed by detecting the target child in the video to reduce the impact of background noise.
Motivated by the effectiveness of temporal convolutional models, we propose both light-weight and conventional models capable of extracting action features from video frames (a minimal sketch of a temporal-convolution head follows this entry).
arXiv Detail & Related papers (2022-08-08T15:12:27Z)
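The entry above mentions temporal convolutional models over per-frame features; below is a minimal, generic PyTorch sketch of such a head. Layer sizes, pooling, and the two-class output are illustrative assumptions, not the paper's architecture.

```python
# Minimal temporal-convolution head over per-frame feature vectors.
# Shapes and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TemporalConvHead(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            # Conv1d expects (batch, channels, time); each frame's feature
            # vector is treated as the channel dimension.
            nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time axis
            nn.Flatten(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):              # x: (batch, feat_dim, n_frames)
        return self.net(x)

# Toy forward pass: a batch of 4 clips, 32 frames each.
logits = TemporalConvHead()(torch.randn(4, 512, 32))
print(logits.shape)                    # torch.Size([4, 2])
```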
- Unsupervised Domain Adaptive Learning via Synthetic Data for Person Re-identification
Person re-identification (re-ID) has gained increasing attention due to its widespread applications in video surveillance.
Unfortunately, the mainstream deep learning methods still need a large quantity of labeled data to train models.
In this paper, we develop a data collector to automatically generate synthetic re-ID samples in a computer game, and construct a data labeler to simultaneously annotate them.
arXiv Detail & Related papers (2021-09-12T15:51:41Z)
- AEGIS: A real-time multimodal augmented reality computer vision based system to assist facial expression recognition for individuals with autism spectrum disorder
This paper presents the development of a multimodal augmented reality (AR) system which combines the use of computer vision and deep convolutional neural networks (CNNs).
The proposed system, which we call AEGIS, is an assistive technology deployable on a variety of user devices including tablets, smartphones, video conference systems, or smartglasses.
We leverage both spatial and temporal information to provide an accurate expression prediction, which is then converted into its corresponding visualization and drawn on top of the original video frame (a small overlay sketch follows this entry).
arXiv Detail & Related papers (2020-10-22T17:20:38Z)
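As a small illustration of drawing a predicted expression onto a video frame with OpenCV, the sketch below overlays a label and confidence. The label text, placement, and class name are assumptions; AEGIS's actual visualization is driven by a real CNN prediction.

```python
# Minimal sketch: draw a predicted expression label onto a video frame.
# The layout and class name are illustrative, not AEGIS's actual output.
import cv2
import numpy as np

def annotate_frame(frame, label, confidence):
    """Overlay 'label (xx%)' in the top-left corner of the frame."""
    text = f"{label} ({confidence:.0%})"
    cv2.rectangle(frame, (8, 8), (260, 44), (0, 0, 0), thickness=-1)
    cv2.putText(frame, text, (16, 36), cv2.FONT_HERSHEY_SIMPLEX,
                0.8, (255, 255, 255), 2)
    return frame

# Toy frame and a hypothetical model output:
frame = np.zeros((480, 640, 3), dtype=np.uint8)
annotate_frame(frame, "happy", 0.87)   # "happy" is an assumed class name
```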
- Early Autism Spectrum Disorders Diagnosis Using Eye-Tracking Technology
A lack of money, an absence of qualified specialists, and a low level of trust in correction methods are the main issues that affect timely diagnosis of ASD.
Our team developed an algorithm that predicts the likelihood of ASD from the gaze activity of the child.
arXiv Detail & Related papers (2020-08-21T20:22:55Z)
- Deep Learning for Person Re-identification: A Survey and Outlook
Person re-identification (Re-ID) aims at retrieving a person of interest across multiple non-overlapping cameras.
By dissecting the involved components in developing a person Re-ID system, we categorize it into the closed-world and open-world settings.
arXiv Detail & Related papers (2020-01-13T12:49:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.