Cross Your Body: A Cognitive Assessment System for Children
- URL: http://arxiv.org/abs/2111.12824v1
- Date: Wed, 24 Nov 2021 22:38:07 GMT
- Title: Cross Your Body: A Cognitive Assessment System for Children
- Authors: Saif Sayed and Vassilis Athitsos
- Abstract summary: We created a system called Cross-Your-Body and recorded data, which is unique in several aspects.
The videos capture real-world usage, as they record children performing tasks during real-world assessment by psychologists.
It is our goal that this system will be useful in advancing research in cognitive assessment of kids.
- Score: 5.279475826661643
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While many action recognition techniques have great success on public
benchmarks, such performance is not necessarily replicated in real-world
scenarios, where the data comes from specific application requirements. The
specific real-world application that we are focusing on in this paper is
cognitive assessment in children using cognitively demanding physical tasks. We
created a system called Cross-Your-Body and recorded data, which is unique in
several aspects, including the fact that the tasks have been designed by
psychologists, the subjects are children, and the videos capture real-world
usage, as they record children performing tasks during real-world assessment by
psychologists. Another distinguishing feature of our system is that its scores
can be directly translated into a measure of executive functioning, which is
one of the key factors in identifying the onset of ADHD in adolescents. Due to the imprecise
execution of actions performed by children, and the presence of fine-grained
motion patterns, we systematically investigate and evaluate relevant methods on
the recorded data. It is our goal that this system will be useful in advancing
research in cognitive assessment of kids.
Related papers
- GPT as Psychologist? Preliminary Evaluations for GPT-4V on Visual Affective Computing [74.68232970965595]
Multimodal large language models (MLLMs) are designed to process and integrate information from multiple sources, such as text, speech, images, and videos.
This paper assesses the application of MLLMs with 5 crucial abilities for affective computing, spanning visual affective tasks and reasoning tasks.
arXiv Detail & Related papers (2024-03-09T13:56:25Z) - Challenges in Video-Based Infant Action Recognition: A Critical
Examination of the State of the Art [9.327466428403916]
We introduce a groundbreaking dataset called "InfActPrimitive", encompassing five significant infant milestone action categories.
We conduct an extensive comparative analysis employing cutting-edge skeleton-based action recognition models.
Our findings reveal that, although the PoseC3D model achieves the highest accuracy at approximately 71%, the remaining models struggle to accurately capture the dynamics of infant actions.
arXiv Detail & Related papers (2023-11-21T02:36:47Z) - Evaluating Subjective Cognitive Appraisals of Emotions from Large
Language Models [47.890846082224066]
This work fills the gap by presenting CovidET-Appraisals, the most comprehensive dataset to date that assesses 24 appraisal dimensions.
CovidET-Appraisals presents an ideal testbed to evaluate the ability of large language models to automatically assess and explain cognitive appraisals.
arXiv Detail & Related papers (2023-10-22T19:12:17Z) - Exploiting the Brain's Network Structure for Automatic Identification of
ADHD Subjects [70.37277191524755]
We show that the brain can be modeled as a functional network, and certain properties of the networks differ in ADHD subjects from control subjects.
We train our classifier with 776 subjects and test on 171 subjects provided by The Neuro Bureau for the ADHD-200 challenge.
arXiv Detail & Related papers (2023-06-15T16:22:57Z) - Language-Assisted Deep Learning for Autistic Behaviors Recognition [13.200025637384897]
We show that a vision-based problem behaviors recognition system can achieve high accuracy and outperform the previous methods by a large margin.
We propose a two-branch multimodal deep learning framework by incorporating the "freely available" language description for each type of problem behavior.
Experimental results demonstrate that incorporating additional language supervision can bring an obvious performance boost for the autism problem behaviors recognition task.
arXiv Detail & Related papers (2022-11-17T02:58:55Z) - An Application of a Runtime Epistemic Probabilistic Event Calculus to
Decision-making in e-Health Systems [1.7761842246724584]
We present a runtime architecture that integrates sensorial data and classifiers with a logic-based decision-making system.
In this application, children perform a rehabilitation task in the form of games.
The main aim of the system is to derive a set of parameters describing the child's current level of cognitive and behavioral performance.
arXiv Detail & Related papers (2022-09-26T21:53:01Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO presents an improvement in facial expression recognition performance over six different datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - Vision-Based Activity Recognition in Children with Autism-Related
Behaviors [15.915410623440874]
We demonstrate the effectiveness of a region-based computer vision system in helping clinicians and parents analyze a child's behavior.
The data is pre-processed by detecting the target child in the video to reduce the impact of background noise.
Motivated by the effectiveness of temporal convolutional models, we propose both lightweight and conventional models capable of extracting action features from video frames.
arXiv Detail & Related papers (2022-08-08T15:12:27Z) - Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units
and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task and holistic framework is presented which is able to jointly learn and effectively generalize and perform affect recognition.
arXiv Detail & Related papers (2021-03-29T17:36:20Z) - Automated system to measure Tandem Gait to assess executive functions in
children [0.0]
This work focuses on assessing motor function in children by analyzing their gait movements.
We have devised a computer vision-based assessment system that only requires a camera which makes it easier to employ in school or home environments.
The results highlight the efficacy of the proposed work in automating the assessment of children's performance, achieving 76.61% classification accuracy.
arXiv Detail & Related papers (2020-12-15T23:12:13Z) - A robot that counts like a child: a developmental model of counting and
pointing [69.26619423111092]
A novel neuro-robotics model capable of counting real items is introduced.
The model allows us to investigate the interaction between embodiment and numerical cognition.
The trained model is able to count a set of items and at the same time points to them.
arXiv Detail & Related papers (2020-08-05T21:06:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences.