Detection of Mild Cognitive Impairment Using Facial Features in Video
Conversations
- URL: http://arxiv.org/abs/2308.15624v1
- Date: Tue, 29 Aug 2023 20:45:41 GMT
- Authors: Muath Alsuhaibani, Hiroko H. Dodge, Mohammad H. Mahoor
- Score: 4.229544696616341
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Early detection of Mild Cognitive Impairment (MCI) leads to early
interventions to slow the progression from MCI into dementia. Deep Learning
(DL) algorithms could help achieve early non-invasive, low-cost detection of
MCI. This paper presents the detection of MCI in older adults using DL models
based only on facial features extracted from video-recorded conversations at
home. We used the data collected from the I-CONECT behavioral intervention
study (NCT02871921), where several sessions of semi-structured interviews
between socially isolated older individuals and interviewers were video
recorded. We developed a framework that extracts spatial holistic facial
features using a convolutional autoencoder and temporal information using
transformers. Our proposed DL model was able to detect the cognitive condition
of I-CONECT study participants (MCI vs. normal cognition (NC)) using facial
features alone. Incorporating the segment and sequence information of the
facial features improved prediction performance compared with non-temporal
features: detection accuracy reached 88% with the combined method, versus 84%
without the segment and sequence information of the facial features within a
video on a given theme.
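The two-stage design described above can be sketched as follows. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation: all layer sizes, frame resolutions, and module names are illustrative assumptions. A convolutional autoencoder compresses each frame into a spatial embedding, and a transformer encoder models the sequence of per-frame embeddings before a linear head produces MCI-vs.-NC logits.

```python
# Hypothetical sketch of the paper's pipeline (assumed sizes, not the
# authors' configuration): a convolutional autoencoder yields per-frame
# spatial embeddings; a transformer models the temporal sequence.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(                    # reconstruction path
            nn.Linear(latent_dim, 32 * 16 * 16),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)           # spatial holistic embedding per frame
        return self.decoder(z), z

class TemporalClassifier(nn.Module):
    """Transformer over per-frame embeddings -> MCI vs. NC logits."""
    def __init__(self, latent_dim=64, num_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(latent_dim, num_classes)

    def forward(self, z_seq):                  # (batch, frames, latent_dim)
        h = self.transformer(z_seq)
        return self.head(h.mean(dim=1))        # pool over time

frames = torch.randn(2, 8, 3, 64, 64)          # 2 videos x 8 frames each
ae, clf = FrameAutoencoder(), TemporalClassifier()
z = ae(frames.flatten(0, 1))[1].view(2, 8, -1) # per-frame embeddings
logits = clf(z)
print(logits.shape)                            # torch.Size([2, 2])
```

In practice the autoencoder would first be trained on a frame-reconstruction objective, with the classifier trained afterwards on the frozen (or fine-tuned) embeddings; the sketch only shows how the two stages connect.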
Related papers
- Advanced Gesture Recognition in Autism: Integrating YOLOv7, Video Augmentation and VideoMAE for Video Analysis [9.162792034193373]
This research work aims to identify repetitive behaviors indicative of autism by analyzing videos captured in natural settings as children engage in daily activities.
The focus is on accurately categorizing real-time repetitive gestures such as spinning, head banging, and arm flapping.
A key component of the proposed methodology is the use of VideoMAE, a model designed to improve both spatial and temporal analysis of video data.
arXiv Detail & Related papers (2024-10-12T02:55:37Z)
- FacialPulse: An Efficient RNN-based Depression Detection via Temporal Facial Landmarks [21.076600109388394]
Depression is a prevalent mental health disorder that significantly impacts individuals' lives and well-being.
Recently, many end-to-end deep learning methods have leveraged facial expression features for automatic depression detection.
We propose a novel framework called FacialPulse, which recognizes depression with high accuracy and speed.
arXiv Detail & Related papers (2024-08-07T01:50:34Z)
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and not limited to forgery-specific artifacts, thus having stronger generalization.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Exploring a Multimodal Fusion-based Deep Learning Network for Detecting Facial Palsy [3.2381492754749632]
We present a multimodal fusion-based deep learning model that utilizes unstructured data and structured data to detect facial palsy.
Our model slightly improved the precision score to 77.05 at the expense of a decrease in the recall score.
arXiv Detail & Related papers (2024-05-26T09:16:34Z)
- Analyzing Participants' Engagement during Online Meetings Using Unsupervised Remote Photoplethysmography with Behavioral Features [50.82725748981231]
Engagement measurement finds application in healthcare, education, and services.
Physiological and behavioral features are viable for this, but traditional physiological measurement is impractical because it requires contact sensors.
We demonstrate the feasibility of unsupervised remote photoplethysmography (rPPG) as an alternative to contact sensors.
arXiv Detail & Related papers (2024-04-05T20:39:16Z)
- OpticalDR: A Deep Optical Imaging Model for Privacy-Protective Depression Recognition [66.91236298878383]
Depression Recognition (DR) poses a considerable challenge, especially in the context of privacy concerns.
We design a new imaging system that erases the identity information of captured facial images while retaining disease-relevant features.
The transformation is irreversible with respect to identity recovery, while preserving the essential disease-related characteristics necessary for accurate DR.
arXiv Detail & Related papers (2024-02-29T01:20:29Z)
- MC-ViViT: Multi-branch Classifier-ViViT to detect Mild Cognitive Impairment in older adults using facial videos [44.72781467904852]
This paper proposes a novel Multi-branch Classifier-Video Vision Transformer (MC-ViViT) model to distinguish participants with MCI from those with normal cognition by analyzing facial features.
The data comes from the I-CONECT study, a behavioral intervention trial aimed at improving cognitive function by providing frequent video chats.
Our experimental results on I-CONECT dataset show the great potential of MC-ViViT in predicting MCI with a high accuracy of 90.63%.
arXiv Detail & Related papers (2023-04-11T15:42:20Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six different datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Head Matters: Explainable Human-centered Trait Prediction from Head Motion Dynamics [15.354601615061814]
We demonstrate the utility of elementary head-motion units termed kinemes for behavioral analytics to predict personality and interview traits.
Transforming head-motion patterns into a sequence of kinemes facilitates discovery of latent temporal signatures characterizing the targeted traits.
arXiv Detail & Related papers (2021-12-15T12:17:59Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory (LSTM) units, as well as inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using Deep Multiple-Instance Learning [59.74684475991192]
Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old.
PD symptoms include tremor, rigidity, and bradykinesia.
We present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device.
arXiv Detail & Related papers (2020-05-06T09:02:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.