Feasibility of assessing cognitive impairment via distributed camera network and privacy-preserving edge computing
- URL: http://arxiv.org/abs/2408.10442v1
- Date: Mon, 19 Aug 2024 22:34:43 GMT
- Title: Feasibility of assessing cognitive impairment via distributed camera network and privacy-preserving edge computing
- Authors: Chaitra Hegde, Yashar Kiarashi, Allan I Levey, Amy D Rodriguez, Hyeokhyen Kwon, Gari D Clifford
- Abstract summary: Mild cognitive impairment (MCI) is characterized by a decline in cognitive functions beyond typical age and education-related expectations.
We developed movement and social interaction features, which were then used to train a series of machine learning algorithms.
Although the data lacked individual identifiers to associate with specific levels of MCI, a machine learning approach using the most significant features achieved 71% accuracy.
- Score: 2.2231315943430143
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: INTRODUCTION: Mild cognitive impairment (MCI) is characterized by a decline in cognitive functions beyond typical age- and education-related expectations. Since MCI has been linked to reduced social interactions and increased aimless movements, we aimed to automate the capture of these behaviors to enhance longitudinal monitoring. METHODS: Using a privacy-preserving distributed camera network, we collected movement and social interaction data from groups of individuals with MCI undergoing therapy within a 1700 $m^2$ space. We developed movement and social interaction features, which were then used to train a series of machine learning algorithms to distinguish between higher and lower cognitive functioning MCI groups. RESULTS: A Wilcoxon rank-sum test revealed statistically significant differences between the high- and low-functioning cohorts in features such as linear path length, walking speed, change in direction while walking, entropy of velocity and direction change, and number of group formations in the indoor space. Although the data lacked individual identifiers to associate with specific levels of MCI, a machine learning approach using the most significant features achieved 71% accuracy. DISCUSSION: We provide evidence that a privacy-preserving, low-cost camera network using an edge computing framework has the potential to distinguish between different levels of cognitive impairment from the movements and social interactions captured during group activities.
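A minimal sketch of the analysis pipeline described in the abstract, assuming anonymized 2D trajectories (one (x, y) position per timestep) as the camera network's output; the feature definitions, variable names, significance threshold, and choice of random-forest classifier are illustrative assumptions, not the authors' exact implementation, and the social-interaction features (e.g., group formations) are omitted:

```python
import numpy as np
from scipy.stats import ranksums, entropy
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def movement_features(traj, dt=1.0):
    """Movement features from one anonymized 2D trajectory.

    traj: array of shape (T, 2) holding (x, y) positions per timestep.
    Feature names follow the abstract; the definitions here are illustrative.
    """
    steps = np.diff(traj, axis=0)                  # per-step displacement
    step_len = np.linalg.norm(steps, axis=1)       # metres per step
    speed = step_len / dt                          # walking speed
    heading = np.arctan2(steps[:, 1], steps[:, 0])
    turn = np.abs(np.diff(np.unwrap(heading)))     # change in direction while walking

    def hist_entropy(x, bins=16):
        p, _ = np.histogram(x, bins=bins, density=True)
        return entropy(p[p > 0])                   # Shannon entropy of the histogram

    return {
        "path_length": step_len.sum(),
        "mean_speed": speed.mean(),
        "mean_turn": turn.mean(),
        "speed_entropy": hist_entropy(speed),
        "turn_entropy": hist_entropy(turn),
    }

def screen_and_classify(X, y, feature_names, alpha=0.05):
    """Wilcoxon rank-sum screening, then classification on the significant features."""
    keep = [i for i in range(X.shape[1])
            if ranksums(X[y == 0, i], X[y == 1, i]).pvalue < alpha]
    print("significant features:", [feature_names[i] for i in keep])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X[:, keep], y, cv=5).mean()
```

The screening step mirrors the abstract's Wilcoxon rank-sum comparison of high- versus low-functioning cohorts; the reported 71% accuracy came from the authors' own features and models, not from this sketch.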
Related papers
- High-fidelity social learning via shared episodic memories enhances collaborative foraging through mnemonic convergence [0.0]
Social learning enables individuals to acquire knowledge by observing and imitating others.
This study explores the interrelation between episodic memory and social learning in collective foraging.
arXiv Detail & Related papers (2024-12-28T20:55:38Z)
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms representative models in both objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- DSAM: A Deep Learning Framework for Analyzing Temporal and Spatial Dynamics in Brain Networks [4.041732967881764]
Most rs-fMRI studies compute a single static functional connectivity matrix across brain regions of interest.
These approaches are at risk of oversimplifying brain dynamics and lack proper consideration of the goal at hand.
We propose a novel, interpretable deep learning framework that learns a goal-specific functional connectivity matrix directly from the time series (a minimal sketch of the static-FC baseline it moves beyond follows this entry).
arXiv Detail & Related papers (2024-05-19T23:35:06Z)
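For context on the DSAM entry above: the static functional connectivity baseline it contrasts with is usually a Pearson correlation matrix over region-of-interest (ROI) time series. A minimal sketch of that baseline only, with hypothetical array shapes; DSAM's own learned, goal-specific connectivity is not reproduced here:

```python
import numpy as np

def static_fc(roi_timeseries):
    """Static functional connectivity: Pearson correlations between ROI time series.

    roi_timeseries: array of shape (n_rois, n_timepoints).
    Returns the single (n_rois, n_rois) matrix that dynamic, goal-specific
    approaches such as DSAM aim to move beyond.
    """
    return np.corrcoef(roi_timeseries)

# Toy example: 10 ROIs observed over 200 timepoints.
fc = static_fc(np.random.randn(10, 200))
print(fc.shape)  # (10, 10)
```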
- Bodily Behaviors in Social Interaction: Novel Annotations and State-of-the-Art Evaluation [0.0]
We present BBSI, the first set of annotations of complex Bodily Behaviors embedded in continuous Social Interactions.
Based on previous work in psychology, we manually annotated 26 hours of spontaneous human behavior.
We adapt the Pyramid Dilated Attention Network (PDAN), a state-of-the-art approach for human action detection.
arXiv Detail & Related papers (2022-07-26T11:24:00Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations: the most often used nonverbal cue is speaking activity, the most common computational method is the support vector machine, the typical interaction environment is a meeting of 3-4 persons, and the most common sensing approach is microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- The world seems different in a social context: a neural network analysis of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
arXiv Detail & Related papers (2022-03-03T17:19:12Z)
- Learning shared neural manifolds from multi-subject FMRI data [13.093635609349874]
We propose a neural network called MRMD-AE that learns a common embedding from multiple subjects in an experiment.
We show that our learned common space represents a temporal manifold (to which new points not seen during training can be mapped) and improves the classification of stimulus features at unseen timepoints (a hedged sketch of one possible shared-embedding structure follows this entry).
We believe this framework can be used for many downstream applications such as guided brain-computer interface (BCI) training in the future.
arXiv Detail & Related papers (2021-12-22T23:08:39Z)
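The MRMD-AE entry above only says that a common embedding is learned across subjects; the published architecture and its manifold regularization are not detailed in this blurb. Below is a hedged sketch of one plausible shared-embedding structure (per-subject encoders and decoders around a common latent space), with all layer sizes hypothetical:

```python
import torch
import torch.nn as nn

class SharedLatentAE(nn.Module):
    """Hypothetical shared-embedding autoencoder for multi-subject fMRI.

    One encoder/decoder pair per subject, all meeting in a common latent space.
    This illustrates the idea only; it is not the published MRMD-AE model and
    omits any manifold regularization.
    """

    def __init__(self, input_dims, latent_dim=32):
        super().__init__()
        self.encoders = nn.ModuleList(nn.Linear(d, latent_dim) for d in input_dims)
        self.decoders = nn.ModuleList(nn.Linear(latent_dim, d) for d in input_dims)

    def forward(self, x, subject):
        z = self.encoders[subject](x)         # map subject data into the shared space
        return self.decoders[subject](z), z

# Toy usage: three subjects with different feature counts, reconstruction loss only.
model = SharedLatentAE(input_dims=[500, 620, 480])
x = torch.randn(16, 620)                      # a batch from subject 1
recon, z = model(x, subject=1)
loss = nn.functional.mse_loss(recon, x)
loss.backward()
```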
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations in the multi-modal data toward recognizing gestures.
Results show that our approach recovers performance with large gains, up to 12.91% in accuracy (ACC) and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Identity-Aware Attribute Recognition via Real-Time Distributed Inference in Mobile Edge Clouds [53.07042574352251]
We design novel models for pedestrian attribute recognition with re-ID in an MEC-enabled camera monitoring system.
We propose a novel inference framework with a set of distributed modules, by jointly considering the attribute recognition and person re-ID.
We then devise a learning-based algorithm for distributing the modules of the proposed inference framework.
arXiv Detail & Related papers (2020-08-12T12:03:27Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that a machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition (a hedged sketch of an autoencoder-plus-SVR pipeline follows this entry).
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
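The title of the last entry names its two components: a deep convolutional autoencoder and a support vector regressor. A hedged sketch of that kind of two-stage pipeline is below (encoder features from face crops feeding an SVR that predicts a continuous emotion value such as valence); the input size, layer widths, and the absence of a real training loop are simplifications, not the paper's configuration:

```python
import torch
import torch.nn as nn
from sklearn.svm import SVR

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder over 64x64 grayscale face crops (illustrative sizes)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Stage 1 would train the autoencoder to reconstruct face images (omitted here);
# stage 2 fits an SVR on flattened encoder features to predict a continuous label.
model = ConvAutoencoder().eval()
images = torch.rand(100, 1, 64, 64)
valence = torch.rand(100).numpy()              # placeholder continuous labels
with torch.no_grad():
    _, z = model(images)
features = z.flatten(start_dim=1).numpy()
svr = SVR(kernel="rbf").fit(features, valence)
print(svr.predict(features[:5]))
```

In practice the autoencoder would be trained first and the SVR fit on held-out encoder features; the synthetic images and labels above exist only to make the sketch runnable.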
This list is automatically generated from the titles and abstracts of the papers in this site.