Subtle Signals: Video-based Detection of Infant Non-nutritive Sucking as
a Neurodevelopmental Cue
- URL: http://arxiv.org/abs/2310.16138v1
- Date: Tue, 24 Oct 2023 19:26:07 GMT
- Title: Subtle Signals: Video-based Detection of Infant Non-nutritive Sucking as
a Neurodevelopmental Cue
- Authors: Shaotong Zhu, Michael Wan, Sai Kumar Reddy Manne, Emily Zimmerman,
Sarah Ostadabbas
- Abstract summary: Non-nutritive sucking (NNS) plays a crucial role in assessing healthy early development.
NNS activity has been proposed as a potential safeguard against sudden infant death syndrome (SIDS).
We introduce a vision-based algorithm designed for non-contact detection of NNS activity using baby monitor footage in natural settings.
- Score: 11.1943906461896
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Non-nutritive sucking (NNS), which refers to the act of sucking on a
pacifier, finger, or similar object without nutrient intake, plays a crucial
role in assessing healthy early development. In the case of preterm infants,
NNS behavior is a key component in determining their readiness for feeding. In
older infants, the characteristics of NNS behavior offer valuable insights into
neural and motor development. Additionally, NNS activity has been proposed as a
potential safeguard against sudden infant death syndrome (SIDS). However, the
clinical application of NNS assessment is currently hindered by labor-intensive
and subjective finger-in-mouth evaluations. Consequently, researchers often
resort to expensive pressure transducers for objective NNS signal measurement.
To enhance the accessibility and reliability of NNS signal monitoring for both
clinicians and researchers, we introduce a vision-based algorithm designed for
non-contact detection of NNS activity using baby monitor footage in natural
settings. Our approach involves a comprehensive exploration of optical flow and
temporal convolutional networks, enabling the detection and amplification of
subtle infant-sucking signals. We successfully classify short video clips of
uniform length into NNS and non-NNS periods. Furthermore, we investigate manual
and learning-based techniques to piece together local classification results,
facilitating the segmentation of longer mixed-activity videos into NNS and
non-NNS segments of varying duration. Our research introduces two novel
datasets of annotated infant videos, including one sourced from our clinical
study featuring 19 infant subjects and 183 hours of overnight baby monitor
footage.
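A minimal sketch of the clip-level pipeline described above, in which dense optical flow features are fed to a temporal convolutional network (TCN) for NNS versus non-NNS classification, is given below. The window length, Farneback flow settings, spatial pooling, and channel widths are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of the clip-level NNS classifier described above: dense
# optical flow per frame pair, pooled spatially into a small feature vector,
# then a dilated temporal convolutional network (TCN) over the clip.
# Window length, channel widths, and pooling are illustrative assumptions.
import cv2
import numpy as np
import torch
import torch.nn as nn

def flow_features(frames: np.ndarray) -> torch.Tensor:
    """frames: (T, H, W) grayscale uint8 -> (T-1, 2) mean flow magnitude/angle."""
    feats = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        feats.append([float(mag.mean()), float(ang.mean())])
    return torch.tensor(feats, dtype=torch.float32)

class TCNClipClassifier(nn.Module):
    """Dilated 1-D convolutions over per-frame flow features, then a linear head."""
    def __init__(self, in_ch=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, hidden, 3, dilation=1, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 3, dilation=2, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 3, dilation=4, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(hidden, 2)          # NNS vs. non-NNS logits

    def forward(self, x):                          # x: (B, T, C)
        z = self.net(x.transpose(1, 2)).squeeze(-1)
        return self.head(z)

# Usage on one fixed-length clip, e.g. 75 frames (~2.5 s at 30 fps):
# clip = np.random.randint(0, 255, (75, 128, 128), dtype=np.uint8)
# logits = TCNClipClassifier()(flow_features(clip).unsqueeze(0))
```

In practice the per-clip probabilities would then be stitched into longer NNS and non-NNS segments, as the abstract describes; one simple way to do that is shown in the sliding-window sketch after the related-papers list.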
Related papers
- Delay Neural Networks (DeNN) for exploiting temporal information in event-based datasets [49.1574468325115]
Delay Neural Networks (DeNN) are designed to explicitly use exact continuous temporal information of spikes in both forward and backward passes.
Good performances are obtained, especially for datasets where temporal information is important.
arXiv Detail & Related papers (2025-01-10T14:58:15Z) - Learning Developmental Age from 3D Infant Kinetics Using Adaptive Graph Neural Networks [2.2279946664123664]
Kinetic Age (KA) is a data-driven metric that quantifies neurodevelopmental maturity by predicting an infant's age based on their movement patterns.
Our method leverages 3D video recordings of infants, processed with pose estimation to extract spatiotemporal series of anatomical landmarks.
These data are modeled using adaptive graph convolutional networks, able to capture the spatiotemporal dependencies in infant movements (a rough sketch follows after this list).
arXiv Detail & Related papers (2024-02-22T09:34:48Z) - Dynamic Gaussian Splatting from Markerless Motion Capture can
Reconstruct Infants Movements [2.44755919161855]
This work paves the way for advanced movement analysis tools that can be applied to diverse clinical populations.
We explored the application of dynamic Gaussian splatting to sparse markerless motion capture data.
Our results demonstrate the potential of this method in rendering novel views of scenes and tracking infant movements.
arXiv Detail & Related papers (2023-10-30T11:09:39Z) - Spiking-LEAF: A Learnable Auditory front-end for Spiking Neural Networks [53.31894108974566]
Spiking-LEAF is a learnable auditory front-end meticulously designed for SNN-based speech processing.
On keyword spotting and speaker identification tasks, the proposed Spiking-LEAF outperforms SOTA spiking auditory front-ends.
arXiv Detail & Related papers (2023-09-18T04:03:05Z) - A Video-based End-to-end Pipeline for Non-nutritive Sucking Action
Recognition and Segmentation in Young Infants [15.049449914396462]
Non-nutritive sucking is a potential biomarker for developmental delays.
One barrier to clinical assessment of NNS stems from its sparsity.
Our method is based on an underlying NNS action recognition algorithm (a rough sliding-window sketch follows after this list).
arXiv Detail & Related papers (2023-03-29T17:24:21Z) - NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical
Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z) - The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the Pi-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher frequencies (the NTK definition is sketched after this list).
arXiv Detail & Related papers (2022-02-27T23:12:43Z) - Transfer Learning of Deep Spatiotemporal Networks to Model Arbitrarily
Long Videos of Seizures [58.720142291102135]
Detailed analysis of seizure semiology is critical for management of epilepsy patients.
We present GESTURES, a novel architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We show that an STCNN trained on a HAR dataset can be used in combination with an RNN to accurately represent arbitrarily long videos of seizures (a rough sketch of this pattern follows after this list).
arXiv Detail & Related papers (2021-06-22T18:40:31Z) - Provably-Robust Runtime Monitoring of Neuron Activation Patterns [0.0]
It is desirable to monitor, at operation time, whether the input to a deep neural network is similar to the data used in training.
We address this challenge by integrating formal symbolic reasoning inside the monitor construction process.
The provable robustness is further generalized to cases where monitoring a single neuron can use more than one bit (a baseline pattern-monitoring sketch follows after this list).
arXiv Detail & Related papers (2020-11-24T08:37:18Z) - Little Motion, Big Results: Using Motion Magnification to Reveal Subtle
Tremors in Infants [0.0]
Infants exposed to opioids during pregnancy often show signs and symptoms of withdrawal after birth.
The constellation of clinical features, termed Neonatal Abstinence Syndrome (NAS), includes tremors, seizures, irritability, etc.
Monitoring with FNASS requires highly skilled nursing staff, making continuous monitoring difficult.
In this paper we propose an automated tremor detection system using amplified motion signals (a simplified magnification sketch follows after this list).
arXiv Detail & Related papers (2020-08-01T15:35:55Z) - Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and
Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture, which combines object segmentation and convolutional neural networks (CNN).
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in the input image (a rough fusion sketch follows after this list).
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
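The sketches below illustrate, in a hedged way, some of the mechanisms mentioned in the related papers above; all module names, sizes, thresholds, and backbones are illustrative assumptions rather than the papers' exact configurations.

For the adaptive graph convolutional modeling of infant pose sequences ("Learning Developmental Age from 3D Infant Kinetics"), a minimal sketch, assuming pose landmarks have already been extracted and a generic ST-GCN-style adaptive adjacency, is:

```python
# Minimal sketch of modeling 3D infant pose sequences with an adaptive graph
# convolution: a fixed skeleton adjacency plus a learned offset, followed by a
# temporal convolution and an age-regression head. Joint count, adjacency, and
# layer sizes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    def __init__(self, in_ch, out_ch, num_joints, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)                        # fixed skeleton (J, J)
        self.B = nn.Parameter(torch.zeros(num_joints, num_joints))  # learned offset
        self.proj = nn.Linear(in_ch, out_ch)

    def forward(self, x):                          # x: (B, T, J, C)
        x = torch.einsum("jk,btkc->btjc", self.A + self.B, x)   # propagate along edges
        return torch.relu(self.proj(x))

class KineticAgeModel(nn.Module):
    """Graph convolution over joints, temporal convolution over frames, age regression."""
    def __init__(self, num_joints=17, in_ch=3, hidden=64, adjacency=None):
        super().__init__()
        A = adjacency if adjacency is not None else torch.eye(num_joints)
        self.gcn = AdaptiveGraphConv(in_ch, hidden, num_joints, A)
        self.tcn = nn.Conv1d(hidden, hidden, kernel_size=9, padding=4)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (B, T, J, 3) landmark coordinates
        h = self.gcn(x).mean(dim=2)                # pool joints -> (B, T, hidden)
        h = torch.relu(self.tcn(h.transpose(1, 2))).mean(dim=-1)
        return self.head(h)                        # predicted developmental age

# model = KineticAgeModel()
# age = model(torch.randn(1, 300, 17, 3))          # 300 frames of 17 joints
```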
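For the end-to-end NNS action recognition and segmentation pipeline, one simple way to turn per-clip predictions into segments is a sliding window over clip scores followed by smoothing and thresholding; the clip length, stride, smoothing width, and threshold here are illustrative, not the paper's settings.

```python
# Illustrative sketch: turn per-clip NNS probabilities into contiguous NNS /
# non-NNS segments by smoothing and thresholding. Clip length, stride,
# smoothing width, and threshold are arbitrary assumptions for demonstration.
import numpy as np

def segments_from_clip_scores(probs, clip_len_s=2.5, stride_s=1.0,
                              smooth=5, threshold=0.5):
    """probs: per-clip NNS probabilities in temporal order -> (label, start, end) list."""
    probs = np.asarray(probs, dtype=float)
    smoothed = np.convolve(probs, np.ones(smooth) / smooth, mode="same")
    labels = smoothed >= threshold
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            t0 = start * stride_s
            t1 = (i - 1) * stride_s + clip_len_s
            segments.append(("NNS" if labels[start] else "non-NNS", t0, t1))
            start = i
    return segments

# segments_from_clip_scores([0.1, 0.2, 0.8, 0.9, 0.85, 0.9, 0.2, 0.1, 0.05, 0.1])
```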
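For the spectral-bias entry, the Neural Tangent Kernel underlying that kind of analysis is the standard object below; the notation is generic and the decay statement holds only approximately, in the linearized (NTK) regime, rather than being specific to the PNN parametrization studied in the paper.

```latex
% Standard (empirical) Neural Tangent Kernel used in spectral-bias analyses;
% f is the network output and \theta its parameters.
\[
  K_{\mathrm{NTK}}(x, x')
  = \left\langle \nabla_{\theta} f(x;\theta),\, \nabla_{\theta} f(x';\theta) \right\rangle .
\]
% In the linearized (NTK) regime under gradient flow, the error component along
% an eigenfunction \phi_i of K_{\mathrm{NTK}} with eigenvalue \lambda_i decays as
\[
  \bigl|\langle f_t - f^{*},\, \phi_i \rangle\bigr|
  \approx e^{-\lambda_i t}\,\bigl|\langle f_0 - f^{*},\, \phi_i \rangle\bigr| ,
\]
% so directions with large eigenvalues (typically low frequencies) are learned
% faster, which is the spectral bias referred to above.
```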
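For the GESTURES entry, the general pattern of embedding fixed-length snippets with a spatiotemporal CNN and aggregating them with an RNN over an arbitrarily long video might look roughly like this; the r3d_18 backbone, feature size, and class count are stand-in assumptions.

```python
# Illustrative sketch: embed fixed-length video snippets with a (stand-in)
# spatiotemporal CNN, then summarize an arbitrarily long sequence of snippet
# embeddings with a GRU. Backbone, feature size, and classes are assumptions.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

class LongVideoClassifier(nn.Module):
    def __init__(self, num_classes=2, hidden=256):
        super().__init__()
        backbone = r3d_18(weights=None)       # stand-in for a HAR-pretrained STCNN
        backbone.fc = nn.Identity()           # expose 512-d snippet features
        self.backbone = backbone
        self.rnn = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, snippets):              # snippets: (B, N, C, T, H, W)
        b, n = snippets.shape[:2]
        feats = self.backbone(snippets.flatten(0, 1))   # (B*N, 512)
        _, h = self.rnn(feats.view(b, n, -1))           # final hidden state
        return self.head(h.squeeze(0))

# video = torch.randn(1, 4, 3, 16, 112, 112)  # 4 snippets of 16 frames each
# logits = LongVideoClassifier()(video)
```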
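For the runtime-monitoring entry, the baseline idea of recording binarized neuron activation patterns during training and flagging unseen patterns at runtime can be sketched as follows; zero-thresholding and a single monitored layer are simplifications, and the paper's formal symbolic reasoning and multi-bit generalization are not reproduced here.

```python
# Minimal sketch of activation-pattern monitoring: record the set of binarized
# hidden-layer patterns seen on training data, then flag inputs whose pattern
# was never observed. Zero-thresholding and a single layer are simplifications.
import numpy as np

class ActivationMonitor:
    def __init__(self):
        self.seen = set()

    @staticmethod
    def pattern(hidden: np.ndarray) -> tuple:
        return tuple((hidden > 0).astype(np.uint8))    # one bit per neuron

    def fit(self, hidden_batch: np.ndarray) -> None:
        for h in hidden_batch:                         # training-time collection
            self.seen.add(self.pattern(h))

    def is_familiar(self, hidden: np.ndarray) -> bool:
        return self.pattern(hidden) in self.seen       # runtime check

# monitor = ActivationMonitor()
# monitor.fit(train_hidden)                 # (N, num_neurons) activations
# ok = monitor.is_familiar(new_hidden)      # False -> possibly unlike training data
```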
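For the motion-magnification tremor entry, the core operation is a temporal band-pass filter applied to per-pixel time series, with the filtered component amplified and added back; the band edges and gain below are illustrative, and this intensity-based version is a simplification of full Eulerian video magnification.

```python
# Simplified Eulerian-style magnification sketch: band-pass filter each pixel's
# intensity over time in a tremor-relevant band and add the amplified component
# back. Band edges, filter order, and gain are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify(frames: np.ndarray, fps: float, low=3.0, high=6.0, gain=20.0):
    """frames: (T, H, W) float array -> motion-amplified frames."""
    b, a = butter(2, [low / (fps / 2), high / (fps / 2)], btype="band")
    bandpassed = filtfilt(b, a, frames, axis=0)         # temporal filtering
    return np.clip(frames + gain * bandpassed, 0, 255)

def tremor_energy(frames: np.ndarray, fps: float, low=3.0, high=6.0):
    """Mean band-limited motion energy per frame, a crude tremor indicator."""
    b, a = butter(2, [low / (fps / 2), high / (fps / 2)], btype="band")
    return (filtfilt(b, a, frames, axis=0) ** 2).mean(axis=(1, 2))
```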
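For the ROP entry, the fusion step of appending a segmentation mask as an extra input channel before classification might be sketched as below; the ResNet backbone and channel surgery are illustrative assumptions, not the paper's model.

```python
# Illustrative sketch: concatenate a predicted segmentation mask as a fourth
# input channel and classify with a CNN whose first convolution accepts four
# channels. The ResNet backbone is an assumption, not the paper's model.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def four_channel_classifier(num_classes: int) -> nn.Module:
    model = resnet18(weights=None)
    model.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# image: (B, 3, H, W) fundus photo, mask: (B, 1, H, W) demarcation-line mask
# x = torch.cat([image, mask], dim=1)           # (B, 4, H, W)
# logits = four_channel_classifier(3)(x)        # e.g. ROP stage classes
```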
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.