Dynamic Gaussian Splatting from Markerless Motion Capture can
Reconstruct Infants Movements
- URL: http://arxiv.org/abs/2310.19441v1
- Date: Mon, 30 Oct 2023 11:09:39 GMT
- Title: Dynamic Gaussian Splatting from Markerless Motion Capture can
Reconstruct Infants Movements
- Authors: R. James Cotton and Colleen Peyton
- Abstract summary: This work paves the way for advanced movement analysis tools that can be applied to diverse clinical populations.
We explored the application of dynamic Gaussian splatting to sparse markerless motion capture data.
Our results demonstrate the potential of this method in rendering novel views of scenes and tracking infant movements.
- Score: 2.44755919161855
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Easy access to precise 3D tracking of movement could benefit many aspects of
rehabilitation. A challenge to achieving this goal is that while there are many
datasets and pretrained algorithms for able-bodied adults, algorithms trained
on these datasets often fail to generalize to clinical populations including
people with disabilities, infants, and neonates. Reliable movement analysis of
infants and neonates is important as spontaneous movement behavior is an
important indicator of neurological function and neurodevelopmental disability,
which can help guide early interventions. We explored the application of
dynamic Gaussian splatting to sparse markerless motion capture (MMC) data. Our
approach leverages semantic segmentation masks to focus on the infant,
significantly improving the initialization of the scene. Our results
demonstrate the potential of this method in rendering novel views of scenes and
tracking infant movements. This work paves the way for advanced movement
analysis tools that can be applied to diverse clinical populations, with a
particular emphasis on early detection in infants.
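The abstract states that semantic segmentation masks are used to focus on the infant and improve scene initialization, but gives no implementation details. The following is a minimal, hypothetical sketch (NumPy only; the function name, mask format, and min_views threshold are assumptions, not the authors' code) of one way per-camera infant masks could prune candidate 3D points before they seed the dynamic Gaussian splatting scene:

# Hypothetical sketch (not the authors' code): keep only candidate 3D points that
# project inside the infant's segmentation mask in enough camera views, so that
# the initial Gaussians are concentrated on the infant rather than the background.
import numpy as np

def filter_points_by_masks(points_xyz, projections, masks, min_views=2):
    """points_xyz: (N, 3) candidate points (e.g., from multi-view triangulation).
    projections: list of (3, 4) camera projection matrices.
    masks: list of (H, W) boolean arrays, True where the infant was segmented.
    Returns the subset of points supported by at least min_views masks."""
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # (N, 4)
    votes = np.zeros(points_xyz.shape[0], dtype=int)
    for P, mask in zip(projections, masks):
        uvw = pts_h @ P.T                        # (N, 3) homogeneous pixel coords
        in_front = uvw[:, 2] > 1e-6              # ignore points behind the camera
        z = np.where(in_front, uvw[:, 2], 1.0)   # avoid division by zero
        u, v = uvw[:, 0] / z, uvw[:, 1] / z
        h, w = mask.shape
        inside = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ui = np.clip(u, 0, w - 1).astype(int)
        vi = np.clip(v, 0, h - 1).astype(int)
        votes += (inside & mask[vi, ui]).astype(int)
    return points_xyz[votes >= min_views]

In a full pipeline the surviving points would then initialize the Gaussian means (with colors sampled from the corresponding pixels), which is one plausible reading of how masking "significantly improves the initialization of the scene".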
Related papers
- Modeling 3D Infant Kinetics Using Adaptive Graph Convolutional Networks [2.2279946664123664]
Spontaneous motor activity, or 'kinetics', is shown to provide a powerful surrogate measure of upcoming neurodevelopment.
Here, we follow an alternative approach, predicting infants' maturation based on data-driven evaluation of individual motor patterns.
arXiv Detail & Related papers (2024-02-22T09:34:48Z)
- Challenges in Video-Based Infant Action Recognition: A Critical Examination of the State of the Art [9.327466428403916]
We introduce a groundbreaking dataset called "InfActPrimitive", encompassing five significant infant milestone action categories.
We conduct an extensive comparative analysis employing cutting-edge skeleton-based action recognition models.
Our findings reveal that, although the PoseC3D model achieves the highest accuracy at approximately 71%, the remaining models struggle to accurately capture the dynamics of infant actions.
arXiv Detail & Related papers (2023-11-21T02:36:47Z)
- Protecting the Future: Neonatal Seizure Detection with Spatial-Temporal Modeling [21.955397001414187]
We propose a deep learning framework, namely STATENet, to address the exclusive challenges with exquisite designs at the temporal, spatial and model levels.
The experiments over the real-world large-scale neonatal EEG dataset illustrate that our framework achieves significantly better seizure detection performance.
arXiv Detail & Related papers (2023-07-02T14:28:12Z)
- Towards early prediction of neurodevelopmental disorders: Computational model for Face Touch and Self-adaptors in Infants [0.0]
Evaluating a baby's movements is key to understanding possible risks of developmental disorders in their growth.
Previous research in psychology has shown that measuring specific movements or gestures such as face touches in babies is essential to analyse how babies understand themselves and their context.
This research proposes the first automatic approach that detects face touches from video recordings by tracking infants' movements and gestures.
arXiv Detail & Related papers (2023-01-07T18:08:43Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- A Spatio-temporal Attention-based Model for Infant Movement Assessment from Videos [44.71923220732036]
We develop a new method for fidgety movement assessment using human poses extracted from short clips.
Human poses capture only relevant motion profiles of joints and limbs and are free from irrelevant appearance artifacts.
Our experiments show that the proposed method achieves an ROC-AUC score of 81.87%, significantly outperforming existing competing methods while offering better interpretability.
arXiv Detail & Related papers (2021-05-20T14:31:54Z)
- One-shot action recognition towards novel assistive therapies [63.23654147345168]
This work is motivated by the automated analysis of medical therapies that involve action imitation games.
The presented approach incorporates a pre-processing step that standardizes heterogeneous motion data conditions.
We evaluate the approach on a real use-case of automated video analysis for therapy support with autistic people.
arXiv Detail & Related papers (2021-02-17T19:41:37Z)
- Towards human-level performance on automatic pose estimation of infant spontaneous movements [2.7086496937827005]
Four types of convolutional neural networks were trained and evaluated on a novel infant pose dataset.
The best performing neural network had a similar localization error to the inter-rater spread of human expert annotations.
Overall, the results of our study show that pose estimation of infant spontaneous movements has a great potential to support research initiatives on early detection of developmental disorders in children with perinatal brain injuries.
arXiv Detail & Related papers (2020-10-12T18:17:47Z)
- Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using Deep Multiple-Instance Learning [59.74684475991192]
Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old.
PD symptoms include tremor, rigidity, and bradykinesia.
We present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device.
arXiv Detail & Related papers (2020-05-06T09:02:30Z)
- Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture that combines object segmentation and convolutional neural networks (CNNs); a rough sketch of this mask-as-channel idea appears after this list.
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in ...
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
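The last related paper above injects a segmentation result into the classifier by stacking the mask with the image as an extra input channel. As a generic, hedged illustration of that mask-as-channel idea (not the paper's actual architecture; shapes and names are assumed), the preprocessing amounts to:

# Generic illustration of the "mask as an extra channel" idea: stack a binary
# segmentation mask with the RGB image so the downstream CNN sees 4 channels.
import numpy as np

def add_mask_channel(image_rgb, mask):
    # image_rgb: (H, W, 3) float array; mask: (H, W) binary array.
    mask = mask.astype(image_rgb.dtype)[..., None]      # (H, W, 1)
    return np.concatenate([image_rgb, mask], axis=-1)   # (H, W, 4)

The only corresponding model change is that the first convolution must accept four input channels instead of three (e.g., in_channels=4 in a typical PyTorch definition).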