Open video data sharing in developmental and behavioural science
- URL: http://arxiv.org/abs/2207.11020v1
- Date: Fri, 22 Jul 2022 11:47:47 GMT
- Title: Open video data sharing in developmental and behavioural science
- Authors: Peter B Marschik, Tomas Kulvicius, Sarah Flügge, Claudius Widmann,
Karin Nielsen-Saines, Martin Schulte-Rüther, Britta Hüning, Sven Bölte,
Luise Poustka, Jeff Sigafoos, Florentin Wörgötter, Christa Einspieler,
Dajie Zhang
- Abstract summary: Video recording is a widely used method for documenting infant and child behaviours.
The need for shared large-scale datasets continues to grow.
To share data while abiding by privacy protection rules, a critical question arises: do efforts at data de-identification reduce data utility?
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video recording is a widely used method for documenting infant and child
behaviours in research and clinical practice. Video data have rarely been shared
due to ethical concerns about confidentiality, although the need for shared
large-scale datasets continues to grow. This demand is even more imperative
when data-driven computer-based approaches are involved, such as screening
tools to complement clinical assessments. To share data while abiding by
privacy protection rules, a critical question arises: do efforts at data
de-identification reduce data utility? We addressed this question by showcasing
Prechtl's general movements assessment (GMA), an established and globally
practised video-based diagnostic tool in early infancy for detecting
neurological deficits, such as cerebral palsy. To date, no shared
expert-annotated large data repositories for infant movement analyses exist.
Such datasets would massively benefit training and recalibration of human
assessors and the development of computer-based approaches. In the current
study, sequences from a prospective longitudinal infant cohort with a total of
19451 available general movements video snippets were randomly selected for
human clinical reasoning and computer-based analysis. We demonstrated for the
first time that pseudonymisation by face-blurring video recordings is a viable
approach. The video redaction did not affect classification accuracy for either
human assessors or computer vision methods, suggesting an adequate and
easy-to-apply solution for sharing movement video data. We call for further
explorations into efficient and privacy rule-conforming approaches for
deidentifying video data in scientific and clinical fields beyond movement
assessments. These approaches shall enable sharing and merging stand-alone
video datasets into large data pools to advance science and public health.
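The abstract does not spell out the redaction pipeline itself, but the core idea (irreversibly obscuring the face region of each frame while leaving body movement intact) can be sketched. Below is a minimal, illustrative numpy-only version using block pixelation of a given face bounding box; in practice the box would come from a face detector and a Gaussian blur is also commonly used. This is a sketch of the general technique, not the authors' implementation.

```python
import numpy as np

def pixelate_face(frame, box, block=8):
    """Redact a face region of one video frame by block pixelation.

    frame: H x W x C uint8 array (one video frame).
    box:   (x, y, w, h) face bounding box, assumed to come from a
           face detector (not included in this sketch).
    block: side length of the pixelation tiles.
    The frame is modified in place and also returned.
    """
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w]      # view into the frame
    H = (h // block) * block           # crop region to whole tiles
    W = (w // block) * block
    # Group the region into block x block tiles and average each tile,
    # so no facial detail finer than one tile survives.
    tiles = roi[:H, :W].reshape(H // block, block, W // block, block, -1)
    means = tiles.mean(axis=(1, 3), keepdims=True).astype(frame.dtype)
    roi[:H, :W] = np.broadcast_to(means, tiles.shape).reshape(H, W, -1)
    return frame
```

Applied per frame across a recording, this removes identifying facial detail while the limbs and trunk, which carry the movement information GMA relies on, are untouched.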
Related papers
- Advanced Gesture Recognition in Autism: Integrating YOLOv7, Video Augmentation and VideoMAE for Video Analysis [9.162792034193373]
This research work aims to identify repetitive behaviors indicative of autism by analyzing videos captured in natural settings as children engage in daily activities.
The focus is on accurately categorizing real-time repetitive gestures such as spinning, head banging, and arm flapping.
A key component of the proposed methodology is the use of VideoMAE, a model designed to improve both spatial and temporal analysis of video data.
arXiv Detail & Related papers (2024-10-12T02:55:37Z)
- Synthetic-To-Real Video Person Re-ID [57.937189569211505]
Person re-identification (Re-ID) is an important task and has significant applications for public security and information forensics.
We investigate a novel and challenging setting of Re-ID, i.e., cross-domain video-based person Re-ID.
We utilize synthetic video datasets as the source domain for training and real-world videos for testing.
arXiv Detail & Related papers (2024-02-03T10:19:21Z)
- Challenges in Video-Based Infant Action Recognition: A Critical Examination of the State of the Art [9.327466428403916]
We introduce a groundbreaking dataset called "InfActPrimitive", encompassing five significant infant milestone action categories.
We conduct an extensive comparative analysis employing cutting-edge skeleton-based action recognition models.
Our findings reveal that, although the PoseC3D model achieves the highest accuracy at approximately 71%, the remaining models struggle to accurately capture the dynamics of infant actions.
arXiv Detail & Related papers (2023-11-21T02:36:47Z)
- Learning Human Action Recognition Representations Without Real Humans [66.61527869763819]
We present a benchmark that leverages real-world videos with humans removed and synthetic data containing virtual humans to pre-train a model.
We then evaluate the transferability of the representation learned on this data to a diverse set of downstream action recognition benchmarks.
Our approach outperforms previous baselines by up to 5%.
arXiv Detail & Related papers (2023-11-10T18:38:14Z)
- Video object detection for privacy-preserving patient monitoring in intensive care [0.0]
We propose a new method for exploiting information in the temporal succession of video frames.
Our method outperforms a standard YOLOv5 baseline model by +1.7% mAP@.5 while also training over ten times faster on our proprietary dataset.
arXiv Detail & Related papers (2023-06-26T11:52:22Z)
- Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using the multimodal imaging genetic data from Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z)
- When Accuracy Meets Privacy: Two-Stage Federated Transfer Learning Framework in Classification of Medical Images on Limited Data: A COVID-19 Case Study [77.34726150561087]
COVID-19 pandemic has spread rapidly and caused a shortage of global medical resources.
CNNs have been widely utilized and verified in analyzing medical images.
arXiv Detail & Related papers (2022-03-24T02:09:41Z)
- Practical Challenges in Differentially-Private Federated Survival Analysis of Medical Data [57.19441629270029]
In this paper, we take advantage of the inherent properties of neural networks to federate the process of training of survival analysis models.
In the realistic setting of small medical datasets and only a few data centers, this noise makes it harder for the models to converge.
We propose DPFed-post which adds a post-processing stage to the private federated learning scheme.
arXiv Detail & Related papers (2022-02-08T10:03:24Z)
- A Deep Learning Approach to Private Data Sharing of Medical Images Using Conditional GANs [1.2099130772175573]
We present a method for generating a synthetic dataset based on the COSENTYX (secukinumab) Ankylosing Spondylitis clinical study and conduct an in-depth analysis of its properties along three key metrics: image fidelity, sample diversity and dataset privacy.
arXiv Detail & Related papers (2021-06-24T17:24:06Z)
- FLOP: Federated Learning on Medical Datasets using Partial Networks [84.54663831520853]
COVID-19 Disease due to the novel coronavirus has caused a shortage of medical resources.
Different data-driven deep learning models have been developed to mitigate the diagnosis of COVID-19.
The data itself is still scarce due to patient privacy concerns.
We propose a simple yet effective algorithm, named Federated Learning on Medical Datasets using Partial Networks (FLOP).
arXiv Detail & Related papers (2021-02-10T01:56:58Z)
- Ultrasound Video Summarization using Deep Reinforcement Learning [12.320114045092291]
We introduce a fully automatic video summarization method tailored to the needs of medical video data.
We show that our method is superior to alternative video summarization methods and that it preserves essential information required by clinical diagnostic standards.
arXiv Detail & Related papers (2020-05-19T15:44:18Z)
- LRTD: Long-Range Temporal Dependency based Active Learning for Surgical Workflow Recognition [67.86810761677403]
We propose a novel active learning method for cost-effective surgical video analysis.
Specifically, we propose a non-local recurrent convolutional network (NL-RCNet), which introduces a non-local block to capture long-range temporal dependency.
We validate our approach on a large surgical video dataset (Cholec80) by performing surgical workflow recognition task.
arXiv Detail & Related papers (2020-04-21T09:21:22Z)
- Self-trained Deep Ordinal Regression for End-to-End Video Anomaly Detection [114.9714355807607]
We show that applying self-trained deep ordinal regression to video anomaly detection overcomes two key limitations of existing methods.
We devise an end-to-end trainable video anomaly detection approach that enables joint representation learning and anomaly scoring without manually labeled normal/abnormal data.
arXiv Detail & Related papers (2020-03-15T08:44:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.