Privacy-preserving Early Detection of Epileptic Seizures in Videos
- URL: http://arxiv.org/abs/2309.08794v1
- Date: Fri, 15 Sep 2023 22:29:07 GMT
- Title: Privacy-preserving Early Detection of Epileptic Seizures in Videos
- Authors: Deval Mehta, Shobi Sivathamboo, Hugh Simpson, Patrick Kwan, Terence O'Brien, Zongyuan Ge
- Abstract summary: We contribute towards the development of video-based epileptic seizure classification by introducing a novel framework (SETR-PKD).
Our framework is built upon optical flow features extracted from the video of a seizure, which encodes the seizure motion semiotics while preserving the privacy of the patient.
Our framework could detect tonic-clonic seizures (TCSs) in a privacy-preserving manner with an accuracy of 83.9% while they are only half-way into their progression.
- Score: 10.180183020927872
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we contribute towards the development of video-based epileptic
seizure classification by introducing a novel framework (SETR-PKD), which could
achieve privacy-preserved early detection of seizures in videos. Specifically,
our framework has two significant components - (1) It is built upon optical
flow features extracted from the video of a seizure, which encodes the seizure
motion semiotics while preserving the privacy of the patient; (2) It utilizes a
transformer based progressive knowledge distillation, where the knowledge is
gradually distilled from networks trained on a longer portion of video samples
to the ones which will operate on shorter portions. Thus, our proposed
framework addresses the limitations of the current approaches which compromise
the privacy of the patients by directly operating on the RGB video of a seizure
as well as impede real-time detection of a seizure by utilizing the full video
sample to make a prediction. Our SETR-PKD framework could detect tonic-clonic
seizures (TCSs) in a privacy-preserving manner with an accuracy of 83.9% while
they are only half-way into their progression. Our data and code are available
at https://github.com/DevD1092/seizure-detection
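The progressive knowledge-distillation component above can be sketched in plain Python. The abstract does not specify the exact objective, so the temperature-scaled KL divergence, the `progressive_kd` helper, and the clip fractions below are illustrative assumptions rather than the authors' implementation: each teacher network scores a longer prefix of the seizure clip, and its softened prediction supervises a student that sees a shorter prefix.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions,
    the standard knowledge-distillation objective."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def progressive_kd(models, clip, fractions=(1.0, 0.75, 0.5)):
    """Progressive schedule (hypothetical): models[i] scores the first
    fractions[i] of `clip` (a list of per-frame feature vectors, e.g.
    optical-flow features), and each stage distils into the next, so
    knowledge flows from full-clip networks to early-detection ones."""
    losses = []
    for teacher, student, f_t, f_s in zip(models, models[1:],
                                          fractions, fractions[1:]):
        t_logits = teacher(clip[: int(len(clip) * f_t)])
        s_logits = student(clip[: int(len(clip) * f_s)])
        losses.append(distillation_loss(t_logits, s_logits))
    return losses
```

Distilling stage by stage, rather than directly from the full-clip network to the shortest-prefix one, is what the abstract describes as "gradually" transferring knowledge; the half-clip student is what enables detection while a seizure is only half-way into its progression.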
Related papers
- Learning Temporally Consistent Video Depth from Video Diffusion Priors [57.929828486615605]
This work addresses the challenge of video depth estimation.
We reformulate the prediction task into a conditional generation problem.
This allows us to leverage the prior knowledge embedded in existing video generation models.
arXiv Detail & Related papers (2024-06-03T16:20:24Z)
- VSViG: Real-time Video-based Seizure Detection via Skeleton-based Spatiotemporal ViG [8.100646331930953]
An accurate and efficient epileptic seizure onset detection can significantly benefit patients.
Traditional diagnostic methods, primarily relying on electroencephalograms (EEGs), often result in cumbersome and non-portable solutions.
We propose a novel video-based seizure detection model via a skeleton-based spatiotemporal vision graph neural network.
arXiv Detail & Related papers (2023-11-24T15:07:29Z)
- A Spatial-Temporal Deformable Attention based Framework for Breast Lesion Detection in Videos [107.96514633713034]
We propose a spatial-temporal deformable attention based framework, named STNet.
Our STNet introduces a spatial-temporal deformable attention module to perform local spatial-temporal feature fusion.
Experiments on the public breast lesion ultrasound video dataset show that our STNet obtains a state-of-the-art detection performance.
arXiv Detail & Related papers (2023-09-09T07:00:10Z)
- Video object detection for privacy-preserving patient monitoring in intensive care [0.0]
We propose a new method for exploiting information in the temporal succession of video frames.
Our method outperforms a standard YOLOv5 baseline model by +1.7% mAP@.5 while also training over ten times faster on our proprietary dataset.
arXiv Detail & Related papers (2023-06-26T11:52:22Z)
- Point Cloud Video Anomaly Detection Based on Point Spatio-Temporal Auto-Encoder [1.4340883856076097]
We propose Point Spatio-Temporal Auto-Encoder (PSTAE), an autoencoder framework that uses point cloud videos as input to detect anomalies in point cloud videos.
Our method sets a new state-of-the-art (SOTA) on the TIMo dataset.
arXiv Detail & Related papers (2023-06-04T10:30:28Z)
- Transform-Equivariant Consistency Learning for Temporal Sentence Grounding [66.10949751429781]
We introduce a novel Equivariant Consistency Regulation Learning framework to learn more discriminative representations for each video.
Our motivation is that the temporal boundary of the query-guided activity should be predicted consistently.
In particular, we devise a self-supervised consistency loss module to enhance the completeness and smoothness of the augmented video.
arXiv Detail & Related papers (2023-05-06T19:29:28Z)
- Spatial-Temporal Frequency Forgery Clue for Video Forgery Detection in VIS and NIR Scenario [87.72258480670627]
Existing face forgery detection methods based on the frequency domain find that GAN-forged images show obvious grid-like visual artifacts in the frequency spectrum compared to real images.
This paper proposes a Cosine Transform-based Forgery Clue Augmentation Network (FCAN-DCT) to achieve a more comprehensive spatial-temporal feature representation.
arXiv Detail & Related papers (2022-07-05T09:27:53Z)
- Transfer Learning of Deep Spatiotemporal Networks to Model Arbitrarily Long Videos of Seizures [58.720142291102135]
Detailed analysis of seizure semiology is critical for management of epilepsy patients.
We present GESTURES, a novel architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We show that an STCNN trained on a HAR dataset can be used in combination with an RNN to accurately represent arbitrarily long videos of seizures.
arXiv Detail & Related papers (2021-06-22T18:40:31Z)
- Privacy-sensitive Objects Pixelation for Live Video Streaming [52.83247667841588]
We propose a novel Privacy-sensitive Objects Pixelation (PsOP) framework for automatic personal privacy filtering during live video streaming.
Our PsOP is extendable to any potential privacy-sensitive objects pixelation.
In addition to boosting pixelation accuracy, experiments on the streaming video data we built show that the proposed PsOP significantly reduces the over-pixelation ratio in privacy-sensitive object pixelation.
arXiv Detail & Related papers (2021-01-03T11:07:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.