In Search of Life: Learning from Synthetic Data to Detect Vital Signs in Videos
- URL: http://arxiv.org/abs/2004.07691v2
- Date: Thu, 23 Apr 2020 18:18:39 GMT
- Title: In Search of Life: Learning from Synthetic Data to Detect Vital Signs in Videos
- Authors: Florin Condrea, Victor-Andrei Ivan, Marius Leordeanu
- Abstract summary: We propose a novel deep learning approach to automatically detect vital signs in videos.
Our system learns to predict the respiration or heart intensity signal for each moment in time and to detect the region of interest that is most relevant for the given task.
We test the effectiveness of our proposed system on the recent LCAS dataset and obtain state-of-the-art results.
- Score: 15.995843386368696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatically detecting vital signs in videos, such as the estimation of
heart and respiration rates, is a challenging research problem in computer
vision with important applications in the medical field. One of the key
difficulties in tackling this task is the lack of sufficient supervised
training data, which severely limits the use of powerful deep neural networks.
In this paper we address this limitation through a novel deep learning
approach, in which a recurrent deep neural network is trained to detect vital
signs in the infrared thermal domain from purely synthetic data. What is most
surprising is that our novel method for synthetic training data generation is
general, relatively simple and uses almost no prior medical domain knowledge.
Moreover, our system, which is trained in a purely automatic manner and needs
no human annotation, also learns to predict the respiration or heart intensity
signal for each moment in time and to detect the region of interest that is
most relevant for the given task, e.g. the nose area in the case of
respiration. We test the effectiveness of our proposed system on the recent
LCAS dataset and obtain state-of-the-art results.
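The abstract's central claim is that the synthetic training data generator is general, simple, and nearly free of medical domain knowledge. As a rough illustration only, the following minimal sketch produces a noisy quasi-periodic "respiration intensity" trace together with a per-sample rate label; every parameter name and numeric range here is an assumption for illustration, not the authors' actual generator.

```python
import math
import random

def synthetic_respiration(n_samples=900, fps=30.0, seed=0):
    """Generate a noisy quasi-periodic intensity trace plus per-sample
    breathing-rate labels (breaths per minute).

    Purely illustrative: the rate range, drift, and noise level are
    assumptions, not taken from the paper.
    """
    rng = random.Random(seed)
    rate_bpm = rng.uniform(10.0, 25.0)      # initial breathing rate (assumed range)
    phase = rng.uniform(0.0, 2.0 * math.pi)  # random starting phase
    signal, labels = [], []
    for _ in range(n_samples):
        # slow random drift of the breathing rate, clamped to a plausible range
        rate_bpm = min(25.0, max(10.0, rate_bpm + rng.gauss(0.0, 0.02)))
        # advance the oscillation phase by one video frame at the current rate
        phase += 2.0 * math.pi * (rate_bpm / 60.0) / fps
        value = math.sin(phase) + rng.gauss(0.0, 0.1)  # additive sensor noise
        signal.append(value)
        labels.append(rate_bpm)
    return signal, labels

# 30 seconds of simulated 30 fps signal with per-frame supervision targets
sig, lab = synthetic_respiration()
```

A recurrent network trained on many such (signal, label) pairs never needs human annotation, which is the property the paper exploits; the real system additionally renders thermal-like image patches rather than 1-D traces.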
Related papers
- Breath as a biomarker: A survey of contact and contactless applications and approaches in respiratory monitoring [0.0]
Breath analysis has emerged as a critical tool in health monitoring, offering insights into respiratory function, disease detection, and continuous health assessment.
While traditional contact-based methods are reliable, they often pose challenges in comfort and practicality, particularly for long-term monitoring.
This survey examines contact-based and contactless approaches, emphasizing recent advances in machine learning and deep learning techniques applied to breath analysis.
arXiv Detail & Related papers (2025-08-07T19:51:37Z)
- CAST-Phys: Contactless Affective States Through Physiological signals Database [74.28082880875368]
The lack of affective multi-modal datasets remains a major bottleneck in developing accurate emotion recognition systems.
We present the Contactless Affective States Through Physiological Signals Database (CAST-Phys), a novel high-quality dataset capable of remote physiological emotion recognition.
Our analysis highlights the crucial role of physiological signals in realistic scenarios where facial expressions alone may not provide sufficient emotional information.
arXiv Detail & Related papers (2025-07-08T15:20:24Z)
- Integrating Causality with Neurochaos Learning: Proposed Approach and Research Agenda [1.534667887016089]
We investigate how causal and neurochaos learning approaches can be integrated to produce better results.
We propose an approach for this integration to enhance classification, prediction and reinforcement learning.
arXiv Detail & Related papers (2025-01-23T15:45:29Z)
- TACOS: Task Agnostic Continual Learning in Spiking Neural Networks [1.703671463296347]
Catastrophic interference, the loss of previously learned information when learning new information, remains a major challenge in machine learning.
We show that neuro-inspired mechanisms such as synaptic consolidation and metaplasticity can mitigate catastrophic interference in a spiking neural network.
Our model, TACOS, combines neuromodulation with complex synaptic dynamics to enable new learning while protecting previous information.
arXiv Detail & Related papers (2024-08-16T15:42:16Z)
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Critical Learning Periods for Multisensory Integration in Deep Networks [112.40005682521638]
We show that the ability of a neural network to integrate information from diverse sources hinges critically on being exposed to properly correlated signals during the early phases of training.
We show that critical periods arise from the complex and unstable early transient dynamics, which are decisive for the final performance of the trained system and its learned representations.
arXiv Detail & Related papers (2022-10-06T23:50:38Z)
- Neuro-BERT: Rethinking Masked Autoencoding for Self-supervised Neurological Pretraining [24.641328814546842]
We present Neuro-BERT, a self-supervised pre-training framework of neurological signals based on masked autoencoding in the Fourier domain.
We propose a novel pre-training task dubbed Fourier Inversion Prediction (FIP), which randomly masks out a portion of the input signal and then predicts the missing information.
By evaluating our method on several benchmark datasets, we show that Neuro-BERT improves downstream neurological-related tasks by a large margin.
arXiv Detail & Related papers (2022-04-20T16:48:18Z)
- A Novel Capsule Neural Network Based Model for Drowsiness Detection Using Electroencephalography Signals [0.0]
Capsule Neural Networks are a recently proposed deep learning architecture designed to work with limited amounts of data.
This paper presents a deep learning-based method for drowsiness detection with CapsNet, using a concatenation of spectrogram images of the electroencephalography signal channels.
arXiv Detail & Related papers (2022-04-04T17:23:53Z)
- The ideal data compression and automatic discovery of hidden law using neural network [0.0]
We consider how the human brain recognizes events and memorizes them.
We reproduce this system of the human brain in a machine learning model with a new autoencoder neural network.
arXiv Detail & Related papers (2022-03-31T10:55:24Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of sensory pattern data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use a variant of deep generative models, CycleGAN, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess, train and evaluate calcium fluorescence signals, and a procedure to interpret the resulting deep learning models.
arXiv Detail & Related papers (2021-11-25T13:24:19Z)
- Deep Metric Learning with Locality Sensitive Angular Loss for Self-Correcting Source Separation of Neural Spiking Signals [77.34726150561087]
We propose a methodology based on deep metric learning to address the need for automated post-hoc cleaning and robust separation filters.
We validate this method with an artificially corrupted label set based on source-separated high-density surface electromyography recordings.
This approach enables a neural network to learn to accurately decode neurophysiological time series using any imperfect method of labelling the signal.
arXiv Detail & Related papers (2021-10-13T21:51:56Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.