Self-supervised learning for infant cry analysis
- URL: http://arxiv.org/abs/2305.01578v1
- Date: Tue, 2 May 2023 16:27:18 GMT
- Title: Self-supervised learning for infant cry analysis
- Authors: Arsenii Gorin, Cem Subakan, Sajjad Abdoli, Junhao Wang, Samantha Latremouille, Charles Onu
- Abstract summary: We explore self-supervised learning (SSL) for analyzing a first-of-its-kind database of cry recordings containing clinical indications of more than a thousand newborns.
Specifically, we target cry-based detection of neurological injury as well as identification of cry triggers such as pain, hunger, and discomfort.
We show that pre-training with an SSL contrastive loss (SimCLR) performs significantly better than supervised pre-training for both neurological-injury detection and cry-trigger identification.
- Score: 2.7973623341455602
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore self-supervised learning (SSL) for analyzing a
first-of-its-kind database of cry recordings containing clinical indications of
more than a thousand newborns. Specifically, we target cry-based detection of
neurological injury as well as identification of cry triggers such as pain,
hunger, and discomfort. Annotating a large database in the medical setting is
expensive and time-consuming, typically requiring the collaboration of several
experts over years. Leveraging large amounts of unlabeled audio data to learn
useful representations can lower the cost of building robust models and,
ultimately, clinical solutions. In this work, we experiment with
self-supervised pre-training of a convolutional neural network on large audio
datasets. We show that pre-training with an SSL contrastive loss (SimCLR)
performs significantly better than supervised pre-training for both
neurological-injury detection and cry-trigger identification. In addition, we
demonstrate further performance gains through SSL-based domain adaptation using
unlabeled infant cries. We also show that using such SSL-based pre-training for
adaptation to cry sounds reduces the amount of labeled data the overall system
requires.
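As an illustration of the contrastive objective named above, the sketch below computes the SimCLR NT-Xent loss on embeddings of two augmented views of a batch of cry recordings. It is a minimal sketch under stated assumptions: the encoder, the augmentation function, and the temperature of 0.1 are illustrative placeholders, not the paper's actual configuration.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """SimCLR NT-Xent loss for two batches of embeddings.

    z1, z2: (N, D) embeddings of two augmented views of the same N
    audio clips; row i of z1 and row i of z2 form a positive pair.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarity
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # The positive for row i is row i + N, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage sketch (placeholders): `encoder` is any CNN mapping log-mel
# spectrograms (B, 1, mels, frames) to (B, D) embeddings, and `augment`
# is a stochastic audio/spectrogram augmentation.
#   z1, z2 = encoder(augment(batch)), encoder(augment(batch))
#   loss = nt_xent_loss(z1, z2)
```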
Related papers
- Self-Supervised Multiple Instance Learning for Acute Myeloid Leukemia Classification [1.1874560263468232]
Diseases like Acute Myeloid Leukemia (AML) pose challenges due to scarce and costly annotations at the single-cell level.
Multiple Instance Learning (MIL) addresses weakly labeled scenarios but necessitates powerful encoders typically trained with labeled data.
In this study, we explore Self-Supervised Learning (SSL) as a pre-training approach for MIL-based subtype AML classification from blood smears.
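To make the MIL setup concrete, here is a minimal, hypothetical attention-pooling head that aggregates per-cell embeddings (e.g., from an SSL-pretrained encoder) into one bag-level prediction; the dimensions and architecture are illustrative assumptions, not this paper's reported model.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-based MIL head: pools per-instance (per-cell) embeddings
    into a single bag-level (per-patient) prediction. Dimensions are
    illustrative; the upstream SSL encoder is assumed to exist."""

    def __init__(self, dim=256, n_classes=5):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                        # x: (n_instances, dim), one bag
        w = torch.softmax(self.attn(x), dim=0)   # attention weights over instances
        bag = (w * x).sum(dim=0)                 # weighted average -> (dim,)
        return self.head(bag)                    # bag-level logits

# cells = ssl_encoder(cell_images)   # (n_cells, 256), hypothetical encoder
# logits = AttentionMIL()(cells)
```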
arXiv Detail & Related papers (2024-03-08T15:16:15Z)
- Self-supervised learning for skin cancer diagnosis with limited training data [0.196629787330046]
Self-supervised learning (SSL) is an alternative to the standard supervised pre-training on ImageNet data for scenarios with limited training data.
We find that minimal further SSL pre-training on task-specific data can be as effective as large-scale SSL pre-training on ImageNet for medical image classification tasks with limited labelled data.
arXiv Detail & Related papers (2024-01-01T08:11:38Z)
- Self-supervised TransUNet for Ultrasound regional segmentation of the distal radius in children [0.6291443816903801]
This paper investigates the feasibility of deploying the Masked Autoencoder for SSL (SSL-MAE) of TransUNet for segmenting bony regions from children's wrist ultrasound scans.
arXiv Detail & Related papers (2023-09-18T05:23:33Z)
- Self-Supervision for Tackling Unsupervised Anomaly Detection: Pitfalls and Opportunities [50.231837687221685]
Self-supervised learning (SSL) has transformed machine learning and its many real-world applications.
Unsupervised anomaly detection (AD) has also capitalized on SSL by self-generating pseudo-anomalies.
arXiv Detail & Related papers (2023-08-28T07:55:01Z)
- Self-Supervised Learning for Endoscopic Video Analysis [16.873220533299573]
Self-supervised learning (SSL) has led to important breakthroughs in computer vision by allowing learning from large amounts of unlabeled data.
We study the use of a leading SSL framework, namely Masked Siamese Networks (MSNs), for endoscopic video analysis such as colonoscopy and laparoscopy.
arXiv Detail & Related papers (2023-08-23T19:27:59Z)
- SB-SSL: Slice-Based Self-Supervised Transformers for Knee Abnormality Classification from MRI [5.199134881541244]
We propose a slice-based self-supervised deep learning framework (SB-SSL) for classifying abnormality using knee MRI scans.
With a limited number of cases (1,000), our proposed framework can identify anterior cruciate ligament tears with an accuracy of 89.17% and an AUC of 0.954.
arXiv Detail & Related papers (2022-08-29T23:08:41Z)
- Self-supervised Learning is More Robust to Dataset Imbalance [65.84339596595383]
We investigate self-supervised learning under dataset imbalance.
Off-the-shelf self-supervised representations are already more robust to class imbalance than supervised representations.
We devise a re-weighted regularization technique that consistently improves the SSL representation quality on imbalanced datasets.
arXiv Detail & Related papers (2021-10-11T06:29:56Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models pretrained on ImageNet show a significant increase in performance, generalization, and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- Uncovering the structure of clinical EEG signals with self-supervised learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic in clinically relevant data, such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
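The title names self-training; as a hedged sketch of the generic pseudo-labeling round that self-training typically builds on (the paper's specific improved regularization is not described in this summary), consider:

```python
import torch

@torch.no_grad()
def pseudo_label(model, unlabeled_loader, threshold=0.9):
    """One self-training round: keep unlabeled X-rays whose top predicted
    class probability exceeds `threshold`, using that prediction as the
    label. `model`, the loader, and the 0.9 threshold are illustrative
    placeholders, not the paper's reported setting."""
    model.eval()
    images, labels = [], []
    for x in unlabeled_loader:                   # x: (B, C, H, W) batch
        probs = torch.softmax(model(x), dim=1)
        conf, pred = probs.max(dim=1)
        keep = conf >= threshold                 # confidence filter
        images.append(x[keep])
        labels.append(pred[keep])
    return torch.cat(images), torch.cat(labels)

# Typical loop: train on the labeled set, pseudo-label the unlabeled pool,
# retrain on the union (with strong regularization/augmentation), repeat.
```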
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
- Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder that primarily affects premature infants with low birth weight.
It causes proliferation of vessels in the retina and can result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture that combines object segmentation and convolutional neural networks (CNNs).
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in …
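The mask-as-channel step this summary describes is simple to sketch; the function below is a minimal illustration with assumed array shapes, not the paper's implementation.

```python
import numpy as np

def add_mask_channel(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stack a predicted segmentation mask onto an RGB image as a fourth
    "color" channel. Assumed shapes (illustrative): `image` is (H, W, 3),
    `mask` is (H, W) from the demarcation-line segmentation model."""
    mask = mask.astype(image.dtype)[..., np.newaxis]  # (H, W) -> (H, W, 1)
    return np.concatenate([image, mask], axis=-1)     # (H, W, 4)

# four_channel = add_mask_channel(fundus_image, demarcation_mask)
# Note: the downstream CNN's first conv layer must accept 4 input channels.
```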
arXiv Detail & Related papers (2020-04-03T14:07:41Z)