Spontaneous preterm birth prediction using convolutional neural networks
- URL: http://arxiv.org/abs/2008.07000v2
- Date: Fri, 21 Aug 2020 19:35:33 GMT
- Title: Spontaneous preterm birth prediction using convolutional neural networks
- Authors: Tomasz Włodarczyk, Szymon Płotka, Przemysław Rokita, Nicole
Sochacki-Wójcicka, Jakub Wójcicki, Michał Lipa, Tomasz Trzciński
- Abstract summary: An estimated 15 million babies are born too early every year.
Approximately 1 million children die each year due to complications of preterm birth (PTB).
- Score: 8.47519763941156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An estimated 15 million babies are born too early every year. Approximately 1
million children die each year due to complications of preterm birth (PTB).
Many survivors face a lifetime of disability, including learning disabilities
and visual and hearing problems. Although manual analysis of ultrasound images
(US) is still prevalent, it is prone to errors due to its subjective component
and complex variations in the shape and position of organs across patients. In
this work, we introduce a conceptually simple convolutional neural network
(CNN) trained to segment prenatal ultrasound images and to classify them for
preterm birth detection. Our method efficiently segments different types of
cervixes in transvaginal ultrasound images while simultaneously predicting
preterm birth from the extracted image features without human oversight. We
employed three popular network models for the cervix segmentation task: U-Net,
Fully Convolutional Network, and DeepLabv3. Based on the results and model
efficiency, we extended U-Net with a parallel branch for the classification
task. The proposed model is trained and evaluated on a dataset of 354 2D
transvaginal ultrasound images and achieves a mean Jaccard index of
0.923 ± 0.081 for segmentation and a classification sensitivity of
0.677 ± 0.042 with a 3.49% false-positive rate. Our method outperforms
state-of-the-art methods in predicting preterm birth from transvaginal
ultrasound images.
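The reported evaluation metrics can be computed from predicted and reference masks and labels. The sketch below, using illustrative toy inputs rather than the paper's data, shows how the Jaccard index, classification sensitivity, and false-positive rate are typically calculated (function names are ours, not from the paper):

```python
import numpy as np

def jaccard_index(pred_mask, true_mask):
    """Jaccard index (IoU) between two binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return intersection / union if union else 1.0

def sensitivity_and_fpr(pred_labels, true_labels):
    """Classification sensitivity (recall) and false-positive rate."""
    pred = np.asarray(pred_labels, dtype=bool)
    true = np.asarray(true_labels, dtype=bool)
    tp = np.sum(pred & true)
    fn = np.sum(~pred & true)
    fp = np.sum(pred & ~true)
    tn = np.sum(~pred & ~true)
    return tp / (tp + fn), fp / (fp + tn)

# Toy masks and labels, not the paper's dataset.
pred_mask = [[1, 1], [1, 0]]
true_mask = [[1, 1], [0, 0]]
print(jaccard_index(pred_mask, true_mask))          # 2/3

sens, fpr = sensitivity_and_fpr([1, 0, 1, 0], [1, 1, 0, 0])
print(sens, fpr)                                    # 0.5 0.5
```

The mean Jaccard index reported in the abstract would be the average of per-image values of this kind over the evaluation set.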
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specially designed MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z)
- Breast tumor classification based on self-supervised contrastive learning from ultrasound videos [7.825379326219145]
We adopted a triplet network and a self-supervised contrastive learning technique to learn representations from unlabeled breast ultrasound video clips.
Our model achieved an area under the receiver operating characteristic curve (AUC) of 0.952, which is significantly higher than the others.
The proposed framework greatly reduces the demand for labeled data and holds potential for use in automatic breast ultrasound image diagnosis.
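The triplet objective underlying such self-supervised contrastive approaches pulls an anchor embedding toward a positive example (e.g., another frame from the same video clip) and pushes it away from a negative one. A minimal sketch with made-up embeddings, standing in for the paper's learned representations:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: max(0, d(a, p) - d(a, n) + margin),
    where d is the Euclidean distance between embedding vectors."""
    a, p, n = (np.asarray(v, dtype=float) for v in (anchor, positive, negative))
    d_ap = np.linalg.norm(a - p)
    d_an = np.linalg.norm(a - n)
    return max(0.0, d_ap - d_an + margin)

# Toy embeddings standing in for features of ultrasound video frames.
anchor   = [1.0, 0.0]
positive = [1.0, 0.1]   # same clip -> should be close
negative = [0.0, 1.0]   # different clip -> should be far
print(triplet_loss(anchor, positive, negative))  # 0.0 (already separated by > margin)
```

Minimizing this loss over many (anchor, positive, negative) triplets yields representations useful for downstream classification without labels.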
arXiv Detail & Related papers (2024-08-20T07:16:01Z)
- WATUNet: A Deep Neural Network for Segmentation of Volumetric Sweep Imaging Ultrasound [1.2903292694072621]
Volume sweep imaging (VSI) is an innovative approach that enables untrained operators to capture quality ultrasound images.
We present a novel segmentation model known as Wavelet_Attention_UNet (WATUNet).
In this model, we incorporate wavelet gates (WGs) and attention gates (AGs) between the encoder and decoder, instead of a simple skip connection, to overcome these limitations.
arXiv Detail & Related papers (2023-11-17T20:32:37Z)
- Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) deg for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z)
- BabyNet: Residual Transformer Module for Birth Weight Prediction on Fetal Ultrasound Video [8.468600443532413]
We propose the Residual Transformer Module, which extends a 3D ResNet-based network for analysis of 2D+t spatio-temporal ultrasound video scans.
Our end-to-end method, called BabyNet, automatically predicts fetal birth weight based on fetal ultrasound video scans.
arXiv Detail & Related papers (2022-05-19T08:27:23Z)
- Enabling faster and more reliable sonographic assessment of gestational age through machine learning [1.3238745915345225]
Fetal ultrasounds are an essential part of prenatal care and can be used to estimate gestational age (GA).
We developed three AI models: an image model using standard plane images, a video model using fly-to videos, and an ensemble model combining both.
All three were statistically superior to standard fetal biometry-based GA estimates derived by expert sonographers.
arXiv Detail & Related papers (2022-03-22T17:15:56Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Brain Age Prediction from Magnetic Resonance Images Using Convolutional Neural Networks [57.52103125083341]
Deep learning techniques for brain age prediction from magnetic resonance images are investigated.
The identification of biomarkers is useful for detecting an early-stage neurodegenerative process, as well as for predicting age-related or non-age-related cognitive decline.
The best result was obtained by the 2D model, which achieved a mean absolute error of 3.83 years.
arXiv Detail & Related papers (2021-12-23T14:51:45Z)
- Convolutional neural network based on transfer learning for breast cancer screening [0.0]
In this paper, a deep convolutional neural network-based algorithm is proposed to aid in accurately identifying breast cancer from ultrasound images.
Several experiments were conducted on a breast ultrasound dataset consisting of 537 benign, 360 malignant, and 133 normal images.
Using k-fold cross-validation and a bagging ensemble, we achieved an accuracy of 99.5% and a sensitivity of 99.6%.
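K-fold cross-validation combined with a bagging ensemble, as described, can be sketched roughly as follows; the "models" and data here are placeholders, not the paper's actual CNN classifiers:

```python
import random
from collections import Counter

def k_fold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and split them into k disjoint folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def bagging_predict(models, x):
    """Majority vote over the predictions of the ensemble members."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

# Placeholder "models": in practice, each fold yields one trained classifier.
models = [lambda x: "benign", lambda x: "malignant", lambda x: "benign"]
print(bagging_predict(models, x=None))  # benign (2 of 3 votes)

folds = k_fold_indices(n_samples=10, k=5)
print([sorted(f) for f in folds])  # 5 folds of 2 indices each
```

Each fold serves once as the validation split while the rest are used for training, and the per-fold models are then aggregated by voting, which tends to reduce variance relative to a single classifier.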
arXiv Detail & Related papers (2021-12-22T02:27:12Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small sub-images called patches are extracted and classified, yielding patch-level predictions.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Hybrid Attention for Automatic Segmentation of Whole Fetal Head in Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is first formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
arXiv Detail & Related papers (2020-04-28T14:43:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.