FUSC: Fetal Ultrasound Semantic Clustering of Second Trimester Scans
Using Deep Self-supervised Learning
- URL: http://arxiv.org/abs/2310.12600v2
- Date: Tue, 16 Jan 2024 08:47:04 GMT
- Title: FUSC: Fetal Ultrasound Semantic Clustering of Second Trimester Scans
Using Deep Self-supervised Learning
- Authors: Hussain Alasmawi, Leanne Bricker, Mohammad Yaqub
- Abstract summary: More than 140M fetuses are born yearly, resulting in numerous scans.
The availability of a large volume of fetal ultrasound scans presents the opportunity to train robust machine learning models.
This study presents an unsupervised approach for automatically clustering ultrasound images into a large range of fetal views.
- Score: 1.0819408603463427
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Ultrasound is the primary imaging modality in clinical practice during
pregnancy. More than 140M fetuses are born yearly, resulting in numerous scans.
The availability of a large volume of fetal ultrasound scans presents the
opportunity to train robust machine learning models. However, the abundance of
scans also has its challenges, as manual labeling of each image is needed for
supervised methods. Labeling is typically labor-intensive and requires
expertise to annotate the images accurately. This study presents an
unsupervised approach for automatically clustering ultrasound images into a
large range of fetal views, reducing or eliminating the need for manual
labeling. Our Fetal Ultrasound Semantic Clustering (FUSC) method is developed
using a large dataset of 88,063 images and further evaluated on an additional
unseen dataset of 8,187 images, achieving over 92% clustering purity. The results
of our investigation hold the potential to significantly impact the field of
fetal ultrasound imaging and pave the way for more advanced automated labeling
solutions. Finally, we make the code and the experimental setup publicly
available to help advance the field.
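As a concrete reading of the headline metric, the sketch below shows how clustering purity is typically computed: each cluster is assigned its majority reference label, and purity is the fraction of images consistent with that assignment. This is a generic illustration under that standard definition, not code from the released FUSC repository, and the label names are hypothetical.

```python
import numpy as np

def clustering_purity(cluster_ids, true_labels):
    """Purity: give each cluster its majority reference label, then
    measure the fraction of images whose label matches that choice."""
    cluster_ids = np.asarray(cluster_ids)
    true_labels = np.asarray(true_labels)
    correct = 0
    for c in np.unique(cluster_ids):
        members = true_labels[cluster_ids == c]
        _, counts = np.unique(members, return_counts=True)
        correct += counts.max()          # size of the majority-label group
    return correct / len(true_labels)

# Toy example: 6 images grouped into 2 clusters.
# In a pipeline like FUSC, `cluster_ids` would instead come from clustering
# self-supervised image embeddings (e.g. k-means on encoder features).
print(clustering_purity([0, 0, 0, 1, 1, 1],
                        ["head", "head", "abdomen", "femur", "femur", "femur"]))
# -> 0.833...
```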
Related papers
- Privacy-Preserving Federated Foundation Model for Generalist Ultrasound Artificial Intelligence [83.02106623401885]
We present UltraFedFM, an innovative privacy-preserving ultrasound foundation model.
UltraFedFM is collaboratively pre-trained using federated learning across 16 distributed medical institutions in 9 countries.
It achieves an average area under the receiver operating characteristic curve of 0.927 for disease diagnosis and a dice similarity coefficient of 0.878 for lesion segmentation.
arXiv Detail & Related papers (2024-11-25T13:40:11Z)
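The entry above names federated learning as the pre-training mechanism but gives no implementation details, so the following is only a minimal sketch of the standard FedAvg aggregation step, a sample-size-weighted average of client model parameters; the function and variable names are illustrative and not taken from UltraFedFM.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg aggregation: average each parameter tensor across
    clients, weighted by the number of local training samples.
    `client_weights` is a list of dicts mapping parameter name -> ndarray."""
    total = float(sum(client_sizes))
    return {
        name: sum(w[name] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        for name in client_weights[0]
    }

# Toy example: two clients holding 100 and 300 local samples.
clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
print(fedavg(clients, client_sizes=[100, 300]))   # {'w': array([2.5, 3.5])}
```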
- S-CycleGAN: Semantic Segmentation Enhanced CT-Ultrasound Image-to-Image Translation for Robotic Ultrasonography [2.07180164747172]
We introduce an advanced deep learning model, dubbed S-CycleGAN, which generates high-quality synthetic ultrasound images from computed tomography (CT) data.
The synthetic images are utilized to enhance various aspects of our development of the robot-assisted ultrasound scanning system.
arXiv Detail & Related papers (2024-06-03T10:53:45Z)
- Towards Realistic Ultrasound Fetal Brain Imaging Synthesis [0.7315240103690552]
There are few public ultrasound fetal imaging datasets due to insufficient amounts of clinical data, patient privacy, rare occurrence of abnormalities in general practice, and limited experts for data collection and validation.
To address this data scarcity, we propose generative adversarial network (GAN)-based models, a diffusion-super-resolution GAN and a transformer-based GAN, to synthesise images of fetal ultrasound brain planes from one public dataset.
arXiv Detail & Related papers (2023-04-08T07:07:20Z)
- Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy level of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) deg for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z)
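The scan-target entry above reports a positioning error in millimetres, an orientation error in degrees, and a success rate under a 25 mm threshold. The sketch below shows one common way to compute such metrics from predicted and ground-truth probe positions and axis directions; the pose representation and function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def pose_errors(pred_pos, gt_pos, pred_dir, gt_dir):
    """Position error (Euclidean distance, mm) and orientation error
    (angle between predicted and ground-truth probe axes, degrees).
    All inputs have shape (n_targets, 3)."""
    pos_err = np.linalg.norm(pred_pos - gt_pos, axis=1)
    cos = np.sum(pred_dir * gt_dir, axis=1) / (
        np.linalg.norm(pred_dir, axis=1) * np.linalg.norm(gt_dir, axis=1))
    ang_err = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return pos_err, ang_err

def success_rate(pos_err, threshold_mm=25.0):
    """Fraction of scan targets localized within the error threshold."""
    return float(np.mean(pos_err < threshold_mm))

# Reported figures of the form "15.52 (9.47) mm" would then correspond to
# pos_err.mean() and pos_err.std() over the evaluated scan targets.
```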
- Learning Ultrasound Scanning Skills from Human Demonstrations [6.971573270058377]
We propose a learning-based framework to acquire ultrasound scanning skills from human demonstrations.
The parameters of the model are learned using the data collected from skilled sonographers' demonstrations.
The robustness of the proposed framework is validated with the experiments on real data from sonographers.
arXiv Detail & Related papers (2021-11-09T12:29:25Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
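Since the beamforming entry above centres on mapping received echoes to the spatial image domain, here is a minimal NumPy sketch of classic delay-and-sum (DAS) beamforming for a single image point under a normal-incidence plane-wave transmit; this is the conventional baseline that learned beamformers aim to improve on, and the geometry and variable names are simplifications for illustration.

```python
import numpy as np

def das_pixel(rf, element_x, fs, c, px, pz):
    """Delay-and-sum value at image point (px, pz) for a plane wave fired
    at t = 0 along the depth axis. `rf` holds received channel data with
    shape (n_elements, n_samples); `element_x` holds lateral element
    positions (m); `fs` is the sampling rate (Hz); `c` is the speed of
    sound (m/s)."""
    # Two-way travel time: transmit delay down to depth pz plus the return
    # path from (px, pz) back to each receiving element.
    rx_dist = np.sqrt((element_x - px) ** 2 + pz ** 2)
    delays = pz / c + rx_dist / c                  # seconds, one per element
    idx = np.round(delays * fs).astype(int)        # nearest sample index
    valid = idx < rf.shape[1]                      # drop out-of-range samples
    # Coherently sum the delayed samples across the aperture (no apodization).
    return rf[np.flatnonzero(valid), idx[valid]].sum()
```

A full B-mode image would evaluate this over a pixel grid and follow with envelope detection and log compression; the deep-learning methods surveyed above replace or augment parts of that chain.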
- Semantic segmentation of multispectral photoacoustic images using deep learning [53.65837038435433]
Photoacoustic imaging has the potential to revolutionise healthcare.
Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information.
We present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images.
arXiv Detail & Related papers (2021-05-20T09:33:55Z)
- Ultrasound Image Classification using ACGAN with Small Training Dataset [0.0]
Training deep learning models requires large labeled datasets, which are often unavailable for ultrasound images.
We exploit an Auxiliary Classifier Generative Adversarial Network (ACGAN) that combines the benefits of data augmentation and transfer learning.
We conduct experiments on a dataset of breast ultrasound images, demonstrating the effectiveness of the proposed approach.
arXiv Detail & Related papers (2021-01-31T11:11:24Z)
- Hybrid Attention for Automatic Segmentation of Whole Fetal Head in Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is firstly formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
arXiv Detail & Related papers (2020-04-28T14:43:05Z)
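The hybrid attention entry above describes selecting discriminative features and suppressing non-informative volumetric features. As a loose illustration of that idea (not the paper's HAS module), the sketch below applies a generic channel-plus-spatial attention gate to a 3D feature map in PyTorch; the class name and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class ChannelSpatialGate3d(nn.Module):
    """Generic channel + spatial attention gate for 3D feature maps
    (illustrative only, not the paper's HAS). Channel weights come from
    globally pooled features, spatial weights from a 1x1x1 convolution;
    both re-weight the input multiplicatively."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (B, C, D, H, W)
        b, c = x.shape[:2]
        ch = self.channel_mlp(x.mean(dim=(2, 3, 4))).view(b, c, 1, 1, 1)
        sp = self.spatial_conv(x)                  # (B, 1, D, H, W)
        return x * ch * sp                         # suppress uninformative responses
```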