Towards Realistic Ultrasound Fetal Brain Imaging Synthesis
- URL: http://arxiv.org/abs/2304.03941v1
- Date: Sat, 8 Apr 2023 07:07:20 GMT
- Authors: Michelle Iskandar, Harvey Mannering, Zhanxiang Sun, Jacqueline
Matthew, Hamideh Kerdegari, Laura Peralta, Miguel Xochicale
- Abstract summary: There are few public ultrasound fetal imaging datasets due to insufficient amounts of clinical data, patient privacy, rare occurrence of abnormalities in general practice, and limited experts for data collection and validation.
To address such data scarcity, we proposed generative adversarial networks (GAN)-based models, diffusion-super-resolution-GAN and transformer-based-GAN, to synthesise images of fetal ultrasound brain planes from one public dataset.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Prenatal ultrasound imaging is the first-choice modality to assess fetal
health. Medical image datasets for AI and ML methods must be diverse (e.g.
diagnoses, diseases, pathologies, scanners, demographics); however, there are
few public ultrasound fetal imaging datasets due to insufficient amounts of
clinical data, patient privacy, rare occurrence of abnormalities in general
practice, and limited experts for data collection and validation. To address
such data scarcity, we proposed generative adversarial networks (GAN)-based
models, diffusion-super-resolution-GAN and transformer-based-GAN, to synthesise
images of fetal ultrasound brain planes from one public dataset. We report
that GAN-based methods can generate 256x256-pixel fetal ultrasound
trans-cerebellum brain image planes with stable training losses, yielding
lower FID values for the diffusion-super-resolution-GAN (average 7.04, lowest
5.09 at epoch 10) than for the transformer-based-GAN (average 36.02, lowest
28.93 at epoch 60). The results of this work illustrate the potential of
GAN-based methods to synthesise realistic high-resolution ultrasound images,
motivating future work on other fetal brain planes, anatomies and devices, and
the need for a pool of experts to evaluate synthesised images. Code, data and
other resources to reproduce this work are available at
\url{https://github.com/budai4medtech/midl2023}.
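The abstract compares the two generators by Fréchet Inception Distance (FID). As a hedged illustration (not the authors' evaluation code), FID between two Gaussians fitted to feature vectors has a closed form; the sketch below assumes features have already been extracted (e.g. Inception-v3 pool activations of real vs. synthesised images) and uses the identity Tr((C1 C2)^{1/2}) = sum_i sqrt(lambda_i) over the eigenvalues of C1 C2:

```python
import numpy as np

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two feature matrices
    (rows = samples, columns = feature dimensions)."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    # Tr((C1 C2)^{1/2}) equals the sum of square roots of the eigenvalues
    # of C1 @ C2, which are real and non-negative for covariance matrices
    # (clip guards against small numerical noise).
    eig = np.linalg.eigvals(c1 @ c2)
    tr_sqrt = np.sum(np.sqrt(np.clip(eig.real, 0.0, None)))
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(c1) + np.trace(c2) - 2.0 * tr_sqrt)
```

Identical feature sets give FID near 0; a pure mean shift of the fake features raises FID by the squared norm of the shift.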
Related papers
- Privacy-Preserving Federated Foundation Model for Generalist Ultrasound Artificial Intelligence [83.02106623401885]
We present UltraFedFM, an innovative privacy-preserving ultrasound foundation model.
UltraFedFM is collaboratively pre-trained using federated learning across 16 distributed medical institutions in 9 countries.
It achieves an average area under the receiver operating characteristic curve of 0.927 for disease diagnosis and a dice similarity coefficient of 0.878 for lesion segmentation.
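The Dice similarity coefficient quoted above measures overlap between a predicted and a reference segmentation mask. A minimal sketch (function and argument names are illustrative, not from UltraFedFM):

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)
```

A score of 1.0 means perfect overlap; 0.878 therefore indicates substantial but imperfect agreement with the reference lesion masks.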
arXiv Detail & Related papers (2024-11-25T13:40:11Z)
- S-CycleGAN: Semantic Segmentation Enhanced CT-Ultrasound Image-to-Image Translation for Robotic Ultrasonography [2.07180164747172]
We introduce an advanced deep learning model, dubbed S-CycleGAN, which generates high-quality synthetic ultrasound images from computed tomography (CT) data.
The synthetic images are utilized to enhance various aspects of our development of the robot-assisted ultrasound scanning system.
arXiv Detail & Related papers (2024-06-03T10:53:45Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- GAN-GA: A Generative Model based on Genetic Algorithm for Medical Image Generation [0.0]
Generative models offer a promising solution for addressing medical image shortage problems.
This paper proposes the GAN-GA, a generative model optimized by embedding a genetic algorithm.
The proposed model enhances image fidelity and diversity while preserving distinctive features.
arXiv Detail & Related papers (2023-12-30T20:16:45Z)
- FUSC: Fetal Ultrasound Semantic Clustering of Second Trimester Scans Using Deep Self-supervised Learning [1.0819408603463427]
More than 140M fetuses are born yearly, resulting in numerous scans.
The availability of a large volume of fetal ultrasound scans presents the opportunity to train robust machine learning models.
This study presents an unsupervised approach for automatically clustering ultrasound images into a large range of fetal views.
arXiv Detail & Related papers (2023-10-19T09:11:23Z)
- Reslicing Ultrasound Images for Data Augmentation and Vessel Reconstruction [22.336362581634706]
This paper introduces RESUS, a weak supervision data augmentation technique for ultrasound images based on slicing reconstructed 3D volumes from tracked 2D images.
We generate views which cannot be easily obtained in vivo due to physical constraints of ultrasound imaging, and use these augmented ultrasound images to train a semantic segmentation model.
We demonstrate that RESUS achieves statistically significant improvement over training with non-augmented images and highlight qualitative improvements through vessel reconstruction.
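Conceptually, reslicing samples a tilted 2D plane out of a reconstructed 3D volume. A toy nearest-neighbour sketch of that idea (not the RESUS implementation, which works from tracked 2D images and proper interpolation; all names here are illustrative):

```python
import numpy as np

def reslice(volume, center, u, v, size=16):
    """Sample a tilted 2D plane from a 3D volume by nearest-neighbour
    lookup: plane point (i, j) maps to voxel center + i*u + j*v,
    where u and v are in-plane direction vectors."""
    idx = np.arange(size) - size // 2
    jj, ii = np.meshgrid(idx, idx)
    coords = (np.asarray(center)[:, None, None]
              + np.asarray(u)[:, None, None] * ii
              + np.asarray(v)[:, None, None] * jj)
    coords = np.rint(coords).astype(int)
    # Clamp to the volume bounds along each axis.
    for axis, n in enumerate(volume.shape):
        coords[axis] = np.clip(coords[axis], 0, n - 1)
    return volume[coords[0], coords[1], coords[2]]
```

Choosing oblique u and v yields views that a physical probe could not easily acquire, which is the augmentation idea the summary describes.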
arXiv Detail & Related papers (2023-01-18T03:22:47Z)
- Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging [61.60067283680348]
With the advent of COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) deg for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z)
- Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge.
arXiv Detail & Related papers (2022-04-09T13:04:36Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
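The beamforming step described above can be illustrated with a toy delay-and-sum sketch: each receive channel is shifted by its focusing delay so echoes from the focal point align, then the channels are summed. This assumes integer-sample delays; real beamformers use fractional delays and apodization:

```python
import numpy as np

def delay_and_sum(rf, delays_samples):
    """Toy delay-and-sum beamformer. rf has shape (channels, samples);
    each channel is advanced by its non-negative integer delay (in
    samples) and the aligned channels are summed."""
    n_ch, n_s = rf.shape
    out = np.zeros(n_s)
    for ch, d in enumerate(delays_samples):
        shifted = np.roll(rf[ch], -d)
        shifted[n_s - d:] = 0.0  # discard samples wrapped around by roll
        out += shifted
    return out
```

Echoes arriving at different channels at different times add coherently after delaying, which is what concentrates energy at the focal point in the output line.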
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
- Ultrasound Image Classification using ACGAN with Small Training Dataset [0.0]
Training deep learning models requires large labeled datasets, which is often unavailable for ultrasound images.
We exploit an Auxiliary Classifier Generative Adversarial Network (ACGAN) that combines the benefits of large data augmentation and transfer learning.
We conduct experiments on a dataset of breast ultrasound images that show the effectiveness of the proposed approach.
arXiv Detail & Related papers (2021-01-31T11:11:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.