Generative Adversarial Networks in Ultrasound Imaging: Extending Field of View Beyond Conventional Limits
- URL: http://arxiv.org/abs/2405.20981v1
- Date: Fri, 31 May 2024 16:26:30 GMT
- Title: Generative Adversarial Networks in Ultrasound Imaging: Extending Field of View Beyond Conventional Limits
- Authors: Matej Gazda, Samuel Kadoury, Jakub Gazda, Peter Drotar
- Abstract summary: TTE ultrasound imaging faces inherent limitations, notably the trade-off between field of view (FoV) and resolution.
This paper introduces a novel application of conditional Generative Adversarial Networks (cGANs) to extend the FoV in TTE ultrasound imaging while maintaining high resolution.
Our proposed cGAN architecture, termed echoGAN, demonstrates the capability to generate realistic anatomical structures through outpainting.
- Score: 1.6588671405657123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transthoracic Echocardiography (TTE) is a fundamental, non-invasive diagnostic tool in cardiovascular medicine, enabling detailed visualization of cardiac structures crucial for diagnosing various heart conditions. Despite its widespread use, TTE ultrasound imaging faces inherent limitations, notably the trade-off between field of view (FoV) and resolution. This paper introduces a novel application of conditional Generative Adversarial Networks (cGANs), specifically designed to extend the FoV in TTE ultrasound imaging while maintaining high resolution. Our proposed cGAN architecture, termed echoGAN, demonstrates the capability to generate realistic anatomical structures through outpainting, effectively broadening the viewable area in medical imaging. This advancement has the potential to enhance both automatic and manual ultrasound navigation, offering a more comprehensive view that could significantly reduce the learning curve associated with ultrasound imaging and aid in more accurate diagnoses. The results confirm that echoGAN reliably reproduces detailed cardiac features, thereby promising a significant step forward in the field of non-invasive cardiac navigation and diagnostics.
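The abstract does not include implementation details, but the outpainting setup it describes can be framed concretely: the generator is conditioned on the pixels inside the conventional sector-shaped FoV and asked to synthesize the region a wider sector would add. A minimal numpy sketch of that mask construction (the sector angles and image size are illustrative assumptions, not echoGAN's actual parameters):

```python
import numpy as np

def sector_mask(h, w, half_angle_deg, apex=(0, None)):
    """Boolean mask of a sector-shaped ultrasound FoV.

    Pixels inside the sector are True (observed); an outpainting
    cGAN would synthesize the False region to extend the view.
    """
    ay = apex[0]
    ax = apex[1] if apex[1] is not None else w // 2
    ys, xs = np.mgrid[0:h, 0:w]
    # Angle of each pixel relative to the vertical axis through the apex.
    angles = np.degrees(np.arctan2(xs - ax, ys - ay + 1e-9))
    return np.abs(angles) <= half_angle_deg

# Conventional 45-degree half-angle FoV vs. an extended 70-degree target.
narrow = sector_mask(128, 128, 45.0)
wide = sector_mask(128, 128, 70.0)
# The outpainting target is the band of pixels the wider sector adds.
target_region = wide & ~narrow
print(narrow.sum(), target_region.sum())
```

The generator would receive the image multiplied by `narrow` and be trained, adversarially, to fill `target_region` with plausible anatomy.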
Related papers
- Privacy-Preserving Federated Foundation Model for Generalist Ultrasound Artificial Intelligence [83.02106623401885]
We present UltraFedFM, an innovative privacy-preserving ultrasound foundation model.
UltraFedFM is collaboratively pre-trained using federated learning across 16 distributed medical institutions in 9 countries.
It achieves an average area under the receiver operating characteristic curve of 0.927 for disease diagnosis and a dice similarity coefficient of 0.878 for lesion segmentation.
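The reported metrics (AUC 0.927 for diagnosis, Dice 0.878 for segmentation) are standard quantities; for reference, both can be computed from scratch. The functions below are a generic numpy sketch, not UltraFedFM's evaluation code, and the rank-based AUC estimator assumes no tied scores:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

def auc_score(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    ranks = scores.argsort().argsort() + 1  # 1-based ranks (no tie handling)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, truth), 3))        # 2*2 / (3+3) -> 0.667
print(auc_score([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # perfect ranking -> 1.0
```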
arXiv Detail & Related papers (2024-11-25T13:40:11Z) - Uterine Ultrasound Image Captioning Using Deep Learning Techniques [0.0]
This paper investigates the use of deep learning for medical image captioning, with a particular focus on uterine ultrasound images.
Our research aims to assist medical professionals in making timely and accurate diagnoses, ultimately contributing to improved patient care.
arXiv Detail & Related papers (2024-11-21T11:41:42Z) - CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - BAAF: A Benchmark Attention Adaptive Framework for Medical Ultrasound Image Segmentation Tasks [15.998631461609968]
We propose a Benchmark Attention Adaptive Framework (BAAF) to assist doctors segment or diagnose lesions and tissues in ultrasound images.
BAAF consists of a parallel hybrid attention module (PHAM) and an adaptive calibration mechanism (ACM)
The design of BAAF further optimizes the "what" and "where" focus and selection problems in CNNs and seeks to improve the segmentation accuracy of lesions and tissues in medical ultrasound images.
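The internals of PHAM and ACM are not given in this summary; as a heavily hedged, generic illustration of the "what" (channel) and "where" (spatial) attention idea the entry refers to, a toy parallel attention over a feature map can be sketched in numpy (this is a stand-in, not BAAF's actual module):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def parallel_hybrid_attention(feat):
    """Toy parallel channel + spatial attention over a (C, H, W) feature map.

    Channel attention gates 'what' to focus on (per-channel weight from
    global average pooling); spatial attention gates 'where' (per-pixel
    weight from the channel-wise mean). The two branches run in parallel
    and their outputs are averaged.
    """
    c_gate = sigmoid(feat.mean(axis=(1, 2)))[:, None, None]  # (C, 1, 1)
    s_gate = sigmoid(feat.mean(axis=0))[None, :, :]          # (1, H, W)
    return 0.5 * (feat * c_gate + feat * s_gate)

feat = np.random.default_rng(0).normal(size=(8, 16, 16))
out = parallel_hybrid_attention(feat)
print(out.shape)  # (8, 16, 16)
```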
arXiv Detail & Related papers (2023-10-02T06:15:50Z) - AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z) - Reslicing Ultrasound Images for Data Augmentation and Vessel Reconstruction [22.336362581634706]
This paper introduces RESUS, a weak supervision data augmentation technique for ultrasound images based on slicing reconstructed 3D volumes from tracked 2D images.
We generate views which cannot be easily obtained in vivo due to physical constraints of ultrasound imaging, and use these augmented ultrasound images to train a semantic segmentation model.
We demonstrate that RESUS achieves statistically significant improvement over training with non-augmented images and highlight qualitative improvements through vessel reconstruction.
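RESUS reconstructs tracked 2D sweeps into a 3D volume and then reslices it along planes that a probe could not physically reach. As a generic sketch of the reslicing step only (the plane parameterization and trilinear interpolation below are written from scratch, not the paper's code):

```python
import numpy as np

def reslice(volume, origin, u, v, out_shape, spacing=1.0):
    """Sample an oblique plane from a 3D volume by trilinear interpolation.

    origin: a 3D point on the plane; u, v: orthogonal in-plane unit vectors.
    Returns a 2D image of shape out_shape; out-of-volume samples read 0.
    """
    h, w = out_shape
    ii, jj = np.mgrid[0:h, 0:w]
    # World coordinates of every output pixel on the plane.
    pts = (np.asarray(origin, float)[:, None, None]
           + spacing * ii * np.asarray(u, float)[:, None, None]
           + spacing * jj * np.asarray(v, float)[:, None, None])
    vol = np.pad(volume, 1)  # zero-pad so floor/ceil indices stay in range
    p = np.clip(pts + 1, 0, np.array(vol.shape)[:, None, None] - 1.001)
    f = np.floor(p).astype(int)
    d = p - f
    out = np.zeros(out_shape)
    for corner in np.ndindex(2, 2, 2):
        # Trilinear weight: product of (1-d) or d per axis for this corner.
        wgt = np.prod([d[k] if corner[k] else 1 - d[k] for k in range(3)],
                      axis=0)
        out += wgt * vol[f[0] + corner[0], f[1] + corner[1], f[2] + corner[2]]
    return out

# Toy volume whose voxel value equals its z index, resliced on a tilted plane.
vol = np.tile(np.arange(16.0)[:, None, None], (1, 16, 16))
img = reslice(vol, origin=(4, 2, 2), u=(0.6, 0.8, 0), v=(0, 0, 1),
              out_shape=(8, 8))
print(img.shape, img[0, 0])  # (8, 8) 4.0
```

Each row of the output walks along `u`, so the sampled value rises linearly with the plane's tilt through the volume.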
arXiv Detail & Related papers (2023-01-18T03:22:47Z) - Generation of Artificial CT Images using Patch-based Conditional Generative Adversarial Networks [0.0]
We present an image generation approach that uses generative adversarial networks with a conditional discriminator.
We validate the feasibility of GAN-enhanced medical image generation on whole heart computed tomography (CT) images.
arXiv Detail & Related papers (2022-05-19T20:29:25Z) - Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z) - Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
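The process this survey describes can be grounded with the classic delay-and-sum baseline that learned beamformers aim to improve on: for each image point, compute the two-way travel time to every array element, index each channel's trace at its own delay, and sum. A minimal numpy sketch (the array geometry, sampling rate, and nearest-sample delays are illustrative assumptions):

```python
import numpy as np

def delay_and_sum(rf, elem_x, fs, c, focus_points):
    """Delay-and-sum beamformer for a linear array.

    rf: (n_elements, n_samples) received echo traces
    elem_x: element x-positions [m]; fs: sample rate [Hz]; c: sound speed [m/s]
    focus_points: (n_points, 2) array of (x, z) image points [m]
    Returns one beamformed amplitude per focus point.
    """
    out = np.zeros(len(focus_points))
    for i, (x, z) in enumerate(focus_points):
        # Two-way path: transmit depth z plus return distance to each element.
        dist = z + np.sqrt((elem_x - x) ** 2 + z ** 2)
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < rf.shape[1]
        # Sum each channel at its own delay (nearest-sample interpolation).
        out[i] = rf[np.nonzero(valid)[0], idx[valid]].sum()
    return out

# Synthetic check: a point scatterer at (0, 10 mm) echoes into each channel.
fs, c = 40e6, 1540.0
elem_x = np.linspace(-5e-3, 5e-3, 16)
rf = np.zeros((16, 2048))
d = 10e-3 + np.sqrt(elem_x ** 2 + (10e-3) ** 2)
rf[np.arange(16), np.round(d / c * fs).astype(int)] = 1.0
pts = np.array([[0.0, 10e-3], [0.0, 12e-3]])
amp = delay_and_sum(rf, elem_x, fs, c, pts)
print(amp)  # focus at the scatterer sums coherently; elsewhere near zero
```

Deep learning approaches typically replace or augment the fixed geometric delays and uniform summation with learned, data-adaptive weighting.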
arXiv Detail & Related papers (2021-09-23T15:15:21Z) - Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.