Cardiac ultrasound simulation for autonomous ultrasound navigation
- URL: http://arxiv.org/abs/2402.06463v1
- Date: Fri, 9 Feb 2024 15:14:48 GMT
- Title: Cardiac ultrasound simulation for autonomous ultrasound navigation
- Authors: Abdoul Aziz Amadou, Laura Peralta, Paul Dryburgh, Paul Klein, Kaloian
Petkov, Richard James Housden, Vivek Singh, Rui Liao, Young-Ho Kim, Florin
Christian Ghesu, Tommaso Mansi, Ronak Rajani, Alistair Young and Kawal Rhode
- Abstract summary: We propose a method to generate large amounts of ultrasound images from other modalities and from arbitrary positions.
We present a novel simulation pipeline which uses segmentations from other modalities, an optimized data representation and GPU-accelerated Monte Carlo path tracing.
The proposed approach allows for fast and accurate patient-specific ultrasound image generation, and its usability for training networks for navigation-related tasks is demonstrated.
- Score: 4.036497185262817
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Ultrasound is well-established as an imaging modality for diagnostic and
interventional purposes. However, the image quality varies with operator skills
as acquiring and interpreting ultrasound images requires extensive training due
to the imaging artefacts, the range of acquisition parameters and the
variability of patient anatomies. Automating the image acquisition task could
improve acquisition reproducibility and quality but training such an algorithm
requires large amounts of navigation data, not saved in routine examinations.
Thus, we propose a method to generate large amounts of ultrasound images from
other modalities and from arbitrary positions, such that this pipeline can
later be used by learning algorithms for navigation. We present a novel
simulation pipeline which uses segmentations from other modalities, an
optimized volumetric data representation and GPU-accelerated Monte Carlo path
tracing to generate view-dependent and patient-specific ultrasound images. We
extensively validate the correctness of our pipeline with a phantom experiment,
where structures' sizes, contrast and speckle noise properties are assessed.
Furthermore, we demonstrate its usability to train neural networks for
navigation in an echocardiography view classification experiment by generating
synthetic images from more than 1000 patients. Networks pre-trained with our
simulations achieve significantly superior performance in settings where large
real datasets are not available, especially for under-represented classes. The
proposed approach allows for fast and accurate patient-specific ultrasound
image generation, and its usability for training networks for
navigation-related tasks is demonstrated.
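The paper's pipeline is GPU-accelerated, volumetric and patient-specific; none of that is reproduced here. As a rough, hypothetical illustration of the Monte Carlo principle it builds on, the toy sketch below traces many stochastic paths through a 1-D tissue column, where each voxel absorbs energy deterministically and back-scatters it at random, producing a speckle-like A-line. All names, parameters and the two-medium example are invented for illustration only.

```python
import math
import random

def simulate_a_line(attenuation, scatter_density, n_paths=2000, seed=0):
    """Toy Monte Carlo A-line: average many stochastic 1-D paths.

    attenuation[z]     -- per-voxel absorption coefficient
    scatter_density[z] -- per-voxel probability of back-scattering
    """
    rng = random.Random(seed)
    depth = len(attenuation)
    echo = [0.0] * depth
    for _ in range(n_paths):
        energy = 1.0
        for z in range(depth):
            energy *= math.exp(-attenuation[z])   # deterministic absorption
            if rng.random() < scatter_density[z]: # stochastic scatterer hit
                echo[z] += energy                 # record back-scattered echo
                energy *= 0.5                     # energy lost to the echo
    return [e / n_paths for e in echo]

# Two media: weakly scattering background with a brighter
# inclusion at depths 20-29 (arbitrary illustrative values).
att = [0.01] * 40
sca = [0.05] * 20 + [0.30] * 10 + [0.05] * 10
a_line = simulate_a_line(att, sca)
```

Averaging many such random paths is what gives Monte Carlo methods their characteristic speckle statistics; the actual paper evaluates those statistics quantitatively against a phantom.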
Related papers
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - AiAReSeg: Catheter Detection and Segmentation in Interventional
Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are typically performed under fluoroscopy, the gold standard, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z) - LOTUS: Learning to Optimize Task-based US representations [39.81131738128329]
Anatomical segmentation of organs in ultrasound images is essential to many clinical applications.
Existing deep neural networks require a large amount of labeled data for training in order to achieve clinically acceptable performance.
In this paper, we propose a novel approach for learning to optimize task-based ultrasound image representations.
arXiv Detail & Related papers (2023-07-29T16:29:39Z) - OADAT: Experimental and Synthetic Clinical Optoacoustic Data for
Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets, generated with different experimental set-ups and associated processing methods, are available to facilitate advances in broader clinical applications of OA.
arXiv Detail & Related papers (2022-06-17T08:11:26Z) - Voice-assisted Image Labelling for Endoscopic Ultrasound Classification
using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z) - Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
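The classical baseline that such deep beamformers are compared against is delay-and-sum: each image pixel is formed by summing, across receive elements, the echo sample that corresponds to that element's round-trip propagation delay. The sketch below is a hypothetical, minimal single-pixel version assuming a plane-wave transmit along z; the geometry, sampling rate and the synthetic point scatterer are invented for illustration.

```python
import math

def delay_and_sum(channel_data, element_x, fs, c, pixel):
    """Toy delay-and-sum focusing of one pixel (px, pz) in metres.

    channel_data -- one RF sample list per receive element
    element_x    -- lateral element positions (m), fs in Hz, c in m/s
    """
    px, pz = pixel
    total = 0.0
    for rf, ex in zip(channel_data, element_x):
        # plane-wave transmit depth pz + receive path back to the element
        dist = pz + math.hypot(px - ex, pz)
        idx = int(round(dist / c * fs))       # delay -> sample index
        if 0 <= idx < len(rf):
            total += rf[idx]
    return total

# Synthetic data: a single point scatterer; each channel holds one
# unit echo at exactly its round-trip delay.
c, fs = 1540.0, 40e6
elements = [-0.005 + 0.001 * i for i in range(11)]
scatterer = (0.0, 0.02)
channels = []
for ex in elements:
    rf = [0.0] * 4000
    dist = scatterer[1] + math.hypot(scatterer[0] - ex, scatterer[1])
    rf[int(round(dist / c * fs))] = 1.0
    channels.append(rf)

focus = delay_and_sum(channels, elements, fs, c, scatterer)   # coherent sum
off = delay_and_sum(channels, elements, fs, c, (0.003, 0.015))  # off-target
```

The coherent sum at the true scatterer position is large while an off-target pixel sums to (near) zero; learned beamformers replace or augment this fixed delay-and-apodize rule with trained channel processing.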
arXiv Detail & Related papers (2021-09-23T15:15:21Z) - Semantic segmentation of multispectral photoacoustic images using deep
learning [53.65837038435433]
Photoacoustic imaging has the potential to revolutionise healthcare.
Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information.
We present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images.
arXiv Detail & Related papers (2021-05-20T09:33:55Z) - Ultrasound Image Classification using ACGAN with Small Training Dataset [0.0]
Training deep learning models requires large labeled datasets, which is often unavailable for ultrasound images.
We exploit an Auxiliary Classifier Generative Adversarial Network (ACGAN) that combines the benefits of large-scale data augmentation and transfer learning.
We conduct experiment on a dataset of breast ultrasound images that shows the effectiveness of the proposed approach.
arXiv Detail & Related papers (2021-01-31T11:11:24Z) - Learning Ultrasound Rendering from Cross-Sectional Model Slices for
Simulated Training [13.640630434743837]
Computational simulations can facilitate the training of ultrasound acquisition skills in virtual reality.
We propose herein to bypass any rendering and simulation process at interactive time.
We use a generative adversarial framework with a dedicated generator architecture and input feeding scheme.
arXiv Detail & Related papers (2021-01-20T21:58:19Z) - A deep learning pipeline for identification of motor units in
musculoskeletal ultrasound [0.5249805590164902]
It has been shown that ultrafast ultrasound can be used to record and analyze the mechanical response of individual MUs.
We present an alternative method - a deep learning pipeline - to identify active MUs in ultrasound image sequences.
We train and evaluate the model using simulated data mimicking complex activation patterns and overlapping territories.
arXiv Detail & Related papers (2020-09-23T20:44:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.