Screen Tracking for Clinical Translation of Live Ultrasound Image
Analysis Methods
- URL: http://arxiv.org/abs/2007.06272v1
- Date: Mon, 13 Jul 2020 09:53:20 GMT
- Title: Screen Tracking for Clinical Translation of Live Ultrasound Image
Analysis Methods
- Authors: Simona Treivase, Alberto Gomez, Jacqueline Matthew, Emily Skelton,
Julia A. Schnabel, Nicolas Toussaint
- Abstract summary: The proposed method captures the US image by tracking the screen with a camera fixed at the sonographer's view point and reformats the captured image to the right aspect ratio.
It is hypothesized that this would allow the retrieved image to be fed into an image processing pipeline that extracts information to help improve the examination.
This information could eventually be projected back to the sonographer's field of view in real time using, for example, an augmented reality (AR) headset.
- Score: 2.5805793749729857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound (US) imaging is one of the most commonly used non-invasive imaging
techniques. However, US image acquisition requires simultaneous guidance of the
transducer and interpretation of images, which is a highly challenging task
that requires years of training. Despite many recent developments in
intra-examination US image analysis, the results are not easy to translate to a
clinical setting. We propose a generic framework to extract the US images and
superimpose the results of an analysis task, without any need for physical
connection or alteration to the US system. The proposed method captures the US
image by tracking the screen with a camera fixed at the sonographer's view
point and reformats the captured image to the right aspect ratio, in 87.66 ±
3.73 ms on average.
It is hypothesized that this would allow the retrieved image to be fed into
an image processing pipeline that extracts information to help improve the
examination. This information could eventually be projected back to the
sonographer's field of view in real time using, for example, an augmented
reality (AR) headset.
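The capture-and-reformat step described above amounts to locating the US display in the camera frame and applying a perspective correction so the image is recovered at its native aspect ratio. The sketch below is a minimal illustration of that general idea using OpenCV; the Otsu-threshold corner-detection heuristic and the assumed target resolution (1024 × 768) are placeholders for demonstration, not the authors' actual implementation.

```python
# Minimal sketch: find a bright quadrilateral (the US screen) in a camera frame
# and warp it to a fixed aspect ratio. Illustrative only; the target size and
# thresholding heuristic are assumptions, not the paper's pipeline.
import cv2
import numpy as np

ASSUMED_W, ASSUMED_H = 1024, 768  # assumed native resolution of the US display

def order_corners(pts):
    """Order 4 corner points as top-left, top-right, bottom-right, bottom-left."""
    pts = pts.reshape(4, 2).astype(np.float32)
    s = pts.sum(axis=1)             # x + y: min -> top-left, max -> bottom-right
    d = np.diff(pts, axis=1).ravel()  # y - x: min -> top-right, max -> bottom-left
    return np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                     pts[np.argmax(s)], pts[np.argmax(d)]], dtype=np.float32)

def extract_screen(frame):
    """Return the screen content rectified to the assumed aspect ratio, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    approx = cv2.approxPolyDP(largest, 0.02 * cv2.arcLength(largest, True), True)
    if len(approx) != 4:
        return None  # screen corners not found in this frame
    src = order_corners(approx)
    dst = np.array([[0, 0], [ASSUMED_W - 1, 0],
                    [ASSUMED_W - 1, ASSUMED_H - 1], [0, ASSUMED_H - 1]], dtype=np.float32)
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, H, (ASSUMED_W, ASSUMED_H))

cap = cv2.VideoCapture(0)           # camera fixed at the sonographer's viewpoint
ok, frame = cap.read()
if ok:
    screen = extract_screen(frame)  # rectified US image, ready for an analysis pipeline
cap.release()
```

The warp here is a single planar homography per frame; in practice, glare, partial occlusion of the screen, and camera motion would require additional handling before the image is passed to any downstream analysis.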
Related papers
- Breast Ultrasound Report Generation using LangChain [58.07183284468881]
We propose integrating multiple image analysis tools into the breast reporting process through LangChain, using Large Language Models (LLMs).
Our method can accurately extract relevant features from ultrasound images, interpret them in a clinical context, and produce comprehensive and standardized reports.
arXiv Detail & Related papers (2023-12-05T00:28:26Z) - AiAReSeg: Catheter Detection and Segmentation in Interventional
Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z) - Ultra-NeRF: Neural Radiance Fields for Ultrasound Imaging [40.72047687523214]
We present a physics-enhanced implicit neural representation (INR) for ultrasound (US) imaging that learns tissue properties from overlapping US sweeps.
Our proposed method leverages a ray-tracing-based neural rendering for novel view US synthesis.
arXiv Detail & Related papers (2023-01-25T11:02:09Z) - Sketch guided and progressive growing GAN for realistic and editable
ultrasound image synthesis [12.32829386817706]
We propose a generative adversarial network (GAN) based image synthesis framework.
We present the first work that can synthesize realistic B-mode US images with high-resolution and customized texture editing features.
In addition, a feature loss is proposed to minimize the difference of high-level features between the generated and real images.
arXiv Detail & Related papers (2022-04-14T12:50:18Z) - Image-Guided Navigation of a Robotic Ultrasound Probe for Autonomous
Spinal Sonography Using a Shadow-aware Dual-Agent Framework [35.17207004351791]
We propose a novel dual-agent framework that integrates a reinforcement learning agent and a deep learning agent.
Our method can effectively interpret the US images and navigate the probe to acquire multiple standard views of the spine.
arXiv Detail & Related papers (2021-11-03T12:11:27Z) - Voice-assisted Image Labelling for Endoscopic Ultrasound Classification
using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z) - Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline (a minimal delay-and-sum sketch is given after this list).
arXiv Detail & Related papers (2021-09-23T15:15:21Z) - Semantic segmentation of multispectral photoacoustic images using deep
learning [53.65837038435433]
Photoacoustic imaging has the potential to revolutionise healthcare.
Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information.
We present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images.
arXiv Detail & Related papers (2021-05-20T09:33:55Z) - Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - Improving Endoscopic Decision Support Systems by Translating Between
Imaging Modalities [4.760079434948197]
We investigate the applicability of image-to-image translation to endoscopic images showing different imaging modalities.
In a study on computer-aided celiac disease diagnosis, we explore whether image-to-image translation is capable of effectively performing the translation between the domains.
arXiv Detail & Related papers (2020-04-27T06:55:56Z)
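As a side note to the beamforming entry above: classical delay-and-sum beamforming computes, for each image pixel, the per-channel receive delays from the element-to-pixel distance, samples each channel at its delay, and sums the samples. The sketch below is a minimal receive-only illustration; the array geometry, sampling rate, speed of sound, and straight-down transmit model are assumptions for demonstration, and real beamformers additionally apply apodization, interpolation, envelope detection, and log compression.

```python
# Minimal delay-and-sum (DAS) receive beamforming sketch.
# Geometry and signal parameters are illustrative assumptions.
import numpy as np

C = 1540.0        # assumed speed of sound in tissue (m/s)
FS = 40e6         # assumed channel-data sampling rate (Hz)
N_ELEMENTS = 64
PITCH = 0.3e-3    # assumed element pitch (m)

# element x-positions, centred on the array axis
elem_x = (np.arange(N_ELEMENTS) - (N_ELEMENTS - 1) / 2) * PITCH

def das_beamform(channel_data, grid_x, grid_z):
    """channel_data: (N_ELEMENTS, n_samples) RF data from one transmit event.
    grid_x, grid_z: 1-D pixel coordinates (m). Returns a (len(grid_z), len(grid_x)) image."""
    n_samples = channel_data.shape[1]
    image = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # receive path length from the pixel back to each element
            rx_dist = np.sqrt((elem_x - x) ** 2 + z ** 2)
            # two-way delay: straight-down transmit (z) plus receive path, over C
            delays = (z + rx_dist) / C
            idx = np.round(delays * FS).astype(int)
            valid = idx < n_samples
            image[iz, ix] = channel_data[np.arange(N_ELEMENTS)[valid], idx[valid]].sum()
    return image

# toy usage with random channel data standing in for real RF acquisitions
rf = np.random.randn(N_ELEMENTS, 2048)
img = das_beamform(rf,
                   grid_x=np.linspace(-5e-3, 5e-3, 64),
                   grid_z=np.linspace(5e-3, 30e-3, 128))
```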
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.