Identifying Visible Tissue in Intraoperative Ultrasound Images during
Brain Surgery: A Method and Application
- URL: http://arxiv.org/abs/2306.01190v1
- Date: Thu, 1 Jun 2023 23:06:14 GMT
- Title: Identifying Visible Tissue in Intraoperative Ultrasound Images during
Brain Surgery: A Method and Application
- Authors: Alistair Weld, Luke Dixon, Giulio Anichini, Michael Dyck, Alex Ranne,
Sophie Camp, Stamatia Giannarou
- Abstract summary: Intraoperative ultrasound scanning is a demanding visuotactile task.
It requires operators to simultaneously localise the ultrasound perspective and manually perform slight adjustments to the pose of the probe.
We propose a method for the identification of the visible tissue, which enables the analysis of ultrasound probe and tissue contact.
- Score: 1.4408275800058263
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intraoperative ultrasound scanning is a demanding visuotactile task. It
requires operators to simultaneously localise the ultrasound perspective and
manually perform slight adjustments to the pose of the probe, making sure not
to apply excessive force or break contact with the tissue, whilst also
characterising the visible tissue. In this paper, we propose a method for the
identification of the visible tissue, which enables the analysis of ultrasound
probe and tissue contact via the detection of acoustic shadow and construction
of confidence maps of the perceptual salience. Detailed validation with both in
vivo and phantom data is performed. First, we show that our technique is
capable of achieving state-of-the-art acoustic shadow scan line classification,
with an average binary classification accuracy of 0.87 on unseen data.
Second, we show that our framework for constructing confidence maps is able to
produce an ideal response as the probe's pose is oriented into and out of
optimality, achieving an average RMSE of 0.174 across five scans. The
performance evaluation justifies the potential clinical value of the method,
which can be used both to assist clinical training and to optimise robot-assisted
ultrasound tissue scanning.
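The abstract describes the pipeline (scan-line acoustic shadow classification plus confidence maps of perceptual salience) without implementation detail. Below is a minimal NumPy sketch of the general idea only; the depth-decay model, thresholds, and function names are illustrative assumptions, not the authors' method.

```python
import numpy as np

def scanline_confidence(image, depth_decay=0.02):
    """Per-column (scan line) confidence map for a B-mode image.

    image: 2D float array, rows = depth samples, columns = scan lines.
    The exponential attenuation compensation below is an assumed heuristic,
    not the method from the paper.
    """
    depth = np.arange(image.shape[0])[:, None]
    # Up-weight deeper samples, since echoes weaken with depth.
    compensated = image * np.exp(depth_decay * depth)
    total = compensated.sum(axis=0, keepdims=True) + 1e-8
    # Fraction of each column's compensated echo energy lying at or below
    # each depth: ~1.0 at the transducer face, dropping sharply once a
    # scan line enters acoustic shadow.
    below = np.cumsum(compensated[::-1, :], axis=0)[::-1, :]
    return below / total


def classify_shadow_scanlines(image, tail_fraction=0.25, energy_threshold=0.1):
    """Binary shadow label per scan line: True if the deepest `tail_fraction`
    of the column carries less than `energy_threshold` of the mean image energy."""
    rows = image.shape[0]
    tail = image[int((1.0 - tail_fraction) * rows):, :]
    return tail.mean(axis=0) < energy_threshold * image.mean()


if __name__ == "__main__":
    # Synthetic B-mode frame: the right half loses contact and casts a shadow.
    rng = np.random.default_rng(0)
    frame = rng.uniform(0.2, 1.0, size=(256, 128))
    frame[40:, 64:] *= 0.02  # simulated acoustic shadow below row 40
    conf = scanline_confidence(frame)
    shadow = classify_shadow_scanlines(frame)
    print("shadowed scan lines:", int(shadow.sum()), "of", frame.shape[1])
```

In this sketch, probe-tissue contact quality could be monitored by how many scan lines are flagged as shadowed, loosely mirroring the contact analysis described in the abstract.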
Related papers
- Class-Aware Cartilage Segmentation for Autonomous US-CT Registration in Robotic Intercostal Ultrasound Imaging [39.597735935731386]
A class-aware cartilage bone segmentation network with geometry-constraint post-processing is presented to capture patient-specific rib skeletons.
A dense skeleton graph-based non-rigid registration is presented to map the intercostal scanning path from a generic template to individual patients.
Results demonstrate that the proposed graph-based registration method can robustly and precisely map the path from CT template to individual patients.
arXiv Detail & Related papers (2024-06-06T14:15:15Z)
- Real-time guidewire tracking and segmentation in intraoperative x-ray [52.51797358201872]
We propose a two-stage deep learning framework for real-time guidewire segmentation and tracking.
In the first stage, a YOLOv5 detector is trained, using the original X-ray images as well as synthetic ones, to output the bounding boxes of possible target guidewires.
In the second stage, a novel and efficient network is proposed to segment the guidewire in each detected bounding box.
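A minimal sketch of this detect-then-segment flow is given below; `detect_boxes` and `segment_crop` are hypothetical stand-ins for the trained YOLOv5 detector and the crop-level segmentation network, and the box padding and paste-back logic are illustrative assumptions.

```python
import numpy as np

def two_stage_segment(frame, detect_boxes, segment_crop, pad=8):
    """Two-stage segmentation: a detector proposes boxes, then a
    segmentation network runs on each cropped box and the masks are
    pasted back into a full-resolution mask."""
    h, w = frame.shape[:2]
    full_mask = np.zeros((h, w), dtype=np.uint8)
    for (x1, y1, x2, y2) in detect_boxes(frame):
        # Pad each box slightly so the guidewire tips are not clipped.
        x1, y1 = max(0, int(x1) - pad), max(0, int(y1) - pad)
        x2, y2 = min(w, int(x2) + pad), min(h, int(y2) + pad)
        crop_mask = segment_crop(frame[y1:y2, x1:x2])
        full_mask[y1:y2, x1:x2] = np.maximum(full_mask[y1:y2, x1:x2], crop_mask)
    return full_mask


if __name__ == "__main__":
    # Dummy stand-ins so the sketch runs end to end.
    dummy_detect = lambda img: [(100, 40, 180, 200)]
    dummy_segment = lambda crop: (crop > crop.mean()).astype(np.uint8)
    frame = np.random.rand(256, 256).astype(np.float32)
    mask = two_stage_segment(frame, dummy_detect, dummy_segment)
    print("segmented pixels:", int(mask.sum()))
```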
arXiv Detail & Related papers (2024-04-12T20:39:19Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed using the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- LOTUS: Learning to Optimize Task-based US representations [39.81131738128329]
Anatomical segmentation of organs in ultrasound images is essential to many clinical applications.
Existing deep neural networks require a large amount of labeled data for training in order to achieve clinically acceptable performance.
In this paper, we propose a novel approach for learning to optimize task-based ultrasound image representations.
arXiv Detail & Related papers (2023-07-29T16:29:39Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
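A rough sketch of a two-branch model in this spirit is shown below: an image CNN fused with an encoder over tokenised verbal comments for a 5-way label. The layer sizes, tokenisation, and fusion scheme are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ImageSpeechLabeller(nn.Module):
    """Illustrative two-branch model: an image CNN and a recurrent text
    encoder over transcribed verbal comments, fused for a 5-way label."""

    def __init__(self, vocab_size=2000, n_classes=5):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (B, 32)
        )
        self.text_embed = nn.Embedding(vocab_size, 32, padding_idx=0)
        self.text_rnn = nn.GRU(32, 32, batch_first=True)
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, image, token_ids):
        img_feat = self.image_branch(image)              # (B, 32)
        _, text_hidden = self.text_rnn(self.text_embed(token_ids))
        fused = torch.cat([img_feat, text_hidden[-1]], dim=1)
        return self.head(fused)                          # (B, n_classes) logits


# Smoke test with a dummy EUS frame and a short tokenised comment.
model = ImageSpeechLabeller()
logits = model(torch.randn(2, 1, 128, 128), torch.randint(1, 2000, (2, 12)))
print(logits.shape)  # torch.Size([2, 5])
```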
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- A CNN Segmentation-Based Approach to Object Detection and Tracking in Ultrasound Scans with Application to the Vagus Nerve Detection [17.80391011147757]
We propose a deep learning framework to automatically detect and track a specific anatomical target structure in ultrasound scans.
Our framework is designed to be accurate and robust across subjects and imaging devices, to operate in real-time, and to not require a large training set.
We tested the framework on two different ultrasound datasets with the aim of detecting and tracking the vagus nerve, where it outperformed current state-of-the-art real-time object detection networks.
arXiv Detail & Related papers (2021-06-25T19:12:46Z)
- Semantic segmentation of multispectral photoacoustic images using deep learning [53.65837038435433]
Photoacoustic imaging has the potential to revolutionise healthcare.
Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information.
We present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images.
arXiv Detail & Related papers (2021-05-20T09:33:55Z)
- Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using Image Sequence Classification [55.96221340756895]
Effective transperineal ultrasound image guidance in prostate external beam radiotherapy requires consistent alignment between probe and prostate at each session during patient set-up.
We demonstrate a method for ensuring accurate probe placement through joint classification of images and probe position data.
Using a multi-input multi-task algorithm, spatial coordinate data from an optically tracked ultrasound probe is combined with an image classifier using a recurrent neural network to generate two sets of predictions in real-time.
The algorithm identified optimal probe alignment within a mean (standard deviation) range of 3.7$^\circ$ (1.2$^\circ$) from
arXiv Detail & Related papers (2020-10-06T13:55:02Z)
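The multi-input, multi-task design summarised above (tracked probe coordinates combined with image features, fed to a recurrent network producing two sets of predictions) could be sketched roughly as follows; the feature dimensions, head semantics, and class count are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class ProbeGuidanceRNN(nn.Module):
    """Illustrative multi-input, multi-task recurrent model: per-frame image
    features and tracked probe coordinates are concatenated, passed through a
    GRU, and fed to two output heads."""

    def __init__(self, img_feat_dim=64, pose_dim=6, hidden=128, n_classes=3):
        super().__init__()
        self.rnn = nn.GRU(img_feat_dim + pose_dim, hidden, batch_first=True)
        self.class_head = nn.Linear(hidden, n_classes)  # e.g. alignment category
        self.reg_head = nn.Linear(hidden, 1)            # e.g. angular offset (deg)

    def forward(self, img_feats, poses):
        # img_feats: (B, T, img_feat_dim); poses: (B, T, pose_dim)
        x = torch.cat([img_feats, poses], dim=-1)
        out, _ = self.rnn(x)
        last = out[:, -1]                               # state after latest frame
        return self.class_head(last), self.reg_head(last)


# Smoke test on a 10-frame sequence from a single tracked probe.
model = ProbeGuidanceRNN()
cls_logits, angle = model(torch.randn(1, 10, 64), torch.randn(1, 10, 6))
print(cls_logits.shape, angle.shape)  # torch.Size([1, 3]) torch.Size([1, 1])
```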