Identifying Visible Tissue in Intraoperative Ultrasound Images during
Brain Surgery: A Method and Application
- URL: http://arxiv.org/abs/2306.01190v1
- Date: Thu, 1 Jun 2023 23:06:14 GMT
- Title: Identifying Visible Tissue in Intraoperative Ultrasound Images during
Brain Surgery: A Method and Application
- Authors: Alistair Weld, Luke Dixon, Giulio Anichini, Michael Dyck, Alex Ranne,
Sophie Camp, Stamatia Giannarou
- Abstract summary: Intraoperative ultrasound scanning is a demanding visuotactile task.
It requires operators to simultaneously localise the ultrasound perspective and manually perform slight adjustments to the pose of the probe.
We propose a method for the identification of the visible tissue, which enables the analysis of ultrasound probe and tissue contact.
- Score: 1.4408275800058263
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intraoperative ultrasound scanning is a demanding visuotactile task. It
requires operators to simultaneously localise the ultrasound perspective and
manually perform slight adjustments to the pose of the probe, making sure not
to apply excessive force or break contact with the tissue, whilst also
characterising the visible tissue. In this paper, we propose a method for the
identification of the visible tissue, which enables the analysis of ultrasound
probe and tissue contact via the detection of acoustic shadow and construction
of confidence maps of the perceptual salience. Detailed validation with both in
vivo and phantom data is performed. First, we show that our technique achieves
state-of-the-art acoustic shadow scan line classification, with an average
binary classification accuracy of 0.87 on unseen data. Second, we show that our
framework for constructing confidence maps produces the expected response as
the probe's pose is oriented into and out of optimal contact, achieving an
average RMSE of 0.174 across five scans. The
performance evaluation justifies the potential clinical value of the method
which can be used both to assist clinical training and optimise robot-assisted
ultrasound tissue scanning.
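The abstract only sketches the method (scan line shadow detection plus a confidence map of perceptual salience), so the snippet below is a minimal illustrative sketch rather than the authors' published pipeline: it scores each scan line by how much energy survives to the deepest samples, thresholds that score into shadow / no-shadow labels, and accumulates a crude depth-wise confidence map. The array layout, threshold, decay factor, and all function names are assumptions for illustration.

```python
# Minimal sketch (NOT the paper's exact algorithm): per-scan-line acoustic
# shadow detection and a simple confidence map for one B-mode ultrasound frame.
# Assumes rows = depth and columns = scan lines; all constants are placeholders.
import numpy as np


def shadow_score_per_line(frame: np.ndarray, deep_fraction: float = 0.3) -> np.ndarray:
    """Ratio of deep-region to shallow-region mean intensity per scan line.

    A line whose deep region is much darker than its shallow region is a
    candidate acoustic shadow (e.g. poor probe-tissue contact or an occluder).
    """
    depth = frame.shape[0]
    deep_start = int(depth * (1.0 - deep_fraction))
    shallow_mean = frame[:deep_start, :].mean(axis=0) + 1e-6
    deep_mean = frame[deep_start:, :].mean(axis=0)
    return deep_mean / shallow_mean  # ~0 -> shadow, ~1 -> good penetration


def classify_shadow_lines(frame: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Binary label per scan line: True where the line looks shadowed."""
    return shadow_score_per_line(frame) < threshold


def confidence_map(frame: np.ndarray, decay: float = 0.99) -> np.ndarray:
    """Crude per-pixel confidence accumulated along depth.

    Confidence starts at 1 at the transducer surface and is attenuated more
    where the echo intensity is weak, so shadowed regions lose confidence fast.
    """
    norm = frame.astype(np.float64) / (frame.max() + 1e-6)
    conf = np.ones_like(norm)
    for row in range(1, norm.shape[0]):
        # Bright pixels barely attenuate; dark pixels attenuate by up to `decay`.
        conf[row] = conf[row - 1] * decay ** (1.0 - norm[row])
    return conf


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 255, size=(256, 128)).astype(np.float64)
    frame[128:, 40:60] *= 0.05  # simulate an acoustic shadow in scan lines 40-59
    labels = classify_shadow_lines(frame)
    conf = confidence_map(frame)
    print(f"shadowed scan lines: {int(labels.sum())} / {labels.size}")
    print(f"mean confidence in shadowed band: {conf[:, 40:60].mean():.3f}")
    print(f"mean confidence elsewhere:        {conf[:, :40].mean():.3f}")
```

In this toy setup the artificially darkened band is flagged as shadowed and receives markedly lower confidence at depth, which is the kind of signal the paper uses to reason about probe-tissue contact.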
Related papers
- Automated Measurement of Optic Nerve Sheath Diameter Using Ocular Ultrasound Video [14.016658180958444]
This paper presents a novel method to automatically identify the optimal frame from video sequences for ONSD measurement. The proposed method achieved a mean error, mean squared deviation, and intraclass correlation coefficient (ICC) of 0.04, 0.054, and 0.782, respectively.
arXiv Detail & Related papers (2025-06-03T12:14:51Z) - EchoWorld: Learning Motion-Aware World Models for Echocardiography Probe Guidance [79.66329903007869]
We present EchoWorld, a motion-aware world modeling framework for probe guidance.
It encodes anatomical knowledge and motion-induced visual dynamics.
It is trained on more than one million ultrasound images from over 200 routine scans.
arXiv Detail & Related papers (2025-04-17T16:19:05Z) - Image Retrieval with Intra-Sweep Representation Learning for Neck Ultrasound Scanning Guidance [4.987315310656657]
Intraoperative ultrasound (US) can enhance real-time visualization in transoral robotic surgery.
We propose a self-supervised contrastive learning approach to match intraoperative US views to a preoperative image database.
Our method achieves 92.30% retrieval accuracy on simulated data and outperforms state-of-the-art temporal-based contrastive learning approaches.
arXiv Detail & Related papers (2024-12-10T18:39:33Z) - Class-Aware Cartilage Segmentation for Autonomous US-CT Registration in Robotic Intercostal Ultrasound Imaging [39.597735935731386]
A class-aware cartilage bone segmentation network with geometry-constraint post-processing is presented to capture patient-specific rib skeletons.
A dense skeleton graph-based non-rigid registration is presented to map the intercostal scanning path from a generic template to individual patients.
Results demonstrate that the proposed graph-based registration method can robustly and precisely map the path from CT template to individual patients.
arXiv Detail & Related papers (2024-06-06T14:15:15Z) - Real-time guidewire tracking and segmentation in intraoperative x-ray [52.51797358201872]
We propose a two-stage deep learning framework for real-time guidewire segmentation and tracking.
In the first stage, a YOLOv5 detector is trained, using the original X-ray images as well as synthetic ones, to output the bounding boxes of possible target guidewires.
In the second stage, a novel and efficient network is proposed to segment the guidewire in each detected bounding box.
arXiv Detail & Related papers (2024-04-12T20:39:19Z) - CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - AiAReSeg: Catheter Detection and Segmentation in Interventional
Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z) - LOTUS: Learning to Optimize Task-based US representations [39.81131738128329]
Anatomical segmentation of organs in ultrasound images is essential to many clinical applications.
Existing deep neural networks require a large amount of labeled data for training in order to achieve clinically acceptable performance.
In this paper, we propose a novel approach for learning to optimize task-based ultra-sound image representations.
arXiv Detail & Related papers (2023-07-29T16:29:39Z) - Tissue Classification During Needle Insertion Using Self-Supervised
Contrastive Learning and Optical Coherence Tomography [53.38589633687604]
We propose a deep neural network that classifies the tissues from the phase and intensity data of complex OCT signals acquired at the needle tip.
We show that with 10% of the training set, our proposed pretraining strategy helps the model achieve an F1 score of 0.84 whereas the model achieves an F1 score of 0.60 without it.
arXiv Detail & Related papers (2023-04-26T14:11:04Z) - Fluorescence angiography classification in colorectal surgery -- A
preliminary report [8.075715438276244]
The aim is to develop an artificial intelligence algorithm to classify colonic tissue as 'perfused' or 'not perfused' based on fluorescence angiography data.
A web-based app was made available to deploy the algorithm.
arXiv Detail & Related papers (2022-06-13T07:10:59Z) - A Deep Learning Approach to Predicting Collateral Flow in Stroke
Patients Using Radiomic Features from Perfusion Images [58.17507437526425]
Collateral circulation results from specialized anastomotic channels which provide oxygenated blood to regions with compromised blood flow.
The actual grading is mostly done through manual inspection of the acquired images.
We present a deep learning approach to predicting collateral flow grading in stroke patients based on radiomic features extracted from MR perfusion data.
arXiv Detail & Related papers (2021-10-24T18:58:40Z) - Voice-assisted Image Labelling for Endoscopic Ultrasound Classification
using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z) - Systematic Clinical Evaluation of A Deep Learning Method for Medical
Image Segmentation: Radiosurgery Application [48.89674088331313]
We systematically evaluate a Deep Learning (DL) method in a 3D medical image segmentation task.
Our method is integrated into the radiosurgery treatment process and directly impacts the clinical workflow.
arXiv Detail & Related papers (2021-08-21T16:15:40Z) - An Interpretable Algorithm for Uveal Melanoma Subtyping from Whole Slide
Cytology Images [3.33281597371121]
We describe an automated yet interpretable system for uveal melanoma subtyping with digital images from fine needle aspiration biopsies.
Our method embeds every automatically segmented cell of a candidate image as a point in a 2D manifold defined by many representative slides.
A rule-based slide-level classification algorithm is trained on the partitions of the circularly distorted 2D manifold.
arXiv Detail & Related papers (2021-08-13T13:55:08Z) - A CNN Segmentation-Based Approach to Object Detection and Tracking in
Ultrasound Scans with Application to the Vagus Nerve Detection [17.80391011147757]
We propose a deep learning framework to automatically detect and track a specific anatomical target structure in ultrasound scans.
Our framework is designed to be accurate and robust across subjects and imaging devices, to operate in real-time, and to not require a large training set.
We tested the framework on two different ultrasound datasets with the aim of detecting and tracking the vagus nerve, where it outperformed current state-of-the-art real-time object detection networks.
arXiv Detail & Related papers (2021-06-25T19:12:46Z) - Semantic segmentation of multispectral photoacoustic images using deep
learning [53.65837038435433]
Photoacoustic imaging has the potential to revolutionise healthcare.
Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information.
We present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images.
arXiv Detail & Related papers (2021-05-20T09:33:55Z) - Automated Detection of Coronary Artery Stenosis in X-ray Angiography
using Deep Neural Networks [0.0]
We propose a two-step deep-learning framework to partially automate the detection of stenosis from X-ray coronary angiography images.
We achieved a 0.97 accuracy on the task of classifying the Left/Right Coronary Artery angle view and 0.68/0.73 recall on the determination of the regions of interest, for LCA and RCA, respectively.
arXiv Detail & Related papers (2021-03-04T11:45:54Z) - Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using
Image Sequence Classification [55.96221340756895]
Effective transperineal ultrasound image guidance in prostate external beam radiotherapy requires consistent alignment between probe and prostate at each session during patient set-up.
We demonstrate a method for ensuring accurate probe placement through joint classification of images and probe position data.
Using a multi-input multi-task algorithm, spatial coordinate data from an optically tracked ultrasound probe is combined with an image classifier using a recurrent neural network to generate two sets of predictions in real-time.
The algorithm identified optimal probe alignment within a mean (standard deviation) range of 3.7° (1.2°) from
arXiv Detail & Related papers (2020-10-06T13:55:02Z)