Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using
Image Sequence Classification
- URL: http://arxiv.org/abs/2010.02732v1
- Date: Tue, 6 Oct 2020 13:55:02 GMT
- Title: Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using
Image Sequence Classification
- Authors: Alexander Grimwood, Helen McNair, Yipeng Hu, Ester Bonmati, Dean
Barratt, Emma Harris
- Abstract summary: Effective transperineal ultrasound image guidance in prostate external beam radiotherapy requires consistent alignment between probe and prostate at each session during patient set-up.
We demonstrate a method for ensuring accurate probe placement through joint classification of images and probe position data.
Using a multi-input multi-task algorithm, spatial coordinate data from an optically tracked ultrasound probe is combined with an image classifier using a recurrent neural network to generate two sets of predictions in real-time.
The algorithm identified optimal probe alignment within a mean (standard deviation) range of 3.7$^{\circ}$ (1.2$^{\circ}$) from angle labels with full observer consensus.
- Score: 55.96221340756895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective transperineal ultrasound image guidance in prostate external beam
radiotherapy requires consistent alignment between probe and prostate at each
session during patient set-up. Probe placement and ultrasound image
interpretation are manual tasks contingent upon operator skill, leading to
interoperator uncertainties that degrade radiotherapy precision. We demonstrate
a method for ensuring accurate probe placement through joint classification of
images and probe position data. Using a multi-input multi-task algorithm,
spatial coordinate data from an optically tracked ultrasound probe is combined
with an image classifier using a recurrent neural network to generate two sets
of predictions in real-time. The first set identifies relevant prostate anatomy
visible in the field of view using the classes: outside prostate, prostate
periphery, prostate centre. The second set recommends a probe angular
adjustment to achieve alignment between the probe and prostate centre with the
classes: move left, move right, stop. The algorithm was trained and tested on
9,743 clinical images from 61 treatment sessions across 32 patients. We
evaluated classification accuracy against class labels derived from three
experienced observers at 2/3 and 3/3 agreement thresholds. For images with
unanimous consensus between observers, anatomical classification accuracy was
97.2% and probe adjustment accuracy was 94.9%. The algorithm identified optimal
probe alignment within a mean (standard deviation) range of 3.7$^{\circ}$
(1.2$^{\circ}$) from angle labels with full observer consensus, comparable to
the 2.8$^{\circ}$ (2.6$^{\circ}$) mean interobserver range. We propose such an
algorithm could assist radiotherapy practitioners with limited experience of
ultrasound image interpretation by providing effective real-time feedback
during patient set-up.
during patient set-up.
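The abstract specifies the inputs (tracked-probe coordinates plus an ultrasound image sequence), a recurrent sequence model, and two three-class outputs, but no implementation details. The PyTorch sketch below is one plausible arrangement of those pieces, assuming a small per-frame CNN encoder, a six-dimensional probe coordinate vector, and a GRU over the frame sequence; the layer sizes and heads are illustrative, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class ProbeGuidanceNet(nn.Module):
    """Hypothetical multi-input, multi-task sequence classifier: a small CNN
    encodes each ultrasound frame, tracked-probe coordinates are concatenated
    to the frame features, a GRU models the image sequence, and two heads
    output the anatomy and probe-adjustment classes."""

    def __init__(self, coord_dim=6, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(                  # per-frame image encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.GRU(32 + coord_dim, hidden, batch_first=True)
        self.anatomy_head = nn.Linear(hidden, 3)       # outside / periphery / centre
        self.adjust_head = nn.Linear(hidden, 3)        # move left / move right / stop

    def forward(self, frames, coords):
        # frames: (batch, time, 1, H, W); coords: (batch, time, coord_dim)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.rnn(torch.cat([feats, coords], dim=-1))
        last = seq[:, -1]                              # prediction for the latest frame
        return self.anatomy_head(last), self.adjust_head(last)

# Toy usage: two 4-frame sequences of 64x64 images with 6-DoF probe coordinates.
model = ProbeGuidanceNet()
anatomy_logits, adjust_logits = model(torch.randn(2, 4, 1, 64, 64),
                                       torch.randn(2, 4, 6))
```

In a multi-task set-up such as this, the two heads would typically share the recurrent features and be trained jointly with one cross-entropy loss per task.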
Related papers
- Nested ResNet: A Vision-Based Method for Detecting the Sensing Area of a Drop-in Gamma Probe [2.835688998859888]
Drop-in gamma probes are widely used in robotic-assisted minimally invasive surgery (RAMIS) for lymph node detection.
Previous work attempted to predict the sensing area location using laparoscopic images, but the prediction accuracy was unsatisfactory.
We introduce a three-branch deep learning framework to predict the sensing area of the probe.
arXiv Detail & Related papers (2024-10-30T16:08:43Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - RUSOpt: Robotic UltraSound Probe Normalization with Bayesian
Optimization for In-plane and Out-plane Scanning [4.420121239028863]
Proper orientation of the robotized probe plays a crucial role in governing the quality of ultrasound images.
We propose a sample-efficient method to automatically adjust the orientation of the ultrasound probe normal to the point of contact on the scanning surface.
arXiv Detail & Related papers (2023-10-05T09:22:16Z) - AiAReSeg: Catheter Detection and Segmentation in Interventional
Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed using the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z) - Identifying Visible Tissue in Intraoperative Ultrasound Images during
Brain Surgery: A Method and Application [1.4408275800058263]
Intraoperative ultrasound scanning is a demanding visuotactile task.
It requires operators to simultaneously localise the ultrasound perspective and manually perform slight adjustments to the pose of the probe.
We propose a method for the identification of the visible tissue, which enables the analysis of ultrasound probe and tissue contact.
arXiv Detail & Related papers (2023-06-01T23:06:14Z) - Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound
Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) degrees for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z) - Domain Generalization for Prostate Segmentation in Transrectal
Ultrasound Images: A Multi-center Study [2.571022281023314]
We introduce a novel 2.5D deep neural network for prostate segmentation on ultrasound images.
We trained our model on 764 subjects from one institution and finetuned our model using only ten subjects from subsequent institutions.
Our method achieved an average Dice Similarity Coefficient (Dice) of $94.0\pm0.03$ and Hausdorff Distance (HD95) of 2.28 mm in an independent set of subjects.
arXiv Detail & Related papers (2022-09-05T20:20:19Z) - Comparison of Depth Estimation Setups from Stereo Endoscopy and Optical
Tracking for Point Measurements [1.1084983279967584]
To support minimally-invasive mitral valve repair, quantitative measurements from the valve can be obtained using an infra-red tracked stylus.
Hand-eye calibration, which links both coordinate systems, is a prerequisite for projecting the points onto the image plane.
A complementary approach is to use a vision-based endoscopic stereo setup to detect and triangulate points of interest, obtaining the 3D coordinates directly (a minimal triangulation sketch appears after this list).
Preliminary results indicate that 3D landmark estimation, either labeled manually or through partly automated detection with a deep learning approach, provides more accurate triangulated depth measurements when performed with a tailored image-based method than
arXiv Detail & Related papers (2022-01-26T10:15:46Z) - Multiple Time Series Fusion Based on LSTM An Application to CAP A Phase
Classification Using EEG [56.155331323304]
Deep-learning-based feature-level fusion of electroencephalogram channels is carried out in this work.
Channel selection, fusion, and classification procedures were optimized by two optimization algorithms.
arXiv Detail & Related papers (2021-12-18T14:17:49Z)
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
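The stereo endoscopy entry above describes recovering 3D landmark coordinates by triangulating matched 2D detections from the two endoscope views. The sketch below illustrates that step with OpenCV's linear triangulation; the intrinsics, 5 mm baseline, and pixel coordinates are invented placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Illustrative pinhole intrinsics shared by both views (placeholder values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Projection matrices for a rectified stereo pair with a 5 mm baseline.
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])

# Matched landmark detections in each view, shape 2 x N (pixel coordinates).
pts_left = np.array([[310.0, 400.0],
                     [250.0, 260.0]])
pts_right = np.array([[280.0, 370.0],
                      [250.0, 260.0]])

# Linear triangulation returns homogeneous 4 x N points; convert to Euclidean.
X_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
X = (X_h[:3] / X_h[3]).T
print("Triangulated 3D landmarks (mm, camera frame):\n", X)
```

Distances between landmarks triangulated this way can then be compared against the point measurements obtained with the infra-red tracked stylus.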
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.