Laparoscopic Scene Analysis for Intraoperative Visualisation of Gamma Probe Signals in Minimally Invasive Cancer Surgery
- URL: http://arxiv.org/abs/2501.01752v1
- Date: Fri, 03 Jan 2025 10:50:15 GMT
- Title: Laparoscopic Scene Analysis for Intraoperative Visualisation of Gamma Probe Signals in Minimally Invasive Cancer Surgery
- Authors: Baoru Huang,
- Abstract summary: Cancer remains a significant health challenge worldwide, with a new diagnosis occurring every two minutes in the UK.
Surgery is one of the main treatment options for cancer.
Surgeons rely on the sense of touch and the naked eye to guide the excision of cancerous tissues and metastases.
- Score: 2.7195102129095003
- Abstract: Cancer remains a significant health challenge worldwide, with a new diagnosis occurring every two minutes in the UK. Surgery is one of the main treatment options for cancer. However, surgeons rely on the sense of touch and the naked eye, with limited use of pre-operative image data, to directly guide the excision of cancerous tissues and metastases, owing to the lack of reliable intraoperative visualisation tools. This leads to increased costs and harm to the patient when the cancer is removed with positive margins, or when other critical structures are unintentionally impacted. There is therefore a pressing need for more reliable and accurate intraoperative visualisation tools for minimally invasive surgery to improve surgical outcomes and enhance patient care. A recent miniaturised cancer detection probe (SENSEI, developed by Lightpoint Medical Ltd.) leverages the cancer-targeting ability of nuclear agents to identify cancer more accurately intraoperatively using the emitted gamma signal. However, the use of this probe presents a visualisation challenge: the probe is non-imaging and is air-gapped from the tissue, making it difficult for the surgeon to locate the probe-sensing area on the tissue surface. Geometrically, the sensing area is defined as the intersection point between the gamma probe axis and the tissue surface in 3D space, projected onto the 2D laparoscopic image. Hence, in this thesis, tool tracking, pose estimation, and segmentation tools were developed first, followed by laparoscope image depth estimation algorithms and 3D reconstruction methods.
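To make the stated geometry concrete, the sketch below back-projects a laparoscopic depth map into a 3D tissue surface using pinhole intrinsics, approximates the intersection of the gamma-probe axis with that surface, and projects the resulting point back onto the 2D image. This is a minimal illustration of the sensing-area definition only, not the pipeline developed in the thesis: the intrinsics, the probe pose, and the nearest-point-to-ray intersection test are all illustrative assumptions.

```python
import numpy as np

# Hypothetical pinhole intrinsics of the laparoscope (fx, fy, cx, cy) -- illustrative values.
K = np.array([[520.0,   0.0, 320.0],
              [  0.0, 520.0, 240.0],
              [  0.0,   0.0,   1.0]])

def backproject_depth(depth, K):
    """Lift a dense depth map (H x W, metres) to a 3D point cloud in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def probe_axis_intersection(points, tip, direction):
    """Approximate the probe-axis / tissue intersection as the surface point
    closest to the ray (tip + t * direction, t >= 0)."""
    d = direction / np.linalg.norm(direction)
    rel = points - tip                      # vectors from probe tip to surface points
    t = np.clip(rel @ d, 0.0, None)         # projection length along the axis, in front of the tip
    closest_on_ray = tip + t[:, None] * d
    dist = np.linalg.norm(points - closest_on_ray, axis=1)
    return points[np.argmin(dist)]          # 3D estimate of the sensing area

def project_to_image(point_3d, K):
    """Project a 3D camera-frame point onto the 2D laparoscopic image (pixel coordinates)."""
    uvw = K @ point_3d
    return uvw[:2] / uvw[2]

# Toy usage with a flat synthetic "tissue surface" 80 mm in front of the camera.
depth_map = np.full((480, 640), 0.08)
surface = backproject_depth(depth_map, K)
tip = np.array([0.01, 0.0, 0.03])           # assumed probe tip position from tracking
axis = np.array([0.0, 0.1, 1.0])            # assumed probe pointing direction
sensing_3d = probe_axis_intersection(surface, tip, axis)
sensing_px = project_to_image(sensing_3d, K)
print("Sensing area (pixels):", sensing_px)
```

In practice the depth map would come from the learned depth estimation and 3D reconstruction methods described above, and the probe pose from the tool tracking and pose estimation modules; the nearest-point test stands in for a proper ray-surface intersection.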
Related papers
- Monocular Microscope to CT Registration using Pose Estimation of the Incus for Augmented Reality Cochlear Implant Surgery [3.8909273404657556]
We develop a method that permits direct 2D-to-3D registration of the view microscope video to the pre-operative Computed Tomography (CT) scan without the need for external tracking equipment.
Our results demonstrate accuracy, with an average rotation error of less than 25 degrees and translation errors of less than 2 mm, 3 mm, and 0.55% along the x, y, and z axes, respectively (a generic sketch of how such pose errors are typically computed follows the related-papers list).
arXiv Detail & Related papers (2024-03-12T00:26:08Z)
- Segmentation-based Assessment of Tumor-Vessel Involvement for Surgical Resectability Prediction of Pancreatic Ductal Adenocarcinoma [1.880228463170355]
Pancreatic ductal adenocarcinoma (PDAC) is a highly aggressive cancer with limited treatment options.
This research proposes a workflow and deep learning-based segmentation models to automatically assess tumor-vessel involvement.
arXiv Detail & Related papers (2023-10-01T10:39:38Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed using the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- Detecting the Sensing Area of A Laparoscopic Probe in Minimally Invasive Cancer Surgery [6.0097646269887965]
In surgical oncology, it is challenging for surgeons to identify lymph nodes and completely resect cancer.
A novel tethered laparoscopic gamma detector is used to localize a preoperatively injected radiotracer.
Gamma activity is challenging to present to the operator because the probe is non-imaging and does not visibly indicate the activity on the tissue surface.
arXiv Detail & Related papers (2023-07-07T15:33:49Z)
- Intra-operative Brain Tumor Detection with Deep Learning-Optimized Hyperspectral Imaging [37.21885467891782]
Surgery for gliomas (intrinsic brain tumors) is challenging due to the infiltrative nature of the lesion.
No real-time, intra-operative, label-free and wide-field tool is available to assist and guide the surgeon to find the relevant demarcations for these tumors.
We build a deep-learning-based diagnostic tool for cancer resection with potential for intra-operative guidance.
arXiv Detail & Related papers (2023-02-06T15:52:03Z)
- Learned super resolution ultrasound for improved breast lesion characterization [52.77024349608834]
Super resolution ultrasound localization microscopy enables imaging of the microvasculature at the capillary level.
In this work we use a deep neural network architecture that makes effective use of signal structure to address these challenges.
By leveraging our trained network, the microvasculature structure is recovered in a short time, without prior PSF knowledge, and without requiring separability of the UCAs.
arXiv Detail & Related papers (2021-07-12T09:04:20Z)
- Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
- Detecting Scatteredly-Distributed, Small, and Critically Important Objects in 3D Oncology Imaging via Decision Stratification [23.075722503902714]
We focus on the detection and segmentation of oncology-significant (or suspicious cancer-metastasized) lymph nodes (OSLNs).
We propose a divide-and-conquer decision stratification approach that divides OSLNs into tumor-proximal and tumor-distal categories.
We present a novel global-local network (GLNet) that combines high-level lesion characteristics with features learned from localized 3D image patches.
arXiv Detail & Related papers (2020-05-27T23:12:11Z)
- Spatio-spectral deep learning methods for in-vivo hyperspectral laryngeal cancer detection [49.32653090178743]
Early detection of head and neck tumors is crucial for patient survival.
Hyperspectral imaging (HSI) can be used for non-invasive detection of head and neck tumors.
We present multiple deep learning techniques for in-vivo laryngeal cancer detection based on HSI.
arXiv Detail & Related papers (2020-04-21T17:07:18Z)
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
- Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
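For the microscope-to-CT registration entry above, which reports rotation and per-axis translation errors, the following is a generic sketch of how such pose errors are commonly computed from an estimated and a ground-truth rigid transform. It is not the cited paper's evaluation code, and treating the z error as a percentage of the ground-truth depth is an assumption about how that relative figure is defined.

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Geodesic angle between two rotation matrices, in degrees."""
    cos_angle = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    cos_angle = np.clip(cos_angle, -1.0, 1.0)   # guard against numerical drift
    return np.degrees(np.arccos(cos_angle))

def translation_error(t_est, t_gt):
    """Per-axis absolute translation error; x and y in millimetres, z reported as a
    percentage of the ground-truth depth (an assumed convention for the relative error)."""
    err = np.abs(t_est - t_gt)
    ex_mm, ey_mm = err[0] * 1000.0, err[1] * 1000.0   # metres -> millimetres
    ez_pct = 100.0 * err[2] / abs(t_gt[2])
    return ex_mm, ey_mm, ez_pct

# Toy usage with a small perturbation of a ground-truth pose.
R_gt, t_gt = np.eye(3), np.array([0.0, 0.0, 0.10])
theta = np.radians(5.0)                               # 5-degree rotation about z
R_est = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
t_est = t_gt + np.array([0.001, 0.002, 0.0004])
print(rotation_error_deg(R_est, R_gt))                # ~5.0 degrees
print(translation_error(t_est, t_gt))                 # (1.0 mm, 2.0 mm, 0.4 %)
```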
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences arising from its use.