Needle tip force estimation by deep learning from raw spectral OCT data
- URL: http://arxiv.org/abs/2006.16675v1
- Date: Tue, 30 Jun 2020 10:49:54 GMT
- Title: Needle tip force estimation by deep learning from raw spectral OCT data
- Authors: M. Gromniak and N. Gessert and T. Saathoff and A. Schlaefer
- Abstract summary: Needle placement is a challenging problem for applications such as biopsy or brachytherapy.
Fiber-optical sensors can be directly integrated into the needle tip.
We study how to calibrate optical coherence tomography (OCT) to sense forces.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose. Needle placement is a challenging problem for applications such as
biopsy or brachytherapy. Tip force sensing can provide valuable feedback for
needle navigation inside the tissue. For this purpose, fiber-optical sensors
can be directly integrated into the needle tip. Optical coherence tomography
(OCT) can be used to image tissue. Here, we study how to calibrate OCT to sense
forces, e.g. during robotic needle placement.
Methods. We investigate whether using raw spectral OCT data without a typical
image reconstruction can improve a deep learning-based calibration between
optical signal and forces. For this purpose, we consider three different
needles with a new, more robust design, which are calibrated using convolutional
neural networks (CNNs). We compare training the CNNs with the raw OCT signal
and the reconstructed depth profiles.
Results. We find that using raw data as an input for the largest CNN model
outperforms the use of reconstructed data with a mean absolute error of 5.81 mN
compared to 8.04 mN.
Conclusions. We find that deep learning with raw spectral OCT data can
improve learning for the task of force estimation. Our needle design and
calibration approach constitute a very accurate fiber-optical sensor for
measuring forces at the needle tip.
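
To make the comparison in the abstract concrete, the sketch below illustrates (a) a conventional depth-profile (A-scan) reconstruction from a raw OCT spectrum via background subtraction, windowing, and an inverse FFT, and (b) a small 1D CNN that regresses a scalar tip force from either input representation, trained with the mean absolute error reported above. This is a minimal sketch, assuming NumPy and PyTorch; the preprocessing steps, network layout, and all names (`reconstruct_depth_profile`, `ForceCNN`) are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): raw spectral OCT input vs. a
# conventionally reconstructed depth profile for CNN-based force regression.
import numpy as np
import torch
import torch.nn as nn


def reconstruct_depth_profile(spectrum: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Assumed standard A-scan reconstruction: subtract the reference spectrum,
    apply a window, and take the magnitude of the inverse FFT."""
    interferogram = spectrum - reference              # remove the DC/reference term
    windowed = interferogram * np.hanning(interferogram.size)
    a_scan = np.abs(np.fft.ifft(windowed))            # depth-resolved reflectivity profile
    return a_scan[: a_scan.size // 2]                 # keep the non-mirrored half


class ForceCNN(nn.Module):
    """Small 1D CNN mapping a single OCT signal (raw spectrum or A-scan) to a force."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                  # length-agnostic pooling
        )
        self.head = nn.Linear(64, 1)                  # scalar force output (e.g. in mN)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


# One training step with the mean absolute error (MAE) used for evaluation.
model = ForceCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()                                 # MAE

spectra = torch.randn(8, 1, 1024)                     # placeholder batch of raw spectra
forces = torch.randn(8, 1)                            # placeholder ground-truth tip forces
optimizer.zero_grad()
loss = loss_fn(model(spectra), forces)
loss.backward()
optimizer.step()
```

Feeding the raw spectra directly simply skips the reconstruction step, which is the comparison studied in the paper; the architecture and hyperparameters above are placeholders.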
Related papers
- Neurovascular Segmentation in sOCT with Deep Learning and Synthetic Training Data [4.5276169699857505]
This study demonstrates a synthesis engine for neurovascular segmentation in serial-section optical coherence tomography images.
Our approach comprises two phases: label synthesis and label-to-image transformation.
We demonstrate the efficacy of the former by comparing it to several more realistic sets of training labels, and the latter by an ablation study of synthetic noise and artifact models.
arXiv Detail & Related papers (2024-07-01T16:09:07Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep learning-based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
Experimental results show that the proposed method effectively corrects motion artifacts and achieves smaller errors than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z)
- Tissue Classification During Needle Insertion Using Self-Supervised Contrastive Learning and Optical Coherence Tomography [53.38589633687604]
We propose a deep neural network that classifies the tissues from the phase and intensity data of complex OCT signals acquired at the needle tip.
We show that with 10% of the training set, our proposed pretraining strategy helps the model achieve an F1 score of 0.84 whereas the model achieves an F1 score of 0.60 without it.
arXiv Detail & Related papers (2023-04-26T14:11:04Z)
- Mediastinal Lymph Node Detection and Segmentation Using Deep Learning [1.7188280334580195]
In clinical practice, computed tomography (CT) and positron emission tomography (PET) imaging are used to detect abnormal lymph nodes (LNs).
Deep convolutional neural networks are frequently used to segment structures in medical images.
The well-established deep learning architecture UNet was modified with a bilinear and total generalized variation (TGV) based upsampling strategy to segment and detect mediastinal lymph nodes.
The modified UNet maintains texture discontinuities, selects noisy areas, searches for appropriate balance points through backpropagation, and recreates image resolution.
arXiv Detail & Related papers (2022-11-24T02:55:20Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- A novel optical needle probe for deep learning-based tissue elasticity characterization [59.698811329287174]
Optical coherence elastography (OCE) probes have been proposed for needle insertions but have so far lacked the necessary load sensing capabilities.
We present a novel OCE needle probe that provides simultaneous optical coherence tomography (OCT) imaging and load sensing at the needle tip.
arXiv Detail & Related papers (2021-09-20T08:29:29Z)
- Deep Lesion Tracker: Monitoring Lesions in 4D Longitudinal Imaging Studies [19.890200389017213]
Deep lesion tracker (DLT) is a deep learning approach that uses both appearance- and anatomical-based signals.
We release the first lesion tracking benchmark, consisting of 3891 lesion pairs from the public DeepLesion database.
DLT generalizes well on an external clinical test set of 100 longitudinal studies, achieving 88% accuracy.
arXiv Detail & Related papers (2020-12-09T05:23:46Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- Segmentation of Retinal Low-Cost Optical Coherence Tomography Images using Deep Learning [2.571523045125397]
The need for treatment is determined by the presence or change of disease-specific OCT-based biomarkers.
The monitoring frequency of current treatment schemes is not individually adapted to the patient and therefore often insufficient.
One of the key requirements of a home monitoring OCT system is a computer-aided diagnosis to automatically detect and quantify pathological changes.
arXiv Detail & Related papers (2020-01-23T12:55:53Z)
- Deep OCT Angiography Image Generation for Motion Artifact Suppression [8.442020709975015]
Affected scans emerge as high intensity (white) or missing (black) regions, resulting in lost information.
A deep generative model for OCT-to-OCTA image translation relies on a single intact OCT scan.
A U-Net is trained to extract the angiographic information from OCT patches.
At inference, a detection algorithm finds outlier OCTA scans based on their surroundings, which are then replaced by the trained network.
arXiv Detail & Related papers (2020-01-08T13:31:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.