Surface-Enhanced Raman Spectroscopy and Transfer Learning Toward
Accurate Reconstruction of the Surgical Zone
- URL: http://arxiv.org/abs/2401.08821v1
- Date: Tue, 16 Jan 2024 20:47:19 GMT
- Title: Surface-Enhanced Raman Spectroscopy and Transfer Learning Toward
Accurate Reconstruction of the Surgical Zone
- Authors: Ashutosh Raman, Ren A. Odion, Kent K. Yamamoto, Weston Ross, Tuan
Vo-Dinh, Patrick J. Codd
- Abstract summary: We develop a robotic Raman system that can reliably pinpoint the location and boundaries of a tumor embedded in healthy tissue.
We reconstruct a surgical field of 30x60mm in 10.2 minutes, and achieve 98.2% accuracy, preserving measurements between features in the phantom.
- Score: 0.9507070656654631
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Raman spectroscopy, a photonic modality based on the inelastic backscattering
of coherent light, is a valuable asset to the intraoperative sensing space,
offering non-ionizing potential and highly-specific molecular fingerprint-like
spectroscopic signatures that can be used for diagnosis of pathological tissue
in the dynamic surgical field. Though Raman suffers from weakness in intensity,
Surface-Enhanced Raman Spectroscopy (SERS), which uses metal nanostructures to
amplify Raman signals, can achieve detection sensitivities that rival
traditional photonic modalities. In this study, we outline a robotic Raman
system that can reliably pinpoint the location and boundaries of a tumor
embedded in healthy tissue, modeled here as a tissue-mimicking phantom with
selectively infused Gold Nanostar regions. Further, due to the relative dearth
of collected biological SERS or Raman data, we implement transfer learning to
achieve 100% validation classification accuracy for Gold Nanostars compared to
Control Agarose, thus providing a proof-of-concept for Raman-based deep
learning training pipelines. We reconstruct a surgical field of 30x60mm in 10.2
minutes, and achieve 98.2% accuracy, preserving relative measurements between
features in the phantom. We also achieve an 84.3% Intersection-over-Union
score, which is the extent of overlap between the ground truth and predicted
reconstructions. Lastly, we also demonstrate that the Raman system and
classification algorithm do not discern based on sample color, but instead on
presence of SERS agents. This study provides a crucial step in the translation
of intelligent Raman systems in intraoperative oncological spaces.
Related papers
- Enhancing Diagnostic Precision in Gastric Bleeding through Automated Lesion Segmentation: A Deep DuS-KFCM Approach [20.416923956241497]
We introduce a novel deep learning model, the Dual Spatial Kernelized Constrained Fuzzy C-Means (Deep DuS-KFCM) clustering algorithm.
This system synergizes Neural Networks with Fuzzy Logic to offer a highly precise and efficient identification of bleeding regions.
Our model demonstrated unprecedented accuracy rates of 87.95%, coupled with a specificity of 96.33%, outperforming contemporary segmentation methods.
arXiv Detail & Related papers (2024-11-21T18:21:42Z) - TopoTxR: A topology-guided deep convolutional network for breast parenchyma learning on DCE-MRIs [49.69047720285225]
We propose a novel topological approach that explicitly extracts multi-scale topological structures to better approximate breast parenchymal structures.
We empirically validate TopoTxR using the VICTRE phantom breast dataset.
Our qualitative and quantitative analyses suggest differential topological behavior of breast tissue in treatment-naïve imaging.
arXiv Detail & Related papers (2024-11-05T19:35:10Z) - Towards a Benchmark for Colorectal Cancer Segmentation in Endorectal Ultrasound Videos: Dataset and Model Development [59.74920439478643]
In this paper, we collect and annotate the first benchmark dataset that covers diverse ERUS scenarios.
Our ERUS-10K dataset comprises 77 videos and 10,000 high-resolution annotated frames.
We introduce a benchmark model for colorectal cancer segmentation, named the Adaptive Sparse-context TRansformer (ASTR).
arXiv Detail & Related papers (2024-08-19T15:04:42Z) - Enhancing Open-World Bacterial Raman Spectra Identification by Feature
Regularization for Improved Resilience against Unknown Classes [0.0]
Traditional closed-set classification approaches assume that all test samples belong to one of the known pathogens.
We demonstrate that the current state-of-the-art Neural Networks identifying pathogens through Raman spectra are vulnerable to unknown inputs.
We develop a novel ensemble of ResNet architectures combined with the attention mechanism which outperforms existing closed-world methods.
arXiv Detail & Related papers (2023-10-19T17:19:47Z) - K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality
Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z) - Artificial-intelligence-based molecular classification of diffuse
gliomas using rapid, label-free optical imaging [59.79875531898648]
DeepGlioma is an artificial-intelligence-based diagnostic screening system.
DeepGlioma can predict the molecular alterations used by the World Health Organization to define the adult-type diffuse glioma taxonomy.
arXiv Detail & Related papers (2023-03-23T18:50:18Z) - Orientation-Shared Convolution Representation for CT Metal Artifact
Learning [63.67718355820655]
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts.
Existing deep-learning-based methods have gained promising reconstruction performance.
We propose an orientation-shared convolution representation strategy to adapt the physical prior structures of artifacts.
arXiv Detail & Related papers (2022-12-26T13:56:12Z) - A novel optical needle probe for deep learning-based tissue elasticity
characterization [59.698811329287174]
Optical coherence elastography (OCE) probes have been proposed for needle insertions but have so far lacked the necessary load sensing capabilities.
We present a novel OCE needle probe that provides simultaneous optical coherence tomography (OCT) imaging and load sensing at the needle tip.
arXiv Detail & Related papers (2021-09-20T08:29:29Z) - Learned super resolution ultrasound for improved breast lesion
characterization [52.77024349608834]
Super resolution ultrasound localization microscopy enables imaging of the microvasculature at the capillary level.
In this work we use a deep neural network architecture that makes effective use of signal structure to address these challenges.
By leveraging our trained network, the microvasculature structure is recovered in a short time, without prior PSF knowledge, and without requiring separability of the UCAs.
arXiv Detail & Related papers (2021-07-12T09:04:20Z) - Retinal OCT Denoising with Pseudo-Multimodal Fusion Network [0.41998444721319206]
We propose a learning-based method that exploits information from the single-frame noisy B-scan and a pseudo-modality that is created with the aid of the self-fusion method.
Our method can effectively suppress the speckle noise and enhance the contrast between retina layers while the overall structure and small blood vessels are preserved.
arXiv Detail & Related papers (2021-07-09T08:00:20Z) - Segmentation of Anatomical Layers and Artifacts in Intravascular
Polarization Sensitive Optical Coherence Tomography Using Attending Physician
and Boundary Cardinality Loss Terms [4.93836246080317]
Intravascular ultrasound and optical coherence tomography are widely available for characterizing coronary stenoses.
We propose a convolutional neural network model and optimize its performance using a new multi-term loss function.
Our model segments two classes of major artifacts and detects the anatomical layers within the thickened vessel wall regions.
arXiv Detail & Related papers (2021-05-11T15:52:31Z) - Harvesting, Detecting, and Characterizing Liver Lesions from Large-scale
Multi-phase CT Data via Deep Dynamic Texture Learning [24.633802585888812]
We propose a fully-automated and multi-stage liver tumor characterization framework for dynamic contrast computed tomography (CT).
Our system comprises four sequential processes of tumor proposal detection, tumor harvesting, primary tumor site selection, and deep texture-based tumor characterization.
arXiv Detail & Related papers (2020-06-28T19:55:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.