Deep-Learning Approach for Tissue Classification using Acoustic Waves during Ablation with an Er:YAG Laser (Updated)
- URL: http://arxiv.org/abs/2406.14570v2
- Date: Mon, 24 Jun 2024 09:25:33 GMT
- Title: Deep-Learning Approach for Tissue Classification using Acoustic Waves during Ablation with an Er:YAG Laser (Updated)
- Authors: Carlo Seppi, Philippe C. Cattin,
- Abstract summary: A reliable feedback system is crucial during laser surgery to prevent damage to surrounding tissues.
We propose a tissue classification method analyzing acoustic waves generated during laser ablation.
- Score: 0.7892577704654171
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Today's mechanical tools for bone cutting (osteotomy) cause mechanical trauma that prolongs the healing process. Medical device manufacturers aim to minimize this trauma, with minimally invasive surgery using laser cutting as one innovation. This method ablates tissue using laser light instead of mechanical tools, reducing post-surgery healing time. A reliable feedback system is crucial during laser surgery to prevent damage to surrounding tissues. We propose a tissue classification method analyzing acoustic waves generated during laser ablation, demonstrating its applicability in an ex-vivo experiment. The ablation process with a microsecond pulsed Er:YAG laser produces acoustic waves, acquired with an air-coupled transducer. These waves were used to classify five porcine tissue types: hard bone, soft bone, muscle, fat, and skin. For automated tissue classification, we compared five Neural Network (NN) approaches: a one-dimensional Convolutional Neural Network (CNN) with time-dependent input, a Fully-connected Neural Network (FcNN) with either the frequency spectrum or principal components of the frequency spectrum as input, and a combination of a CNN and an FcNN with time-dependent data and its frequency spectrum as input. Consecutive acoustic waves were used to improve classification accuracy. Grad-CAM identified the activation maps of the frequencies, showing low frequencies as the most important for this task. Our results indicated that combining time-dependent data with its frequency spectrum achieved the highest classification accuracy (65.5%-75.5%). We also found that using the frequency spectrum alone was sufficient, with no additional benefit from applying Principal Component Analysis (PCA).
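The abstract describes the network variants only at a high level. As an illustration, the following PyTorch sketch shows one plausible way to build the best-performing variant, the combined time/frequency model: a 1D CNN over the raw acoustic wave fused with a fully-connected branch over its magnitude spectrum. All names (e.g., `TimeFreqClassifier`), layer sizes, and the signal length of 2048 samples are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed architecture, not the authors' code): a 1D CNN branch
# for the raw acoustic waveform combined with a fully-connected branch for its
# magnitude spectrum, classifying five tissue types.
import torch
import torch.nn as nn


class TimeFreqClassifier(nn.Module):
    """Combined time/frequency classifier for laser-ablation acoustic waves (sketch)."""

    def __init__(self, n_samples: int = 2048, n_classes: int = 5):
        super().__init__()
        n_freq = n_samples // 2 + 1  # length of the one-sided rFFT magnitude spectrum

        # Time-domain branch: 1D convolutions over the raw acoustic signal.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Flatten(),
        )
        cnn_features = 32 * (n_samples // 16)

        # Frequency-domain branch: fully-connected layers over the spectrum.
        self.fcnn = nn.Sequential(
            nn.Linear(n_freq, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
        )

        # Fused head producing logits for the five tissue classes.
        self.head = nn.Sequential(
            nn.Linear(cnn_features + 64, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, wave: torch.Tensor) -> torch.Tensor:
        # wave: (batch, n_samples) raw acoustic wave recorded by the transducer
        spectrum = torch.fft.rfft(wave, dim=-1).abs()        # magnitude spectrum
        time_feat = self.cnn(wave.unsqueeze(1))              # time-domain features
        freq_feat = self.fcnn(spectrum)                      # frequency-domain features
        return self.head(torch.cat([time_feat, freq_feat], dim=-1))


# Example: a batch of 8 waveforms of 2048 samples -> logits over 5 tissue types.
logits = TimeFreqClassifier()(torch.randn(8, 2048))
print(logits.shape)  # torch.Size([8, 5])
```

Consecutive acoustic waves, which the abstract reports improve accuracy, could be handled in such a sketch by stacking them as additional input channels; applying Grad-CAM to the frequency branch would then indicate which bands drive the prediction.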
Related papers
- Classification of Heart Sounds Using Multi-Branch Deep Convolutional Network and LSTM-CNN [2.7699831151653305]
This paper presents a fast and cost-effective method for diagnosing cardiac abnormalities using low-cost systems in clinics.
The overall classification accuracy of heart sounds with the LSCN network is more than 96%.
arXiv Detail & Related papers (2024-07-15T13:02:54Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed using the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- What do neural networks learn in image classification? A frequency shortcut perspective [3.9858496473361402]
This study empirically investigates the learning dynamics of frequency shortcuts in neural networks (NNs).
We show that NNs tend to find simple solutions for classification, and what they learn first during training depends on the most distinctive frequency characteristics.
We propose a metric to measure class-wise frequency characteristics and a method to identify frequency shortcuts.
arXiv Detail & Related papers (2023-07-19T08:34:25Z)
- CNN-based fully automatic wrist cartilage volume quantification in MR Image [55.41644538483948]
The U-net convolutional neural network with additional attention layers provides the best wrist cartilage segmentation performance.
The error of cartilage volume measurement should be assessed independently using a non-MRI method.
arXiv Detail & Related papers (2022-06-22T14:19:06Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Deep Metric Learning with Locality Sensitive Angular Loss for Self-Correcting Source Separation of Neural Spiking Signals [77.34726150561087]
We propose a methodology based on deep metric learning to address the need for automated post-hoc cleaning and robust separation filters.
We validate this method with an artificially corrupted label set based on source-separated high-density surface electromyography recordings.
This approach enables a neural network to learn to accurately decode neurophysiological time series using any imperfect method of labelling the signal.
arXiv Detail & Related papers (2021-10-13T21:51:56Z)
- FREA-Unet: Frequency-aware U-net for Modality Transfer [9.084926957557842]
We propose a new frequency-aware attention U-net for generating synthetic PET images from MRI data.
Our attention U-net computes attention scores for the feature maps in the low- and high-frequency layers and uses them to help the model focus on the most important regions.
arXiv Detail & Related papers (2020-12-31T01:58:44Z)
- Focal Frequency Loss for Image Reconstruction and Synthesis [125.7135706352493]
We show that narrowing the gaps in the frequency domain can further improve image reconstruction and synthesis quality.
We propose a novel focal frequency loss, which allows a model to adaptively focus on frequency components that are hard to synthesize (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-12-23T17:32:04Z)
- A Spiking Neural Network (SNN) for detecting High Frequency Oscillations (HFOs) in the intraoperative ECoG [1.8464222520424338]
High frequency oscillations (HFOs) generated by epileptogenic tissue can be used to tailor the resection margin.
We present a spiking neural network (SNN) for automatic HFO detection that is optimally suited for neuromorphic hardware implementation.
arXiv Detail & Related papers (2020-11-17T17:24:46Z)
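As referenced in the Focal Frequency Loss entry above, the idea is to weight each frequency component of the reconstruction error by how hard it currently is to synthesize. The sketch below is a minimal PyTorch illustration of that general idea; the exponent `alpha`, the normalisation, and the tensor shapes are assumptions, and it is not that paper's official implementation.

```python
# Minimal sketch of a frequency-domain loss that up-weights hard-to-synthesize
# components (illustrative only, not the official focal frequency loss code).
import torch


def focal_frequency_loss(pred: torch.Tensor, target: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """pred, target: image batches of shape (batch, channels, height, width)."""
    # Compare the images in the frequency domain.
    pred_f = torch.fft.fft2(pred, norm="ortho")
    target_f = torch.fft.fft2(target, norm="ortho")
    diff = pred_f - target_f

    # Squared distance between the two spectra at every frequency.
    dist = diff.real ** 2 + diff.imag ** 2

    # Spectral weights grow with the current error, so optimisation focuses on
    # the frequency components that are currently hardest to reproduce.
    weight = dist.sqrt() ** alpha
    weight = weight / weight.amax(dim=(-2, -1), keepdim=True).clamp(min=1e-8)
    weight = weight.detach()  # treat the weight matrix as a constant

    return (weight * dist).mean()


# Example usage with random 3-channel 64x64 images.
pred = torch.rand(2, 3, 64, 64, requires_grad=True)
target = torch.rand(2, 3, 64, 64)
focal_frequency_loss(pred, target).backward()
```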
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.