Attenuation artifact detection and severity classification in intracoronary OCT using mixed image representations
- URL: http://arxiv.org/abs/2503.05322v1
- Date: Fri, 07 Mar 2025 11:01:00 GMT
- Title: Attenuation artifact detection and severity classification in intracoronary OCT using mixed image representations
- Authors: Pierandrea Cancian, Simone Saitta, Xiaojin Gu, Rudolf L. M. van Herten, Thijs J. Luttikholt, Jos Thannhauser, Rick H. J. A. Volleberg, Ruben G. A. van der Waerden, Joske L. van der Zande, Clarisa I. Sánchez, Bram van Ginneken, Niels van Royen, Ivana Išgum
- Abstract summary: We propose a convolutional neural network that performs classification of the attenuation lines (A-lines) into three classes: no artifact, mild artifact and severe artifact. Our method detects the presence of attenuation artifacts in OCT frames, reaching F-scores of 0.77 and 0.94 for mild and severe artifacts, respectively.
- Score: 2.334201943310467
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In intracoronary optical coherence tomography (OCT), blood residues and gas bubbles cause attenuation artifacts that can obscure critical vessel structures. The presence and severity of these artifacts may warrant re-acquisition, prolonging procedure time and increasing use of contrast agent. Accurate detection of these artifacts can guide targeted re-acquisition, reducing the amount of repeated scans needed to achieve diagnostically viable images. However, the highly heterogeneous appearance of these artifacts poses a challenge for the automated detection of the affected image regions. To enable automatic detection of the attenuation artifacts caused by blood residues and gas bubbles based on their severity, we propose a convolutional neural network that performs classification of the attenuation lines (A-lines) into three classes: no artifact, mild artifact and severe artifact. Our model extracts and merges features from OCT images in both Cartesian and polar coordinates, where each column of the image represents an A-line. Our method detects the presence of attenuation artifacts in OCT frames reaching F-scores of 0.77 and 0.94 for mild and severe artifacts, respectively. The inference time over a full OCT scan is approximately 6 seconds. Our experiments show that analysis of images represented in both Cartesian and polar coordinate systems outperforms the analysis in polar coordinates only, suggesting that these representations contain complementary features. This work lays the foundation for automated artifact assessment and image acquisition guidance in intracoronary OCT imaging.
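The abstract gives only a high-level description of the architecture, but the dual-representation idea can be illustrated with a short sketch. The PyTorch code below is a hypothetical minimal example, not the authors' implementation: the layer sizes, the assumed number of A-lines per frame, and the strategy of concatenating per-A-line polar features with a broadcast frame-level Cartesian feature vector are all assumptions made for illustration.

```python
# Hypothetical sketch of a dual-representation A-line classifier.
# Layer sizes, the number of A-lines, and the feature-merging strategy
# are illustrative assumptions, not the authors' architecture.
import torch
import torch.nn as nn


class DualRepresentationALineClassifier(nn.Module):
    """Two-branch CNN: per-A-line features from the polar image merged with
    a frame-level feature vector from the Cartesian (scan-converted) image."""

    def __init__(self, n_alines=504, n_classes=3):
        super().__init__()
        self.n_alines = n_alines
        # Polar branch: each image column is one A-line, so pooling is applied
        # only along the depth axis to preserve the A-line dimension.
        self.polar_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, n_alines)),   # -> (B, 32, 1, n_alines)
        )
        # Cartesian branch: one global feature vector for the whole frame.
        self.cartesian_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # -> (B, 32, 1, 1)
        )
        # 1x1 convolution over the A-line axis gives one logit triplet per A-line.
        self.head = nn.Conv1d(64, n_classes, kernel_size=1)

    def forward(self, polar_img, cartesian_img):
        # polar_img: (B, 1, depth, n_alines); cartesian_img: (B, 1, H, W)
        p = self.polar_branch(polar_img).squeeze(2)            # (B, 32, n_alines)
        c = self.cartesian_branch(cartesian_img).flatten(1)    # (B, 32)
        c = c.unsqueeze(-1).expand(-1, -1, self.n_alines)      # broadcast to A-lines
        merged = torch.cat([p, c], dim=1)                      # (B, 64, n_alines)
        return self.head(merged)                               # (B, n_classes, n_alines)


# Usage with random tensors standing in for one OCT frame in both representations.
model = DualRepresentationALineClassifier()
polar = torch.randn(2, 1, 512, 504)        # (batch, channel, depth, A-lines)
cartesian = torch.randn(2, 1, 512, 512)    # scan-converted frame
logits = model(polar, cartesian)           # (2, 3, 504)
per_aline_class = logits.argmax(dim=1)     # 0 = no, 1 = mild, 2 = severe artifact
```

A frame-level artifact flag could then be derived by aggregating the per-A-line predictions, for example flagging a frame whenever any A-line is classified as mild or severe; the aggregation rule actually used in the paper is not stated in the abstract.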
Related papers
- A detection-task-specific deep-learning method to improve the quality of sparse-view myocardial perfusion SPECT images [17.91266458357747]
Myocardial perfusion imaging (MPI) with single-photon emission computed tomography (SPECT) is a widely used and cost-effective diagnostic tool for coronary artery disease.
The lengthy scanning time in this imaging procedure can cause patient discomfort, motion artifacts, and potentially inaccurate diagnoses.
We propose a detection-task-specific deep-learning method for sparse-view MPI SPECT images.
arXiv Detail & Related papers (2025-04-22T18:01:03Z) - DiffDoctor: Diagnosing Image Diffusion Models Before Treating [57.82359018425674]
We propose DiffDoctor, a two-stage pipeline that helps image diffusion models generate fewer artifacts. We collect a dataset of over 1M flawed synthesized images and set up an efficient human-in-the-loop annotation process. The learned artifact detector is then used in the second stage to tune the diffusion model by assigning a per-pixel confidence map to each image.
arXiv Detail & Related papers (2025-01-21T18:56:41Z) - Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z) - O2CTA: Introducing Annotations from OCT to CCTA in Coronary Plaque Analysis [19.099761377777412]
Coronary CT angiography (CCTA) is widely used for artery imaging and determining the stenosis degree.
It can be addressed by invasive optical coherence tomography (OCT) without much trouble for physicians, but this brings higher costs and potential risks to patients.
We propose a method to handle the O2CTA problem. CCTA scans are first reconstructed into multi-planar reformatted (MPR) images, which agree with OCT images in terms of semantic content.
The artery segment in OCT, which is manually labelled, is then spatially aligned with the entire artery in MPR images via the proposed alignment strategy.
arXiv Detail & Related papers (2023-03-11T09:40:05Z) - Enhanced Sharp-GAN For Histopathology Image Synthesis [63.845552349914186]
Histopathology image synthesis aims to address the data shortage issue in training deep learning approaches for accurate cancer detection.
We propose a novel approach that enhances the quality of synthetic images by using nuclei topology and contour regularization.
The proposed approach outperforms Sharp-GAN in all four image quality metrics on two datasets.
arXiv Detail & Related papers (2023-01-24T17:54:01Z) - Anatomically constrained CT image translation for heterogeneous blood vessel segmentation [3.88838725116957]
Anatomical structures in contrast-enhanced CT (ceCT) images can be challenging to segment due to variability in contrast medium diffusion.
To limit the radiation dose, generative models could be used to synthesize one modality, instead of acquiring it.
CycleGAN has attracted particular attention because it alleviates the need for paired data.
We present an extension of CycleGAN to generate high fidelity images, with good structural consistency.
arXiv Detail & Related papers (2022-10-04T16:14:49Z) - Self-Attention Generative Adversarial Network for Iterative Reconstruction of CT Images [0.9208007322096533]
The aim of this study is to train a single neural network to reconstruct high-quality CT images from noisy or incomplete data.
The network includes a self-attention block to model long-range dependencies in the data.
Our approach is shown to have comparable overall performance to CIRCLE GAN, while outperforming the other two approaches.
arXiv Detail & Related papers (2021-12-23T19:20:38Z) - SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [76.01333073259677]
We propose the use of Space-aware Memory Queues for In-painting and Detecting anomalies from radiography images (abbreviated as SQUID).
We show that SQUID can taxonomize the ingrained anatomical structures into recurrent patterns and, at inference, identify anomalies (unseen or modified patterns) in the image.
arXiv Detail & Related papers (2021-11-26T13:47:34Z) - CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
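The summary above describes convolutional filters estimated directly from user-drawn markers rather than learned by backpropagation. As a purely illustrative sketch (an assumption about one way this could work, not the authors' exact procedure), patches centred on marked pixels can be normalized and clustered, with the cluster centres used as convolution kernels that respond strongly to regions resembling the marked ones:

```python
# Hypothetical sketch: derive convolution kernels from marked pixels by
# clustering local patches; not the authors' exact procedure.
import numpy as np
from sklearn.cluster import KMeans
from scipy.ndimage import convolve


def kernels_from_markers(image, marker_coords, patch_size=5, n_kernels=8):
    """Estimate conv kernels from patches centred on user-drawn markers."""
    r = patch_size // 2
    padded = np.pad(image, r, mode="reflect")
    patches = []
    for (y, x) in marker_coords:
        patch = padded[y:y + patch_size, x:x + patch_size].astype(np.float32)
        patch -= patch.mean()                      # zero-mean, unit-norm patches
        norm = np.linalg.norm(patch)
        if norm > 0:
            patches.append((patch / norm).ravel())
    centers = KMeans(n_clusters=n_kernels, n_init=10).fit(np.stack(patches)).cluster_centers_
    return centers.reshape(n_kernels, patch_size, patch_size)


def extract_features(image, kernels):
    """Convolve the image with each marker-derived kernel (one feature map per kernel)."""
    return np.stack([convolve(image.astype(np.float32), k) for k in kernels])
```

The resulting feature maps highlight regions similar to the marked examples and could feed a lightweight classifier; how the actual method assembles its sequence of convolutional layers is not detailed in the summary.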
arXiv Detail & Related papers (2021-11-16T15:03:42Z) - Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z) - Weakly- and Semi-Supervised Probabilistic Segmentation and
Quantification of Ultrasound Needle-Reverberation Artifacts to Allow Better
AI Understanding of Tissue Beneath Needles [0.0]
We propose a probabilistic needle-and-reverberation-artifact segmentation algorithm to separate desired tissue-based pixel values from superimposed artifacts.
Our method matches state-of-the-art artifact segmentation performance and sets a new standard in estimating the per-pixel contributions of artifact vs underlying anatomy.
arXiv Detail & Related papers (2020-11-24T08:34:38Z) - Synergistic Learning of Lung Lobe Segmentation and Hierarchical
Multi-Instance Classification for Automated Severity Assessment of COVID-19
in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 in patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z) - Deep OCT Angiography Image Generation for Motion Artifact Suppression [8.442020709975015]
Affected scans appear as high-intensity (white) or missing (black) regions, resulting in lost information.
The proposed deep generative model for OCT-to-OCTA image translation relies on a single intact OCT scan.
A U-Net is trained to extract the angiographic information from OCT patches.
At inference, a detection algorithm finds outlier OCTA scans based on their surroundings, which are then replaced by the trained network.
arXiv Detail & Related papers (2020-01-08T13:31:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.