Unsupervised Iterative U-Net with an Internal Guidance Layer for
Vertebrae Contrast Enhancement in Chest X-Ray Images
- URL: http://arxiv.org/abs/2306.03983v1
- Date: Tue, 6 Jun 2023 19:36:11 GMT
- Title: Unsupervised Iterative U-Net with an Internal Guidance Layer for
Vertebrae Contrast Enhancement in Chest X-Ray Images
- Authors: Ella Eidlin, Assaf Hoogi, Nathan S. Netanyahu
- Abstract summary: We propose a novel and robust approach to improve the quality of X-ray images by iteratively training a deep neural network.
Our framework includes an embedded internal guidance layer that enhances the fine structures of spinal vertebrae in chest X-ray images.
Experimental results demonstrate that our proposed method surpasses existing detail enhancement methods in terms of BRISQUE scores.
- Score: 1.521162809610347
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: X-ray imaging is a fundamental clinical tool for screening and diagnosing
various diseases. However, the spatial resolution of radiographs is often
limited, making it challenging to discern small image details and to identify
vertebral anomalies at an early stage in chest
radiographs. To address this limitation, we propose a novel and robust approach
to significantly improve the quality of X-ray images by iteratively training a
deep neural network. Our framework includes an embedded internal guidance layer
that enhances the fine structures of spinal vertebrae in chest X-ray images
through fully unsupervised training, utilizing an iterative procedure that
employs the same network architecture in each enhancement phase. Additionally,
we have designed an optimized loss function that accurately identifies object
boundaries and enhances spinal features, further improving overall image
quality. Experimental results demonstrate that our proposed method
surpasses existing detail enhancement methods in terms of BRISQUE scores, and
is comparable in terms of LPC-SI. Furthermore, our approach exhibits superior
performance in restoring hidden fine structures, as evidenced by our
qualitative results. This innovative approach has the potential to
significantly enhance the diagnostic accuracy and early detection of diseases,
making it a promising advancement in X-ray imaging technology.
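The abstract describes the training procedure only at a high level. Below is a minimal PyTorch sketch of what such a fully unsupervised, iterative enhancement loop could look like; the guidance layer, the loss terms (an L1 fidelity term plus a gradient term that rewards sharper boundaries), and all hyperparameters are illustrative assumptions rather than the authors' implementation, and `make_unet` stands in for any U-Net constructor.

```python
# Minimal sketch of an iterative, unsupervised enhancement loop in the spirit of
# the abstract. All design details below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GuidanceLayer(nn.Module):
    """Hypothetical internal guidance layer: gates feature maps with a learned
    attention map. Intended to be embedded inside the U-Net blocks."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return feats * torch.sigmoid(self.attn(feats))


def unsupervised_loss(enhanced: torch.Tensor, original: torch.Tensor) -> torch.Tensor:
    # L1 fidelity keeps the enhanced image close to the input; the gradient term
    # rewards stronger local contrast at boundaries. Both terms are assumptions.
    fidelity = F.l1_loss(enhanced, original)
    gx = enhanced[..., :, 1:] - enhanced[..., :, :-1]
    gy = enhanced[..., 1:, :] - enhanced[..., :-1, :]
    edge_reward = gx.abs().mean() + gy.abs().mean()
    return fidelity - 0.1 * edge_reward


def iterative_enhance(make_unet, image: torch.Tensor,
                      phases: int = 3, steps: int = 200, lr: float = 1e-4) -> torch.Tensor:
    """Each phase re-trains a network of the same architecture on the output of
    the previous phase, mirroring the iterative procedure in the abstract."""
    current = image
    for _ in range(phases):
        net = make_unet()  # same architecture in every enhancement phase
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(steps):
            out = net(current)
            loss = unsupervised_loss(out, current)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            current = net(current)
    return current
```

After the final phase, the output could be scored with an off-the-shelf no-reference metric such as a BRISQUE implementation and compared against the input image, which is how the paper reports its quantitative gains.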
Related papers
- Towards Accurate and Interpretable Neuroblastoma Diagnosis via Contrastive Multi-scale Pathological Image Analysis [16.268045905735818]
CMSwinKAN is a contrastive-learning-based multi-scale feature fusion model tailored for pathological image classification.
We introduce a soft voting mechanism guided by clinical insights to seamlessly bridge patch-level predictions to whole slide image-level classifications.
Results demonstrate that CMSwinKAN performs better than existing state-of-the-art pathology-specific models pre-trained on large datasets.
arXiv Detail & Related papers (2025-04-18T15:39:46Z) - AttCDCNet: Attention-enhanced Chest Disease Classification using X-Ray Images [0.0]
We propose a novel detection model named AttCDCNet for the task of X-ray image diagnosis.
The proposed model achieved an accuracy, precision and recall of 94.94%, 95.14% and 94.53%, respectively, on the COVID-19 Radiography dataset.
arXiv Detail & Related papers (2024-10-20T16:08:20Z) - Dual-Domain CLIP-Assisted Residual Optimization Perception Model for Metal Artifact Reduction [9.028901322902913]
Metal artifacts in computed tomography (CT) imaging pose significant challenges to accurate clinical diagnosis.
Deep learning-based approaches, particularly generative models, have been proposed for metal artifact reduction (MAR).
arXiv Detail & Related papers (2024-08-14T02:37:26Z) - Real-time guidewire tracking and segmentation in intraoperative x-ray [52.51797358201872]
We propose a two-stage deep learning framework for real-time guidewire segmentation and tracking.
In the first stage, a YOLOv5 detector is trained, using the original X-ray images as well as synthetic ones, to output the bounding boxes of possible target guidewires.
In the second stage, a novel and efficient network is proposed to segment the guidewire in each detected bounding box.
arXiv Detail & Related papers (2024-04-12T20:39:19Z) - CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - Deep Few-view High-resolution Photon-counting Extremity CT at Halved Dose for a Clinical Trial [8.393536317952085]
We propose a deep learning-based approach for PCCT image reconstruction at halved dose and doubled speed in a New Zealand clinical trial.
We present a patch-based volumetric refinement network to alleviate the GPU memory limitation, train the network with synthetic data, and use model-based iterative refinement to bridge the gap between synthetic and real-world data.
arXiv Detail & Related papers (2024-03-19T00:07:48Z) - BarlowTwins-CXR : Enhancing Chest X-Ray abnormality localization in
heterogeneous data with cross-domain self-supervised learning [1.7479385556004874]
"BarlwoTwins-CXR" is a self-supervised learning strategy for autonomic abnormality localization of chest X-ray image analysis.
The approach achieved a 3% increase in mAP50 accuracy compared to traditional ImageNet pre-trained models.
arXiv Detail & Related papers (2024-02-09T16:10:13Z) - Multi-Scale Feature Fusion using Parallel-Attention Block for COVID-19
Chest X-ray Diagnosis [2.15242029196761]
Under the global COVID-19 crisis, accurate diagnosis of COVID-19 from Chest X-ray (CXR) images is critical.
We propose a novel multi-feature fusion network using parallel attention blocks to fuse the original CXR images and local-phase feature-enhanced CXR images at multi-scales.
arXiv Detail & Related papers (2023-04-25T16:56:12Z) - Significantly improving zero-shot X-ray pathology classification via fine-tuning pre-trained image-text encoders [50.689585476660554]
We propose a new fine-tuning strategy that includes positive-pair loss relaxation and random sentence sampling.
Our approach consistently improves overall zero-shot pathology classification across four chest X-ray datasets and three pre-trained models.
arXiv Detail & Related papers (2022-12-14T06:04:18Z) - Artificial Intelligence for Automatic Detection and Classification
Disease on the X-Ray Images [0.0]
This work presents rapid detection of lung diseases using the efficient pre-trained RepVGG deep learning model.
Artificial Intelligence is applied to automatically detect and highlight affected areas of the lungs.
arXiv Detail & Related papers (2022-11-14T03:51:12Z) - Optimising Chest X-Rays for Image Analysis by Identifying and Removing
Confounding Factors [49.005337470305584]
During the COVID-19 pandemic, the sheer volume of imaging performed in an emergency setting for COVID-19 diagnosis has resulted in a wide variability of clinical CXR acquisitions.
The variable quality of clinically-acquired CXRs within publicly available datasets could have a profound effect on algorithm performance.
We propose a simple and effective step-wise approach to pre-processing a COVID-19 chest X-ray dataset to remove undesired biases.
arXiv Detail & Related papers (2022-08-22T13:57:04Z) - Preservation of High Frequency Content for Deep Learning-Based Medical
Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z) - Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z) - Cross-Modal Contrastive Learning for Abnormality Classification and
Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
arXiv Detail & Related papers (2021-04-11T09:16:29Z) - In-Line Image Transformations for Imbalanced, Multiclass Computer Vision
Classification of Lung Chest X-Rays [91.3755431537592]
This study leverages a body of literature to apply image transformations that compensate for the lack of COVID-19 LCXR data.
Deep learning techniques such as convolutional neural networks (CNNs) are able to select features that distinguish between healthy and disease states.
This study utilizes a simple CNN architecture for high-performance multiclass LCXR classification at 94 percent accuracy.
arXiv Detail & Related papers (2021-04-06T02:01:43Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Cross Chest Graph for Disease Diagnosis with Structural Relational
Reasoning [2.7148274921314615]
Locating lesions is important in the computer-aided diagnosis of X-ray images.
General weakly-supervised methods have failed to consider the characteristics of X-ray images.
We propose the Cross-chest Graph (CCG), which improves the performance of automatic lesion detection.
arXiv Detail & Related papers (2021-01-22T08:24:04Z) - Advancing diagnostic performance and clinical usability of neural
networks via adversarial training and dual batch normalization [2.1699022621790736]
We let six radiologists rate the interpretability of saliency maps in datasets of X-rays, computed tomography, and magnetic resonance imaging scans.
We found that the accuracy of adversarially trained models was equal to that of standard models when sufficiently large datasets and dual batch norm training were used.
arXiv Detail & Related papers (2020-11-25T20:41:01Z) - Explaining Clinical Decision Support Systems in Medical Imaging using
Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
Clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation maximization, which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing
Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.