Adaptation to CT Reconstruction Kernels by Enforcing Cross-domain
Feature Maps Consistency
- URL: http://arxiv.org/abs/2203.14616v1
- Date: Mon, 28 Mar 2022 10:00:03 GMT
- Title: Adaptation to CT Reconstruction Kernels by Enforcing Cross-domain
Feature Maps Consistency
- Authors: Stanislav Shimovolos, Andrey Shushko, Mikhail Belyaev, Boris Shirokikh
- Abstract summary: We show a decrease in COVID-19 segmentation quality when a model trained on smooth reconstruction kernels is tested on sharp ones.
We propose an unsupervised adaptation method, called F-Consistency, that outperforms previous approaches.
- Score: 0.06117371161379209
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning methods provide significant assistance in analyzing coronavirus
disease (COVID-19) in chest computed tomography (CT) images, including
identification, severity assessment, and segmentation. Although the earlier
developed methods address the lack of data and specific annotations, the
current goal is to build a robust algorithm for clinical use, having a larger
pool of available data. With the larger datasets, the domain shift problem
arises, affecting the performance of methods on the unseen data. One of the
critical sources of domain shift in CT images is the difference in
reconstruction kernels used to generate images from the raw data (sinograms).
In this paper, we show a decrease in the COVID-19 segmentation quality of a
model trained on smooth reconstruction kernels and tested on sharp ones.
Furthermore, we compare several domain adaptation approaches to tackle the
problem, such as task-specific augmentation and unsupervised adversarial
learning. Finally, we propose an unsupervised adaptation method, called
F-Consistency, that outperforms the previous approaches. Our method exploits a
set of unlabeled CT image pairs which differ only in reconstruction kernels
within every pair. It enforces the similarity of the network hidden
representations (feature maps) by minimizing mean squared error (MSE) between
paired feature maps. Our method achieves a 0.64 Dice Score on the test dataset
with unseen sharp kernels, compared to 0.56 for the baseline model. Moreover,
F-Consistency reaches a 0.80 Dice Score between predictions on the paired
images, almost doubling the baseline score of 0.46 and surpassing the other
methods. We also show that F-Consistency generalizes better to unseen kernels,
even without specific semantic content, e.g., the presence of COVID-19 lesions.
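The core of F-Consistency, as described in the abstract, is a plain MSE penalty between hidden feature maps computed for the same slice reconstructed with two different kernels. The toy PyTorch sketch below illustrates that idea only; the network architecture, layer choice, and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the F-Consistency idea (not the authors' code): for a pair of
# slices reconstructed with a smooth and a sharp kernel, penalize the MSE between
# intermediate feature maps of a shared segmentation network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinySegNet(nn.Module):
    """Toy 2D segmentation network standing in for the actual COVID-19 model."""

    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 1, 1)  # binary lesion-mask logits

    def forward(self, x):
        feats = self.enc(x)  # hidden representation used for the consistency term
        return self.head(feats), feats


def f_consistency_loss(model, smooth_img, sharp_img):
    """MSE between feature maps of the same slice under two reconstruction kernels."""
    _, feats_smooth = model(smooth_img)
    _, feats_sharp = model(sharp_img)
    return F.mse_loss(feats_smooth, feats_sharp)


if __name__ == "__main__":
    model = TinySegNet()
    smooth = torch.randn(2, 1, 64, 64)  # paired unlabeled slices, smooth kernel
    sharp = torch.randn(2, 1, 64, 64)   # the same slices, sharp kernel
    loss = f_consistency_loss(model, smooth, sharp)
    loss.backward()
    print(float(loss))
```

In practice such a consistency term would presumably be added, with a weighting coefficient, to the supervised segmentation loss computed on the labeled smooth-kernel data.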
Related papers
- CDSE-UNet: Enhancing COVID-19 CT Image Segmentation with Canny Edge
Detection and Dual-Path SENet Feature Fusion [10.831487161893305]
CDSE-UNet is a novel UNet-based segmentation model that integrates Canny operator edge detection and a dual-path SENet feature fusion mechanism.
We have developed a Multiscale Convolution approach, replacing the standard Convolution in UNet, to adapt to the varied lesion sizes and shapes.
Our evaluations on public datasets demonstrate CDSE-UNet's superior performance over other leading models.
arXiv Detail & Related papers (2024-03-03T13:36:07Z)
- Enhanced Sharp-GAN For Histopathology Image Synthesis [63.845552349914186]
Histopathology image synthesis aims to address the data shortage issue in training deep learning approaches for accurate cancer detection.
We propose a novel approach that enhances the quality of synthetic images by using nuclei topology and contour regularization.
The proposed approach outperforms Sharp-GAN in all four image quality metrics on two datasets.
arXiv Detail & Related papers (2023-01-24T17:54:01Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that requires neither large annotated datasets nor backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- Detect-and-Segment: a Deep Learning Approach to Automate Wound Image Segmentation [8.354517822940783]
We present a deep learning approach to produce wound segmentation maps with high generalization capabilities.
In our approach, dedicated deep neural networks detected the wound position, isolated the wound from the uninformative background, and computed the wound segmentation map.
arXiv Detail & Related papers (2021-11-02T13:39:13Z)
- Cross-Site Severity Assessment of COVID-19 from CT Images via Domain Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images offers great help in estimating intensive care unit events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z)
- Zero-Shot Domain Adaptation in CT Segmentation by Filtered Back Projection Augmentation [0.1197985185770095]
Domain shift is one of the most salient challenges in medical computer vision.
We address variability in computed tomography (CT) images caused by different convolution kernels used in the reconstruction process.
We propose Filtered Back-Projection Augmentation (FBPAug), a simple and surprisingly efficient approach to augment CT images in sinogram space by emulating reconstruction with different kernels (a conceptual sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-07-18T21:46:49Z)
- Binary segmentation of medical images using implicit spline representations and deep learning [1.5293427903448025]
We propose a novel approach to image segmentation based on combining implicit spline representations with deep convolutional neural networks.
For our best network, we achieve an average volumetric test Dice score of almost 92%, which reaches the state of the art for this congenital heart disease dataset.
arXiv Detail & Related papers (2021-02-25T10:04:25Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
- Cross-Domain Segmentation with Adversarial Loss and Covariate Shift for Biomedical Imaging [2.1204495827342438]
This manuscript aims to implement a novel model that can learn robust representations from cross-domain data by encapsulating distinct and shared patterns from different modalities.
The tests on CT and MRI liver data acquired in routine clinical trials show that the proposed model outperforms all other baselines by a large margin.
arXiv Detail & Related papers (2020-06-08T07:35:55Z)
- Data Consistent CT Reconstruction from Insufficient Data with Learned Prior Images [70.13735569016752]
We investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases.
We propose a data consistent reconstruction (DCR) method to improve their image quality, which combines the advantages of compressed sensing and deep learning.
The efficacy of the proposed method is demonstrated in cone-beam CT with truncated data, limited-angle data and sparse-view data, respectively.
arXiv Detail & Related papers (2020-05-20T13:30:49Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M²UNet) is then developed to assess the severity of COVID-19 patients.
Our M²UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
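The FBPAug entry above describes augmentation in projection (sinogram) space. As a rough, hedged illustration of that general idea rather than the paper's parameterization, one can forward-project a slice and reconstruct it with different filtered back-projection filters; here scikit-image's `radon`/`iradon` filters and the Shepp-Logan phantom stand in for vendor smooth and sharp kernels and for real CT data.

```python
# Hedged illustration of kernel-style augmentation in projection space, in the spirit
# of FBPAug: round-trip a slice through a sinogram and reconstruct it with a different
# filtered back-projection filter to emulate a smoother or sharper kernel.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize


def reconstruct_with_kernel(image, filter_name="hann"):
    """Forward-project an image and reconstruct it with the chosen FBP filter."""
    theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
    sinogram = radon(image, theta=theta)
    return iradon(sinogram, theta=theta, filter_name=filter_name)


if __name__ == "__main__":
    slice_2d = resize(shepp_logan_phantom(), (128, 128))
    smooth_like = reconstruct_with_kernel(slice_2d, "hann")  # smoother reconstruction
    sharp_like = reconstruct_with_kernel(slice_2d, "ramp")   # sharper reconstruction
    print(np.abs(smooth_like - sharp_like).mean())
```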
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.