RPLHR-CT Dataset and Transformer Baseline for Volumetric
Super-Resolution from CT Scans
- URL: http://arxiv.org/abs/2206.06253v1
- Date: Mon, 13 Jun 2022 15:35:59 GMT
- Title: RPLHR-CT Dataset and Transformer Baseline for Volumetric
Super-Resolution from CT Scans
- Authors: Pengxin Yu, Haoyue Zhang, Han Kang, Wen Tang, Corey W. Arnold, Rongguo
Zhang
- Abstract summary: Coarse resolution may lead to difficulties in medical diagnosis by either physicians or computer-aided diagnosis algorithms.
Deep learning-based volumetric super-resolution (SR) methods are feasible ways to improve resolution.
This paper builds the first public real-paired dataset RPLHR-CT as a benchmark for volumetric SR.
Considering the inherent shortcomings of CNNs, we also propose a transformer volumetric super-resolution network (TVSRN) based on attention mechanisms.
- Score: 12.066026343488453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In clinical practice, anisotropic volumetric medical images with low
through-plane resolution are commonly used due to short acquisition time and
lower storage cost. Nevertheless, the coarse resolution may lead to
difficulties in medical diagnosis by either physicians or computer-aided
diagnosis algorithms. Deep learning-based volumetric super-resolution (SR)
methods are feasible ways to improve resolution, with convolutional neural
networks (CNN) at their core. Despite recent progress, these methods are
limited by inherent properties of convolution operators, which ignore content
relevance and cannot effectively model long-range dependencies. In addition,
most of the existing methods use pseudo-paired volumes for training and
evaluation, where pseudo low-resolution (LR) volumes are generated by a simple
degradation of their high-resolution (HR) counterparts. However, the domain gap
between pseudo- and real-LR volumes leads to the poor performance of these
methods in practice. In this paper, we build the first public real-paired
dataset RPLHR-CT as a benchmark for volumetric SR, and provide baseline results
by re-implementing four state-of-the-art CNN-based methods. Considering the
inherent shortcomings of CNNs, we also propose a transformer volumetric
super-resolution network (TVSRN) based on attention mechanisms, dispensing with
convolutions entirely. This is the first research to use a pure transformer for
CT volumetric SR. The experimental results show that TVSRN significantly
outperforms all baselines on both PSNR and SSIM. Moreover, the TVSRN method
achieves a better trade-off between the image quality, the number of
parameters, and the running time. Data and code are available at
https://github.com/smilenaxx/RPLHR-CT.
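To make the pseudo-pairing issue concrete, here is a minimal sketch (not from the paper's released code) of how pseudo-LR volumes are typically produced by a simple degradation of their HR counterparts: adjacent thin slices are averaged along the through-plane axis to imitate a thick-slice acquisition. The helper name make_pseudo_lr, the degradation factor of 5, and the volume shape are illustrative assumptions.

import numpy as np

def make_pseudo_lr(hr_volume: np.ndarray, factor: int = 5) -> np.ndarray:
    """Average groups of `factor` adjacent thin slices (axis 0 = through-plane)
    to mimic a thick-slice acquisition. Hypothetical helper for illustration."""
    depth = (hr_volume.shape[0] // factor) * factor            # drop trailing slices
    grouped = hr_volume[:depth].reshape(-1, factor, *hr_volume.shape[1:])
    return grouped.mean(axis=1)                                 # one thick slice per group

# Example: a thin-slice HR CT volume reduced to a pseudo thick-slice LR volume.
hr = np.random.rand(150, 512, 512).astype(np.float32)          # placeholder volume
lr = make_pseudo_lr(hr, factor=5)
print(hr.shape, "->", lr.shape)                                 # (150, 512, 512) -> (30, 512, 512)

Real LR scans come from the scanner's own reconstruction pipeline, so volumes degraded in this way differ from them; that domain gap is what the real-paired RPLHR-CT dataset is intended to eliminate.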
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- BarlowTwins-CXR : Enhancing Chest X-Ray abnormality localization in heterogeneous data with cross-domain self-supervised learning [1.7479385556004874]
"BarlwoTwins-CXR" is a self-supervised learning strategy for autonomic abnormality localization of chest X-ray image analysis.
The approach achieved a 3% increase in mAP50 accuracy compared to traditional ImageNet pre-trained models.
arXiv Detail & Related papers (2024-02-09T16:10:13Z)
- Enhancing Super-Resolution Networks through Realistic Thick-Slice CT Simulation [4.43162303545687]
Deep learning-based Generative Models have the potential to convert low-resolution CT images into high-resolution counterparts without long acquisition times and increased radiation exposure in thin-slice CT imaging.
However, procuring appropriate training data for these Super-Resolution (SR) models is challenging.
Previous SR research has simulated thick-slice CT images from thin-slice CT images to create training pairs.
We introduce a simple yet realistic method to generate thick CT images from thin-slice CT images, facilitating the creation of training pairs for SR algorithms.
arXiv Detail & Related papers (2023-07-02T11:09:08Z)
- DA-VSR: Domain Adaptable Volumetric Super-Resolution For Medical Images [69.63915773870758]
We present a novel algorithm called domain adaptable super-resolution (DA-VSR) to better bridge the domain inconsistency gap.
DA-VSR uses a unified feature extraction backbone and a series of network heads to improve image quality over different planes.
We demonstrate that DA-VSR significantly improves super-resolution quality across numerous datasets of different domains.
arXiv Detail & Related papers (2022-10-11T03:16:35Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Convolutional Neural Network to Restore Low-Dose Digital Breast Tomosynthesis Projections in a Variance Stabilization Domain [15.149874383250236]
A convolutional neural network (CNN) is proposed to restore low-dose (LD) projections to image quality equivalent to a standard full-dose (FD) acquisition.
The network achieved superior results in terms of the mean normalized squared error (MNSE), normalized training time, and noise spatial correlation compared with networks trained with traditional data-driven methods.
arXiv Detail & Related papers (2022-03-22T13:31:47Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper builds a novel framework for synthesizing intermediate medical slices to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- A comparative study of paired versus unpaired deep learning methods for physically enhancing digital rock image resolution [0.0]
We rigorously compare two state-of-the-art SR deep learning techniques, using both paired and unpaired data, with like-for-like ground truth data.
The unpaired GAN approach can reconstruct super-resolution images as precisely as the paired CNN method, with comparable training times and dataset requirements.
This unlocks new applications for micro-CT image enhancement using unpaired deep learning methods.
arXiv Detail & Related papers (2021-12-16T05:50:25Z)
- Multimodal-Boost: Multimodal Medical Image Super-Resolution using Multi-Attention Network with Wavelet Transform [5.416279158834623]
Loss of corresponding image resolution degrades the overall performance of medical image diagnosis.
Deep learning-based single image super-resolution (SISR) algorithms have revolutionized the overall diagnosis framework.
This work proposes a generative adversarial network (GAN) with deep multi-attention modules to learn high-frequency information from low-frequency data.
arXiv Detail & Related papers (2021-10-22T10:13:46Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network that divides the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency parts are processed with expensive operations, while the lower-frequency parts are assigned cheap operations to relieve the computational burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- Perception Consistency Ultrasound Image Super-resolution via Self-supervised CycleGAN [63.49373689654419]
We propose a new perception-consistency ultrasound image super-resolution (SR) method based on self-supervision and a cycle generative adversarial network (CycleGAN).
We first generate the HR fathers and the LR sons of the test ultrasound LR image through image enhancement.
We then make full use of the cycle loss of LR-SR-LR and HR-LR-SR and the adversarial characteristics of the discriminator to promote the generator to produce better perceptually consistent SR results.
arXiv Detail & Related papers (2020-12-28T08:24:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.