Improved Super Resolution of MR Images Using CNNs and Vision Transformers
- URL: http://arxiv.org/abs/2207.11748v1
- Date: Sun, 24 Jul 2022 14:01:52 GMT
- Title: Improved Super Resolution of MR Images Using CNNs and Vision Transformers
- Authors: Dwarikanath Mahapatra
- Abstract summary: Vision transformers (ViT) learn better global context, which helps generate superior-quality HR images.
We combine local information from CNNs and global information from ViTs for image super-resolution and output super-resolved images.
- Score: 5.6512908295414
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: State-of-the-art magnetic resonance (MR) image super-resolution (ISR) methods
using convolutional neural networks (CNNs) leverage limited contextual
information because of the restricted spatial coverage of CNNs. Vision transformers
(ViT) learn better global context, which helps generate superior-quality HR images.
We combine local information from CNNs and global information from ViTs for image
super-resolution and output super-resolved images of higher quality than those
produced by state-of-the-art methods. We include extra constraints through multiple
novel loss functions that preserve structure and texture information from the
low-resolution to the high-resolution images.
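The abstract outlines a hybrid design: a CNN branch supplies local detail, a ViT-style branch supplies global context, the two are fused and upsampled to the HR image, and training adds structure- and texture-preserving losses. The sketch below only illustrates that idea and is not the authors' released code; the branch sizes, patch embedding, fusion layer, and the simple gradient-matching stand-in for a structure loss are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridSRNet(nn.Module):
    """Illustrative CNN branch for local detail + transformer encoder for global context."""

    def __init__(self, channels=64, patch=8, scale=2):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)           # MR images: single channel
        self.cnn_branch = nn.Sequential(                            # small local receptive field
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.embed = nn.Conv2d(channels, channels, patch, stride=patch)  # patchify for ViT branch
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=4, batch_first=True)
        self.vit_branch = nn.TransformerEncoder(layer, num_layers=2)     # global self-attention
        self.fuse = nn.Conv2d(2 * channels, channels, 1)                 # merge local + global features
        self.upsample = nn.Sequential(                                   # pixel-shuffle upsampler
            nn.Conv2d(channels, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, lr):
        x = self.head(lr)
        local = self.cnn_branch(x)
        tokens = self.embed(x)                                      # B, C, H/p, W/p
        b, c, h, w = tokens.shape
        # positional embeddings omitted for brevity
        glob = self.vit_branch(tokens.flatten(2).transpose(1, 2))   # B, (H*W)/p^2, C
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        glob = F.interpolate(glob, size=local.shape[-2:], mode="nearest")
        return self.upsample(self.fuse(torch.cat([local, glob], dim=1)))


def structure_loss(sr, hr):
    """Toy stand-in for a structure-preserving term: match horizontal/vertical image gradients."""
    dx_s, dy_s = sr[..., :, 1:] - sr[..., :, :-1], sr[..., 1:, :] - sr[..., :-1, :]
    dx_h, dy_h = hr[..., :, 1:] - hr[..., :, :-1], hr[..., 1:, :] - hr[..., :-1, :]
    return F.l1_loss(dx_s, dx_h) + F.l1_loss(dy_s, dy_h)


if __name__ == "__main__":
    model = HybridSRNet()
    lr, hr = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 128, 128)
    sr = model(lr)
    loss = F.l1_loss(sr, hr) + 0.1 * structure_loss(sr, hr)   # pixel loss + structure term
    print(sr.shape, loss.item())
```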
Related papers
- CoSeR: Bridging Image and Language for Cognitive Super-Resolution [74.24752388179992]
We introduce the Cognitive Super-Resolution (CoSeR) framework, empowering SR models with the capacity to comprehend low-resolution images.
We achieve this by marrying image appearance and language understanding to generate a cognitive embedding.
To further improve image fidelity, we propose a novel condition injection scheme called "All-in-Attention".
arXiv Detail & Related papers (2023-11-27T16:33:29Z) - CoT-MISR:Marrying Convolution and Transformer for Multi-Image
Super-Resolution [3.105999623265897]
How to transform a low-resolution image to restore its high-resolution information is a problem that researchers have long been exploring.
The CoT-MISR network accounts for both local and global information by exploiting the advantages of convolution and transformers.
arXiv Detail & Related papers (2023-03-12T03:01:29Z) - Image Super-resolution with An Enhanced Group Convolutional Neural
Network [102.2483249598621]
CNNs with strong learning abilities are widely chosen to solve the super-resolution problem.
We present an enhanced super-resolution group CNN (ESRGCNN) with a shallow architecture.
Experiments report that our ESRGCNN surpasses the state of the art in terms of SISR performance, complexity, execution speed, image-quality evaluation, and visual effect.
arXiv Detail & Related papers (2022-05-29T00:34:25Z) - Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z) - Cross-Modality High-Frequency Transformer for MR Image Super-Resolution [100.50972513285598]
We make an early effort to build a Transformer-based MR image super-resolution framework.
We consider two-fold domain priors including the high-frequency structure prior and the inter-modality context prior.
We establish a novel Transformer architecture, called Cross-modality high-frequency Transformer (Cohf-T), to introduce such priors into super-resolving the low-resolution images.
arXiv Detail & Related papers (2022-03-29T07:56:55Z) - Fusformer: A Transformer-based Fusion Approach for Hyperspectral Image
Super-resolution [9.022005574190182]
We design a network based on the transformer for fusing the low-resolution hyperspectral images and high-resolution multispectral images.
Considering that the LR-HSIs hold the main spectral structure, the network focuses on spatial detail estimation.
Various experiments and quality indexes show our approach's superiority compared with other state-of-the-art methods.
arXiv Detail & Related papers (2021-09-05T14:00:34Z) - Adaptive Gradient Balancing for UndersampledMRI Reconstruction and
Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
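The summary above pairs a Wasserstein GAN with an "Adaptive Gradient Balancing" technique. One generic way to balance an adversarial term against a reconstruction term is to rescale it by the ratio of their gradient norms; the snippet below sketches that idea only as a plausible reading, not the paper's actual algorithm, and the reference parameter and epsilon are assumptions.

```python
import torch


def balanced_loss(recon_loss, adv_loss, ref_param, eps=1e-8):
    """Rescale the adversarial loss so its gradient norm on ref_param (e.g. the
    generator's final layer weight) matches that of the reconstruction loss."""
    g_recon = torch.autograd.grad(recon_loss, ref_param, retain_graph=True)[0]
    g_adv = torch.autograd.grad(adv_loss, ref_param, retain_graph=True)[0]
    weight = (g_recon.norm() / (g_adv.norm() + eps)).detach()  # no gradient through the weight
    return recon_loss + weight * adv_loss
```

In a training loop one would compute `recon_loss` and `adv_loss` for the generator output, call `balanced_loss(...)` with, for example, the weight tensor of the generator's final convolution as `ref_param`, and backpropagate the returned total.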
arXiv Detail & Related papers (2021-04-05T13:05:22Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for the image restoration task.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z) - Adaptive Loss Function for Super Resolution Neural Networks Using Convex
Optimization Techniques [24.582559317893274]
The Single Image Super-Resolution (SISR) task refers to learning a mapping from low-resolution images to the corresponding high-resolution ones.
CNNs are encouraged to learn high-frequency components of the images as well as low-frequency components.
We have shown that the proposed method can recover fine details of the images and that it is stable during training.
arXiv Detail & Related papers (2020-01-21T20:31:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.