CNNs and GANs in MRI-based cross-modality medical image estimation
- URL: http://arxiv.org/abs/2106.02198v1
- Date: Fri, 4 Jun 2021 01:27:57 GMT
- Title: CNNs and GANs in MRI-based cross-modality medical image estimation
- Authors: Azin Shokraei Fard, David C. Reutens, Viktor Vegh
- Abstract summary: Cross-modality image estimation involves the generation of images of one medical imaging modality from that of another modality.
CNNs have been shown to be useful in identifying, characterising and extracting image patterns.
Generative adversarial networks (GANs) use CNNs as generators, and an additional discriminator network judges estimated images as real or synthetic.
- Score: 1.5469452301122177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-modality image estimation involves the generation of images of one
medical imaging modality from that of another modality. Convolutional neural
networks (CNNs) have been shown to be useful in identifying, characterising and
extracting image patterns. Generative adversarial networks (GANs) use CNNs as
generators, and an additional discriminator network judges estimated images as
real or synthetic. CNNs and GANs within the image estimation framework may be
considered more generally as deep learning approaches, since imaging data tends
to be large, leading to a larger number of network weights. Almost all research
in the CNN/GAN image estimation literature has involved the use of MRI data
with the other modality primarily being PET or CT. This review provides an
overview of the use of CNNs and GANs for MRI-based cross-modality medical image
estimation. We outline the neural networks implemented, and detail network
constructs employed for CNN and GAN image-to-image estimators. Motivations
behind cross-modality image estimation are provided as well. GANs appear to
provide better utility in cross-modality image estimation than CNNs, a finding
drawn from our analysis of metrics comparing estimated and actual images. Our
final remarks highlight key challenges faced by the cross-modality medical
image estimation field and outline suggestions for future research.
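To make the GAN formulation described in the abstract concrete, here is a minimal sketch of a conditional GAN for MRI-to-CT estimation in PyTorch. It is illustrative only and not a reconstruction of any specific method covered by the review: the network depths, the L1 weighting of 100, and the random tensors standing in for co-registered MRI/CT pairs are all assumptions.

```python
# Minimal sketch of GAN-based cross-modality estimation (e.g. MRI -> CT).
# Illustrative only: architectures, loss weights and data are placeholder assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """CNN that maps a 1-channel MRI slice to an estimated 1-channel CT slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, mri):
        return self.net(mri)

class Discriminator(nn.Module):
    """CNN that scores (MRI, CT) pairs as real or estimated, patch-wise."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),
        )

    def forward(self, mri, ct):
        return self.net(torch.cat([mri, ct], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Placeholder paired batch; in practice these come from co-registered MRI/CT volumes.
mri = torch.randn(4, 1, 128, 128)
ct_real = torch.randn(4, 1, 128, 128)

# Discriminator step: real pairs labelled 1, estimated pairs labelled 0.
ct_fake = G(mri).detach()
d_real, d_fake = D(mri, ct_real), D(mri, ct_fake)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator, plus an L1 term toward the true CT.
ct_fake = G(mri)
d_out = D(mri, ct_fake)
loss_g = bce(d_out, torch.ones_like(d_out)) + 100.0 * l1(ct_fake, ct_real)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The L1 term pulls the estimate toward the ground-truth CT while the adversarial term encourages realistic image statistics, a pairing broadly similar to the conditional-GAN estimators surveyed in the review.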
Related papers
- Comparative Analysis of Deep Convolutional Neural Networks for Detecting Medical Image Deepfakes [0.0]
This paper presents a comprehensive evaluation of 13 state-of-the-art Deep Convolutional Neural Network (DCNN) models.
We find that ResNet50V2 excels in precision and specificity, whereas DenseNet169 is distinguished by its accuracy, recall, and F1-score.
We also assess the latent space separability quality across the examined DCNNs, showing superiority in both the DenseNet and EfficientNet model families.
arXiv Detail & Related papers (2024-01-08T16:37:22Z)
- Compact & Capable: Harnessing Graph Neural Networks and Edge Convolution for Medical Image Classification [0.0]
We introduce a novel model that combines GNNs and edge convolution, leveraging the interconnectedness of RGB channel feature values to strongly represent connections between crucial graph nodes.
Our proposed model performs on par with state-of-the-art Deep Neural Networks (DNNs) but does so with 1000 times fewer parameters, resulting in reduced training time and data requirements.
arXiv Detail & Related papers (2023-07-24T13:39:21Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Conversion Between CT and MRI Images Using Diffusion and Score-Matching Models [7.745729132928934]
We propose to use an emerging class of deep learning frameworks known as diffusion and score-matching models.
Our results show that the diffusion and score-matching models generate better synthetic CT images than the CNN and GAN models.
Our study suggests that diffusion and score-matching models are powerful tools for generating high-quality images conditioned on an image obtained with a complementary imaging modality (a generic conditional-diffusion sketch follows this entry).
arXiv Detail & Related papers (2022-09-24T23:50:54Z)
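The entry above concerns diffusion and score-matching models conditioned on a complementary modality. The sketch below shows only one common variant, a DDPM-style denoiser conditioned on the MRI image by channel concatenation; the noise schedule, network and data are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a conditional denoising-diffusion training step
# (MRI-conditioned CT synthesis). Schedule, network and data are placeholders.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of (1 - beta_t)

class Denoiser(nn.Module):
    """Predicts the noise added to a CT slice, given the noisy CT, the MRI and t."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x_t, mri, t):
        # Crude conditioning: concatenate noisy CT, MRI and a broadcast timestep channel.
        t_map = t.float().view(-1, 1, 1, 1).expand_as(x_t) / T
        return self.net(torch.cat([x_t, mri, t_map], dim=1))

model = Denoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder paired batch of co-registered MRI and CT slices.
mri = torch.randn(4, 1, 64, 64)
ct0 = torch.randn(4, 1, 64, 64)

t = torch.randint(0, T, (4,))
eps = torch.randn_like(ct0)
ab = alpha_bars[t].view(-1, 1, 1, 1)
x_t = ab.sqrt() * ct0 + (1.0 - ab).sqrt() * eps  # forward (noising) process

loss = nn.functional.mse_loss(model(x_t, mri, t), eps)  # predict the added noise
opt.zero_grad(); loss.backward(); opt.step()
```

At inference time the trained denoiser would be applied iteratively, starting from pure noise and conditioning on the MRI at every step.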
- Medulloblastoma Tumor Classification using Deep Transfer Learning with Multi-Scale EfficientNets [63.62764375279861]
We propose an end-to-end MB tumor classification approach and explore transfer learning with various input sizes and matching network dimensions.
Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements (a generic fine-tuning sketch follows this entry).
arXiv Detail & Related papers (2021-09-10T13:07:11Z)
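The EfficientNet transfer-learning idea in the entry above (pre-trained backbones fine-tuned at larger input resolutions) can be sketched roughly as follows with torchvision. The class count, resolution and hyper-parameters are assumptions for illustration, not details taken from the paper.

```python
# Rough sketch of fine-tuning a pre-trained EfficientNet at a larger input resolution.
# Class count, resolution and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2  # assumed number of tumour classes
model = models.efficientnet_b2(weights=models.EfficientNet_B2_Weights.IMAGENET1K_V1)

# Replace the ImageNet classification head with one matching the medical task.
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, num_classes)

# EfficientNet ends in global average pooling, so inputs larger than the resolution
# the ImageNet weights were trained at pass through without architectural changes.
images = torch.randn(2, 3, 384, 384)  # placeholder image batch
labels = torch.tensor([0, 1])

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = nn.functional.cross_entropy(model(images), labels)
opt.zero_grad(); loss.backward(); opt.step()
```

The same pattern applies across the EfficientNet family (B0 to B7) by pairing each backbone with a correspondingly scaled input size.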
- Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain Adaptation [9.659642285903418]
Cross-modality synthesis of medical images can reduce the costly annotation burden on radiologists.
We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups.
arXiv Detail & Related papers (2021-03-05T16:22:31Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
- Realistic Adversarial Data Augmentation for MR Image Segmentation [17.951034264146138]
We propose an adversarial data augmentation method for training neural networks for medical image segmentation.
Our model generates plausible and realistic signal corruptions, which model the intensity inhomogeneities caused by a common type of artefact in MR imaging: the bias field.
We show that such an approach can improve the generalization ability and robustness of models, as well as provide significant improvements in low-data scenarios (a simplified bias-field sketch follows this entry).
arXiv Detail & Related papers (2020-06-23T20:43:18Z)
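The bias-field corruption described in the entry above (a smooth multiplicative intensity inhomogeneity across an MR image) can be imitated with the simple augmentation sketch below. The low-order polynomial field with random coefficients is an assumption made here for illustration; in the paper the corruption is optimised adversarially rather than drawn at random.

```python
# Simple sketch of bias-field style augmentation for a 2D MR slice.
# The random low-order polynomial field is a simplified stand-in for an
# adversarially optimised one.
import numpy as np

def random_bias_field(shape, strength=0.3, rng=None):
    """Return a smooth multiplicative field around 1.0 built from a 2D quadratic."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    y, x = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    coeffs = rng.uniform(-strength, strength, size=6)  # random low-frequency coefficients
    field = (coeffs[0] + coeffs[1] * x + coeffs[2] * y
             + coeffs[3] * x * y + coeffs[4] * x ** 2 + coeffs[5] * y ** 2)
    return 1.0 + field

def augment_with_bias_field(image, strength=0.3, rng=None):
    """Multiply an MR slice by a random smooth field to mimic intensity inhomogeneity."""
    return image * random_bias_field(image.shape, strength, rng)

# Example: corrupt a placeholder 2D slice before feeding it to a segmentation model.
slice_2d = np.random.rand(128, 128).astype(np.float32)
corrupted = augment_with_bias_field(slice_2d, strength=0.3)
```

Augmentations of this kind are applied on the fly during training so that the segmentation network sees a different plausible corruption of each slice in every epoch.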
- Interpretation of 3D CNNs for Brain MRI Data Classification [56.895060189929055]
We extend previous findings on gender differences from diffusion-tensor imaging to T1 brain MRI scans.
We provide the voxel-wise 3D CNN interpretation comparing the results of three interpretation methods.
arXiv Detail & Related papers (2020-06-20T17:56:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.