Correlation between image quality metrics of magnetic resonance images
and the neural network segmentation accuracy
- URL: http://arxiv.org/abs/2111.01093v1
- Date: Mon, 1 Nov 2021 17:02:34 GMT
- Title: Correlation between image quality metrics of magnetic resonance images
and the neural network segmentation accuracy
- Authors: Rajarajeswari Muthusivarajan, Adrian Celaya, Joshua P. Yung, Satish
Viswanath, Daniel S. Marcus, Caroline Chung, David Fuentes
- Abstract summary: In this study, we investigated the correlation between the image quality metrics (IQMs) of MR images and the neural network segmentation accuracy.
The difference in segmentation accuracy between models trained on randomly selected inputs and models trained on IQM-selected inputs sheds light on the role of image quality metrics in segmentation accuracy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks with multilevel connections process input data in
complex ways to learn the information. A network's learning efficiency depends
not only on the complexity of the neural network architecture but also on the
input training images. Medical image segmentation with deep neural networks for
skull stripping or tumor segmentation from magnetic resonance images enables
learning both global and local features of the images. Though medical images
are collected in a controlled environment, there may be artifacts or
equipment-based variance that cause inherent bias in the input set. In this
study, we investigated the correlation between the image quality metrics (IQMs)
of MR images and the neural network segmentation accuracy. For that, we used
the 3D DenseNet architecture and trained the network on the same inputs but
applied different methodologies to select the training data set based on the
IQM values. The difference in segmentation accuracy between models trained on
randomly selected inputs and models trained on IQM-selected inputs sheds light
on the role of image quality metrics in segmentation accuracy. By running the
image quality metrics to choose the training inputs, we may further tune the
learning efficiency of the network and the segmentation accuracy.
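The selection methodology described above can be sketched as a simple ranking step: score each candidate scan with an image quality metric and keep only the top-scoring fraction for training. The following is a minimal illustrative sketch, not the paper's exact protocol; the SNR proxy, the foreground/background split by mean intensity, and the `keep_fraction` threshold are all assumptions for demonstration.

```python
import statistics

def snr_proxy(volume):
    """Crude IQM: mean foreground intensity over background standard
    deviation, splitting voxels at the mean intensity. A stand-in for
    the richer IQMs (e.g., SNR, CNR) used in practice."""
    threshold = statistics.fmean(volume)
    foreground = [v for v in volume if v > threshold]
    background = [v for v in volume if v <= threshold]
    return statistics.fmean(foreground) / (statistics.pstdev(background) + 1e-8)

def select_by_iqm(volumes, keep_fraction=0.5):
    """Return sorted indices of the highest-IQM volumes: rank all
    candidates by score, keep the best keep_fraction of them."""
    scores = [snr_proxy(v) for v in volumes]
    order = sorted(range(len(volumes)), key=lambda i: scores[i], reverse=True)
    k = max(1, int(len(volumes) * keep_fraction))
    return sorted(order[:k])

if __name__ == "__main__":
    import random
    rng = random.Random(0)
    # Synthetic "scans" as flat intensity lists: low-noise vs. high-noise
    clean = [[rng.gauss(100, 5) for _ in range(512)] for _ in range(5)]
    noisy = [[rng.gauss(100, 50) for _ in range(512)] for _ in range(5)]
    print(select_by_iqm(clean + noisy))  # low-noise scans rank highest
```

An IQM-selected training set built this way can then be compared against a same-size random subset, mirroring the random-vs-IQM comparison the study performs.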
Related papers
- Increasing the Accuracy of a Neural Network Using Frequency Selective
Mesh-to-Grid Resampling [4.211128681972148]
We propose the use of keypoint frequency selective mesh-to-grid resampling (FSMR) for the processing of input data for neural networks.
We show that, depending on the network architecture and classification task, the application of FSMR during training aids the learning process.
The classification accuracy can be increased by up to 4.31 percentage points for ResNet50 and the Oxflower17 dataset.
arXiv Detail & Related papers (2022-09-28T21:34:47Z) - Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches according to the gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Understanding the Influence of Receptive Field and Network Complexity in
Neural-Network-Guided TEM Image Analysis [0.0]
We systematically examine how neural network architecture choices affect how neural networks segment in transmission electron microscopy (TEM) images.
We find that for low-resolution TEM images which rely on amplitude contrast to distinguish nanoparticles from background, the receptive field does not significantly influence segmentation performance.
On the other hand, for high-resolution TEM images which rely on a combination of amplitude and phase contrast changes to identify nanoparticles, receptive field is a key parameter for increased performance.
arXiv Detail & Related papers (2022-04-08T18:45:15Z) - Multiscale Convolutional Transformer with Center Mask Pretraining for
Hyperspectral Image Classification [14.33259265286265]
We propose a novel multi-scale convolutional embedding module for hyperspectral images (HSI) to realize effective extraction of spatial-spectral information.
Similar to the masked autoencoder, our pre-training method masks only the token corresponding to the central pixel in the encoder, and inputs the remaining tokens into the decoder to reconstruct the spectral information of the central pixel.
arXiv Detail & Related papers (2022-03-09T14:42:26Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z) - Knowledge Distillation By Sparse Representation Matching [107.87219371697063]
We propose Sparse Representation Matching (SRM) to transfer intermediate knowledge from one Convolutional Network (CNN) to another by utilizing sparse representation.
We formulate SRM as a neural processing block, which can be efficiently optimized using gradient descent and integrated into any CNN in a plug-and-play manner.
Our experiments demonstrate that SRM is robust to architectural differences between the teacher and student networks, and outperforms other KD techniques across several datasets.
arXiv Detail & Related papers (2021-03-31T11:47:47Z) - Learning With Context Feedback Loop for Robust Medical Image
Segmentation [1.881091632124107]
We present a fully automatic deep learning method for medical image segmentation using two systems.
The first one is a forward system of an encoder-decoder CNN that predicts the segmentation result from the input image.
The predicted probabilistic output of the forward system is then encoded by a fully convolutional network (FCN)-based context feedback system.
arXiv Detail & Related papers (2021-03-04T05:44:59Z) - Exploring Intensity Invariance in Deep Neural Networks for Brain Image
Registration [0.0]
We investigate the effect of intensity distribution among input image pairs for deep learning-based image registration methods.
Deep learning models trained with a structural similarity-based loss seem to perform better on both datasets.
arXiv Detail & Related papers (2020-09-21T17:49:03Z) - Learning to Learn Parameterized Classification Networks for Scalable
Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not have a predictable recognition behavior with respect to the input resolution change.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
arXiv Detail & Related papers (2020-07-13T04:27:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.