RegQCNET: Deep Quality Control for Image-to-template Brain MRI Affine
Registration
- URL: http://arxiv.org/abs/2005.06835v2
- Date: Wed, 16 Sep 2020 16:58:46 GMT
- Title: RegQCNET: Deep Quality Control for Image-to-template Brain MRI Affine
Registration
- Authors: Baudouin Denis de Senneville, José V. Manjón, Pierrick Coupé
- Abstract summary: A compact 3D convolutional neural network (CNN) is introduced to quantitatively predict the amplitude of an affine registration mismatch.
The robustness of the proposed RegQCNET is first analyzed on lifespan brain images undergoing various simulated spatial transformations.
Results show that the proposed deep learning QC is robust, fast and accurate at estimating affine registration error within a processing pipeline.
- Score: 0.44533271775957767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Affine registration of one or several brain image(s) onto a common reference
space is a necessary prerequisite for many image processing tasks, such as
brain segmentation or functional analysis. Manual assessment of registration
quality is a tedious and time-consuming task, especially in studies comprising
a large amount of data. An automated and reliable quality control (QC) becomes
mandatory. Moreover, the computation time of the QC must also be compatible
with the processing of massive datasets. Therefore, automated deep neural
network approaches appear to be the method of choice for automatically assessing
registration quality.
In the current study, a compact 3D convolutional neural network (CNN),
referred to as RegQCNET, is introduced to quantitatively predict the amplitude
of an affine registration mismatch between a registered image and a reference
template. This quantitative estimate of registration error is expressed in
metric units. Therefore, a meaningful task-specific threshold can be defined,
manually or automatically, to distinguish usable from non-usable images.
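To make this concrete, below is a minimal PyTorch sketch of a compact 3D CNN that regresses a scalar registration-error amplitude from a registered volume and applies a usability threshold. The layer sizes, input resolution and threshold value are illustrative assumptions, not the architecture reported in the paper.

import torch
import torch.nn as nn

class CompactRegQCNet(nn.Module):
    # Hypothetical compact 3D CNN: three conv blocks, global pooling,
    # and a linear head that outputs the predicted mismatch amplitude (mm).
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.regressor = nn.Linear(32, 1)

    def forward(self, x):  # x: (batch, 1, D, H, W) image registered to the template
        f = self.features(x).flatten(1)
        return self.regressor(f).squeeze(1)

model = CompactRegQCNet().eval()
volume = torch.randn(1, 1, 64, 64, 64)  # placeholder volume resampled to template space
with torch.no_grad():
    predicted_error_mm = model(volume).item()
usable = predicted_error_mm < 4.0  # hypothetical task-specific threshold (mm)

Because the output is a physical distance rather than an abstract score, the usability cut-off can be chosen and interpreted directly in millimetres.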
The robustness of the proposed RegQCNET is first analyzed on lifespan brain
images undergoing various simulated spatial transformations and intensity
variations between training and testing. Secondly, the potential of RegQCNET to
classify images as usable or non-usable is evaluated using both manual and
automatic thresholds. During our experiments, automatic thresholds are
estimated using several computer-assisted classification models through
cross-validation. To this end, we used experts' visual quality control performed
on a lifespan cohort of 3953 brains. Finally, the accuracy of RegQCNET is
compared to that of conventional image features.
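For illustration, the amplitude of a simulated affine mismatch can be expressed in millimetres along the following lines; the perturbation ranges and the mean-displacement definition of amplitude are assumptions made for this sketch, not the exact simulation protocol of the study.

import numpy as np

def random_affine(max_rot_deg=10.0, max_trans_mm=10.0, max_scale=0.05):
    # Small random rotation + scaling + translation, as a 4x4 matrix in mm space.
    rng = np.random.default_rng()
    ax, ay, az = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg, 3))
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    S = np.diag(1.0 + rng.uniform(-max_scale, max_scale, 3))
    A = np.eye(4)
    A[:3, :3] = Rz @ Ry @ Rx @ S
    A[:3, 3] = rng.uniform(-max_trans_mm, max_trans_mm, 3)
    return A

def mismatch_amplitude_mm(affine, brain_coords_mm):
    # Mean displacement (mm) of brain voxel coordinates under the affine perturbation.
    homog = np.c_[brain_coords_mm, np.ones(len(brain_coords_mm))]
    moved = (homog @ affine.T)[:, :3]
    return float(np.linalg.norm(moved - brain_coords_mm, axis=1).mean())

coords = np.random.uniform(-80, 80, size=(10000, 3))  # placeholder brain-mask coordinates (mm)
label_mm = mismatch_amplitude_mm(random_affine(), coords)  # regression target for training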
Results show that the proposed deep learning QC is robust, fast and accurate
at estimating affine registration error within a processing pipeline.
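As a rough sketch of how an automatic usability threshold could be derived from expert QC labels through cross-validation: the logistic-regression classifier, the synthetic data and the scikit-learn workflow below are assumptions, not the computer-assisted classification models evaluated in the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
predicted_error_mm = rng.gamma(2.0, 2.0, size=500)  # placeholder RegQCNET outputs (mm)
expert_usable = (predicted_error_mm + rng.normal(0, 1, 500)) < 5.0  # placeholder expert QC labels

X = predicted_error_mm.reshape(-1, 1)
y = expert_usable.astype(int)

clf = LogisticRegression()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# The decision boundary of the fitted classifier yields the automatic threshold.
clf.fit(X, y)
threshold_mm = -clf.intercept_[0] / clf.coef_[0, 0]
print("automatic usability threshold (mm):", round(threshold_mm, 2))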
Related papers
- Reference-Free Image Quality Metric for Degradation and Reconstruction Artifacts [2.5282283486446753]
We develop a reference-free quality evaluation network, dubbed the "Quality Factor (QF) Predictor".
Our QF Predictor is a lightweight, fully convolutional network comprising seven layers.
It receives a JPEG-compressed image patch with a random QF as input and is trained to accurately predict the corresponding QF.
arXiv Detail & Related papers (2024-05-01T22:28:18Z)
- Automation of Quantum Dot Measurement Analysis via Explainable Machine Learning [0.0]
We propose an image vectorization approach that involves mathematical modeling of synthetic triangles to mimic the experimental data.
We show that this new method offers superior explainability of model prediction without sacrificing accuracy.
This work demonstrates the feasibility and advantages of applying explainable machine learning techniques to the analysis of quantum dot measurements.
arXiv Detail & Related papers (2024-02-21T11:00:23Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing informative patches according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- CONVIQT: Contrastive Video Quality Estimator [63.749184706461826]
Perceptual video quality assessment (VQA) is an integral component of many streaming and video sharing platforms.
Here we consider the problem of learning perceptually relevant video quality representations in a self-supervised manner.
Our results indicate that compelling representations with perceptual bearing can be obtained using self-supervised learning.
arXiv Detail & Related papers (2022-06-29T15:22:01Z)
- Automated Learning for Deformable Medical Image Registration by Jointly Optimizing Network Architectures and Objective Functions [69.6849409155959]
This paper proposes an automated learning registration algorithm (AutoReg) that cooperatively optimizes both architectures and their corresponding training objectives.
We conduct image registration experiments on multi-site volume datasets and various registration tasks.
Our results show that AutoReg can automatically learn an optimal deep registration network for given volumes and achieve state-of-the-art performance.
arXiv Detail & Related papers (2022-03-14T01:54:38Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes a CNN backbone and a transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Correlation between image quality metrics of magnetic resonance images and the neural network segmentation accuracy [0.0]
In this study, we investigated the correlation between the image quality metrics of MR images and the neural network segmentation accuracy.
The difference in segmentation accuracy between models trained on random inputs and models trained on IQM-based inputs sheds light on the role of image quality metrics in segmentation accuracy.
arXiv Detail & Related papers (2021-11-01T17:02:34Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- A Learning Framework for Diffeomorphic Image Registration based on Quasi-conformal Geometry [1.2891210250935146]
We propose the quasi-conformal registration network (QCRegNet), an unsupervised learning framework, to obtain diffeomorphic 2D image registrations.
QCRegNet consists of an estimator network and a Beltrami solver network (BSNet).
Results show that the registration accuracy is comparable to state-of-the-art methods and diffeomorphism is guaranteed to a great extent.
arXiv Detail & Related papers (2021-10-20T14:23:24Z)
- Task-Specific Normalization for Continual Learning of Blind Image Quality Models [105.03239956378465]
We present a simple yet effective continual learning method for blind image quality assessment (BIQA).
The key step in our approach is to freeze all convolution filters of a pre-trained deep neural network (DNN) for an explicit promise of stability.
We assign each new IQA dataset (i.e., task) a prediction head, and load the corresponding normalization parameters to produce a quality score.
The final quality estimate is computed by a weighted summation of predictions from all heads with a lightweight $K$-means gating mechanism.
arXiv Detail & Related papers (2021-07-28T15:21:01Z)
- Test-Time Training for Deformable Multi-Scale Image Registration [15.523457398508263]
Deep learning-based registration approaches such as VoxelMorph have emerged and achieve competitive performance.
We construct a test-time training scheme for deep deformable image registration to improve the generalization ability of conventional learning-based registration models.
arXiv Detail & Related papers (2021-03-25T03:22:59Z)