Unsupervised Image Registration Towards Enhancing Performance and
Explainability in Cardiac And Brain Image Analysis
- URL: http://arxiv.org/abs/2203.03638v1
- Date: Mon, 7 Mar 2022 12:54:33 GMT
- Title: Unsupervised Image Registration Towards Enhancing Performance and
Explainability in Cardiac And Brain Image Analysis
- Authors: Chengjia Wang, Guang Yang, Giorgos Papanastasiou
- Abstract summary: Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging.
We present an unsupervised deep learning registration methodology which can accurately model affine and non-rigid transformations.
Our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations.
- Score: 3.5718941645696485
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Magnetic Resonance Imaging (MRI) typically recruits multiple sequences
(defined here as "modalities"). As each modality is designed to offer different
anatomical and functional clinical information, there are evident disparities
in the imaging content across modalities. Inter- and intra-modality affine and
non-rigid image registration is an essential medical image analysis process in
clinical imaging, required for example before imaging biomarkers can be derived
and clinically evaluated across different MRI modalities, time phases and
slices. Although commonly needed in real clinical scenarios, affine and
non-rigid image registration is not extensively investigated using a single
unsupervised model architecture. In our work, we present an unsupervised deep
learning registration methodology which can accurately model affine and
non-rigid transformations simultaneously. Moreover, inverse-consistency is a
fundamental inter-modality registration property that has not been considered in
deep learning registration algorithms. To address inverse-consistency, our
methodology performs bi-directional cross-modality image synthesis to learn
modality-invariant latent representations, while involving two factorised
transformation networks and an inverse-consistency loss to learn
topology-preserving anatomical transformations. Overall, our model (named
"FIRE") shows improved performance against the reference standard baseline
method on multi-modality brain 2D and 3D MRI and intra-modality cardiac 4D MRI
data experiments.
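The inverse-consistency property described above can be illustrated with a minimal, hypothetical sketch (this is not the FIRE implementation, and real registration networks compose dense 2D/3D fields with differentiable interpolation): composing the forward (A to B) and backward (B to A) displacement fields should return every coordinate to itself, and any residual displacement is penalised.

```python
import numpy as np

def inverse_consistency_loss(disp_ab, disp_ba):
    """Toy 1-D inverse-consistency penalty.

    disp_ab maps image A to B and disp_ba maps B back to A
    (arrays of per-voxel displacements). Composing the two should
    return every coordinate to itself, so the residual displacement
    of the composition is penalised with a mean squared error.
    """
    n = len(disp_ab)
    coords = np.arange(n, dtype=float)
    warped = coords + disp_ab                      # apply A -> B
    # sample disp_ba at the warped positions (nearest neighbour,
    # a simplification of differentiable linear interpolation)
    idx = np.clip(np.round(warped).astype(int), 0, n - 1)
    residual = warped + disp_ba[idx] - coords      # then B -> A
    return float(np.mean(residual ** 2))
```

A pair of mutually inverse fields (e.g. a constant shift and its negation) gives zero loss, while unmatched fields are penalised in proportion to how far the composition strays from the identity.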
Related papers
- Intraoperative Registration by Cross-Modal Inverse Neural Rendering [61.687068931599846]
We present a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering.
Our approach separates implicit neural representation into two components, handling anatomical structure preoperatively and appearance intraoperatively.
We tested our method on retrospective patients' data from clinical cases, showing that our method outperforms state-of-the-art while meeting current clinical standards for registration.
arXiv Detail & Related papers (2024-09-18T13:40:59Z) - QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z) - Modality-Agnostic Structural Image Representation Learning for Deformable Multi-Modality Medical Image Registration [22.157402663162877]
We propose a modality-agnostic structural representation learning method to learn discriminative and contrast-invariant deep structural image representations.
Our method is superior to the conventional local structural representation and statistical-based similarity measures in terms of discriminability and accuracy.
arXiv Detail & Related papers (2024-02-29T08:01:31Z) - fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for
Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z) - Explainable unsupervised multi-modal image registration using deep
networks [2.197364252030876]
MRI image registration aims to geometrically 'pair' diagnoses from different modalities, time points and slices.
In this work, we show that our DL model becomes fully explainable, setting the framework to generalise our approach on further medical imaging data.
arXiv Detail & Related papers (2023-08-03T19:13:48Z) - On Sensitivity and Robustness of Normalization Schemes to Input
Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
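The substitution that summary describes can be sketched with a minimal, batch-independent group normalisation (an illustrative assumption, not the paper's code; real models would use a framework's built-in layer). Because statistics are computed per sample over channel groups, they do not depend on the rest of the batch, which is what lends robustness to input distribution shifts at test time:

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Minimal GroupNorm without learnable affine parameters.

    x: array of shape (N, C, H, W); C must divide evenly into
    num_groups. Mean and variance are taken per sample over each
    channel group (never across the batch, unlike BatchNorm).
    """
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)
```

With num_groups equal to C this reduces to (non-affine) InstanceNorm, and with num_groups of 1 to LayerNorm over (C, H, W), so the same sketch covers both alternatives the summary mentions.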
arXiv Detail & Related papers (2023-06-23T03:09:03Z) - The Brain Tumor Segmentation (BraTS) Challenge 2023: Brain MR Image
Synthesis for Tumor Segmentation (BraSyn) [5.399839183476989]
We present the establishment of the Brain MR Image Synthesis Benchmark (BraSyn) in conjunction with the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023.
The primary objective of this challenge is to evaluate image synthesis methods that can realistically generate missing MRI modalities when multiple available images are provided.
arXiv Detail & Related papers (2023-05-15T20:49:58Z) - Model-Guided Multi-Contrast Deep Unfolding Network for MRI
Super-resolution Reconstruction [68.80715727288514]
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix.
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
arXiv Detail & Related papers (2022-09-15T03:58:30Z) - Modality Completion via Gaussian Process Prior Variational Autoencoders
for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where either, two, or three of four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z) - Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain
Adaptation [9.659642285903418]
Cross-modality synthesis of medical images can reduce the costly annotation burden on radiologists.
We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups.
arXiv Detail & Related papers (2021-03-05T16:22:31Z) - Robust Image Reconstruction with Misaligned Structural Information [0.27074235008521236]
We propose a variational framework which jointly performs reconstruction and registration.
Our approach is the first to achieve this for different modalities and outperforms established approaches in terms of accuracy of both reconstruction and registration.
arXiv Detail & Related papers (2020-04-01T17:21:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.