3D Inception-Based TransMorph: Pre- and Post-operative Multi-contrast
MRI Registration in Brain Tumors
- URL: http://arxiv.org/abs/2212.04579v1
- Date: Thu, 8 Dec 2022 22:00:07 GMT
- Title: 3D Inception-Based TransMorph: Pre- and Post-operative Multi-contrast
MRI Registration in Brain Tumors
- Authors: Javid Abderezaei, Aymeric Pionteck, Agamdeep Chopra, Mehmet Kurt
- Abstract summary: We propose a two-stage cascaded network based on the Inception and TransMorph models.
The loss function was composed of a standard image similarity measure, a diffusion regularizer, and an edge-map similarity measure added to overcome intensity dependence.
We achieved 6th place at the time of model submission in the final testing phase of the BraTS-Reg challenge.
- Score: 1.2234742322758418
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deformable image registration is a key task in medical image analysis. The
Brain Tumor Sequence Registration challenge (BraTS-Reg) aims at establishing
correspondences between pre-operative and follow-up scans of the same patient
diagnosed with an adult brain diffuse high-grade glioma and intends to address
the challenging task of registering longitudinal data with major tissue
appearance changes. In this work, we proposed a two-stage cascaded network
based on the Inception and TransMorph models. The dataset for each patient
comprised a native pre-contrast (T1), a contrast-enhanced T1-weighted (T1-CE), a
T2-weighted (T2), and a Fluid Attenuated Inversion Recovery (FLAIR) scan.
The Inception model was used to fuse the 4 image modalities together and
extract the most relevant information. Then, a variant of the TransMorph
architecture was adapted to generate the displacement fields. The loss function
was composed of a standard image similarity measure, a diffusion regularizer,
and an edge-map similarity measure added to overcome intensity dependence and
reinforce correct boundary deformation. We observed that the addition of the
Inception module substantially increased the performance of the network.
Additionally, performing an initial affine registration before training the
model improved accuracy in the landmark error measurements between pre- and
post-operative MRIs. Our best model, which combined the Inception and TransMorph
architectures and was trained on the affinely pre-registered dataset, achieved a
median absolute error of 2.91 (initial error = 7.8). We achieved 6th place at the time of model
submission in the final testing phase of the BraTS-Reg challenge.
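The sketch below is a minimal, illustrative reading of the pipeline described in the abstract: an Inception-style block fuses the four contrasts (T1, T1-CE, T2, FLAIR), a registration backbone (here a small CNN standing in for TransMorph) predicts a dense displacement field, and the loss combines an image similarity term (NCC), a diffusion regularizer on the field, and an edge-map similarity term. All module names, the tiny backbone, and the loss weights are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed names, not the authors' code): Inception-style fusion of
# the four MRI contrasts, a small CNN standing in for the TransMorph backbone, and
# a composite loss = image similarity (NCC) + diffusion regularizer + edge-map term.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InceptionFusion3D(nn.Module):
    """Fuse the 4 contrasts with parallel 3D convolutions of different kernel sizes."""
    def __init__(self, in_ch=4, out_ch=8):
        super().__init__()
        self.b1 = nn.Conv3d(in_ch, out_ch, 1)
        self.b3 = nn.Conv3d(in_ch, out_ch, 3, padding=1)
        self.b5 = nn.Conv3d(in_ch, out_ch, 5, padding=2)
        self.mix = nn.Conv3d(3 * out_ch, out_ch, 1)

    def forward(self, x):  # x: (B, 4, D, H, W)
        return self.mix(F.relu(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)))


class TinyRegNet(nn.Module):
    """Stand-in for the TransMorph backbone; predicts a 3-channel displacement field."""
    def __init__(self, in_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 3, 3, padding=1))

    def forward(self, moving_feat, fixed_feat):
        return self.net(torch.cat([moving_feat, fixed_feat], dim=1))


def warp(img, flow):
    """Warp img (B, C, D, H, W) with a voxel-displacement field via grid_sample."""
    _, _, D, H, W = img.shape
    zz, yy, xx = torch.meshgrid(torch.arange(D), torch.arange(H), torch.arange(W),
                                indexing="ij")
    grid = torch.stack([xx, yy, zz], dim=-1).float().to(img)    # (D, H, W, 3), xyz order
    grid = grid.unsqueeze(0) + flow.permute(0, 2, 3, 4, 1)      # add displacement
    size = torch.tensor([W, H, D], dtype=torch.float32, device=img.device)
    grid = 2.0 * grid / (size - 1) - 1.0                        # normalize to [-1, 1]
    return F.grid_sample(img, grid, align_corners=True)


def ncc_loss(a, b, eps=1e-5):
    """Global normalized cross-correlation, negated so that lower is better."""
    a, b = a - a.mean(), b - b.mean()
    return -(a * b).mean() / (a.std() * b.std() + eps)


def diffusion_loss(flow):
    """Penalize spatial gradients of the displacement field (smoothness)."""
    return ((flow[:, :, 1:] - flow[:, :, :-1]).pow(2).mean()
            + (flow[:, :, :, 1:] - flow[:, :, :, :-1]).pow(2).mean()
            + (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).pow(2).mean())


def edge_map(img):
    """Crude gradient-magnitude edge map, a placeholder for the paper's edge term."""
    gz = img[:, :, 1:, :-1, :-1] - img[:, :, :-1, :-1, :-1]
    gy = img[:, :, :-1, 1:, :-1] - img[:, :, :-1, :-1, :-1]
    gx = img[:, :, :-1, :-1, 1:] - img[:, :, :-1, :-1, :-1]
    return torch.sqrt(gx ** 2 + gy ** 2 + gz ** 2 + 1e-8)


# Toy end-to-end pass on random 4-contrast volumes (affine pre-alignment assumed done).
fuse_m, fuse_f, reg = InceptionFusion3D(), InceptionFusion3D(), TinyRegNet()
moving = torch.rand(1, 4, 32, 32, 32)       # pre-operative T1, T1-CE, T2, FLAIR
fixed = torch.rand(1, 4, 32, 32, 32)        # follow-up scan, same contrasts

flow = reg(fuse_m(moving), fuse_f(fixed))   # (1, 3, 32, 32, 32) displacement field
warped = warp(moving, flow)

lam, gam = 1.0, 0.5                         # illustrative weights, not from the paper
loss = (ncc_loss(warped, fixed)
        + lam * diffusion_loss(flow)
        + gam * F.l1_loss(edge_map(warped), edge_map(fixed)))
loss.backward()
```

In the paper's setting, the volumes would additionally be affinely pre-registered before training, which the abstract reports improved the landmark error; the sketch assumes that step has already been applied.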
Related papers
- NestedMorph: Enhancing Deformable Medical Image Registration with Nested Attention Mechanisms [0.0]
Deformable image registration is crucial for aligning medical images in a non-linear fashion across different modalities.
This paper presents NestedMorph, a novel network utilizing a Nested Attention Fusion approach to improve intra-subject deformable registration.
arXiv Detail & Related papers (2024-10-03T14:53:42Z)
- Deformation-aware GAN for Medical Image Synthesis with Substantially Misaligned Pairs [0.0]
We propose a novel Deformation-aware GAN (DA-GAN) to dynamically correct the misalignment during the image synthesis based on inverse consistency.
Experimental results show that DA-GAN achieved superior performance on a public dataset with simulated misalignments and a real-world lung MRI-CT dataset with respiratory motion misalignment.
arXiv Detail & Related papers (2024-08-18T10:29:35Z)
- Improving Misaligned Multi-modality Image Fusion with One-stage Progressive Dense Registration [67.23451452670282]
Misalignments between multi-modality images pose challenges in image fusion.
We propose a Cross-modality Multi-scale Progressive Dense Registration scheme.
This scheme accomplishes the coarse-to-fine registration exclusively using a one-stage optimization.
arXiv Detail & Related papers (2023-08-22T03:46:24Z)
- GSMorph: Gradient Surgery for cine-MRI Cardiac Deformable Registration [62.41725951450803]
Learning-based deformable registration relies on weighted objective functions trading off registration accuracy and smoothness of the field.
We construct a registration model based on the gradient surgery mechanism, named GSMorph, to achieve a hyperparameter-free balance on multiple losses.
Our method is model-agnostic and can be merged into any deep registration network without introducing extra parameters or slowing down inference.
arXiv Detail & Related papers (2023-06-26T13:32:09Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Robust Image Registration with Absent Correspondences in Pre-operative and Follow-up Brain MRI Scans of Diffuse Glioma Patients [11.4219428942199]
We propose a 3-step registration pipeline for pre-operative and follow-up brain MRI scans.
Our method achieves a median absolute error of 1.64 mm and a successful registration rate of 88% on the validation set of the BraTS-Reg challenge.
arXiv Detail & Related papers (2022-10-20T06:37:40Z)
- Adaptive Diffusion Priors for Accelerated MRI Reconstruction [0.9895793818721335]
Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data.
Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator.
Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts.
arXiv Detail & Related papers (2022-07-12T22:45:08Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Efficient Learning and Decoding of the Continuous-Time Hidden Markov Model for Disease Progression Modeling [119.50438407358862]
We present the first complete characterization of efficient EM-based learning methods for CT-HMM models.
We show that EM-based learning consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics.
We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer's disease dataset.
arXiv Detail & Related papers (2021-10-26T20:06:05Z)
- Learning Multi-Modal Volumetric Prostate Registration with Weak Inter-Subject Spatial Correspondence [2.6894568533991543]
We introduce an auxiliary input to the neural network that provides prior information about the prostate location in the MR sequence.
With weakly labelled MR-TRUS prostate data, we showed registration quality comparable to the state-of-the-art deep learning-based method.
arXiv Detail & Related papers (2021-02-09T16:48:59Z)
- Bilateral Asymmetry Guided Counterfactual Generating Network for Mammogram Classification [48.4619620405991]
Classifying mammograms as benign or malignant with only image-level labels is challenging due to the absence of lesion annotations.
Motivated by the symmetric prior, we can explore the counterfactual question of how the features would have behaved if there were no lesions in the image.
We derive a new theoretical result for counterfactual generation based on the symmetric prior.
arXiv Detail & Related papers (2020-09-30T03:15:30Z)