Groupwise Multimodal Image Registration using Joint Total Variation
- URL: http://arxiv.org/abs/2005.02933v1
- Date: Wed, 6 May 2020 16:11:32 GMT
- Title: Groupwise Multimodal Image Registration using Joint Total Variation
- Authors: Mikael Brudfors, Yaël Balbastre, John Ashburner
- Abstract summary: We introduce a cost function based on joint total variation for such multimodal image registration.
We evaluate our algorithm on rigidly aligning both simulated and real 3D brain scans.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In medical imaging it is common practice to acquire a wide range of
modalities (MRI, CT, PET, etc.), to highlight different structures or
pathologies. As patient movement between scans or scanning sessions is
unavoidable, registration is often an essential step before any subsequent
image analysis. In this paper, we introduce a cost function based on joint
total variation for such multimodal image registration. This cost function has
the advantage of enabling principled, groupwise alignment of multiple images,
whilst being insensitive to strong intensity non-uniformities. We evaluate our
algorithm on rigidly aligning both simulated and real 3D brain scans. This
validation shows robustness to strong intensity non-uniformities and low
registration errors for CT/PET to MRI alignment. Our implementation is publicly
available at https://github.com/brudfors/coregistration-njtv.
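The joint-total-variation idea can be illustrated with a short sketch: instead of summing each image's total variation separately, JTV integrates the root of the *summed* squared gradient magnitudes, so coinciding edges across modalities are rewarded. This is a minimal NumPy illustration of that cost, not the authors' implementation (which uses a normalized JTV and rigid optimization; see the repository above). The function name and `eps` smoothing term are assumptions for this sketch.

```python
import numpy as np

def joint_total_variation(images, eps=1e-8):
    """Joint total variation of a list of equally shaped images.

    Computes sum_x sqrt( sum_i |grad f_i(x)|^2 ), i.e. gradient
    magnitudes are coupled across images before taking the root,
    so edges that align between modalities cost less than
    misaligned ones. `eps` keeps the root differentiable at zero.
    """
    sq_grad = np.zeros(images[0].shape, dtype=float)
    for img in images:
        # np.gradient returns one array per spatial axis (works in 2D or 3D)
        for g in np.gradient(img.astype(float)):
            sq_grad += g ** 2
    return float(np.sum(np.sqrt(sq_grad + eps)))
```

By the triangle inequality, the joint cost never exceeds the sum of the individual total variations, and the gap is largest when edges overlap, which is why minimizing JTV over rigid transformations pulls corresponding structures into alignment.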
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z) - Autoregressive Sequence Modeling for 3D Medical Image Representation [48.706230961589924]
We introduce a pioneering method for learning 3D medical image representations through an autoregressive sequence pre-training framework.
Our approach relates various 3D medical images based on spatial, contrast, and semantic correlations, treating them as interconnected visual tokens within a token sequence.
arXiv Detail & Related papers (2024-09-13T10:19:10Z) - Explainable unsupervised multi-modal image registration using deep
networks [2.197364252030876]
MRI image registration aims to geometrically 'pair' diagnoses from different modalities, time points and slices.
In this work, we show that our DL model becomes fully explainable, setting the framework to generalise our approach on further medical imaging data.
arXiv Detail & Related papers (2023-08-03T19:13:48Z) - On Sensitivity and Robustness of Normalization Schemes to Input
Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
arXiv Detail & Related papers (2023-06-23T03:09:03Z) - Cross-Modality Image Registration using a Training-Time Privileged Third
Modality [5.78335050301421]
We propose a learning from privileged modality algorithm to support the challenging multi-modality registration problems.
We present experimental results based on 369 sets of 3D multiparametric MRI images from 356 prostate cancer patients.
arXiv Detail & Related papers (2022-07-26T13:50:30Z) - Self-Supervised Multi-Modal Alignment for Whole Body Medical Imaging [70.52819168140113]
We use a dataset of over 20,000 subjects from the UK Biobank with both whole body Dixon technique magnetic resonance (MR) scans and also dual-energy x-ray absorptiometry (DXA) scans.
We introduce a multi-modal image-matching contrastive framework, that is able to learn to match different-modality scans of the same subject with high accuracy.
Without any adaption, we show that the correspondences learnt during this contrastive training step can be used to perform automatic cross-modal scan registration.
arXiv Detail & Related papers (2021-07-14T12:35:05Z) - A Deep Discontinuity-Preserving Image Registration Network [73.03885837923599]
Most deep learning-based registration methods assume that the desired deformation fields are globally smooth and continuous.
We propose a weakly-supervised Deep Discontinuity-preserving Image Registration network (DDIR) to obtain better registration performance and realistic deformation fields.
We demonstrate that our method achieves significant improvements in registration accuracy and predicts more realistic deformations, in registration experiments on cardiac magnetic resonance (MR) images.
arXiv Detail & Related papers (2021-07-09T13:35:59Z) - Modality Completion via Gaussian Process Prior Variational Autoencoders
for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z) - Patch-based field-of-view matching in multi-modal images for
electroporation-based ablations [0.6285581681015912]
Multi-modal imaging sensors are currently involved at different steps of an interventional therapeutic work-flow.
Merging this information relies on a correct spatial alignment of the observed anatomy between the acquired images.
We show that a regional registration approach using voxel patches provides a good structural compromise between the voxel-wise and "global shifts" approaches.
arXiv Detail & Related papers (2020-11-09T11:27:45Z) - JSSR: A Joint Synthesis, Segmentation, and Registration System for 3D
Multi-Modal Image Alignment of Large-scale Pathological CT Scans [27.180136688977512]
We propose a novel multi-task learning system, JSSR, based on an end-to-end 3D convolutional neural network.
The system is optimized to satisfy the implicit constraints between different tasks in an unsupervised manner.
It consistently outperforms conventional state-of-the-art multi-modal registration methods.
arXiv Detail & Related papers (2020-05-25T16:30:02Z) - SynthMorph: learning contrast-invariant registration without acquired
images [8.0963891430422]
We introduce a strategy for learning image registration without acquired imaging data.
We show that this strategy enables robust and accurate registration of arbitrary MRI contrasts.
arXiv Detail & Related papers (2020-04-21T20:29:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.