Mutual information neural estimation for unsupervised multi-modal
registration of brain images
- URL: http://arxiv.org/abs/2201.10305v1
- Date: Tue, 25 Jan 2022 13:22:34 GMT
- Title: Mutual information neural estimation for unsupervised multi-modal
registration of brain images
- Authors: Gerard Snaauw (1), Michele Sasdelli (1), Gabriel Maicas (1), Stephan
Lau (1 and 2), Johan Verjans (1 and 2), Mark Jenkinson (1 and 2), Gustavo
Carneiro (1) ((1) Australian Institute for Machine Learning (AIML),
University of Adelaide, Adelaide, Australia, (2) South Australian Health and
Medical Research Institute (SAHMRI), Adelaide, Australia)
- Abstract summary: We propose guiding the training of a deep learning-based registration method with MI estimation between an image-pair in an end-to-end trainable network.
Our results show that a small, 2-layer network produces competitive results in both mono- and multimodal registration, with sub-second run-times.
Real-time clinical application will benefit from better visual matching of anatomical structures and fewer registration failures/outliers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many applications in image-guided surgery and therapy require fast and
reliable non-linear, multi-modal image registration. Recently proposed
unsupervised deep learning-based registration methods have demonstrated
superior performance compared to iterative methods in just a fraction of the
time. Most of the learning-based methods have focused on mono-modal image
registration. The extension to multi-modal registration depends on the use of
an appropriate similarity function, such as the mutual information (MI). We
propose guiding the training of a deep learning-based registration method with
MI estimation between an image-pair in an end-to-end trainable network. Our
results show that a small, 2-layer network produces competitive results in both
mono- and multimodal registration, with sub-second run-times. Comparisons to
both iterative and deep learning-based methods show that our MI-based method
produces topologically and qualitatively superior results with an extremely low
rate of non-diffeomorphic transformations. Real-time clinical application will
benefit from better visual matching of anatomical structures and fewer
registration failures/outliers.
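The MI-guided training described above can be illustrated with a minimal NumPy sketch of the Donsker-Varadhan lower bound that neural MI estimators maximise: a small 2-layer statistics network scores paired (joint) samples against shuffled (marginal) samples. The toy data, network sizes, and function names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def statistics_net(z, w1, b1, w2, b2):
    # Small 2-layer network T(x, y) -> scalar score, echoing the paper's
    # observation that a 2-layer network suffices for MI estimation
    h = np.tanh(z @ w1 + b1)
    return h @ w2 + b2

def mi_lower_bound(x, y, params):
    """Donsker-Varadhan lower bound on MI(X; Y):
    E_joint[T] - log E_marginal[exp(T)]."""
    w1, b1, w2, b2 = params
    joint = np.concatenate([x, y], axis=1)
    # Shuffling y breaks the pairing, giving samples from the
    # product of the marginal distributions
    marginal = np.concatenate([x, y[rng.permutation(len(y))]], axis=1)
    t_joint = statistics_net(joint, w1, b1, w2, b2)
    t_marg = statistics_net(marginal, w1, b1, w2, b2)
    return float(t_joint.mean() - np.log(np.exp(t_marg).mean()))

# Correlated toy samples standing in for intensities from an image pair
n, d = 2048, 1
x = rng.normal(size=(n, d))
y = 0.9 * x + 0.1 * rng.normal(size=(n, d))

# Randomly initialised parameters; in an end-to-end registration setup
# these would be trained jointly with the deformation network
hidden = 16
params = (rng.normal(scale=0.5, size=(2 * d, hidden)),
          np.zeros(hidden),
          rng.normal(scale=0.5, size=(hidden, 1)),
          np.zeros(1))

print(mi_lower_bound(x, y, params))
```

In an end-to-end setting, maximising this bound with respect to the statistics network tightens the MI estimate, while the registration network is updated to increase the estimated MI between the warped moving image and the fixed image.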
Related papers
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z) - ConKeD: Multiview contrastive descriptor learning for keypoint-based retinal image registration [6.618504904743609]
We propose ConKeD, a novel deep learning approach to learn descriptors for retinal image registration.
In contrast to current registration methods, our approach employs a novel multi-positive multi-negative contrastive learning strategy.
Our experimental results demonstrate the benefits of the novel multi-positive multi-negative strategy.
arXiv Detail & Related papers (2024-01-11T13:22:54Z) - Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to learning approaches that use data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z) - Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z) - Is Image-to-Image Translation the Panacea for Multimodal Image
Registration? A Comparative Study [4.00906288611816]
We conduct an empirical study of the applicability of modern I2I translation methods for the task of multimodal biomedical image registration.
We compare the performance of four Generative Adversarial Network (GAN)-based methods and one contrastive representation learning method.
Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, registration of modalities that express distinctly different properties of the sample is not well handled by the I2I translation approach.
arXiv Detail & Related papers (2021-03-30T11:28:21Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z) - F3RNet: Full-Resolution Residual Registration Network for Deformable
Image Registration [21.99118499516863]
Deformable image registration (DIR) is essential for many image-guided therapies.
We propose a novel unsupervised registration network, namely the Full-Resolution Residual Registration Network (F3RNet)
One stream takes advantage of the full-resolution information that facilitates accurate voxel-level registration.
The other stream learns the deep multi-scale residual representations to obtain robust recognition.
arXiv Detail & Related papers (2020-09-15T15:05:54Z) - Adversarial Uni- and Multi-modal Stream Networks for Multimodal Image
Registration [20.637787406888478]
Deformable image registration between Computed Tomography (CT) images and Magnetic Resonance (MR) imaging is essential for many image-guided therapies.
In this paper, we propose a novel translation-based unsupervised deformable image registration method.
Our method has been evaluated on two clinical datasets and demonstrates promising results compared to state-of-the-art traditional and learning-based methods.
arXiv Detail & Related papers (2020-07-06T14:44:06Z) - MvMM-RegNet: A new image registration framework based on multivariate
mixture model and neural network estimation [14.36896617430302]
We propose a new image registration framework based on generative model (MvMM) and neural network estimation.
A generative model consolidating both appearance and anatomical information is established to derive a novel loss function capable of implementing groupwise registration.
We highlight the versatility of the proposed framework for various applications on multimodal cardiac images.
arXiv Detail & Related papers (2020-06-28T11:19:15Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z) - Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.