MvMM-RegNet: A new image registration framework based on multivariate
mixture model and neural network estimation
- URL: http://arxiv.org/abs/2006.15573v2
- Date: Tue, 14 Jul 2020 04:38:16 GMT
- Title: MvMM-RegNet: A new image registration framework based on multivariate
mixture model and neural network estimation
- Authors: Xinzhe Luo and Xiahai Zhuang
- Abstract summary: We propose a new image registration framework based on a generative multivariate mixture model (MvMM) and neural network estimation.
A generative model consolidating both appearance and anatomical information is established to derive a novel loss function capable of implementing groupwise registration.
We highlight the versatility of the proposed framework for various applications on multimodal cardiac images.
- Score: 14.36896617430302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current deep-learning-based registration algorithms often exploit
intensity-based similarity measures as the loss function, where dense
correspondence between a pair of moving and fixed images is optimized through
backpropagation during training. However, intensity-based metrics can be
misleading when the assumption of intensity class correspondence is violated,
especially in cross-modality or contrast-enhanced images. Moreover, existing
learning-based registration methods are predominantly applicable to pairwise
registration and are rarely extended to groupwise registration or simultaneous
registration with multiple images. In this paper, we propose a new image
registration framework based on a multivariate mixture model (MvMM) and neural
network estimation. A generative model consolidating both appearance and
anatomical information is established to derive a novel loss function capable
of implementing groupwise registration. We highlight the versatility of the
proposed framework for various applications on multimodal cardiac images,
including single-atlas-based segmentation (SAS) via pairwise registration and
multi-atlas segmentation (MAS) unified by groupwise registration. We evaluated
performance on two publicly available datasets, i.e. MM-WHS-2017 and
MS-CMRSeg-2019. The results show that the proposed framework achieved an
average Dice score of $0.871\pm 0.025$ for whole-heart segmentation on MR
images and $0.783\pm 0.082$ for myocardium segmentation on LGE MR images.
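For illustration, the sketch below shows how a groupwise registration loss in the spirit of an MvMM could be written: the negative log-likelihood of several images, already warped into a common space, under a shared per-voxel label prior, with the images assumed conditionally independent given the common-space label. The Gaussian appearance model, tensor shapes, and function names are assumptions made for this sketch and are not taken from the paper.

```python
# Minimal, illustrative sketch (not the authors' implementation) of an
# MvMM-style groupwise registration loss: the negative log-likelihood of
# N images, resampled into a common space, under a shared label prior.
# Images are assumed conditionally independent given the common-space label,
# with per-image, per-label Gaussian appearance models (an assumption here).
import math
import torch

def mvmm_groupwise_nll(warped_images, label_prior, means, variances, eps=1e-8):
    """warped_images:   (N, B, 1, H, W) images resampled into the common space
    label_prior:        (B, K, H, W) prior probabilities of K labels per voxel
    means, variances:   (N, K) Gaussian appearance parameters per image/label
    """
    N, B, _, H, W = warped_images.shape
    K = label_prior.shape[1]
    mu = means.view(N, 1, K, 1, 1)
    var = variances.view(N, 1, K, 1, 1)
    # log N(I_i(x) | mu_{i,k}, var_{i,k}) for every image i and label k
    log_lik = -0.5 * ((warped_images - mu) ** 2 / var + torch.log(2 * math.pi * var))
    # conditional independence across images given the label: sum over i
    joint_log_lik = log_lik.sum(dim=0)                       # (B, K, H, W)
    # mix over labels with the prior and average over voxels
    log_mix = torch.logsumexp(torch.log(label_prior + eps) + joint_log_lik, dim=1)
    return -log_mix.mean()
```

In a learning-based setting, the warped images and the label prior would be produced by a network-predicted spatial transformation, so gradients of this loss flow back through the resampling into the network; the generative model in the paper consolidates appearance and anatomical information in a more elaborate way than this sketch.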
Related papers
- SAMReg: SAM-enabled Image Registration with ROI-based Correspondence [12.163299991979574]
This paper describes a new spatial correspondence representation based on paired regions-of-interest (ROIs) for medical image registration.
We develop a new registration algorithm SAMReg, which does not require any training (or training data), gradient-based fine-tuning or prompt engineering.
The proposed methods outperform both intensity-based iterative algorithms and DDF-predicting learning-based networks across tested metrics.
arXiv Detail & Related papers (2024-10-17T23:23:48Z)
- Bayesian Unsupervised Disentanglement of Anatomy and Geometry for Deep Groupwise Image Registration [50.62725807357586]
This article presents a general Bayesian learning framework for multi-modal groupwise image registration.
We propose a novel hierarchical variational auto-encoding architecture to realise the inference procedure of the latent variables.
Experiments were conducted to validate the proposed framework, including four different datasets from cardiac, brain, and abdominal medical images.
arXiv Detail & Related papers (2024-01-04T08:46:39Z)
- Rotated Multi-Scale Interaction Network for Referring Remote Sensing Image Segmentation [63.15257949821558]
Referring Remote Sensing Image Segmentation (RRSIS) is a new challenge that combines computer vision and natural language processing.
Traditional Referring Image Segmentation (RIS) approaches have been impeded by the complex spatial scales and orientations found in aerial imagery.
We introduce the Rotated Multi-Scale Interaction Network (RMSIN), an innovative approach designed for the unique demands of RRSIS.
arXiv Detail & Related papers (2023-12-19T08:14:14Z)
- Mutual information neural estimation for unsupervised multi-modal registration of brain images [0.0]
We propose guiding the training of a deep learning-based registration method with MI estimation between an image pair in an end-to-end trainable network. (An illustrative sketch of such an MI-based loss appears after this list.)
Our results show that a small, 2-layer network produces competitive results in both mono- and multimodal registration, with sub-second run-times.
Real-time clinical application will benefit from better visual matching of anatomical structures and fewer registration failures/outliers.
arXiv Detail & Related papers (2022-01-25T13:22:34Z)
- Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and across sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where either, two, or three of four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Deep Group-wise Variational Diffeomorphic Image Registration [3.0022455491411653]
We propose to extend current learning-based image registration to allow simultaneous registration of multiple images.
We present a general mathematical framework that enables both registration of multiple images to their viscous geodesic average and registration in which any of the available images can be used as a fixed image.
arXiv Detail & Related papers (2020-10-01T07:37:28Z)
- Cross-Modality Multi-Atlas Segmentation Using Deep Neural Networks [20.87045880678701]
High-level structure information can provide reliable similarity measurement for cross-modality images.
This work presents a new MAS framework for cross-modality images, where both image registration and label fusion are achieved by deep neural networks (DNNs).
For image registration, we propose a consistent registration network, which can jointly estimate forward and backward dense displacement fields (DDFs).
For label fusion, we adapt a few-shot learning network to measure the similarity of atlas and target patches.
arXiv Detail & Related papers (2020-08-15T02:57:23Z)
- Prototype Mixture Models for Few-shot Semantic Segmentation [50.866870384596446]
Few-shot segmentation is challenging because objects within the support and query images could significantly differ in appearance and pose.
We propose prototype mixture models (PMMs), which correlate diverse image regions with multiple prototypes to enforce the prototype-based semantic representation.
PMMs improve 5-shot segmentation performance on MS-COCO by up to 5.82% with only a moderate cost for model size and inference speed.
arXiv Detail & Related papers (2020-08-10T04:33:17Z)
- CoMIR: Contrastive Multimodal Image Representation for Registration [4.543268895439618]
We propose contrastive coding to learn shared, dense image representations, referred to as CoMIRs (Contrastive Multimodal Image Representations).
CoMIRs enable the registration of multimodal images where existing registration methods often fail due to a lack of sufficiently similar image structures.
arXiv Detail & Related papers (2020-06-11T10:51:33Z)
- Learning Deformable Image Registration from Optimization: Perspective, Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z)
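The "Mutual information neural estimation" entry above motivates a second small sketch: a Donsker-Varadhan (MINE-style) lower bound on the mutual information between a fixed and a warped moving image, which could be maximised as a registration objective. The 2-layer statistics network, the intensity-pair sampling, and all names below are assumptions made for illustration, not details taken from that paper.

```python
# Illustrative MINE-style mutual-information lower bound for registration,
# sketched under assumptions (a small MLP over paired intensities); it is not
# the cited paper's architecture or training recipe.
import math
import torch
import torch.nn as nn

class StatisticsNet(nn.Module):
    """Small 2-layer network T(x, y) that scores intensity pairs."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, y):
        return self.net(torch.stack([x, y], dim=-1)).squeeze(-1)

def mi_lower_bound(T, fixed, warped):
    """Donsker-Varadhan bound: E_joint[T] - log E_marginals[exp(T)].

    fixed, warped: 1-D tensors of intensities sampled at the same voxels.
    Maximising this bound (minimising its negative as a loss) encourages the
    warped moving image to be statistically dependent on the fixed image.
    """
    joint_term = T(fixed, warped).mean()
    perm = torch.randperm(warped.shape[0])        # shuffle to mimic the marginals
    marginal_term = torch.logsumexp(T(fixed, warped[perm]), dim=0) - math.log(warped.shape[0])
    return joint_term - marginal_term
```

A registration network would then be trained by minimising the negative of this bound, jointly with the statistics network; this is consistent with that entry's report that a small 2-layer network already yields competitive mono- and multimodal registration.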
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.