Bayesian intrinsic groupwise registration via explicit hierarchical
disentanglement
- URL: http://arxiv.org/abs/2206.02377v1
- Date: Mon, 6 Jun 2022 06:13:24 GMT
- Title: Bayesian intrinsic groupwise registration via explicit hierarchical
disentanglement
- Authors: Xin Wang, Xinzhe Luo, Xiahai Zhuang
- Abstract summary: We propose a general framework which formulates groupwise registration as a procedure of hierarchical Bayesian inference.
Here, we propose a novel variational posterior and network architecture that facilitate joint learning of the common structural representation.
Results have demonstrated the efficacy of our framework in realizing multimodal groupwise registration in an end-to-end fashion.
- Score: 18.374535632681884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous methods on multimodal groupwise registration typically require
certain highly specialized similarity metrics with restricted applicability. In
this work, we instead propose a general framework which formulates groupwise
registration as a procedure of hierarchical Bayesian inference. Here, the
imaging process of multimodal medical images, including shape transition and
appearance variation, is characterized by a disentangled variational
auto-encoder. To this end, we propose a novel variational posterior and network
architecture that facilitate joint learning of the common structural
representation and the desired spatial correspondences. The performance of the
proposed model was validated on two publicly available multimodal datasets,
i.e., BrainWeb and MS-CMR of the heart. Results have demonstrated the efficacy
of our framework in realizing multimodal groupwise registration in an
end-to-end fashion.
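To make the formulation concrete, below is a minimal PyTorch sketch, assuming toy shapes and layer choices rather than the authors' architecture: a variational auto-encoder whose group-level latent encodes the common structural representation, whose per-image latents encode appearance, and whose displacement head realizes the spatial correspondences by warping.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupwiseDisentangledVAE(nn.Module):
    """Toy sketch: one shared structural latent, one appearance latent per image."""
    def __init__(self, n_images, z_dim=16, a_dim=8):
        super().__init__()
        self.enc = nn.Sequential(                              # shared per-image encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.to_struct = nn.Linear(32 * n_images, 2 * z_dim)   # group feats -> common structure
        self.to_app = nn.Linear(32, 2 * a_dim)                 # image feats -> appearance
        self.dec = nn.Sequential(nn.Linear(z_dim + a_dim, 64 * 64),
                                 nn.Unflatten(1, (1, 64, 64)))
        self.flow = nn.Conv2d(1, 2, 3, padding=1)              # toy displacement head

    def forward(self, group):                                  # group: (B, N, 1, 64, 64)
        B, N = group.shape[:2]
        feats = self.enc(group.flatten(0, 1)).view(B, N, -1)
        mu_z, lv_z = self.to_struct(feats.flatten(1)).chunk(2, -1)
        z = mu_z + torch.randn_like(mu_z) * (0.5 * lv_z).exp() # common structural code
        mu_a, lv_a = self.to_app(feats).chunk(2, -1)
        a = mu_a + torch.randn_like(mu_a) * (0.5 * lv_a).exp() # per-image appearance codes
        z_rep = z.unsqueeze(1).expand(-1, N, -1)               # share structure across the group
        recon = self.dec(torch.cat([z_rep, a], -1).flatten(0, 1))
        disp = self.flow(recon).permute(0, 2, 3, 1)            # (B*N, 64, 64, 2) displacements
        ident = F.affine_grid(torch.eye(2, 3).unsqueeze(0).expand(B * N, -1, -1),
                              (B * N, 1, 64, 64), align_corners=False)
        warped = F.grid_sample(recon, ident + disp, align_corners=False)
        return warped.view(B, N, 1, 64, 64), (mu_z, lv_z, mu_a, lv_a)

model = GroupwiseDisentangledVAE(n_images=3)
warped, stats = model(torch.randn(2, 3, 1, 64, 64))            # a group of 3 images
```

In a full model, the ELBO would combine a reconstruction term between the warped outputs and the observed group with KL terms on both latents.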
Related papers
- Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation [54.50526986788175]
Recent advances in efficient sequence modeling have led to attention-free layers, such as Mamba, RWKV, and various gated RNNs.
We present a unified view of these models, formulating such layers as implicit causal self-attention layers.
Our framework compares the underlying mechanisms on similar grounds for different layers and provides a direct means for applying explainability methods.
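The equivalence is easy to check numerically on a toy scalar channel; the gate names below are generic stand-ins for the paper's notation. An elementwise gated linear recurrence h_t = a_t * h_{t-1} + b_t * x_t unrolls into a causal attention matrix A with A[t, s] = b_s * prod_{r=s+1..t} a_r:

```python
import torch

T = 6
x = torch.randn(T)
a = torch.rand(T)                  # data-dependent decay gates in (0, 1)
b = torch.rand(T)                  # input gates

# recurrent evaluation: h_t = a_t * h_{t-1} + b_t * x_t
h = torch.zeros(T)
prev = torch.tensor(0.0)
for t in range(T):
    prev = a[t] * prev + b[t] * x[t]
    h[t] = prev

# implicit-attention evaluation of the same layer
A = torch.zeros(T, T)              # lower-triangular, i.e. causal
for t in range(T):
    for s in range(t + 1):
        A[t, s] = b[s] * torch.prod(a[s + 1:t + 1])
y = A @ x

print(torch.allclose(h, y, atol=1e-6))  # True: two views of one computation
```

Once the layer is written as an explicit attention matrix, attention-style explainability tools can be applied to it directly.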
arXiv Detail & Related papers (2024-05-26T09:57:45Z)
- Bayesian Unsupervised Disentanglement of Anatomy and Geometry for Deep Groupwise Image Registration [50.62725807357586]
This article presents a general Bayesian learning framework for multi-modal groupwise image registration.
We propose a novel hierarchical variational auto-encoding architecture to realise the inference procedure of the latent variables.
Experiments were conducted to validate the proposed framework on four different datasets of cardiac, brain, and abdominal medical images.
arXiv Detail & Related papers (2024-01-04T08:46:39Z)
- Diagonal Hierarchical Consistency Learning for Semi-supervised Medical Image Segmentation [0.0]
We propose a novel framework for robust semi-supervised medical image segmentation using diagonal hierarchical consistency learning (DiHC-Net).
It is composed of multiple sub-models with identical multi-scale architecture but with distinct sub-layers, such as up-sampling and normalisation layers.
A series of experiments verifies the efficacy of our simple framework, outperforming all previous approaches on a public benchmark dataset covering organs and tumours.
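As a minimal sketch of that design (assumed layer choices, not the released DiHC-Net code), two sub-models can share one architecture while differing in their up-sampling and normalisation layers, with a consistency loss pulling their predictions on unlabeled images together:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sub_model(norm, upsample):
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, stride=2, padding=1), norm(8), nn.ReLU(),
        upsample,                        # the distinguishing sub-layer
        nn.Conv2d(8, 2, 3, padding=1))   # 2-class logits

model_a = sub_model(nn.BatchNorm2d,
                    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False))
model_b = sub_model(nn.InstanceNorm2d,
                    nn.ConvTranspose2d(8, 8, 2, stride=2))

unlabeled = torch.randn(4, 1, 64, 64)
p_a = F.softmax(model_a(unlabeled), dim=1)
p_b = F.softmax(model_b(unlabeled), dim=1)
consistency_loss = F.mse_loss(p_a, p_b)  # pulls the sub-models toward agreement
```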
arXiv Detail & Related papers (2023-11-10T12:38:16Z)
- Unified Multi-modal Unsupervised Representation Learning for Skeleton-based Action Understanding [62.70450216120704]
Unsupervised pre-training has shown great success in skeleton-based action understanding.
We propose a Unified Multimodal Unsupervised Representation Learning framework, called UmURL.
UmURL exploits an efficient early-fusion strategy to jointly encode the multi-modal features in a single-stream manner.
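A minimal single-stream sketch of early fusion, with assumed dimensions and a summation fusion operator (UmURL's exact design may differ): each skeleton modality is embedded separately, fused once at the input, and encoded by a single backbone:

```python
import torch
import torch.nn as nn

T, J = 50, 25                      # frames, joints
d = 128
embed_joint = nn.Linear(J * 3, d)  # joint coordinates per frame
embed_motion = nn.Linear(J * 3, d) # temporal differences per frame
embed_bone = nn.Linear(J * 3, d)   # bone vectors per frame
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True),
    num_layers=2)

joint = torch.randn(4, T, J * 3)
motion = joint - joint.roll(1, dims=1)   # crude motion stream
bone = torch.randn(4, T, J * 3)          # placeholder bone stream

fused = embed_joint(joint) + embed_motion(motion) + embed_bone(bone)
rep = encoder(fused)                      # (4, T, d): one stream for all modalities
```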
arXiv Detail & Related papers (2023-11-06T13:56:57Z)
- A Simple and Robust Framework for Cross-Modality Medical Image Segmentation applied to Vision Transformers [0.0]
We propose a simple framework to achieve fair image segmentation of multiple modalities using a single conditional model.
We show that our framework outperforms other cross-modality segmentation methods on the Multi-Modality Whole Heart Segmentation challenge.
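A minimal sketch of a single modality-conditioned segmenter; the FiLM-style modulation below is one plausible conditioning mechanism, not necessarily the paper's:

```python
import torch
import torch.nn as nn

class ConditionalSeg(nn.Module):
    def __init__(self, n_modalities=2, n_classes=4):
        super().__init__()
        self.backbone = nn.Conv2d(1, 16, 3, padding=1)
        self.cond = nn.Embedding(n_modalities, 32)  # per-modality scale and shift
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x, modality_id):
        feat = self.backbone(x)
        gamma, beta = self.cond(modality_id).view(-1, 32, 1, 1).chunk(2, dim=1)
        return self.head(torch.relu(gamma * feat + beta))  # FiLM-style modulation

model = ConditionalSeg()
ct = torch.randn(2, 1, 64, 64)
logits = model(ct, torch.zeros(2, dtype=torch.long))  # modality 0 = CT (assumed)
```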
arXiv Detail & Related papers (2023-10-09T09:51:44Z)
- Unified Frequency-Assisted Transformer Framework for Detecting and Grounding Multi-Modal Manipulation [109.1912721224697]
We present the Unified Frequency-Assisted transFormer framework, named UFAFormer, to address the DGM4 (detecting and grounding multi-modal manipulation) problem.
By leveraging the discrete wavelet transform, we decompose images into several frequency sub-bands, capturing rich face forgery artifacts.
Our proposed frequency encoder, incorporating intra-band and inter-band self-attentions, explicitly aggregates forgery features within and across diverse sub-bands.
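The frequency decomposition can be sketched with pywt (the transformer stages are omitted, and the token layout below is an assumption):

```python
import numpy as np
import pywt

img = np.random.rand(256, 256).astype(np.float32)
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")    # approximation + 3 detail sub-bands
bands = np.stack([cA, cH, cV, cD])           # (4, 128, 128)

# tokens for intra-band attention: one token sequence per sub-band;
# inter-band attention would mix tokens across the first axis
tokens = bands.reshape(4, -1)                # (4, 128*128)
print(tokens.shape)
```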
arXiv Detail & Related papers (2023-09-18T11:06:42Z)
- DISA: DIfferentiable Similarity Approximation for Universal Multimodal Registration [39.44133108254786]
We propose a generic framework for creating expressive cross-modal descriptors.
We achieve this by approximating existing metrics with a dot-product in the feature space of a small convolutional neural network.
Our method is several orders of magnitude faster than local patch-based metrics and can be directly applied in clinical settings.
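A minimal sketch of the approximation idea; the descriptor network is assumed, and plain (negated) MSE stands in for the expensive patch-based metrics the paper actually approximates:

```python
import torch
import torch.nn as nn

feat = nn.Sequential(                 # shared descriptor network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1))

def disa_similarity(a, b):
    fa, fb = feat(a), feat(b)
    return (fa * fb).sum(dim=(1, 2, 3)) / fa[0].numel()  # dot product in feature space

fixed = torch.randn(8, 1, 64, 64)
moving = torch.randn(8, 1, 64, 64)
target = -((fixed - moving) ** 2).mean(dim=(1, 2, 3))    # "expensive" metric to mimic
loss = nn.functional.mse_loss(disa_similarity(fixed, moving), target)
loss.backward()                        # descriptors learn to imitate the metric
```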
arXiv Detail & Related papers (2023-07-19T12:12:17Z)
- Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning [53.68371566336254]
We argue that the key to better performance lies in meaningful latent modality structures instead of perfect modality alignment.
Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization.
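As a generic rendering of the first of these (not the paper's exact formulation), a feature separation loss can decorrelate two sub-spaces of an intra-modality representation, as sketched below:

```python
import torch

feats = torch.randn(32, 128, requires_grad=True)   # one batch of image features
part_a, part_b = feats.chunk(2, dim=1)             # two sub-spaces per modality
part_a = part_a - part_a.mean(0)
part_b = part_b - part_b.mean(0)
cross_cov = part_a.t() @ part_b / feats.shape[0]   # (64, 64) cross-covariance
separation_loss = cross_cov.pow(2).mean()          # push sub-spaces apart
separation_loss.backward()
```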
arXiv Detail & Related papers (2023-03-10T14:38:49Z)
- MvMM-RegNet: A new image registration framework based on multivariate mixture model and neural network estimation [14.36896617430302]
We propose a new image registration framework based on a generative multivariate mixture model (MvMM) and neural network estimation.
A generative model consolidating both appearance and anatomical information is established to derive a novel loss function capable of implementing groupwise registration.
We highlight the versatility of the proposed framework for various applications on multimodal cardiac images.
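The likelihood can be sketched with an assumed Gaussian intensity model (the paper's multivariate mixture is richer): each voxel carries a tissue-class prior, modalities are conditionally independent given the class, and the negative log-likelihood of the warped group acts as a registration loss with no pairwise similarity metric:

```python
import math
import torch

N, K, P = 3, 4, 1024                 # modalities, tissue classes, voxels
warped = torch.randn(N, P)           # intensities after warping to the common space
prior = torch.full((K, P), 1.0 / K)  # spatial class prior pi_k(x)
mu = torch.randn(N, K)               # per-modality, per-class means
sigma = torch.ones(N, K)             # per-modality, per-class std deviations

# log p(I(x)) = log sum_k pi_k(x) * prod_n N(I_n(x); mu_nk, sigma_nk)
log_gauss = (-0.5 * ((warped.unsqueeze(1) - mu.unsqueeze(2)) / sigma.unsqueeze(2)) ** 2
             - sigma.unsqueeze(2).log() - 0.5 * math.log(2 * math.pi))
log_joint = log_gauss.sum(dim=0)     # modalities conditionally independent given class
nll = -torch.logsumexp(prior.log() + log_joint, dim=0).mean()  # registration loss
```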
arXiv Detail & Related papers (2020-06-28T11:19:15Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
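A minimal sketch with an assumed distillation form: all convolutional kernels are shared across modalities, only the normalisation layers are modality-specific, and a KL term aligns the softened predictions of the two streams:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

shared_conv = nn.Conv2d(1, 8, 3, padding=1)        # kernels reused by both modalities
norm = {"ct": nn.BatchNorm2d(8), "mri": nn.BatchNorm2d(8)}  # modality-specific layers
head = nn.Conv2d(8, 4, 1)                          # shared segmentation head

def forward(x, modality):
    return head(F.relu(norm[modality](shared_conv(x))))

ct, mri = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
T_kd = 4.0                                          # softening temperature
p_ct = F.log_softmax(forward(ct, "ct") / T_kd, dim=1)
p_mri = F.softmax(forward(mri, "mri") / T_kd, dim=1)
kd_loss = F.kl_div(p_ct, p_mri, reduction="batchmean")  # cross-modality distillation
```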
arXiv Detail & Related papers (2020-01-06T20:03:17Z)