Unsupervised MRI-US Multimodal Image Registration with Multilevel Correlation Pyramidal Optimization
- URL: http://arxiv.org/abs/2602.06288v1
- Date: Fri, 06 Feb 2026 01:03:57 GMT
- Title: Unsupervised MRI-US Multimodal Image Registration with Multilevel Correlation Pyramidal Optimization
- Authors: Jiazheng Wang, Zeyu Liu, Min Liu, Xiang Chen, Hang Zhang,
- Abstract summary: We propose an unsupervised multimodal medical image registration method based on multilevel correlation pyramidal optimization (MCPO). Our method achieves first place in both the validation and test phases of ReMIND2Reg. This demonstrates the broad applicability of our method in preoperative-to-intraoperative image registration.
- Score: 14.509109797489499
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Surgical navigation based on multimodal image registration plays a significant role in providing intraoperative guidance to surgeons by showing the position of the target area relative to critical anatomical structures during surgery. However, due to the differences between multimodal images and the intraoperative image deformation caused by tissue displacement and removal during surgery, effective registration of preoperative and intraoperative multimodal images faces significant challenges. To address the multimodal image registration challenges in Learn2Reg 2025, an unsupervised multimodal medical image registration method based on multilevel correlation pyramidal optimization (MCPO) is designed to solve these problems. First, the features of each modality are extracted with the modality independent neighborhood descriptor, and the multimodal images are mapped into a common feature space. Second, a multilevel pyramidal fusion optimization mechanism is designed to achieve global optimization and local detail complementation of the displacement field through dense correlation analysis and weight-balanced coupled convex optimization on input features at different scales. Our method focuses on the ReMIND2Reg task in Learn2Reg 2025 and achieved first place in both the validation and test phases. MCPO is also validated on the RESECT dataset, achieving an average TRE of 1.798 mm, demonstrating the broad applicability of our method to preoperative-to-intraoperative image registration. The code is available at https://github.com/wjiazheng/MCPO.
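The first step of the abstract, mapping each modality into a shared feature space with the modality independent neighborhood descriptor (MIND), can be illustrated with a minimal 2D sketch. This is a toy self-similarity descriptor in the spirit of MIND, not the authors' implementation: patch distances to the 4-neighbourhood are turned into an exp-normalised feature vector per pixel, which makes the result invariant to linear intensity changes across modalities.

```python
import numpy as np

def mind_descriptor(img, radius=1):
    """Simplified 2D MIND-style self-similarity descriptor (toy sketch).

    For each pixel, compares a local patch with patches shifted to the
    4-neighbourhood and converts the patch SSDs into an exp-normalised
    feature vector of shape (H, W, 4).
    """
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    pad = np.pad(img.astype(float), radius + 1, mode="edge")
    h, w = img.shape
    k = 2 * radius + 1
    dists = []
    for dy, dx in shifts:
        # squared difference between the image and its shifted copy
        diff2 = (pad - np.roll(np.roll(pad, dy, 0), dx, 1)) ** 2
        # box-filter the squared differences via an integral image
        c = np.pad(diff2.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
        ssd = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
        dists.append(ssd[1:h + 1, 1:w + 1])  # crop back to image grid
    d = np.stack(dists, axis=-1)
    v = d.mean(axis=-1, keepdims=True) + 1e-8   # local variance estimate
    feat = np.exp(-d / v)
    return feat / feat.max(axis=-1, keepdims=True)  # per-pixel normalisation
```

Because the descriptor depends only on relative patch differences normalised by a local variance estimate, an intensity-inverted copy of an image yields the same features, which is the property that lets MRI and US be compared in this feature space. The paper's dense correlation and coupled convex optimization would then operate on such feature volumes across pyramid levels.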
Related papers
- IMPACT: A Generic Semantic Loss for Multimodal Medical Image Registration [0.46904601975060667]
IMPACT (Image Metric with Pretrained model-Agnostic Comparison for Transmodality registration) is a novel similarity metric designed for robust multimodal image registration. It defines a semantic similarity measure based on the comparison of deep features extracted from large-scale pretrained segmentation models. It was evaluated on five challenging 3D registration tasks involving thoracic CT/CBCT and pelvic MR/CT datasets.
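The feature-comparison idea behind such a semantic metric can be sketched as a mean cosine similarity between deep feature maps. This is a hedged toy version, not IMPACT itself: the pretrained segmentation backbone is omitted, and any (C, H, W) arrays stand in for its extracted features.

```python
import numpy as np

def semantic_similarity(feat_fixed, feat_moving):
    """Toy feature-space similarity between two (C, H, W) feature maps.

    Computes the mean per-location cosine similarity across channels;
    1.0 means the feature maps are identical up to per-location scale.
    """
    f = feat_fixed.reshape(feat_fixed.shape[0], -1)
    m = feat_moving.reshape(feat_moving.shape[0], -1)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    m = m / (np.linalg.norm(m, axis=0, keepdims=True) + 1e-8)
    return float((f * m).sum(axis=0).mean())
```

In a registration loop, such a score (negated) would serve as the similarity loss between the fixed image's features and those of the warped moving image.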
arXiv Detail & Related papers (2025-03-31T14:08:21Z) - Unsupervised Multimodal 3D Medical Image Registration with Multilevel Correlation Balanced Optimization [22.633633605566214]
We propose an unsupervised multimodal medical image registration method based on multilevel correlation balanced optimization. For preoperative medical images in different modalities, the alignment and stacking of valid information is achieved by the maximum fusion between deformation fields.
arXiv Detail & Related papers (2024-09-08T09:38:59Z) - Weakly supervised alignment and registration of MR-CT for cervical cancer radiotherapy [9.060365057476133]
Cervical cancer is one of the leading causes of death in women.
We propose a preliminary spatial alignment algorithm and a weakly supervised multimodal registration network.
arXiv Detail & Related papers (2024-05-21T15:05:51Z) - Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z) - M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [66.89632406480949]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) to perform diverse segmentation tasks on medical images. Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets spanning four medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z) - Model-Guided Multi-Contrast Deep Unfolding Network for MRI
Super-resolution Reconstruction [68.80715727288514]
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix.
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
arXiv Detail & Related papers (2022-09-15T03:58:30Z) - Unsupervised Image Registration Towards Enhancing Performance and
Explainability in Cardiac And Brain Image Analysis [3.5718941645696485]
Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging.
We present an unsupervised deep learning registration methodology which can accurately model affine and non-rigid transformations.
Our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations.
arXiv Detail & Related papers (2022-03-07T12:54:33Z) - Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z) - Modality Completion via Gaussian Process Prior Variational Autoencoders
for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where either, two, or three of four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z) - Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain
Adaptation [9.659642285903418]
Cross-modality synthesis of medical images can reduce the costly annotation burden on radiologists.
We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups.
arXiv Detail & Related papers (2021-03-05T16:22:31Z) - Unsupervised Multimodal Image Registration with Adaptative Gradient
Guidance [23.461130560414805]
Unsupervised learning-based methods have demonstrated promising performance over accuracy and efficiency in deformable image registration.
The estimated deformation fields of the existing methods fully rely on the to-be-registered image pair.
We propose a novel multimodal registration framework, which leverages the deformation fields estimated from both.
arXiv Detail & Related papers (2020-11-12T05:47:20Z) - Unsupervised Bidirectional Cross-Modality Adaptation via Deeply
Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named as Synergistic Image and Feature Alignment (SIFA)
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.