Zero-shot Multi-Contrast Brain MRI Registration by Intensity Randomizing T1-weighted MRI (LUMIR25)
- URL: http://arxiv.org/abs/2602.06292v1
- Date: Fri, 06 Feb 2026 01:17:49 GMT
- Title: Zero-shot Multi-Contrast Brain MRI Registration by Intensity Randomizing T1-weighted MRI (LUMIR25)
- Authors: Hengjie Liu, Yimeng Dou, Di Xu, Xinyi Fu, Dan Ruan, Ke Sheng
- Abstract summary: We summarize our submission to the LUMIR25 challenge in Learn2Reg 2025, which achieved 1st place overall on the test set. This year's task focuses on zero-shot registration under domain shifts (high-field MRI, pathological brains, and various MRI contrasts). We employ three simple but effective strategies to achieve good generalization to diverse contrasts with a model trained on T1-weighted MRI only.
- Score: 9.111478367494433
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we summarize the methods and results of our submission to the LUMIR25 challenge in Learn2Reg 2025, which achieved 1st place overall on the test set. Extended from LUMIR24, this year's task focuses on zero-shot registration under domain shifts (high-field MRI, pathological brains, and various MRI contrasts), while the training data comprise only in-domain T1-weighted brain MRI. We start with a meticulous analysis of the LUMIR24 winners to identify the main contributors to good monomodal registration performance. To achieve good generalization to diverse contrasts with a model trained on T1-weighted MRI only, we employ three simple but effective strategies: (i) a multimodal loss based on the modality-independent neighborhood descriptor (MIND), (ii) intensity randomization for appearance augmentation, and (iii) lightweight instance-specific optimization (ISO) on feature encoders at inference time. On the validation set, our approach achieves reasonable T1-T2 registration accuracy while maintaining good deformation regularity.
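Strategies (i) and (ii) from the abstract can be sketched in a few lines of NumPy. The snippet below is an illustrative simplification, not the authors' implementation: a toy MIND-style self-similarity descriptor (patch size 1 for brevity, rather than the patch-based distances of the actual MIND), and a random piecewise-linear intensity remapping as appearance augmentation. All function names and parameter choices are hypothetical.

```python
import numpy as np

def randomize_intensity(vol, n_knots=8, rng=None):
    """Appearance augmentation: remap intensities through a random
    piecewise-linear (possibly non-monotonic) transfer function to
    simulate unseen MRI contrasts. Assumes vol is normalized to [0, 1]."""
    rng = np.random.default_rng(rng)
    xs = np.linspace(0.0, 1.0, n_knots)           # control-point inputs
    ys = rng.uniform(0.0, 1.0, size=n_knots)      # random control-point outputs
    out = np.interp(vol, xs, ys)
    # Renormalize so downstream losses see a consistent intensity range.
    return ((out - out.min()) / (out.max() - out.min() + 1e-8)).astype(np.float32)

def mind_like(vol, eps=1e-6):
    """Toy MIND-style descriptor: squared distances to the six axis-aligned
    neighbors, exponentially weighted by a local variance estimate, giving a
    6-channel feature that is largely invariant to image contrast."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    d = np.stack([(vol - np.roll(vol, off, axis=(0, 1, 2))) ** 2 for off in offsets])
    v = d.mean(axis=0) + eps                      # variance estimate V(x)
    desc = np.exp(-d / v)                         # MIND(x, r) = exp(-D(x, x+r) / V(x))
    return desc / (desc.max(axis=0, keepdims=True) + eps)

def mind_loss(fixed, moving):
    """Modality-independent similarity: mean absolute descriptor difference."""
    return float(np.abs(mind_like(fixed) - mind_like(moving)).mean())

t1 = np.random.rand(8, 8, 8).astype(np.float32)   # toy "T1" volume
aug = randomize_intensity(t1, rng=0)              # synthetic new contrast
```

Comparing descriptors rather than raw intensities is what allows a loss computed between a T1 image and its randomized-contrast counterpart to remain meaningful during training.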
Related papers
- Self-Supervised Weighted Image Guided Quantitative MRI Super-Resolution [0.4757311250629737]
High-resolution (HR) quantitative MRI (qMRI) relaxometry provides objective tissue characterization but remains clinically underutilized due to lengthy acquisition times. We propose a physics-informed, self-supervised framework for qMRI super-resolution that uses routinely acquired HR weighted MRI (wMRI) scans as guidance.
arXiv Detail & Related papers (2025-12-19T14:15:31Z) - The Brain Resection Multimodal Image Registration (ReMIND2Reg) 2025 Challenge [42.51640997446028]
The ReMIND2Reg 2025 Challenge provides the largest public benchmark for this task, built upon the ReMIND dataset. It offers 99 training cases, 5 validation cases, and 10 private test cases comprising paired 3D ceT1 MRI, T2 MRI, and post-resection 3D iUS volumes. Data are provided without annotations for training, while validation and test performance are evaluated on manually annotated anatomical landmarks.
arXiv Detail & Related papers (2025-08-13T09:31:06Z) - Large-scale Multi-sequence Pretraining for Generalizable MRI Analysis in Versatile Clinical Applications [15.846703688846086]
In this study, we present PRISM, a foundation model PRe-trained with large-scale multI-Sequence MRI. We propose a novel pretraining paradigm that disentangles anatomically invariant features from sequence-specific variations in MRI. PRISM consistently outperformed both non-pretrained models and existing foundation models.
arXiv Detail & Related papers (2025-08-10T03:31:46Z) - A 3D Cross-modal Keypoint Descriptor for MR-US Matching and Registration [0.053801353100098995]
Intraoperative registration of real-time ultrasound to preoperative Magnetic Resonance Imaging (MRI) remains an unsolved problem. We propose a novel 3D cross-modal keypoint descriptor for MRI-iUS matching and registration. Our method employs patient-specific matching-by-synthesis, generating synthetic iUS volumes from preoperative MRI.
arXiv Detail & Related papers (2025-07-24T16:19:08Z) - Beyond the LUMIR challenge: The pathway to foundational registration models [25.05315856123745]
The Large-scale Unsupervised Brain MRI Image Registration (LUMIR) challenge is a next-generation benchmark designed to assess and advance unsupervised brain MRI registration. LUMIR provides over 4,000 preprocessed T1-weighted brain MRIs for training without any label maps, encouraging biologically plausible deformation modeling. A total of 1,158 subjects and over 4,000 image pairs were included for evaluation.
arXiv Detail & Related papers (2025-05-30T03:07:58Z) - NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z) - Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z) - fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z) - Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction. We show how to unfold the iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z) - Fast T2w/FLAIR MRI Acquisition by Optimal Sampling of Information Complementary to Pre-acquired T1w MRI [52.656075914042155]
We propose an iterative framework to optimize the under-sampling pattern for MRI acquisition of another modality.
We have demonstrated superior performance of our learned under-sampling patterns on a public dataset.
arXiv Detail & Related papers (2021-11-11T04:04:48Z) - A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging [90.29017019187282]
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analyse of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a dice score of 93.2% and a mean surface to a surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.