Beyond the LUMIR challenge: The pathway to foundational registration models
- URL: http://arxiv.org/abs/2505.24160v1
- Date: Fri, 30 May 2025 03:07:58 GMT
- Title: Beyond the LUMIR challenge: The pathway to foundational registration models
- Authors: Junyu Chen, Shuwen Wei, Joel Honkamaa, Pekka Marttinen, Hang Zhang, Min Liu, Yichao Zhou, Zuopeng Tan, Zhuoyuan Wang, Yi Wang, Hongchao Zhou, Shunbo Hu, Yi Zhang, Qian Tao, Lukas Förner, Thomas Wendler, Bailiang Jian, Benedikt Wiestler, Tim Hable, Jin Kim, Dan Ruan, Frederic Madesta, Thilo Sentker, Wiebke Heyer, Lianrui Zuo, Yuwei Dai, Jing Wu, Jerry L. Prince, Harrison Bai, Yong Du, Yihao Liu, Alessa Hering, Reuben Dorent, Lasse Hansen, Mattias P. Heinrich, Aaron Carass,
- Abstract summary: The Large-scale Unsupervised Brain MRI Image Registration (LUMIR) challenge is a next-generation benchmark designed to assess and advance unsupervised brain MRI registration. LUMIR provides over 4,000 preprocessed T1-weighted brain MRIs for training without any label maps, encouraging biologically plausible deformation modeling. A total of 1,158 subjects and over 4,000 image pairs were included for evaluation.
- Score: 25.05315856123745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image challenges have played a transformative role in advancing the field, catalyzing algorithmic innovation and establishing new performance standards across diverse clinical applications. Image registration, a foundational task in neuroimaging pipelines, has similarly benefited from the Learn2Reg initiative. Building on this foundation, we introduce the Large-scale Unsupervised Brain MRI Image Registration (LUMIR) challenge, a next-generation benchmark designed to assess and advance unsupervised brain MRI registration. Distinct from prior challenges that leveraged anatomical label maps for supervision, LUMIR removes this dependency by providing over 4,000 preprocessed T1-weighted brain MRIs for training without any label maps, encouraging biologically plausible deformation modeling through self-supervision. In addition to evaluating performance on 590 held-out test subjects, LUMIR introduces a rigorous suite of zero-shot generalization tasks, spanning out-of-domain imaging modalities (e.g., FLAIR, T2-weighted, T2*-weighted), disease populations (e.g., Alzheimer's disease), acquisition protocols (e.g., 9.4T MRI), and species (e.g., macaque brains). A total of 1,158 subjects and over 4,000 image pairs were included for evaluation. Performance was assessed using both segmentation-based metrics (Dice coefficient, 95th percentile Hausdorff distance) and landmark-based registration accuracy (target registration error). Across both in-domain and zero-shot tasks, deep learning-based methods consistently achieved state-of-the-art accuracy while producing anatomically plausible deformation fields. The top-performing deep learning-based models demonstrated diffeomorphic properties and inverse consistency, outperforming several leading optimization-based methods, and showing strong robustness to most domain shifts, the exception being a drop in performance on out-of-domain contrasts.
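The evaluation criteria described above map directly onto a few standard computations. Below is a minimal Python sketch, not the challenge's official evaluation pipeline, of the kinds of metrics the abstract names: Dice overlap and 95th-percentile Hausdorff distance from warped label maps, plus the fraction of voxels with a non-positive Jacobian determinant, a common proxy for deformation plausibility. Function names, the boolean-mask inputs, and the (3, D, H, W) displacement-field convention are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def dice(fixed_seg, warped_seg, label):
    """Dice coefficient for one anatomical label."""
    a, b = fixed_seg == label, warped_seg == label
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else np.nan


def hd95(fixed_seg, warped_seg, label, spacing=(1.0, 1.0, 1.0)):
    """Symmetric 95th-percentile Hausdorff distance (in mm) for one label."""
    a, b = fixed_seg == label, warped_seg == label
    surf_a = a & ~binary_erosion(a)   # boundary voxels of each mask
    surf_b = b & ~binary_erosion(b)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    d = np.concatenate([dist_to_a[surf_b], dist_to_b[surf_a]])
    return np.percentile(d, 95)


def nonpositive_jacobian_fraction(disp):
    """Fraction of voxels where the deformation folds (det J <= 0).

    `disp` is a displacement field of shape (3, D, H, W) in voxel units;
    the deformation is phi(x) = x + disp(x).
    """
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)])
    # Jacobian of phi = I + d(disp)/dx, arranged as (D, H, W, 3, 3)
    jac = np.eye(3) + np.moveaxis(grads, (0, 1), (-2, -1))
    det = np.linalg.det(jac)
    return float((det <= 0).mean())
```

Target registration error follows the same pattern: warp the landmark coordinates of one image with the estimated displacement field and report the Euclidean distance, in millimetres, to the corresponding landmarks in the other image.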
Related papers
- Extreme Cardiac MRI Analysis under Respiratory Motion: Results of the CMRxMotion Challenge [56.28872161153236]
Deep learning models have achieved state-of-the-art performance in automated Cardiac Magnetic Resonance (CMR) analysis.
The efficacy of these models is highly dependent on the availability of high-quality, artifact-free images.
To promote research in this domain, we organized the MICCAI CMRxMotion challenge.
arXiv Detail & Related papers (2025-07-25T11:12:21Z) - Towards a general-purpose foundation model for fMRI analysis [58.06455456423138]
We introduce NeuroSTORM, a framework that learns from 4D fMRI volumes and enables efficient knowledge transfer across diverse applications.
NeuroSTORM is pre-trained on 28.65 million fMRI frames (>9,000 hours) from over 50,000 subjects across multiple centers and ages 5 to 100.
It outperforms existing methods across five tasks: age/gender prediction, phenotype prediction, disease diagnosis, fMRI-to-image retrieval, and task-based fMRI.
arXiv Detail & Related papers (2025-06-11T23:51:01Z) - PathVLM-R1: A Reinforcement Learning-Driven Reasoning Model for Pathology Visual-Language Tasks [15.497221591506625]
We propose PathVLM-R1, a vision-language model designed specifically for pathological images.
The model is based on Qwen2.5-VL-7B-Instruct, with its performance on pathological tasks enhanced through carefully designed post-training strategies.
arXiv Detail & Related papers (2025-04-12T15:32:16Z) - Large Scale Supervised Pretraining For Traumatic Brain Injury Segmentation [1.1203032569015594]
Segmentation of lesions in msTBI presents a significant challenge due to the diverse characteristics of these lesions.
The AIMS-TBI Challenge 2024 aims to advance innovative segmentation algorithms specifically designed for T1-weighted MRI data.
We train a ResEnc-L network on a comprehensive collection of datasets covering various anatomical and pathological structures.
The model is then fine-tuned on msTBI-specific data to optimize its performance for the unique characteristics of T1-weighted MRI scans.
arXiv Detail & Related papers (2025-04-09T09:52:45Z) - LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z) - Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z) - Significantly improving zero-shot X-ray pathology classification via fine-tuning pre-trained image-text encoders [50.689585476660554]
We propose a new fine-tuning strategy that includes positive-pair loss relaxation and random sentence sampling.
Our approach consistently improves overall zero-shot pathology classification across four chest X-ray datasets and three pre-trained models.
arXiv Detail & Related papers (2022-12-14T06:04:18Z) - 3D Inception-Based TransMorph: Pre- and Post-operative Multi-contrast MRI Registration in Brain Tumors [1.2234742322758418]
We propose a two-stage cascaded network based on the Inception and TransMorph models.
The loss function combined a standard image similarity measure, a diffusion regularizer, and an edge-map similarity measure added to overcome intensity dependence.
We achieved 6th place at the time of model submission in the final testing phase of the BraTS-Reg challenge.
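As an illustration of that kind of composite objective, here is a minimal PyTorch sketch, not the authors' implementation: the MSE stand-in for the similarity term, the finite-difference edge map, and the weights `lam` and `gamma` are all assumptions made for brevity.

```python
import torch
import torch.nn.functional as F


def edge_map(img):
    """Gradient-magnitude 'edge map' of a 3-D image tensor of shape (N, 1, D, H, W)."""
    dz = img[:, :, 1:, :, :] - img[:, :, :-1, :, :]
    dy = img[:, :, :, 1:, :] - img[:, :, :, :-1, :]
    dx = img[:, :, :, :, 1:] - img[:, :, :, :, :-1]
    # Pad each difference back to the original shape so the components can be combined.
    dz = F.pad(dz, (0, 0, 0, 0, 0, 1))
    dy = F.pad(dy, (0, 0, 0, 1, 0, 0))
    dx = F.pad(dx, (0, 1, 0, 0, 0, 0))
    return torch.sqrt(dz ** 2 + dy ** 2 + dx ** 2 + 1e-8)


def diffusion_regularizer(disp):
    """Mean squared spatial gradient of a displacement field of shape (N, 3, D, H, W)."""
    dz = disp[:, :, 1:, :, :] - disp[:, :, :-1, :, :]
    dy = disp[:, :, :, 1:, :] - disp[:, :, :, :-1, :]
    dx = disp[:, :, :, :, 1:] - disp[:, :, :, :, :-1]
    return (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()


def registration_loss(warped, fixed, disp, lam=1.0, gamma=1.0):
    """Intensity similarity + lam * diffusion regularizer + gamma * edge-map similarity."""
    sim = F.mse_loss(warped, fixed)                       # stand-in for the similarity term
    edge = F.mse_loss(edge_map(warped), edge_map(fixed))  # intensity-independent edge term
    return sim + lam * diffusion_regularizer(disp) + gamma * edge
```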
arXiv Detail & Related papers (2022-12-08T22:00:07Z) - Evaluating U-net Brain Extraction for Multi-site and Longitudinal Preclinical Stroke Imaging [0.4310985013483366]
Convolutional neural networks (CNNs) can improve accuracy and reduce operator time.
We developed a deep-learning mouse brain extraction tool using a U-net CNN.
We trained, validated, and tested a typical U-net model on 240 multimodal MRI datasets.
arXiv Detail & Related papers (2022-03-11T02:00:27Z) - The Brain Tumor Sequence Registration (BraTS-Reg) Challenge: Establishing Correspondence Between Pre-Operative and Follow-up MRI Scans of Diffuse Glioma Patients [31.567542945171834]
We describe the Brain Tumor Sequence Registration (BraTS-Reg) challenge.
BraTS-Reg is the first public benchmark environment for deformable registration algorithms.
The aim of BraTS-Reg is to continue to serve as an active resource for research.
arXiv Detail & Related papers (2021-12-13T19:25:16Z) - Data-driven generation of plausible tissue geometries for realistic photoacoustic image synthesis [53.65837038435433]
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate".
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
arXiv Detail & Related papers (2021-03-29T11:30:18Z) - Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of features from the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We propose a slice-wise semi-supervised method for tumour detection based on computing a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.