Learn2Reg 2024: New Benchmark Datasets Driving Progress on New Challenges
- URL: http://arxiv.org/abs/2509.01217v2
- Date: Mon, 08 Sep 2025 11:50:29 GMT
- Title: Learn2Reg 2024: New Benchmark Datasets Driving Progress on New Challenges
- Authors: Lasse Hansen, Wiebke Heyer, Christoph Großbröhmer, Frederic Madesta, Thilo Sentker, Wang Jiazheng, Yuxi Zhang, Hang Zhang, Min Liu, Junyi Wang, Xi Zhu, Yuhua Li, Liwen Wang, Daniil Morozov, Nazim Haouchine, Joel Honkamaa, Pekka Marttinen, Yichao Zhou, Zuopeng Tan, Zhuoyuan Wang, Yi Wang, Hongchao Zhou, Shunbo Hu, Yi Zhang, Qian Tao, Lukas Förner, Thomas Wendler, Bailiang Jian, Christian Wachinger, Jin Kim, Dan Ruan, Marek Wodzinski, Henning Müller, Tony C. W. Mok, Xi Jia, Jinming Duan, Mikael Brudfors, Seyed-Ahmad Ahmadi, Yunzheng Zhu, William Hsu, Tina Kapur, William M. Wells, Alexandra Golby, Aaron Carass, Harrison Bai, Yihao Liu, Perrine Paul-Gilloteaux, Joakim Lindblad, Nataša Sladoje, Andreas Walter, Junyu Chen, Reuben Dorent, Alessa Hering, Mattias P. Heinrich,
- Abstract summary: Learn2Reg 2024 introduces large-scale multi-modal registration and unsupervised inter-subject brain registration. It also introduces the first microscopy-focused benchmark within Learn2Reg. The new datasets inspired new method developments, including invertibility constraints, pyramid features, keypoint alignment and instance optimisation.
- Score: 63.93336136048566
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Medical image registration is critical for clinical applications, and fair benchmarking of different methods is essential for monitoring ongoing progress. To date, the Learn2Reg 2020-2023 challenges have released several complementary datasets and established metrics for evaluation. However, these editions did not capture all aspects of the registration problem, particularly in terms of modality diversity and task complexity. To address these limitations, the 2024 edition introduces three new tasks, including large-scale multi-modal registration and unsupervised inter-subject brain registration, as well as the first microscopy-focused benchmark within Learn2Reg. The new datasets also inspired new method developments, including invertibility constraints, pyramid features, keypoint alignment and instance optimisation.
Related papers
- Standardized Evaluation of Automatic Methods for Perivascular Spaces Segmentation in MRI -- MICCAI 2024 Challenge Results [11.040060608562362]
This paper presents the EPVS Challenge organized at MICCAI 2024. It aims to advance the development of automated algorithms for EPVS segmentation across multi-site data. The winning method employed the MedNeXt architecture with a dual 2D/3D strategy for handling varying slice thicknesses.
arXiv Detail & Related papers (2025-12-20T03:45:14Z) - TUS-REC2024: A Challenge to Reconstruct 3D Freehand Ultrasound Without External Tracker [25.14284964227897]
The TUS-REC2024 Challenge was established to benchmark and accelerate progress in trackerless 3D ultrasound reconstruction. The challenge attracted over 43 registered teams, of which 6 teams submitted 21 valid dockerized solutions. Results highlight both the progress and current limitations of state-of-the-art approaches in this domain.
arXiv Detail & Related papers (2025-06-26T20:52:18Z) - DailyDVS-200: A Comprehensive Benchmark Dataset for Event-Based Action Recognition [51.96660522869841]
DailyDVS-200 is a benchmark dataset tailored for the event-based action recognition community.
It covers 200 action categories across real-world scenarios, recorded by 47 participants, and comprises more than 22,000 event sequences.
DailyDVS-200 is annotated with 14 attributes, ensuring a detailed characterization of the recorded actions.
arXiv Detail & Related papers (2024-07-06T15:25:10Z) - Multi-Label Continual Learning for the Medical Domain: A Novel Benchmark [47.52603262576663]
We propose a novel benchmark combining the challenges of new class arrivals and domain shifts in a single framework.
This benchmark aims to model a realistic CL setting for the multi-label classification problem in medical imaging.
arXiv Detail & Related papers (2024-04-10T09:35:36Z) - The KiTS21 Challenge: Automatic segmentation of kidneys, renal tumors, and renal cysts in corticomedullary-phase CT [50.41526598153698]
This paper presents the challenge report for the 2021 Kidney and Kidney Tumor Challenge (KiTS21).
KiTS21 is a sequel to its first edition in 2019, and it features a variety of innovations in how the challenge was designed.
The top-performing teams achieved a significant improvement over the state of the art set in 2019, and this performance is shown to inch ever closer to human-level performance.
arXiv Detail & Related papers (2023-07-05T02:00:14Z) - MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning [90.17500229142755]
The first Multimodal Emotion Recognition Challenge (MER 2023) was successfully held at ACM Multimedia.
This paper introduces the motivation behind this challenge, describes the benchmark dataset, and provides some statistics about the participants.
We believe this high-quality dataset can become a new benchmark in multimodal emotion recognition, especially for the Chinese research community.
arXiv Detail & Related papers (2023-04-18T13:23:42Z) - NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research [96.53307645791179]
We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks from OCR, to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
arXiv Detail & Related papers (2022-11-15T18:57:46Z) - An Informative Tracking Benchmark [133.0931262969931]
We develop a small and informative tracking benchmark (ITB), comprising 7% of the 1.2M frames from existing and newly collected datasets.
We select the most informative sequences from existing benchmarks, taking into account 1) challenge level, 2) discriminative strength, and 3) density of appearance variations.
By analyzing the results of 15 state-of-the-art trackers re-trained on the same data, we determine the effective methods for robust tracking under each scenario.
arXiv Detail & Related papers (2021-12-13T07:56:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.