MICCAI STS 2024 Challenge: Semi-Supervised Instance-Level Tooth Segmentation in Panoramic X-ray and CBCT Images
- URL: http://arxiv.org/abs/2511.22911v1
- Date: Fri, 28 Nov 2025 06:33:55 GMT
- Title: MICCAI STS 2024 Challenge: Semi-Supervised Instance-Level Tooth Segmentation in Panoramic X-ray and CBCT Images
- Authors: Yaqi Wang, Zhi Li, Chengyu Wu, Jun Liu, Yifan Zhang, Jiaxue Ni, Qian Luo, Jialuo Chen, Hongyuan Zhang, Jin Liu, Can Han, Kaiwen Fu, Changkai Ji, Xinxu Cai, Jing Hao, Zhihao Zheng, Shi Xu, Junqiang Chen, Qianni Zhang, Dahong Qian, Shuai Wang, Huiyu Zhou,
- Abstract summary: This research aimed to benchmark and advance semi-supervised learning (SSL) as a solution to data scarcity in tooth segmentation. We organized the 2nd Semi-supervised Teeth Segmentation (STS 2024) Challenge at MICCAI 2024 and provided a large-scale dataset comprising over 90,000 2D images and 3D axial slices, including 2,380 OPG images and 330 CBCT scans. The winning semi-supervised models demonstrated impressive performance gains over a fully-supervised nnU-Net baseline trained only on the labeled data.
- Score: 33.12982357985314
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Orthopantomograms (OPGs) and Cone-Beam Computed Tomography (CBCT) are vital for dentistry, but creating large datasets for automated tooth segmentation is hindered by the labor-intensive process of manual instance-level annotation. This research aimed to benchmark and advance semi-supervised learning (SSL) as a solution for this data scarcity problem. We organized the 2nd Semi-supervised Teeth Segmentation (STS 2024) Challenge at MICCAI 2024. We provided a large-scale dataset comprising over 90,000 2D images and 3D axial slices, which includes 2,380 OPG images and 330 CBCT scans, all featuring detailed instance-level FDI annotations on part of the data. The challenge attracted 114 (OPG) and 106 (CBCT) registered teams. To ensure algorithmic excellence and full transparency, we rigorously evaluated the valid, open-source submissions from the top 10 (OPG) and top 5 (CBCT) teams, respectively. All successful submissions were deep learning-based SSL methods. The winning semi-supervised models demonstrated impressive performance gains over a fully-supervised nnU-Net baseline trained only on the labeled data. For the 2D OPG track, the top method improved the Instance Affinity (IA) score by over 44 percentage points. For the 3D CBCT track, the winning approach boosted the Instance Dice score by 61 percentage points. This challenge confirms the substantial benefit of SSL for complex, instance-level medical image segmentation tasks where labeled data is scarce. The most effective approaches consistently leveraged hybrid semi-supervised frameworks that combined knowledge from foundational models like SAM with multi-stage, coarse-to-fine refinement pipelines. Both the challenge dataset and the participants' submitted code have been made publicly available on GitHub (https://github.com/ricoleehduu/STS-Challenge-2024), ensuring transparency and reproducibility.
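The instance-level metrics reported above (Instance Dice, Instance Affinity) score predictions per tooth rather than over the whole binary mask. As a rough illustration only, assuming label maps where each tooth carries its FDI number (the challenge's exact metric definitions live in the linked GitHub repository), a per-instance Dice average might be sketched as:

```python
import numpy as np

def instance_dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Average Dice over tooth instances, matched by shared FDI label.

    pred, gt: integer label maps where 0 is background and each nonzero
    value is an FDI tooth number (e.g. 11-48). Hypothetical sketch, not
    the official STS 2024 metric implementation.
    """
    labels = np.union1d(np.unique(gt), np.unique(pred))
    labels = labels[labels != 0]
    if labels.size == 0:
        return 1.0  # both maps empty: treat as a perfect match
    scores = []
    for fdi in labels:
        p, g = pred == fdi, gt == fdi
        denom = p.sum() + g.sum()
        # A tooth present in only one of the two maps scores 0.
        scores.append(2.0 * np.logical_and(p, g).sum() / denom if denom else 0.0)
    return float(np.mean(scores))

gt = np.array([[0, 11, 11],
               [0, 21, 21]])
pred = np.array([[0, 11, 11],
                 [0, 21, 0]])
# Tooth 11: Dice 1.0; tooth 21: Dice 2*1/(1+2) = 2/3; mean = 5/6
print(instance_dice(pred, gt))
```

Averaging per instance means a missed small tooth hurts the score as much as a missed large one, which is why the instance-level gains reported here can be much larger than whole-mask Dice improvements.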
Related papers
- Entropy-Guided Agreement-Diversity: A Semi-Supervised Active Learning Framework for Fetal Head Segmentation in Ultrasound [4.594829845106234]
We propose a two-stage Active Learning sampler, Entropy-Guided Agreement-Diversity (EGAD), for fetal head segmentation. In experiments, SSL-EGAD achieves average Dice scores of 94.57% and 96.32% on two public datasets for fetal head segmentation.
arXiv Detail & Related papers (2026-01-24T13:23:18Z) - FUGC: Benchmarking Semi-Supervised Learning Methods for Cervical Segmentation [63.7829089874007]
This paper introduces the Fetal Ultrasound Grand Challenge (FUGC), the first benchmark for semi-supervised learning in cervical segmentation. FUGC provides a dataset of 890 TVS images, including 500 training images, 90 validation images, and 300 test images. Methods were evaluated using the Dice Similarity Coefficient (DSC), Hausdorff Distance (HD), and runtime (RT), combined with weights of 0.4/0.4/0.2.
arXiv Detail & Related papers (2026-01-22T01:34:39Z) - MICCAI STSR 2025 Challenge: Semi-Supervised Teeth and Pulp Segmentation and CBCT-IOS Registration [30.64602516984303]
Cone-Beam Computed Tomography (CBCT) and Intraoral Scanning (IOS) are essential for digital dentistry, but data scarcity limits automated solutions. We organized the STSR 2025 Challenge at MICCAI 2025 to benchmark semi-supervised learning (SSL) in this domain, providing 60 labeled and 640 unlabeled IOS samples, plus 30 labeled and 250 unlabeled CBCT scans with varying resolutions and fields of view. The challenge attracted strong community participation, with top teams submitting open-source deep learning-based SSL solutions.
arXiv Detail & Related papers (2025-12-02T15:29:04Z) - Multi-Class Segmentation of Aortic Branches and Zones in Computed Tomography Angiography: The AortaSeg24 Challenge [55.252714550918824]
The AortaSeg24 MICCAI Challenge introduced the first dataset of 100 CTA volumes annotated for 23 clinically relevant aortic branches and zones. This paper presents the challenge design, dataset details, evaluation metrics, and an in-depth analysis of the top-performing algorithms.
arXiv Detail & Related papers (2025-02-07T21:09:05Z) - Wound Tissue Segmentation in Diabetic Foot Ulcer Images Using Deep Learning: A Pilot Study [5.397013836968946]
We have created a DFUTissue dataset for the research community to evaluate wound tissue segmentation algorithms.
The dataset contains 110 images with tissues labeled by wound experts and 600 unlabeled images.
Due to the limited amount of annotated data, our framework consists of both supervised learning (SL) and semi-supervised learning (SSL) phases.
arXiv Detail & Related papers (2024-06-23T05:01:51Z) - Label-efficient multi-organ segmentation with a diffusion model [10.470918676663405]
We propose a label-efficient framework using knowledge transfer from a pre-trained diffusion model for CT multi-organ segmentation. In fine-tuning, two strategies, linear classification and fine-tuning the decoder, are employed to enhance segmentation performance. Compared to state-of-the-art methods for multi-organ segmentation, our method achieves competitive performance on the FLARE 2022 dataset.
arXiv Detail & Related papers (2024-02-23T09:25:57Z) - Iterative Semi-Supervised Learning for Abdominal Organs and Tumor Segmentation [4.952008176585512]
The FLARE23 challenge provides a large-scale dataset with both partially and fully annotated data.
We propose to use the strategy of Semi-Supervised Learning (SSL) and iterative pseudo labeling to address FLARE23.
Our approach achieves an average DSC score of 89.63% for organs and 46.07% for tumors on the online validation leaderboard.
arXiv Detail & Related papers (2023-10-02T12:45:13Z) - The Second-place Solution for CVPR VISION 23 Challenge Track 1 -- Data-Efficient Defect Detection [3.4853769431047907]
The Vision Challenge Track 1 for Data-Efficient Defect Detection requires competitors to instance segment 14 industrial inspection datasets in a data-deficient setting.
This report introduces the technical details of the team Aoi-overfitting-Team for this challenge.
arXiv Detail & Related papers (2023-06-25T03:37:02Z) - The Devil is in the Points: Weakly Semi-Supervised Instance Segmentation via Point-Guided Mask Representation [61.027468209465354]
We introduce a novel learning scheme named weakly semi-supervised instance segmentation (WSSIS) with point labels.
We propose a method for WSSIS that can effectively leverage the budget-friendly point labels as a powerful weak supervision source.
We conduct extensive experiments on COCO and BDD100K datasets, and the proposed method achieves promising results comparable to those of the fully-supervised model.
arXiv Detail & Related papers (2023-03-27T10:11:22Z) - Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z) - Large-scale Unsupervised Semantic Segmentation [163.3568726730319]
We propose a new problem of large-scale unsupervised semantic segmentation (LUSS) with a newly created benchmark dataset to track the research progress.
Based on the ImageNet dataset, we propose the ImageNet-S dataset with 1.2 million training images and 40k high-quality semantic segmentation annotations for evaluation.
arXiv Detail & Related papers (2021-06-06T15:02:11Z) - VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at vertebra level, scan level, and at different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.