U-Mamba2-SSL for Semi-Supervised Tooth and Pulp Segmentation in CBCT
- URL: http://arxiv.org/abs/2509.20154v2
- Date: Tue, 30 Sep 2025 12:23:13 GMT
- Title: U-Mamba2-SSL for Semi-Supervised Tooth and Pulp Segmentation in CBCT
- Authors: Zhi Qin Tan, Xiatian Zhu, Owen Addison, Yunpeng Li
- Abstract summary: We propose U-Mamba2-SSL, a novel semi-supervised learning framework that builds on the U-Mamba2 model and employs a multi-stage training strategy. U-Mamba2-SSL achieved an average score of 0.789 and a DSC of 0.917 on the hidden test set, achieving first place in Task 1 of the STSR 2025 challenge.
- Score: 44.3806898357896
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurate segmentation of teeth and pulp in Cone-Beam Computed Tomography (CBCT) is vital for clinical applications like treatment planning and diagnosis. However, this process requires extensive expertise and is exceptionally time-consuming, highlighting the critical need for automated algorithms that can effectively utilize unlabeled data. In this paper, we propose U-Mamba2-SSL, a novel semi-supervised learning framework that builds on the U-Mamba2 model and employs a multi-stage training strategy. The framework first pre-trains U-Mamba2 in a self-supervised manner using a disruptive autoencoder. It then leverages unlabeled data through consistency regularization, where we introduce input and feature perturbations to ensure stable model outputs. Finally, a pseudo-labeling strategy is implemented with a reduced loss weighting to minimize the impact of potential errors. U-Mamba2-SSL achieved an average score of 0.789 and a DSC of 0.917 on the hidden test set, achieving first place in Task 1 of the STSR 2025 challenge. The code is available at https://github.com/zhiqin1998/UMamba2.
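The abstract describes a three-term training objective: a supervised loss on labeled data, a consistency term that penalizes disagreement between predictions under input/feature perturbations, and a pseudo-label term with reduced weighting. The following is an illustrative sketch of how such a combined loss could look; the function names, the MSE consistency term, and the weights `w_cons`/`w_pseudo` are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps (illustrative)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def semi_supervised_loss(pred_labeled, target, pred_weak, pred_strong,
                         pseudo_label, w_cons=1.0, w_pseudo=0.5):
    """Combine supervised, consistency, and down-weighted pseudo-label terms.

    pred_weak / pred_strong are predictions for the same sample under clean
    vs. perturbed views; the consistency term penalizes their disagreement,
    and the pseudo-label term is scaled down (w_pseudo < 1) to limit the
    impact of noisy pseudo-labels, mirroring the strategy in the abstract.
    """
    l_sup = dice_loss(pred_labeled, target)
    l_cons = np.mean((pred_weak - pred_strong) ** 2)  # MSE consistency
    l_pseudo = dice_loss(pred_strong, pseudo_label)
    return l_sup + w_cons * l_cons + w_pseudo * l_pseudo
```

With perfect predictions and agreeing views, all three terms vanish; in practice each term would be computed per voxel over CBCT volumes.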
Related papers
- MICCAI STSR 2025 Challenge: Semi-Supervised Teeth and Pulp Segmentation and CBCT-IOS Registration [30.64602516984303]
Cone-Beam Computed Tomography (CBCT) and Intraoral Scanning (IOS) are essential for digital dentistry, but data scarcity limits automated solutions. We organized the STSR 2025 Challenge at MICCAI 2025 to benchmark semi-supervised learning (SSL) in this domain. We provided 60 labeled and 640 unlabeled IOS samples, plus 30 labeled and 250 unlabeled CBCT scans with varying resolutions and fields of view. The challenge attracted strong community participation, with top teams submitting open-source deep learning-based SSL solutions.
arXiv Detail & Related papers (2025-12-02T15:29:04Z)
- Hierarchical Self-Supervised Representation Learning for Depression Detection from Speech [51.14752758616364]
Speech-based depression detection (SDD) is a promising, non-invasive alternative to traditional clinical assessments. We propose HAREN-CTC, a novel architecture that integrates multi-layer SSL features using cross-attention within a multitask learning framework. The model achieves state-of-the-art macro F1-scores of 0.81 on DAIC-WOZ and 0.82 on MODMA, outperforming prior methods across both evaluation scenarios.
arXiv Detail & Related papers (2025-10-05T09:32:12Z)
- ConMamba: Contrastive Vision Mamba for Plant Disease Detection [3.60543005189868]
Plant Disease Detection (PDD) is a key aspect of precision agriculture. Existing deep learning methods often rely on extensively annotated datasets. We propose ConMamba, a novel framework specially designed for PDD.
arXiv Detail & Related papers (2025-06-03T03:01:38Z)
- STAL: Spike Threshold Adaptive Learning Encoder for Classification of Pain-Related Biosignal Data [2.0738462952016232]
This paper presents the first application of spiking neural networks (SNNs) for the classification of chronic lower back pain (CLBP) using the EmoPain dataset.
We introduce Spike Threshold Adaptive Learning (STAL), a trainable encoder that effectively converts continuous biosignals into spike trains.
We also propose an ensemble of Spiking Recurrent Neural Network (SRNN) classifiers for the multi-stream processing of sEMG and IMU data.
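The STAL summary describes a trainable encoder that converts continuous biosignals into spike trains via thresholds. As a minimal sketch of the general idea (a fixed leaky threshold encoder, not the paper's trainable STAL), one might write:

```python
import numpy as np

def threshold_spike_encode(signal, theta=0.5, decay=0.9):
    """Illustrative threshold-based spike encoder (hypothetical parameters).

    Accumulates the input with leak factor `decay`; when the accumulator
    crosses the threshold `theta`, a spike is emitted and the accumulator
    resets. STAL itself learns its thresholds, which this sketch does not.
    """
    spikes = np.zeros_like(signal, dtype=int)
    acc = 0.0
    for i, x in enumerate(signal):
        acc = decay * acc + x
        if acc >= theta:
            spikes[i] = 1
            acc = 0.0
    return spikes
```

A constant sub-threshold input then produces periodic spikes whose rate encodes the input amplitude, which is the basic property a spiking classifier downstream relies on.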
arXiv Detail & Related papers (2024-07-11T10:15:52Z)
- Incremental Self-training for Semi-supervised Learning [56.57057576885672]
IST is simple yet effective and fits existing self-training-based semi-supervised learning methods.
We verify the proposed IST on five datasets and two types of backbone, effectively improving the recognition accuracy and learning speed.
arXiv Detail & Related papers (2024-04-14T05:02:00Z)
- Semi-Supervised Semantic Segmentation using Redesigned Self-Training for White Blood Cells [3.957784193707817]
We propose a semi-supervised learning framework to efficiently capitalize on the scarcity of the dataset available.
Self-training is a technique that utilizes the model trained on labeled data to generate pseudo-labels for the unlabeled data and then re-train on both of them.
We discover that by incorporating FixMatch in the self-training pipeline, the performance improves in the majority of cases.
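FixMatch-style self-training keeps only pseudo-labels whose weak-augmentation prediction is sufficiently confident, then trains the strong-augmentation view against them. A minimal sketch of the selection step (the threshold `tau=0.95` is FixMatch's published default; the function name is hypothetical):

```python
import numpy as np

def fixmatch_pseudo_labels(weak_probs, tau=0.95):
    """Confidence-thresholded pseudo-label selection, FixMatch-style.

    weak_probs: (N, C) softmax outputs under weak augmentation.
    Returns hard pseudo-labels and a boolean mask marking which samples
    are confident enough to enter the unsupervised loss.
    """
    conf = weak_probs.max(axis=1)     # top-class probability per sample
    labels = weak_probs.argmax(axis=1)
    mask = conf >= tau
    return labels, mask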
arXiv Detail & Related papers (2024-01-14T12:22:34Z)
- Semi-Supervised Class-Agnostic Motion Prediction with Pseudo Label Regeneration and BEVMix [59.55173022987071]
We study the potential of semi-supervised learning for class-agnostic motion prediction.
Our framework adopts a consistency-based self-training paradigm, enabling the model to learn from unlabeled data.
Our method exhibits performance comparable to weakly supervised and some fully supervised methods.
arXiv Detail & Related papers (2023-12-13T09:32:50Z)
- CroSSL: Cross-modal Self-Supervised Learning for Time-series through Latent Masking [11.616031590118014]
CroSSL allows for handling missing modalities and end-to-end cross-modal learning.
We evaluate our method on a wide range of data, including motion sensors.
arXiv Detail & Related papers (2023-07-31T17:10:10Z)
- Uncertainty-Aware Semi-Supervised Learning for Prostate MRI Zonal Segmentation [0.9176056742068814]
We propose a novel semi-supervised learning (SSL) approach that requires only a relatively small number of annotations.
Our method uses a pseudo-labeling technique that employs recent deep learning uncertainty estimation models.
Our proposed model outperformed the semi-supervised model in experiments with the ProstateX dataset and an external test set.
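Uncertainty-aware pseudo-labeling of this kind typically runs several stochastic forward passes (e.g. Monte Carlo dropout) and discards voxels whose predictions vary too much. A sketch of that gating step, under assumed names and an assumed standard-deviation criterion (the paper's exact uncertainty estimator may differ):

```python
import numpy as np

def uncertainty_filtered_pseudo_labels(mc_probs, max_std=0.1):
    """Illustrative uncertainty gating for pseudo-labels.

    mc_probs: (T, N, C) class probabilities from T stochastic forward
    passes. Samples whose worst-class predictive standard deviation
    exceeds max_std are excluded from the pseudo-label loss.
    """
    mean = mc_probs.mean(axis=0)             # (N, C) mean prediction
    std = mc_probs.std(axis=0).max(axis=1)   # (N,) worst-class uncertainty
    labels = mean.argmax(axis=1)
    keep = std <= max_std
    return labels, keep
```

Samples where the stochastic passes agree are kept as pseudo-labels; disagreement flags the prediction as unreliable and removes it from training.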
arXiv Detail & Related papers (2023-05-10T08:50:04Z)
- ProtoCon: Pseudo-label Refinement via Online Clustering and Prototypical Consistency for Efficient Semi-supervised Learning [60.57998388590556]
ProtoCon is a novel method for confidence-based pseudo-labeling.
The online nature of ProtoCon allows it to utilise the label history of the entire dataset within one training cycle.
It delivers significant gains and faster convergence over the state of the art.
arXiv Detail & Related papers (2023-03-22T23:51:54Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.