FUGC: Benchmarking Semi-Supervised Learning Methods for Cervical Segmentation
- URL: http://arxiv.org/abs/2601.15572v1
- Date: Thu, 22 Jan 2026 01:34:39 GMT
- Title: FUGC: Benchmarking Semi-Supervised Learning Methods for Cervical Segmentation
- Authors: Jieyun Bai, Yitong Tang, Zihao Zhou, Mahdi Islam, Musarrat Tabassum, Enrique Almar-Munoz, Hongyu Liu, Hui Meng, Nianjiang Lv, Bo Deng, Yu Chen, Zilun Peng, Yusong Xiao, Li Xiao, Nam-Khanh Tran, Dac-Phu Phan-Le, Hai-Dang Nguyen, Xiao Liu, Jiale Hu, Mingxu Huang, Jitao Liang, Chaolu Feng, Xuezhi Zhang, Lyuyang Tong, Bo Du, Ha-Hieu Pham, Thanh-Huy Nguyen, Min Xu, Juntao Jiang, Jiangning Zhang, Yong Liu, Md. Kamrul Hasan, Jie Gan, Zhuonan Liang, Weidong Cai, Yuxin Huang, Gongning Luo, Mohammad Yaqub, Karim Lekadir
- Abstract summary: This paper introduces the Fetal Ultrasound Grand Challenge (FUGC), the first benchmark for semi-supervised learning in cervical segmentation. FUGC provides a dataset of 890 TVS images, including 500 training images, 90 validation images, and 300 test images. Methods were evaluated using the Dice Similarity Coefficient (DSC), Hausdorff Distance (HD), and runtime (RT), with a weighted combination of 0.4/0.4/0.2.
- Score: 63.7829089874007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate segmentation of cervical structures in transvaginal ultrasound (TVS) is critical for assessing the risk of spontaneous preterm birth (PTB), yet the scarcity of labeled data limits the performance of supervised learning approaches. This paper introduces the Fetal Ultrasound Grand Challenge (FUGC), the first benchmark for semi-supervised learning in cervical segmentation, hosted at ISBI 2025. FUGC provides a dataset of 890 TVS images, including 500 training images, 90 validation images, and 300 test images. Methods were evaluated using the Dice Similarity Coefficient (DSC), Hausdorff Distance (HD), and runtime (RT), with a weighted combination of 0.4/0.4/0.2. The challenge attracted 10 teams with 82 participants submitting innovative solutions. The best-performing methods for each individual metric achieved 90.26% mDSC, 38.88 mHD, and 32.85 ms RT, respectively. FUGC establishes a standardized benchmark for cervical segmentation, demonstrates the efficacy of semi-supervised methods with limited labeled data, and provides a foundation for AI-assisted clinical PTB risk assessment.
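The abstract's evaluation protocol combines DSC, HD, and RT with weights 0.4/0.4/0.2. A minimal sketch of such a combination is below; the DSC computation is standard, but the HD and RT normalization bounds (`hd_max`, `rt_max_ms`) are illustrative assumptions, since the exact normalization used by the challenge is not given here.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / total if total > 0 else 1.0

def weighted_challenge_score(dsc: float, hd: float, rt_ms: float,
                             hd_max: float = 100.0,
                             rt_max_ms: float = 1000.0) -> float:
    """Hypothetical 0.4/0.4/0.2 combination of DSC, HD, and runtime.

    HD and RT are lower-is-better, so they are mapped to [0, 1]
    scores against assumed upper bounds before weighting.
    """
    hd_score = max(0.0, 1.0 - hd / hd_max)
    rt_score = max(0.0, 1.0 - rt_ms / rt_max_ms)
    return 0.4 * dsc + 0.4 * hd_score + 0.2 * rt_score
```

Plugging in the best per-metric results reported above (0.9026 DSC, 38.88 HD, 32.85 ms RT) yields a score in (0, 1]; the actual leaderboard score would depend on the challenge's true normalization.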
Related papers
- Beyond Benchmarks of IUGC: Rethinking Requirements of Deep Learning Methods for Intrapartum Ultrasound Biometry from Fetal Ultrasound Videos [58.71502465551297]
The Intrapartum Ultrasound Grand Challenge (IUGC), co-hosted with MICCAI 2024, was launched. IUGC introduces a clinically oriented multi-task automatic measurement framework that integrates standard plane classification, fetal head-pubic symphysis segmentation, and biometry. The challenge releases the largest multi-center intrapartum ultrasound video dataset to date, comprising 774 videos (68,106 frames) collected from three hospitals.
arXiv Detail & Related papers (2026-02-13T13:28:22Z)
- SegRap2025: A Benchmark of Gross Tumor Volume and Lymph Node Clinical Target Volume Segmentation for Radiotherapy Planning of Nasopharyngeal Carcinoma [37.82643168064292]
The SegRap2025 challenge aims to enhance the generalizability and robustness of segmentation models across imaging centers and modalities. This paper presents the challenge setup and provides a comprehensive analysis of the solutions submitted by ten participating teams.
arXiv Detail & Related papers (2026-01-28T13:11:12Z)
- The MICCAI Federated Tumor Segmentation (FeTS) Challenge 2024: Efficient and Robust Aggregation Methods for Federated Learning [9.202327404631289]
We present the design and results of the MICCAI Federated Tumor Segmentation (FeTS) Challenge 2024. It focuses on federated learning for glioma sub-region segmentation in multi-parametric MRI. A PID-controller-based method achieved the top overall ranking.
arXiv Detail & Related papers (2025-12-05T22:59:57Z)
- CURVETE: Curriculum Learning and Progressive Self-supervised Training for Medical Image Classification [1.8352113484137627]
This paper introduces a novel deep convolutional neural network, named Curriculum Learning and Progressive Self-supervised Training (CURVETE). CURVETE addresses challenges related to limited samples, enhances model generalisability, and improves overall classification performance. It achieves this by employing a curriculum learning strategy based on the granularity of sample decomposition during the training of generic unlabelled samples.
arXiv Detail & Related papers (2025-10-27T15:46:02Z)
- A Novel Attention-Augmented Wavelet YOLO System for Real-time Brain Vessel Segmentation on Transcranial Color-coded Doppler [49.03919553747297]
We propose an AI-powered, real-time CoW auto-segmentation system capable of efficiently capturing cerebral arteries. No prior studies have explored AI-driven cerebrovascular segmentation using Transcranial Color-coded Doppler (TCCD). The proposed AAW-YOLO demonstrated strong performance in segmenting both ipsilateral and contralateral CoW vessels.
arXiv Detail & Related papers (2025-08-19T14:41:22Z)
- SMILE-UHURA Challenge -- Small Vessel Segmentation at Mesoscopic Scale from Ultra-High Resolution 7T Magnetic Resonance Angiograms [60.35639972035727]
The lack of publicly available annotated datasets has impeded the development of robust, machine learning-driven segmentation algorithms.
The SMILE-UHURA challenge addresses the gap in publicly available annotated datasets by providing an annotated dataset of Time-of-Flight angiography acquired with 7T MRI.
Dice scores reached up to 0.838 ± 0.066 and 0.716 ± 0.125 on the respective datasets, with an average performance of up to 0.804 ± 0.15.
arXiv Detail & Related papers (2024-11-14T17:06:00Z)
- The state-of-the-art 3D anisotropic intracranial hemorrhage segmentation on non-contrast head CT: The INSTANCE challenge [19.72232714668029]
INSTANCE 2022 was a grand challenge held in conjunction with the 2022 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI).
It is intended to resolve the above-mentioned problems and promote the development of both intracranial hemorrhage segmentation and anisotropic data processing.
The winning method achieved an average DSC of 0.6925, a significant improvement over the proposed baseline method.
arXiv Detail & Related papers (2023-01-09T11:48:05Z)
- Self-supervised contrastive learning of echocardiogram videos enables label-efficient cardiac disease diagnosis [48.64462717254158]
We developed EchoCLR, a self-supervised contrastive learning approach tailored to echocardiogram videos.
When fine-tuned on small portions of labeled data, EchoCLR pretraining significantly improved classification performance for left ventricular hypertrophy (LVH) and aortic stenosis (AS).
EchoCLR is unique in its ability to learn representations of medical videos and demonstrates that SSL can enable label-efficient disease classification from small, labeled datasets.
arXiv Detail & Related papers (2022-07-23T19:17:26Z)
- WSSS4LUAD: Grand Challenge on Weakly-supervised Tissue Semantic Segmentation for Lung Adenocarcinoma [51.50991881342181]
This challenge includes 10,091 patch-level annotations and over 130 million labeled pixels.
The first-place team achieved an mIoU of 0.8413 (tumor: 0.8389, stroma: 0.7931, normal: 0.8919).
arXiv Detail & Related papers (2022-04-13T15:27:05Z)
- Cervical Optical Coherence Tomography Image Classification Based on Contrastive Self-Supervised Texture Learning [2.674926127069043]
This study aims to develop a computer-aided diagnosis (CADx) approach to classifying in-vivo cervical OCT images based on self-supervised learning.
Besides high-level semantic features extracted by a convolutional neural network (CNN), the proposed CADx approach leverages texture features of unlabeled cervical OCT images learned via contrastive texture learning.
arXiv Detail & Related papers (2021-08-11T07:52:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.