Unsupervised Semantic Segmentation in Synchrotron Computed Tomography with Self-Correcting Pseudo Labels
- URL: http://arxiv.org/abs/2603.00372v1
- Date: Fri, 27 Feb 2026 23:15:41 GMT
- Title: Unsupervised Semantic Segmentation in Synchrotron Computed Tomography with Self-Correcting Pseudo Labels
- Authors: Austin Yunker, Peter Kenesei, Hemant Sharma, Jun-Sang Park, Antonino Miceli, Rajkumar Kettimuthu
- Abstract summary: Deep learning has emerged as a powerful tool capable of providing a wide range of purely data-driven solutions. We introduce a novel framework that enables automatic segmentation of large, high-resolution SR-CT datasets. We find our approach improves pixel-wise accuracy and mIoU by 13.31% and 15.94%, respectively, over the baseline pseudo labels.
- Score: 2.3100447881717345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: X-ray computed tomography (CT) is a widely used imaging technique that provides detailed examinations of the internal structure of an object, with synchrotron CT (SR-CT) enabling improved data quality through higher-energy, monochromatic X-rays. While SR-CT allows for improved resolution, time-resolved experimentation, and reduced imaging artifacts, it also produces significantly larger datasets than conventional CT. Accurate and efficient evaluation of these datasets is a critical component of these workflows, yet it is often done manually, representing a major bottleneck in the analysis phase. While deep learning has emerged as a powerful tool capable of providing a wide range of purely data-driven solutions, it requires a substantial amount of labeled data for training, and manual annotation of SR-CT datasets is impractical. In this paper, we introduce a novel framework that enables automatic segmentation of large, high-resolution SR-CT datasets by eliminating the need to hand-label images for deep learning training. First, we generate pseudo labels by clustering on the voxel values, identifying regions in the volume with similar attenuation coefficients and producing an initial semantic map. Afterwards, we train a segmentation model on the pseudo labels before utilizing the Unbiased Teacher approach to self-correct them, ensuring accurate final segmentations. We find our approach improves pixel-wise accuracy and mIoU by 13.31% and 15.94%, respectively, over the baseline pseudo labels when using a magnesium crystal SR-CT sample. Additionally, we extensively evaluate the different components of our workflow, including the segmentation model, loss function, pseudo labeling strategy, and input type. Finally, we evaluate our approach on two additional samples, highlighting our framework's ability to produce segmentations that are considerably better than the original pseudo labels.
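The clustering-based pseudo-labeling step described in the abstract can be sketched as follows. This is a minimal illustration only: it clusters voxel intensities with scikit-learn's KMeans so that regions with similar attenuation coefficients share a label. The number of phases, the synthetic toy volume, and the `pseudo_label_volume` helper are all assumptions for illustration, not details from the paper.

```python
# Sketch of intensity-based pseudo labeling: cluster voxel values so that
# regions with similar attenuation coefficients receive the same label,
# yielding an initial semantic map. n_phases and the toy volume below are
# illustrative assumptions, not values taken from the paper.
import numpy as np
from sklearn.cluster import KMeans

def pseudo_label_volume(volume: np.ndarray, n_phases: int = 3, seed: int = 0) -> np.ndarray:
    """Assign each voxel a cluster id based on its intensity alone."""
    intensities = volume.reshape(-1, 1).astype(np.float32)
    km = KMeans(n_clusters=n_phases, n_init=10, random_state=seed)
    labels = km.fit_predict(intensities)
    # Reorder cluster ids by ascending mean intensity so label 0 is always
    # the darkest phase (e.g. background), making maps comparable across runs.
    order = np.argsort(km.cluster_centers_.ravel())
    remap = np.empty_like(order)
    remap[order] = np.arange(n_phases)
    return remap[labels].reshape(volume.shape)

# Toy 3-phase volume with well-separated intensity modes.
rng = np.random.default_rng(0)
toy = np.concatenate([rng.normal(m, 0.05, 4000) for m in (0.0, 0.5, 1.0)])
rng.shuffle(toy)
toy = toy.reshape(20, 20, 30)
semantic_map = pseudo_label_volume(toy, n_phases=3)
```

In the paper's framework, a map like `semantic_map` would then serve as the pseudo labels for training the segmentation model before the Unbiased Teacher stage self-corrects them.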
Related papers
- Subcortical Masks Generation in CT Images via Ensemble-Based Cross-Domain Label Transfer [1.312727273368205]
Subcortical segmentation in neuroimages plays an important role in understanding brain anatomy and facilitating computer-aided diagnosis of traumatic brain injuries and neurodegenerative disorders. Despite the availability of publicly available subcortical segmentation datasets for Magnetic Resonance Imaging (MRI), a significant gap exists for Computed Tomography (CT). This paper proposes an automatic ensemble framework to generate high-quality subcortical segmentation labels for CT scans by leveraging existing MRI-based models.
arXiv Detail & Related papers (2025-08-15T12:57:35Z)
- SingleStrip: learning skull-stripping from a single labeled example [1.54032564881154]
We use domain randomization and self-training to train three-dimensional skull-stripping networks. We select the top-ranking pseudo-labels to fine-tune the network. This strategy may ease the labeling burden that slows progress in studies involving new anatomical structures or emerging imaging techniques.
arXiv Detail & Related papers (2025-08-14T09:05:19Z)
- A label-free and data-free training strategy for vasculature segmentation in serial sectioning OCT data [4.746694624239095]
Serial sectioning Optical Coherence Tomography (sOCT) is becoming increasingly popular to study post-mortem neurovasculature.
Here, we leverage synthetic datasets of vessels to train a deep learning segmentation model.
Both approaches yield similar Dice scores, although with very different false positive and false negative rates.
arXiv Detail & Related papers (2024-05-22T15:39:31Z)
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
- Diffusion-based Data Augmentation for Nuclei Image Segmentation [68.28350341833526]
We introduce the first diffusion-based augmentation method for nuclei segmentation.
The idea is to synthesize a large number of labeled images to facilitate training the segmentation model.
The experimental results show that by augmenting 10% labeled real dataset with synthetic samples, one can achieve comparable segmentation results.
arXiv Detail & Related papers (2023-10-22T06:16:16Z)
- Enhancing Point Annotations with Superpixel and Confidence Learning Guided for Improving Semi-Supervised OCT Fluid Segmentation [17.85298271262749]
A Superpixel and Confident Learning Guided Point Annotation Network (SCLGPA-Net) based on the teacher-student architecture.
Superpixel-Guided Pseudo-Label Generation (SGPLG) module generates pseudo-labels and pixel-level label trust maps.
Confident Learning Guided Label Refinement (CLGLR) module identifies error information in the pseudo-labels and leads to further refinement.
arXiv Detail & Related papers (2023-06-05T04:21:00Z)
- A Knowledge Distillation framework for Multi-Organ Segmentation of Medaka Fish in Tomographic Image [5.881800919492064]
We propose a self-training framework for multi-organ segmentation in tomographic images of Medaka fish.
We utilize the pseudo-labeled data from a pretrained model and adopt a Quality Teacher to refine the pseudo-labeled data.
The experimental results demonstrate that our method improves mean Intersection over Union (IoU) by 5.9% on the full dataset.
arXiv Detail & Related papers (2023-02-24T10:31:29Z)
- Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z)
- Cascaded Robust Learning at Imperfect Labels for Chest X-ray Segmentation [61.09321488002978]
We present a novel cascaded robust learning framework for chest X-ray segmentation with imperfect annotation.
Our model consists of three independent networks, which can effectively learn useful information from the peer networks.
Our method achieves a significant improvement in segmentation accuracy compared to previous methods.
arXiv Detail & Related papers (2021-04-05T15:50:16Z)
- Weakly Supervised Deep Nuclei Segmentation Using Partial Points Annotation in Histopathology Images [51.893494939675314]
We propose a novel weakly supervised segmentation framework based on partial points annotation.
We show that our method can achieve competitive performance compared to the fully supervised counterpart and the state-of-the-art methods.
arXiv Detail & Related papers (2020-07-10T15:41:29Z)
- ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model and updates the label on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
arXiv Detail & Related papers (2020-06-24T04:05:12Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.