Out-of-distribution data supervision towards biomedical semantic segmentation
- URL: http://arxiv.org/abs/2507.12105v1
- Date: Wed, 16 Jul 2025 10:21:45 GMT
- Title: Out-of-distribution data supervision towards biomedical semantic segmentation
- Authors: Yiquan Gao, Duohui Xu
- Abstract summary: We propose a data-centric framework, Med-OoD, to address this issue. We show that Med-OoD largely prevents various segmentation networks from pixel misclassification on medical images. We also present an emerging learning paradigm of training a medical segmentation network completely using OoD data devoid of foreground class labels.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biomedical segmentation networks easily suffer from unexpected misclassification between foreground and background objects when learning on limited and imperfect medical datasets. Inspired by the effectiveness of Out-of-Distribution (OoD) data on other visual tasks, we propose a data-centric framework, Med-OoD, that addresses this issue by introducing OoD data supervision into fully-supervised biomedical segmentation, with no need for: (i) external data sources, (ii) feature regularization objectives, or (iii) additional annotations. Our method can be seamlessly integrated into segmentation networks without any modification to their architectures. Extensive experiments show that Med-OoD largely prevents various segmentation networks from pixel misclassification on medical images and achieves considerable performance improvements on the Lizard dataset. We also present an emerging learning paradigm of training a medical segmentation network entirely on OoD data devoid of foreground class labels, which surprisingly attains a test mIoU of 76.1%. We hope this learning paradigm will prompt the community to rethink the role of OoD data. Code is made available at https://github.com/StudioYG/Med-OoD.
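The abstract does not spell out how the OoD supervision is constructed, only that it requires no external data, no feature-regularization objectives, and no extra annotations. As a rough, non-authoritative sketch of how such a term could be wired into an ordinary fully-supervised training step, the snippet below derives OoD images from the training batch itself by erasing foreground pixels and supervising them with all-background masks; `make_ood_batch`, `training_step`, and the erase-foreground construction are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def make_ood_batch(images, masks, background_id=0):
    """Hypothetical OoD-batch builder: erase every foreground pixel so the
    image keeps only background texture, then supervise it with an
    all-background mask. The paper's real construction may differ."""
    foreground = (masks != background_id).unsqueeze(1)   # B x 1 x H x W
    ood_images = images.masked_fill(foreground, 0.0)     # blank out foreground
    ood_masks = torch.full_like(masks, background_id)    # all-background labels
    return ood_images, ood_masks

def training_step(net, images, masks, ood_weight=1.0):
    """One supervised step plus an auxiliary OoD term. The network itself
    is untouched, matching the claim that no architectural modification
    is required."""
    loss_seg = F.cross_entropy(net(images), masks)       # standard supervision
    ood_images, ood_masks = make_ood_batch(images, masks)
    loss_ood = F.cross_entropy(net(ood_images), ood_masks)
    return loss_seg + ood_weight * loss_ood
```

Under this reading, the OoD batch simply teaches the network what background looks like with no foreground present, discouraging spurious foreground predictions; the actual Med-OoD recipe is in the linked repository.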
Related papers
- MRGen: Segmentation Data Engine For Underrepresented MRI Modalities [59.61465292965639]
Training medical image segmentation models for rare yet clinically significant imaging modalities is challenging due to the scarcity of annotated data. This paper investigates leveraging generative models to synthesize training data for segmentation models targeting underrepresented modalities.
arXiv Detail & Related papers (2024-12-04T16:34:22Z) - Cross-Domain Distribution Alignment for Segmentation of Private Unannotated 3D Medical Images [20.206972068340843]
We introduce a new source-free Unsupervised Domain Adaptation (UDA) method to address this problem.
Our idea is based on estimating the internally learned distribution of a relevant source domain by a base model.
We demonstrate that our approach leads to SOTA performance on a real-world 3D medical dataset.
arXiv Detail & Related papers (2024-10-11T19:28:10Z) - Unsupervised Domain Adaptation for Brain Vessel Segmentation through Transwarp Contrastive Learning [46.248404274124546]
Unsupervised domain adaptation (UDA) aims to align the labelled source distribution with the unlabelled target distribution to obtain domain-invariant predictive models.
This paper proposes a simple yet potent contrastive learning framework for UDA to narrow the inter-domain gap between labelled source and unlabelled target distribution.
arXiv Detail & Related papers (2024-02-23T10:01:22Z) - ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models [69.9178140563928]
Colonoscopy analysis is essential for assisting clinical diagnosis and treatment.
The scarcity of annotated data limits the effectiveness and generalization of existing methods.
We propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks.
arXiv Detail & Related papers (2023-09-03T07:55:46Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting (a structural sketch appears after this list).
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - FedMed-GAN: Federated Domain Translation on Unsupervised Cross-Modality Brain Image Synthesis [55.939957482776194]
We propose a new benchmark for federated domain translation on unsupervised brain image synthesis (termed FedMed-GAN).
FedMed-GAN mitigates the mode collapse without sacrificing the performance of generators.
A comprehensive evaluation is provided for comparing FedMed-GAN and other centralized methods.
arXiv Detail & Related papers (2022-01-22T02:50:29Z) - MetaMedSeg: Volumetric Meta-learning for Few-Shot Organ Segmentation [47.428577772279176]
We present MetaMedSeg, a gradient-based meta-learning algorithm that redefines the meta-learning task for volumetric medical data.
In the experiments, we present an evaluation on the Medical Decathlon dataset by extracting 2D slices from CT and MRI volumes of different organs.
Our proposed volumetric task definition leads to up to 30% improvement in terms of IoU compared to related baselines.
arXiv Detail & Related papers (2021-09-18T11:13:45Z) - Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z) - Uncertainty-aware multi-view co-training for semi-supervised medical image segmentation and domain adaptation [35.33425093398756]
Unlabeled data is much easier to acquire than well-annotated data.
We propose uncertainty-aware multi-view co-training for medical image segmentation.
Our framework is capable of efficiently utilizing unlabeled data for better performance.
arXiv Detail & Related papers (2020-06-28T22:04:54Z)
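Referring back to the Self-Supervised Correction Learning entry above, the following is a toy structural sketch of the shared-encoder / dual-decoder layout that summary describes: one decoder for segmentation, one for lesion-region inpainting. The backbone, layer widths, and heads are placeholders chosen for brevity and are not the paper's architecture.

```python
import torch.nn as nn

class DualTaskNet(nn.Module):
    """Illustrative shared-encoder network with two independent decoders:
    a segmentation head and an inpainting (reconstruction) head."""
    def __init__(self, in_ch=3, n_classes=2, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.seg_decoder = nn.Conv2d(width, n_classes, 1)  # segmentation logits
        self.inpaint_decoder = nn.Conv2d(width, in_ch, 1)  # reconstructed image

    def forward(self, x):
        feats = self.encoder(x)                 # features shared by both tasks
        return self.seg_decoder(feats), self.inpaint_decoder(feats)
```

A faithful implementation would mask the suspected lesion region out of the input before the inpainting pass and couple the two heads through the correction objective; this sketch only fixes the structural idea of one encoder feeding two task-specific decoders.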