P3Net: Progressive and Periodic Perturbation for Semi-Supervised Medical Image Segmentation
- URL: http://arxiv.org/abs/2505.15861v1
- Date: Wed, 21 May 2025 05:35:28 GMT
- Title: P3Net: Progressive and Periodic Perturbation for Semi-Supervised Medical Image Segmentation
- Authors: Zhenyan Yao, Miao Zhang, Lanhu Wu, Yongri Piao, Feng Tian, Weibing Sun, Huchuan Lu
- Abstract summary: We propose a progressive and periodic perturbation mechanism (P3M) and a boundary-focused loss to guide the learning of unlabeled data. Our method achieves state-of-the-art performance on two 2D and 3D datasets.
- Score: 60.08541107831459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Perturbation with diverse unlabeled data has proven beneficial for semi-supervised medical image segmentation (SSMIS). While many works have successfully used various perturbation techniques, a deeper understanding of learning perturbations is needed. Excessive or inappropriate perturbation can have negative effects, so we aim to address two challenges: how to use perturbation mechanisms to guide the learning of unlabeled data through labeled data, and how to ensure accurate predictions in boundary regions. Inspired by human progressive and periodic learning, we propose a progressive and periodic perturbation mechanism (P3M) and a boundary-focused loss. P3M enables dynamic adjustment of perturbations, allowing the model to gradually learn them. Our boundary-focused loss encourages the model to concentrate on boundary regions, enhancing sensitivity to intricate details and ensuring accurate predictions. Experimental results demonstrate that our method achieves state-of-the-art performance on two 2D and 3D datasets. Moreover, P3M is extendable to other methods, and the proposed loss serves as a universal tool for improving existing methods, highlighting the scalability and applicability of our approach.
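The abstract describes P3M as dynamically adjusting perturbations so the model can learn them gradually, but does not spell out the schedule. As one hypothetical reading, perturbation strength could combine a progressive ramp over training with a periodic rise-and-fall; the function name, period, and maximum strength below are illustrative assumptions, not details from the paper:

```python
import math

def p3m_strength(step, total_steps, period=1000, max_strength=0.2):
    """Hypothetical perturbation schedule in the spirit of P3M:
    a progressive ramp (the model gradually sees stronger perturbations)
    modulated by a periodic component (strength rises and falls within
    each period). The paper's exact schedule is not reproduced here."""
    progressive = step / total_steps  # linear 0 -> 1 ramp over training
    periodic = 0.5 * (1 - math.cos(2 * math.pi * (step % period) / period))
    return max_strength * progressive * periodic

# Strength grows over training overall but cycles within each period.
strengths = [p3m_strength(s, 10_000) for s in range(0, 10_000, 500)]
```

A schedule like this lets early training see only mild perturbations while the periodic resets repeatedly expose the model to easy-then-hard inputs, which matches the paper's analogy to human progressive and periodic learning.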
Related papers
- Object Affordance Recognition and Grounding via Multi-scale Cross-modal Representation Learning [64.32618490065117]
A core problem of Embodied AI is to learn object manipulation from observation, as humans do. We propose a novel approach that learns an affordance-aware 3D representation and employs a stage-wise inference strategy. Experiments demonstrate the effectiveness of our method, showing improved performance in both affordance grounding and classification.
arXiv Detail & Related papers (2025-08-02T04:14:18Z) - A Lesson in Splats: Teacher-Guided Diffusion for 3D Gaussian Splats Generation with 2D Supervision [65.33043028101471]
We introduce a diffusion model for Gaussian Splats, SplatDiffusion, to enable generation of three-dimensional structures from single images. Existing methods rely on deterministic, feed-forward predictions, which limit their ability to handle the inherent ambiguity of 3D inference from 2D data.
arXiv Detail & Related papers (2024-12-01T00:29:57Z) - 2D Feature Distillation for Weakly- and Semi-Supervised 3D Semantic Segmentation [92.17700318483745]
We propose an image-guidance network (IGNet) which builds upon the idea of distilling high level feature information from a domain adapted synthetically trained 2D semantic segmentation network.
IGNet achieves state-of-the-art results for weakly-supervised LiDAR semantic segmentation on ScribbleKITTI, boasting up to 98% relative performance to fully supervised training with only 8% labeled points.
arXiv Detail & Related papers (2023-11-27T07:57:29Z) - Leveraging Unlabeled Data for 3D Medical Image Segmentation through Self-Supervised Contrastive Learning [3.7395287262521717]
Current 3D semi-supervised segmentation methods face significant challenges such as limited consideration of contextual information.
We introduce two distinct networks designed to explore and exploit the discrepancies between them, ultimately correcting the erroneous prediction results.
We employ a self-supervised contrastive learning paradigm to distinguish between reliable and unreliable predictions.
arXiv Detail & Related papers (2023-11-21T14:03:16Z) - Cross-head mutual Mean-Teaching for semi-supervised medical image segmentation [6.738522094694818]
Semi-supervised medical image segmentation (SSMIS) has witnessed substantial advancements by leveraging limited labeled data and abundant unlabeled data.
Existing state-of-the-art (SOTA) methods encounter challenges in accurately predicting labels for the unlabeled data.
We propose a novel Cross-head mutual mean-teaching Network (CMMT-Net) incorporating strong-weak data augmentation.
arXiv Detail & Related papers (2023-10-08T09:13:04Z) - Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation [70.32536356351706]
We introduce MRP-Net that constitutes a common deep network backbone with two output heads subscribing to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z) - Uncertainty-Aware Deep Co-training for Semi-supervised Medical Image Segmentation [4.935055133266873]
We propose a novel uncertainty-aware scheme to make models learn regions purposefully.
Specifically, we employ Monte Carlo Sampling as an estimation method to attain an uncertainty map.
In the backward process, we combine unsupervised and supervised losses to accelerate the convergence of the network.
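The summary above mentions using Monte Carlo sampling to obtain an uncertainty map. A minimal sketch of that idea, assuming a stochastic predictor (e.g. a network with dropout kept active at inference) that returns one foreground probability per pixel; the function names and the toy predictor are illustrative, not from the paper:

```python
import math
import random

def mc_uncertainty(predict, pixels, n_samples=8, seed=0):
    """Estimate per-pixel uncertainty via Monte Carlo sampling: run the
    stochastic predictor several times and take the predictive entropy
    of the averaged foreground probability as the uncertainty."""
    rng = random.Random(seed)
    samples = [predict(pixels, rng) for _ in range(n_samples)]
    mean_p = [sum(s[i] for s in samples) / n_samples
              for i in range(len(pixels))]
    eps = 1e-8  # avoid log(0)
    entropy = [-(p * math.log(p + eps) + (1 - p) * math.log(1 - p + eps))
               for p in mean_p]
    return mean_p, entropy

# Toy stochastic predictor standing in for a dropout-enabled network.
def noisy_predict(pixels, rng):
    return [min(1.0, max(0.0, x + rng.gauss(0.0, 0.1))) for x in pixels]
```

Pixels whose repeated predictions disagree (mean probability near 0.5) receive high entropy; an uncertainty-aware scheme can then weight or mask the unsupervised loss with such a map so the model learns regions purposefully.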
arXiv Detail & Related papers (2021-11-23T03:26:24Z) - Medical Instrument Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning [62.13520959168732]
We propose a semi-supervised learning framework for instrument segmentation in 3D US.
To achieve semi-supervised learning, a Dual-UNet is proposed to segment the instrument.
Our proposed method achieves a Dice score of about 68.6%-69.1% and an inference time of about 1 second per volume.
arXiv Detail & Related papers (2021-07-30T07:59:45Z) - Semi-supervised Semantic Segmentation of Prostate and Organs-at-Risk on 3D Pelvic CT Images [9.33145393480254]
Training effective deep learning models usually requires a large amount of high-quality labeled data.
We developed a novel semi-supervised adversarial deep learning approach for 3D pelvic CT image semantic segmentation.
arXiv Detail & Related papers (2020-09-21T01:57:23Z) - Unsupervised Instance Segmentation in Microscopy Images via Panoptic Domain Adaptation and Task Re-weighting [86.33696045574692]
We propose a Cycle Consistency Panoptic Domain Adaptive Mask R-CNN (CyC-PDAM) architecture for unsupervised nuclei segmentation in histopathology images.
We first propose a nuclei inpainting mechanism to remove the auxiliary generated objects in the synthesized images.
Secondly, a semantic branch with a domain discriminator is designed to achieve panoptic-level domain adaptation.
arXiv Detail & Related papers (2020-05-05T11:08:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.