EAUWSeg: Eliminating annotation uncertainty in weakly-supervised medical image segmentation
- URL: http://arxiv.org/abs/2501.01658v1
- Date: Fri, 03 Jan 2025 06:21:02 GMT
- Title: EAUWSeg: Eliminating annotation uncertainty in weakly-supervised medical image segmentation
- Authors: Wang Lituan, Zhang Lei, Wang Yan, Wang Zhenbin, Zhang Zhenwei, Zhang Yi
- Abstract summary: Weakly-supervised medical image segmentation is gaining traction as it requires only rough annotations rather than accurate pixel-to-pixel labels.
We propose a novel weak annotation method coupled with a learning framework, EAUWSeg, to eliminate annotation uncertainty.
We show that EAUWSeg outperforms existing weakly-supervised segmentation methods.
- Score: 4.334357692599945
- Abstract: Weakly-supervised medical image segmentation is gaining traction as it requires only rough annotations rather than accurate pixel-to-pixel labels, thereby reducing the workload for specialists. Although some progress has been made, a considerable performance gap remains between label-efficient methods and fully-supervised ones, which can be attributed to the uncertain nature of these weak labels. To address this issue, we propose a novel weak annotation method coupled with a learning framework, EAUWSeg, to eliminate the annotation uncertainty. Specifically, we first propose the Bounded Polygon Annotation (BPAnno), which simply labels two polygons for a lesion. Then, a tailored learning mechanism that explicitly treats the bounded polygons as two separate annotations is proposed to learn invariant features by providing adversarial supervision signals for model training. Subsequently, a confidence-auxiliary consistency learner, incorporating a classification-guided confidence generator, is designed to provide reliable supervision signals for pixels in the uncertain region by leveraging the feature representation consistency across pixels within the same category, as well as the class-specific information encapsulated in the bounded polygon annotations. Experimental results demonstrate that EAUWSeg outperforms existing weakly-supervised segmentation methods. Furthermore, compared with fully-supervised counterparts, the proposed method not only delivers superior performance but also requires a much lower annotation workload. This underscores the superiority and effectiveness of our approach.
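To make the annotation scheme concrete, below is a minimal, hypothetical sketch (not the authors' released code) of how supervision targets could be derived from a Bounded Polygon Annotation, assuming the two polygons have been rasterized into binary masks where the inner polygon lies entirely inside the lesion and the outer polygon fully encloses it. Function names, the ignore index, and the two-class setup are illustrative assumptions.

```python
# Hypothetical sketch of BPAnno-style supervision masks; not the authors' implementation.
import torch
import torch.nn.functional as F

IGNORE_INDEX = 255  # pixels excluded from the certain-region loss


def bpanno_targets(inner_mask: torch.Tensor, outer_mask: torch.Tensor) -> torch.Tensor:
    """Build per-pixel targets (H, W) from two binary polygon masks (H, W).

    inner_mask: 1 inside the inner polygon (assumed certain lesion), else 0.
    outer_mask: 1 inside the outer polygon (assumed to contain the lesion), else 0.
    Pixels between the two polygons are uncertain; they are ignored here and would
    instead be handled by the consistency learner described in the abstract.
    """
    target = torch.full_like(inner_mask, IGNORE_INDEX).long()
    target[outer_mask == 0] = 0   # outside the outer polygon: certain background
    target[inner_mask == 1] = 1   # inside the inner polygon: certain foreground
    return target


def certain_region_loss(logits: torch.Tensor,
                        inner_masks: torch.Tensor,
                        outer_masks: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over certain pixels only; logits has shape (N, 2, H, W)."""
    targets = torch.stack([bpanno_targets(i, o)
                           for i, o in zip(inner_masks, outer_masks)])
    return F.cross_entropy(logits, targets, ignore_index=IGNORE_INDEX)
```

Under this reading, only the certain regions receive direct cross-entropy supervision, while the band between the two polygons is left to the confidence-auxiliary consistency learner.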
Related papers
- SemSim: Revisiting Weak-to-Strong Consistency from a Semantic Similarity Perspective for Semi-supervised Medical Image Segmentation [18.223854197580145]
Semi-supervised learning (SSL) for medical image segmentation is a challenging yet highly practical task.
We propose a novel framework based on FixMatch, named SemSim, powered by two appealing designs from a semantic similarity perspective.
We show that SemSim yields consistent improvements over the state-of-the-art methods across three public segmentation benchmarks.
arXiv Detail & Related papers (2024-10-17T12:31:37Z) - Affinity-Graph-Guided Contractive Learning for Pretext-Free Medical Image Segmentation with Minimal Annotation [55.325956390997]
This paper proposes an affinity-graph-guided semi-supervised contrastive learning framework (Semi-AGCL) for medical image segmentation.
The framework first designs an average-patch-entropy-driven inter-patch sampling method, which can provide a robust initial feature space.
With merely 10% of the complete annotation set, our model approaches the accuracy of the fully annotated baseline, deviating by only 2.52%.
arXiv Detail & Related papers (2024-10-14T10:44:47Z) - 3D Medical Image Segmentation with Sparse Annotation via Cross-Teaching between 3D and 2D Networks [26.29122638813974]
We propose a framework that can robustly learn from sparse annotation using the cross-teaching of both 3D and 2D networks.
Our experimental results on the MMWHS dataset demonstrate that our method outperforms the state-of-the-art (SOTA) semi-supervised segmentation methods.
arXiv Detail & Related papers (2023-07-30T15:26:17Z) - UIA-ViT: Unsupervised Inconsistency-Aware Method based on Vision Transformer for Face Forgery Detection [52.91782218300844]
We propose a novel Unsupervised Inconsistency-Aware method based on Vision Transformer, called UIA-ViT.
Due to the self-attention mechanism, the attention map among patch embeddings naturally represents the consistency relation, making the vision Transformer suitable for the consistency representation learning.
arXiv Detail & Related papers (2022-10-23T15:24:47Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - Boundary-aware Information Maximization for Self-supervised Medical Image Segmentation [13.828282295918628]
We propose a novel unsupervised pre-training framework that avoids the drawback of contrastive learning.
Experimental results on two benchmark medical segmentation datasets reveal our method's effectiveness when few annotated images are available.
arXiv Detail & Related papers (2022-02-04T20:18:00Z) - Mixed-supervised segmentation: Confidence maximization helps knowledge distillation [24.892332859630518]
In this work, we propose a dual-branch architecture for deep neural networks.
The upper branch (teacher) receives strong annotations, while the bottom one (student) is driven by limited supervision and guided by the upper branch.
We show that the synergy between the entropy and the KL divergence yields substantial improvements in performance; a minimal sketch of such a combined objective is given after this list.
arXiv Detail & Related papers (2021-09-21T20:06:13Z) - Semi-supervised Semantic Segmentation with Directional Context-aware Consistency [66.49995436833667]
We focus on the semi-supervised segmentation problem where only a small set of labeled data is provided with a much larger collection of totally unlabeled images.
A preferred high-level representation should capture the contextual information while not losing self-awareness.
We present the Directional Contrastive Loss (DC Loss) to enforce consistency in a pixel-to-pixel manner.
arXiv Detail & Related papers (2021-06-27T03:42:40Z) - WSSOD: A New Pipeline for Weakly- and Semi-Supervised Object Detection [75.80075054706079]
We propose a weakly- and semi-supervised object detection framework (WSSOD).
An agent detector is first trained on a joint dataset and then used to predict pseudo bounding boxes on weakly-annotated images.
The proposed framework demonstrates remarkable performance on the PASCAL-VOC and MSCOCO benchmarks, achieving performance comparable to that obtained in fully-supervised settings.
arXiv Detail & Related papers (2021-05-21T11:58:50Z) - Teach me to segment with mixed supervision: Confident students become masters [27.976487552313113]
Deep segmentation neural networks require large training datasets with pixel-wise segmentations, which are expensive to obtain in practice.
We propose a dual-branch architecture, where the upper branch (teacher) receives strong annotations, while the bottom one (student) is driven by limited supervision and guided by the upper branch.
We demonstrate that our method significantly outperforms other strategies to tackle semantic segmentation within a mixed-supervision framework.
arXiv Detail & Related papers (2020-12-15T02:51:36Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
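As a companion to the mixed-supervised distillation entry above, here is a minimal, hypothetical sketch of a student objective that combines a KL-divergence term (matching the teacher branch) with a Shannon-entropy term (confidence maximization). It follows only the high-level description in that abstract; the weighting, function names, and exact formulation are assumptions, not the authors' implementation.

```python
# Hypothetical mixed-supervision student objective: KL guidance + entropy minimization.
import torch
import torch.nn.functional as F


def mixed_supervision_loss(student_logits: torch.Tensor,
                           teacher_logits: torch.Tensor,
                           kl_weight: float = 1.0,
                           ent_weight: float = 0.1) -> torch.Tensor:
    """Both inputs have shape (N, C, H, W); the teacher branch is treated as fixed."""
    log_p_student = F.log_softmax(student_logits, dim=1)
    p_teacher = F.softmax(teacher_logits.detach(), dim=1)
    # KL(teacher || student): pushes the student toward the teacher's predictions.
    kl_term = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    # Shannon entropy of the student's predictions: encourages confident outputs.
    p_student = log_p_student.exp()
    entropy_term = -(p_student * log_p_student).sum(dim=1).mean()
    return kl_weight * kl_term + ent_weight * entropy_term
```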