Weakly Supervised Vessel Segmentation in X-ray Angiograms by Self-Paced
Learning from Noisy Labels with Suggestive Annotation
- URL: http://arxiv.org/abs/2005.13366v1
- Date: Wed, 27 May 2020 13:55:33 GMT
- Authors: Jingyang Zhang, Guotai Wang, Hongzhi Xie, Shuyang Zhang, Ning Huang,
Shaoting Zhang, Lixu Gu
- Abstract summary: We propose a weakly supervised training framework that learns from noisy pseudo labels generated from automatic vessel enhancement.
A typical self-paced learning scheme is used to make the training process robust against label noise.
We show that our proposed framework achieves comparable accuracy to fully supervised learning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The segmentation of coronary arteries in X-ray angiograms by convolutional
neural networks (CNNs) is promising yet limited by the requirement of precisely
annotating all pixels in a large number of training images, which is extremely
labor-intensive especially for complex coronary trees. To alleviate the burden
on the annotator, we propose a novel weakly supervised training framework that
learns from noisy pseudo labels generated from automatic vessel enhancement,
rather than accurate labels obtained by fully manual annotation. A typical
self-paced learning scheme makes the training process robust against label
noise, but it remains challenged by the systematic biases in pseudo labels,
which degrade the performance of CNNs at test time. To solve this
problem, we propose an annotation-refining self-paced learning framework
(AR-SPL) to correct the potential errors using suggestive annotation. An
elaborate model-vesselness uncertainty estimation is also proposed to enable
the minimal annotation cost for suggestive annotation, based on not only the
CNNs in training but also the geometric features of coronary arteries derived
directly from raw data. Experiments show that our proposed framework achieves
1) comparable accuracy to fully supervised learning, which also significantly
outperforms other weakly supervised learning frameworks; 2) largely reduced
annotation cost, i.e., 75.18% of annotation time is saved, and only 3.46% of
image regions are required to be annotated; and 3) an efficient intervention
process, leading to superior performance with even fewer manual interactions.
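The self-paced learning scheme described in the abstract can be sketched as loss-based sample weighting: samples whose current loss exceeds an "age" parameter are excluded, and the parameter grows over training so that harder (and likely noisier) pseudo-labeled samples are admitted only gradually. A minimal illustrative sketch in Python (not the paper's code; the function name and values are assumptions):

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced weighting: keep only the 'easy' samples
    whose current loss falls below the age parameter lam."""
    return (losses < lam).astype(float)

# Toy curriculum: lam grows each round, so high-loss (likely noisy)
# pseudo-labeled samples enter the objective only gradually.
losses = np.array([0.1, 0.4, 0.9, 2.5])  # hypothetical per-sample losses
for lam in (0.5, 1.0, 3.0):
    w = self_paced_weights(losses, lam)
    sp_loss = (w * losses).sum() / max(w.sum(), 1.0)
```

In practice the per-sample losses would come from the CNN being trained, and the weights would mask the segmentation loss per image or per region.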
Related papers
- Affinity-Graph-Guided Contractive Learning for Pretext-Free Medical Image Segmentation with Minimal Annotation
This paper proposes an affinity-graph-guided semi-supervised contrastive learning framework (Semi-AGCL) for medical image segmentation.
The framework first designs an average-patch-entropy-driven inter-patch sampling method, which can provide a robust initial feature space.
With merely 10% of the complete annotation set, our model approaches the accuracy of the fully annotated baseline, manifesting a marginal deviation of only 2.52%.
arXiv Detail & Related papers (2024-10-14T10:44:47Z) - One-bit Supervision for Image Classification: Problem, Solution, and
Beyond
This paper presents one-bit supervision, a novel setting of learning with fewer labels, for image classification.
We propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm.
In multiple benchmarks, the learning efficiency of the proposed approach surpasses that of full-bit semi-supervised supervision.
arXiv Detail & Related papers (2023-11-26T07:39:00Z) - Flip Learning: Erase to Segment [65.84901344260277]
Weakly-supervised segmentation (WSS) can help reduce time-consuming and cumbersome manual annotation.
We propose a novel and general WSS framework called Flip Learning, which only needs the box annotation.
Our proposed approach achieves competitive performance and shows great potential to narrow the gap between fully-supervised and weakly-supervised learning.
arXiv Detail & Related papers (2021-08-02T09:56:10Z) - Cascaded Robust Learning at Imperfect Labels for Chest X-ray
Segmentation [61.09321488002978]
We present a novel cascaded robust learning framework for chest X-ray segmentation with imperfect annotation.
Our model consists of three independent networks, which can effectively learn useful information from their peer networks.
Our method achieves a significant improvement in segmentation accuracy compared to previous methods.
arXiv Detail & Related papers (2021-04-05T15:50:16Z) - Scribble-Supervised Semantic Segmentation by Uncertainty Reduction on
Neural Representation and Self-Supervision on Neural Eigenspace [21.321005898976253]
Scribble-supervised semantic segmentation has gained much attention recently for its promising performance without high-quality annotations.
This work aims to achieve semantic segmentation by scribble annotations directly without extra information and other limitations.
We propose holistic operations, including minimizing entropy and a network embedded random walk on neural representation to reduce uncertainty.
arXiv Detail & Related papers (2021-02-19T12:33:57Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical
Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z) - Improving Semantic Segmentation via Self-Training [75.07114899941095]
We show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm.
We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.
Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performances on Cityscapes, CamVid and KITTI datasets.
arXiv Detail & Related papers (2020-04-30T17:09:17Z)
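The teacher-student self-training recipe in the last entry (train a teacher on labeled data, then generate pseudo labels on unlabeled data for joint training) commonly keeps only confident predictions. A hedged Python sketch (illustrative only; the function name and threshold are assumptions, not from the paper):

```python
import numpy as np

def generate_pseudo_labels(teacher_probs, threshold=0.9):
    """Convert teacher softmax outputs into pseudo labels, marking
    low-confidence predictions as -1 so they can be ignored later."""
    labels = teacher_probs.argmax(axis=-1)
    confident = teacher_probs.max(axis=-1) >= threshold
    return np.where(confident, labels, -1)

# Two hypothetical pixels: one confident, one ambiguous.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40]])
pseudo = generate_pseudo_labels(probs)  # ambiguous pixel is masked out
```

A segmentation loss with an ignore index (e.g. PyTorch's `CrossEntropyLoss(ignore_index=-1)`) would then skip the masked pixels during student training.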
This list is automatically generated from the titles and abstracts of the papers in this site.