Leveraging point annotations in segmentation learning with boundary loss
- URL: http://arxiv.org/abs/2311.03537v1
- Date: Mon, 6 Nov 2023 21:18:39 GMT
- Title: Leveraging point annotations in segmentation learning with boundary loss
- Authors: Eva Breznik, Hoel Kervadec, Filip Malmberg, Joel Kullberg, Håkan
Ahlström, Marleen de Bruijne, Robin Strand
- Abstract summary: This paper investigates the combination of intensity-based distance maps with boundary loss for point-supervised semantic segmentation.
By design, the boundary loss penalizes false positives more heavily the farther from the object they occur.
We perform experiments on two multi-class datasets: ACDC (heart segmentation) and POEM (whole-body abdominal organ segmentation).
- Score: 2.7874893928375917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the combination of intensity-based distance maps with
boundary loss for point-supervised semantic segmentation. By design, the boundary loss
penalizes false positives more heavily the farther from the object they occur. It is
therefore, intuitively, ill-suited to weak supervision, where the ground-truth label may
be much smaller than the actual object and a certain amount of false positives
(w.r.t. the weak ground truth) is actually desirable. Using intensity-aware distances
instead may alleviate this drawback, allowing a certain amount of false positives without
a significant increase in the training loss. The motivation for applying the boundary loss
directly under weak supervision lies in its great success on fully supervised segmentation
tasks, and in the fact that it requires none of the extra priors or outside information
that existing weakly supervised methods in the literature usually need in some form. The
formulation also remains potentially more attractive than existing CRF-based regularizers,
owing to its simplicity and computational efficiency. We perform experiments on two
multi-class datasets: ACDC (heart segmentation) and POEM (whole-body abdominal organ
segmentation). Preliminary results are encouraging and show that this supervision strategy
has great potential. On ACDC it outperforms the CRF-loss-based approach, and on POEM it
performs on par with it. The code for all our experiments is openly available.
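To make the weighting idea above concrete, here is a minimal sketch, assuming a NumPy/SciPy setting and invented function names (euclidean_distance_map, intensity_aware_distance_map, distance_weighted_loss), of how a distance map computed from point annotations can weight the predicted foreground probabilities so that false positives far from the annotation cost more. The intensity-aware variant is only one plausible reading of "intensity-based distance" (a geodesic-style cost that grows with intensity change along the path); the paper's exact definition and the authors' released code may differ.

```python
# Minimal sketch (not the authors' released implementation) of a boundary-style
# loss driven by distance maps computed from point annotations.
import numpy as np
from scipy.ndimage import distance_transform_edt


def euclidean_distance_map(point_mask: np.ndarray) -> np.ndarray:
    """Distance from every pixel to the nearest annotated point (mask value 1)."""
    # distance_transform_edt measures the distance to the nearest zero entry,
    # so the point mask is inverted before the call.
    return distance_transform_edt(1 - point_mask)


def intensity_aware_distance_map(point_mask: np.ndarray, image: np.ndarray,
                                 n_iter: int = 200) -> np.ndarray:
    """Geodesic-style distance: each step costs 1 plus the intensity change.

    Naive Bellman-Ford relaxation over 4-neighbours; image boundaries wrap
    around because np.roll is used, which is acceptable only for a sketch.
    n_iter should be at least the largest pixel distance to an annotated point.
    """
    image = image.astype(float)
    dist = np.where(point_mask > 0, 0.0, np.inf)
    for _ in range(n_iter):
        for axis in (0, 1):
            for shift in (1, -1):
                neighbour_dist = np.roll(dist, shift, axis=axis)
                neighbour_img = np.roll(image, shift, axis=axis)
                step_cost = 1.0 + np.abs(image - neighbour_img)
                dist = np.minimum(dist, neighbour_dist + step_cost)
    return dist


def distance_weighted_loss(softmax_fg: np.ndarray, dist_map: np.ndarray) -> float:
    """Mean foreground probability weighted by distance: predictions far from
    the annotated points are penalised heavily, nearby ones are nearly free."""
    return float((softmax_fg * dist_map).mean())
```

With the Euclidean map this reproduces the intuition that any foreground prediction away from the annotation is penalised in proportion to its distance; with the intensity-aware map, pixels reachable from the annotated point without crossing strong intensity edges accumulate a lower cost, which is what permits a certain amount of false positives (with respect to the weak ground truth) without a large increase in the loss.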
Related papers
- SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel fine-tuning smooth regularization that rectifies structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z)
- Weakly Supervised Semantic Segmentation for Large-Scale Point Cloud [69.36717778451667]
Existing methods for large-scale point cloud semantic segmentation require expensive, tedious and error-prone manual point-wise annotations.
We propose an effective weakly supervised method containing two components to solve the problem.
The experimental results show a large gain over existing weakly supervised methods and results comparable to fully supervised methods.
arXiv Detail & Related papers (2022-12-09T09:42:26Z)
- Domain Adaptive Segmentation of Electron Microscopy with Sparse Point Annotations [2.5137859989323537]
We develop a highly annotation-efficient approach with competitive performance.
We focus on weakly-supervised domain adaptation (WDA) with a type of extremely sparse and weak annotation.
We show that our model with only 15% point annotations can achieve performance comparable to supervised models.
arXiv Detail & Related papers (2022-10-24T10:50:37Z)
- Positive-Negative Equal Contrastive Loss for Semantic Segmentation [8.664491798389662]
Previous works commonly design plug-and-play modules and structural losses to effectively extract and aggregate the global context.
We propose the Positive-Negative Equal contrastive loss (PNE loss), which increases the latent impact of the positive embedding on the anchor and treats positive and negative sample pairs equally.
We conduct comprehensive experiments and achieve state-of-the-art performance on two benchmark datasets.
arXiv Detail & Related papers (2022-07-04T13:51:29Z) - Flip Learning: Erase to Segment [65.84901344260277]
Weakly-supervised segmentation (WSS) can help reduce time-consuming and cumbersome manual annotation.
We propose a novel and general WSS framework called Flip Learning, which only needs the box annotation.
Our proposed approach achieves competitive performance and shows great potential to narrow the gap between fully-supervised and weakly-supervised learning.
arXiv Detail & Related papers (2021-08-02T09:56:10Z) - Stateless actor-critic for instance segmentation with high-level priors [3.752550648610726]
Instance segmentation is an important computer vision problem which remains challenging despite impressive recent advances in deep learning-based methods.
We formulate instance segmentation as graph partitioning, with an actor-critic agent predicting the edge weights; the rewards are based on the conformity of the segmented instances to high-level priors on object shape, position or size.
Experiments on toy and real datasets demonstrate that we can achieve excellent performance without any direct supervision, based only on a rich set of priors.
arXiv Detail & Related papers (2021-07-06T13:20:14Z) - WSSOD: A New Pipeline for Weakly- and Semi-Supervised Object Detection [75.80075054706079]
We propose a weakly- and semi-supervised object detection framework (WSSOD).
An agent detector is first trained on a joint dataset and then used to predict pseudo bounding boxes on weakly-annotated images.
The proposed framework demonstrates remarkable performance on the PASCAL-VOC and MSCOCO benchmarks, achieving results comparable to those obtained in fully-supervised settings.
arXiv Detail & Related papers (2021-05-21T11:58:50Z) - Lower-bounded proper losses for weakly supervised classification [73.974163801142]
We discuss the problem of weakly supervised classification, in which instances are given weak labels.
We derive a representation theorem for proper losses in supervised learning, which dualizes the Savage representation.
We experimentally demonstrate the effectiveness of our proposed approach, as compared to improper or unbounded losses.
arXiv Detail & Related papers (2021-03-04T08:47:07Z)
- Towards A Category-extended Object Detector without Relabeling or Conflicts [40.714221493482974]
In this paper, we aim at learning a strong unified detector that can handle all categories based on the limited datasets without extra manual labor.
We propose a practical framework which focuses on three aspects: better base model, better unlabeled ground-truth mining strategy and better retraining method with pseudo annotations.
arXiv Detail & Related papers (2020-12-28T06:44:53Z)
- Find it if You Can: End-to-End Adversarial Erasing for Weakly-Supervised Semantic Segmentation [6.326017213490535]
We propose a novel formulation of adversarial erasing of the attention maps.
The proposed solution does not require saliency masks, instead it uses a regularization loss to prevent the attention maps from spreading to less discriminative object regions.
Our experiments on the Pascal VOC dataset demonstrate that our adversarial approach increases segmentation performance by 2.1 mIoU compared to our baseline and by 1.0 mIoU compared to previous adversarial erasing approaches.
arXiv Detail & Related papers (2020-11-09T18:35:35Z)
- Towards Using Count-level Weak Supervision for Crowd Counting [55.58468947486247]
This paper studies the problem of weakly-supervised crowd counting, which learns a model from only a small amount of location-level annotations (fully-supervised) but a large amount of count-level annotations (weakly-supervised).
We devise a simple-yet-effective training strategy, namely Multiple Auxiliary Tasks Training (MATT), to construct regularizers for restricting the freedom of the generated density maps.
arXiv Detail & Related papers (2020-02-29T02:58:36Z)
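For context on the count-level weak supervision mentioned in the last entry above, the core idea can be written in a few lines: when only the total count is annotated, the training signal compares the integral of the predicted density map with that count. The sketch below is a generic illustration of such a count-consistency term, with invented names (count_level_loss, density); it is not claimed to reproduce the MATT regularizers themselves.

```python
# Generic sketch of count-level weak supervision for crowd counting:
# the only label is the total count, and an auxiliary loss ties the sum of
# the predicted density map to that count.
import numpy as np


def count_level_loss(pred_density: np.ndarray, gt_count: float) -> float:
    """L1 discrepancy between the integrated density map and the annotated count."""
    return float(abs(pred_density.sum() - gt_count))


# Example: a 2D density map whose mass should integrate to the labelled count.
rng = np.random.default_rng(0)
density = rng.random((64, 64)) * 0.01   # stand-in for a network prediction
print(count_level_loss(density, gt_count=40.0))
```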
This list is automatically generated from the titles and abstracts of the papers in this site.