Robust Trust Region for Weakly Supervised Segmentation
- URL: http://arxiv.org/abs/2104.01948v1
- Date: Mon, 5 Apr 2021 15:11:29 GMT
- Title: Robust Trust Region for Weakly Supervised Segmentation
- Authors: Dmitrii Marin and Yuri Boykov
- Abstract summary: We propose a new robust trust region approach for regularized losses, improving the state-of-the-art results.
Our approach can be seen as a higher-order generalization of the classic chain rule.
- Score: 18.721108305669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Acquisition of training data for standard semantic segmentation is
expensive when it requires that each pixel is labeled. Yet, current methods
deteriorate significantly in weakly supervised settings, e.g. where only a fraction
of pixels is labeled or when only image-level tags are available. It has been
shown that regularized losses - originally developed for unsupervised low-level
segmentation and representing geometric priors on pixel labels - can
considerably improve the quality of weakly supervised training. However, many
common priors require optimization stronger than gradient descent. Thus, such
regularizers have limited applicability in deep learning. We propose a new
robust trust region approach for regularized losses, improving the
state-of-the-art results. Our approach can be seen as a higher-order
generalization of the classic chain rule. It allows neural network optimization
to use strong low-level solvers for the corresponding regularizers, including
discrete ones.
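To make the idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the general "solver-in-the-loop" pattern it describes, not the authors' exact trust-region algorithm: a strong low-level solver proposes a labeling for the regularizer, and the network takes a gradient step toward that proposal alongside partial cross-entropy on the scribbled pixels. The `solve_regularizer` placeholder, the loss weighting, and the omission of an explicit trust-region constraint are all assumptions made for illustration.
```python
# Illustrative sketch only: alternating a low-level solver for the regularizer
# with a gradient step on the network. NOT the paper's exact method.
import torch
import torch.nn.functional as F

def solve_regularizer(probs: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for a strong low-level solver (e.g. a discrete
    graph-cut solver) minimizing the regularized loss near the current
    prediction. Here it simply returns the arg-max labeling as a placeholder;
    a trust-region variant would also keep the proposal close to `probs`."""
    return probs.argmax(dim=1)                       # (B, H, W) hard labels

def training_step(model, optimizer, images, scribbles, ignore_index=255):
    """One weakly supervised step: partial cross-entropy on scribbled pixels
    plus cross-entropy toward the labels proposed by the low-level solver."""
    logits = model(images)                           # (B, C, H, W)
    probs = torch.softmax(logits, dim=1)

    # 1) The solver proposes a labeling for the regularizer (no gradients).
    with torch.no_grad():
        proposed = solve_regularizer(probs)          # (B, H, W)

    # 2) Gradient step: supervised term on scribbled pixels ...
    loss_scribble = F.cross_entropy(logits, scribbles, ignore_index=ignore_index)
    # ... plus a term pulling the network toward the solver's proposal.
    loss_proposal = F.cross_entropy(logits, proposed)

    loss = loss_scribble + loss_proposal
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
In the paper, the proposal step is handled by dedicated solvers for the specific regularizer (including discrete ones); the arg-max placeholder above only marks where such a solver would plug in.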
Related papers
- Soft Self-labeling and Potts Relaxations for Weakly-Supervised Segmentation [9.394359851234201]
We consider weakly supervised segmentation where only a fraction of pixels have ground truth labels (scribbles).
We focus on a self-labeling approach optimizing relaxations of the standard unsupervised CRF/Potts loss on unlabeled pixels.
arXiv Detail & Related papers (2025-07-02T13:52:34Z)
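A minimal sketch, assuming PyTorch, of the kind of relaxed Potts/CRF term the entry above refers to: neighbouring pixels are encouraged to carry similar soft labels. The quadratic form and the unweighted 4-connected neighbourhood are assumptions for illustration, not the paper's exact relaxation.
```python
import torch

def relaxed_potts_loss(probs: torch.Tensor) -> torch.Tensor:
    """probs: (B, C, H, W) softmax outputs.
    Returns a scalar quadratic Potts relaxation: mean squared difference
    between the soft labels of 4-connected neighbouring pixels."""
    dh = probs[:, :, 1:, :] - probs[:, :, :-1, :]   # vertical neighbours
    dw = probs[:, :, :, 1:] - probs[:, :, :, :-1]   # horizontal neighbours
    return (dh ** 2).mean() + (dw ** 2).mean()
```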
- Image-level Regression for Uncertainty-aware Retinal Image Segmentation [3.7141182051230914]
We introduce a novel Uncertainty-Aware (SAUNA) transform, which adds pixel uncertainty to the ground truth.
Our results indicate that the integration of the SAUNA transform and these segmentation losses led to significant performance boosts for different segmentation models.
arXiv Detail & Related papers (2024-05-27T04:17:10Z)
- Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning approach for training semantic segmentation algorithms.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z)
- Improving Semi-Supervised Semantic Segmentation with Dual-Level Siamese Structure Network [7.438140196173472]
We propose a dual-level Siamese structure network (DSSN) for pixel-wise contrastive learning.
We introduce a novel class-aware pseudo-label selection strategy for weak-to-strong supervision.
Our proposed method achieves state-of-the-art results on two datasets.
arXiv Detail & Related papers (2023-07-26T03:30:28Z)
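A hedged sketch of one way the class-aware pseudo-label selection for weak-to-strong supervision mentioned in the DSSN entry above could look in PyTorch; the per-class confidence thresholds and the selection rule are assumptions, not the paper's implementation.
```python
import torch
import torch.nn.functional as F

def select_pseudo_labels(weak_logits, class_thresholds, ignore_index=255):
    """weak_logits: (B, C, H, W) predictions on the weakly augmented view.
    class_thresholds: (C,) confidence threshold per class (assumed).
    Returns (B, H, W) pseudo-labels; low-confidence pixels get ignore_index."""
    probs = torch.softmax(weak_logits, dim=1)
    confidence, labels = probs.max(dim=1)                  # (B, H, W)
    keep = confidence >= class_thresholds[labels]          # class-aware threshold
    return torch.where(keep, labels, torch.full_like(labels, ignore_index))

def weak_to_strong_loss(strong_logits, weak_logits, class_thresholds):
    """Cross-entropy of the strong view against pseudo-labels from the weak view."""
    targets = select_pseudo_labels(weak_logits.detach(), class_thresholds)
    return F.cross_entropy(strong_logits, targets, ignore_index=255)
```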
- Dynamic Feature Regularized Loss for Weakly Supervised Semantic Segmentation [37.43674181562307]
We propose a new regularized loss which utilizes both shallow and deep features that are dynamically updated.
Our approach achieves new state-of-the-art performances, outperforming other approaches by a significant margin with more than 6% mIoU increase.
arXiv Detail & Related papers (2021-08-03T05:11:00Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- Semi-supervised Semantic Segmentation with Directional Context-aware Consistency [66.49995436833667]
We focus on the semi-supervised segmentation problem where only a small set of labeled data is provided with a much larger collection of totally unlabeled images.
A preferred high-level representation should capture the contextual information while not losing self-awareness.
We present the Directional Contrastive Loss (DC Loss) to accomplish the consistency in a pixel-to-pixel manner.
arXiv Detail & Related papers (2021-06-27T03:42:40Z)
- A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z)
- Dissecting Supervised Contrastive Learning [24.984074794337157]
Minimizing cross-entropy over the softmax scores of a linear map composed with a high-capacity encoder is arguably the most popular choice for training neural networks on supervised learning tasks.
We show that one can directly optimize the encoder instead, to obtain equally (or even more) discriminative representations via a supervised variant of a contrastive objective.
arXiv Detail & Related papers (2021-02-17T15:22:38Z)
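A minimal PyTorch sketch of a supervised contrastive objective applied directly to encoder embeddings, illustrating the alternative to softmax cross-entropy discussed in the entry above; this is a generic SupCon-style loss, not necessarily the paper's exact formulation.
```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """embeddings: (N, D) encoder outputs, labels: (N,) class ids."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                          # (N, N) similarities
    # Exclude self-similarity on the diagonal.
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positives: other samples sharing the same label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)
    return -(pos_log_prob.sum(dim=1) / pos_count).mean()
```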
- Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation [169.82760468633236]
We propose to build the pixel-level cycle association between source and target pixel pairs.
Our method can be trained end-to-end in one stage and introduces no additional parameters.
arXiv Detail & Related papers (2020-10-31T00:11:36Z)
- Gradient Centralization: A New Optimization Technique for Deep Neural Networks [74.935141515523]
Gradient centralization (GC) operates directly on gradients by centralizing the gradient vectors to have zero mean.
GC can be viewed as a projected gradient descent method with a constrained loss function.
GC is very simple to implement and can be easily embedded into existing gradient based DNNs with only one line of code.
arXiv Detail & Related papers (2020-04-03T10:25:00Z)
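Gradient Centralization, described in the entry above, is concrete enough to sketch: subtract the per-output-channel mean from each multi-dimensional weight gradient before the optimizer step. A minimal PyTorch illustration (not the authors' released code) is below; it would be called between loss.backward() and optimizer.step().
```python
import torch

def centralize_gradients(model: torch.nn.Module) -> None:
    """Subtract from each conv/FC weight gradient its mean over all
    dimensions except the output-channel dimension (dim 0)."""
    for p in model.parameters():
        if p.grad is None or p.grad.dim() < 2:   # skip biases and 1-D params
            continue
        g = p.grad
        g -= g.mean(dim=tuple(range(1, g.dim())), keepdim=True)
```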
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.