Soft labelling for semantic segmentation: Bringing coherence to label
down-sampling
- URL: http://arxiv.org/abs/2302.13961v3
- Date: Mon, 19 Feb 2024 07:11:57 GMT
- Title: Soft labelling for semantic segmentation: Bringing coherence to label
down-sampling
- Authors: Roberto Alcover-Couso, Marcos Escudero-Vinolo, Juan C. SanMiguel and
Jose M. Martinez
- Abstract summary: In semantic segmentation, down-sampling is commonly performed due to limited resources.
We propose a novel framework for label down-sampling via soft-labeling.
This proposal also produces reliable annotations for under-represented semantic classes.
- Score: 1.797129499170058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In semantic segmentation, training data down-sampling is commonly performed
due to limited resources, the need to adapt image size to the model input, or
to improve data augmentation. This down-sampling typically employs different
strategies for the image data and the annotated labels. Such a discrepancy leads
to mismatches between the down-sampled color and label images. Hence, training
performance decreases significantly as the down-sampling factor increases. In
this paper, we bring together the down-sampling strategies for the image data
and the training labels. To that aim, we propose a novel framework for label
down-sampling via soft-labeling that better conserves label information after
down-sampling. This fully aligns the soft labels with the image data, preserving
the distribution of the sampled pixels. The proposal also produces reliable
annotations for under-represented semantic classes. Altogether, it allows
training competitive models at lower resolutions. Experiments show that the
proposal outperforms other down-sampling strategies. Moreover, state-of-the-art
performance is achieved on reference benchmarks while employing significantly
fewer computational resources than leading approaches. This proposal enables
competitive research for semantic segmentation under resource constraints.
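The core idea described in the abstract (a low-resolution soft label records the class distribution of the high-resolution pixels it covers, instead of a single hard class) can be sketched with block-wise average pooling of one-hot labels. This is an illustrative reconstruction from the abstract, not the authors' released code; function and variable names are ours:

```python
import numpy as np

def soft_label_downsample(labels: np.ndarray, factor: int, num_classes: int) -> np.ndarray:
    """Down-sample a hard label map (H, W) of class indices into a soft
    label map (H//factor, W//factor, num_classes): each output pixel holds
    the class distribution of the high-resolution pixels it covers."""
    h, w = labels.shape
    assert h % factor == 0 and w % factor == 0, "factor must divide both dimensions"
    # One-hot encode the label map: (H, W, C)
    one_hot = np.eye(num_classes, dtype=np.float64)[labels]
    # Group pixels into factor x factor blocks and average within each block.
    blocks = one_hot.reshape(h // factor, factor, w // factor, factor, num_classes)
    return blocks.mean(axis=(1, 3))  # each output pixel sums to 1 over classes

# Example: a 4x4 map where a single pixel belongs to a thin, rare class 1.
labels = np.zeros((4, 4), dtype=np.int64)
labels[1, 1] = 1
soft = soft_label_downsample(labels, factor=2, num_classes=2)
# Nearest-neighbour label down-sampling could drop class 1 entirely;
# the soft label keeps its 25% share inside the affected block.
print(soft[0, 0])  # [0.75 0.25]
```

Because each soft label preserves the per-block class proportions, under-represented classes survive down-sampling instead of being rounded away, which matches the abstract's claim about reliable annotations for rare classes.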
Related papers
- Heavy Labels Out! Dataset Distillation with Label Space Lightening [69.67681224137561]
HeLlO aims at building effective image-to-label projectors, with which synthetic labels can be generated online directly from synthetic images.
We demonstrate that with only about 0.003% of the original storage required for a complete set of soft labels, we achieve comparable performance to current state-of-the-art dataset distillation methods on large-scale datasets.
arXiv Detail & Related papers (2024-08-15T15:08:58Z)
- Towards Efficient and Accurate CT Segmentation via Edge-Preserving Probabilistic Downsampling [2.1465347972460367]
Downsampling images and labels, often necessitated by limited resources or to expedite network training, leads to the loss of small objects and thin boundaries.
This undermines the segmentation network's capacity to interpret images accurately and predict detailed labels, resulting in diminished performance compared to processing at original resolutions.
We introduce a novel method named Edge-preserving Probabilistic Downsampling (EPD).
It utilizes class uncertainty within a local window to produce soft labels, with the window size dictating the downsampling factor.
arXiv Detail & Related papers (2024-04-05T10:01:31Z)
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our intriguing findings highlight the usage of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Two-Step Active Learning for Instance Segmentation with Uncertainty and Diversity Sampling [20.982992381790034]
We propose a post-hoc active learning algorithm that integrates uncertainty-based sampling with diversity-based sampling.
Our proposed algorithm is not only simple and easy to implement, but it also delivers superior performance on various datasets.
arXiv Detail & Related papers (2023-09-28T03:40:30Z)
- Handling Image and Label Resolution Mismatch in Remote Sensing [10.009103959118931]
We show how to handle resolution mismatch between overhead imagery and ground-truth label sources.
We present a method that is supervised using low-resolution labels, but takes advantage of an exemplar set of high-resolution labels.
Our method incorporates region aggregation, adversarial learning, and self-supervised pretraining to generate fine-supervised predictions.
arXiv Detail & Related papers (2022-11-28T21:56:07Z)
- Category-Adaptive Label Discovery and Noise Rejection for Multi-label Image Recognition with Partial Positive Labels [78.88007892742438]
Training multi-label models with partial positive labels (MLR-PPL) attracts increasing attention.
Previous works regard unknown labels as negative and adopt traditional MLR algorithms.
We propose to explore semantic correlation among different images to facilitate the MLR-PPL task.
arXiv Detail & Related papers (2022-11-15T02:11:20Z)
- AdaWAC: Adaptively Weighted Augmentation Consistency Regularization for Volumetric Medical Image Segmentation [3.609538870261841]
We propose an adaptive weighting algorithm for volumetric medical image segmentation.
AdaWAC assigns label-dense samples to supervised cross-entropy loss and label-sparse samples to consistency regularization.
We empirically demonstrate that AdaWAC not only enhances segmentation performance and sample efficiency but also improves robustness to the subpopulation shift in labels.
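The routing AdaWAC describes (label-dense samples to the supervised cross-entropy loss, label-sparse samples to consistency regularization) can be sketched as below. Note this is a hard-assignment simplification of the paper's adaptive weighting; the density threshold and helper names are illustrative assumptions:

```python
import numpy as np

def label_density(mask: np.ndarray) -> float:
    """Fraction of labeled (non-background) pixels in a slice's mask."""
    return float((mask > 0).mean())

def route_samples(masks, threshold=0.05):
    """Split a batch of segmentation masks into label-dense samples
    (routed to supervised cross-entropy) and label-sparse samples
    (routed to consistency regularization)."""
    dense, sparse = [], []
    for i, mask in enumerate(masks):
        (dense if label_density(mask) >= threshold else sparse).append(i)
    return dense, sparse

masks = [np.zeros((8, 8), dtype=int) for _ in range(3)]
masks[0][2:6, 2:6] = 1  # 16/64 = 25% labeled  -> dense
masks[1][0, 0] = 1      # 1/64 ~= 1.6% labeled -> sparse
dense, sparse = route_samples(masks, threshold=0.05)
print(dense, sparse)  # [0] [1, 2]
```

In the actual method the assignment is learned rather than thresholded, but the sketch shows the intuition: slices with little or no annotation contribute through consistency rather than through a nearly empty cross-entropy term.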
arXiv Detail & Related papers (2022-10-04T20:28:38Z)
- An analysis of over-sampling labeled data in semi-supervised learning with FixMatch [66.34968300128631]
Most semi-supervised learning methods over-sample labeled data when constructing training mini-batches.
This paper studies whether this common practice improves learning and how.
We compare it to an alternative setting where each mini-batch is uniformly sampled from all the training data, labeled or not.
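The two mini-batch constructions being compared can be sketched as follows: over-sampling draws fixed quotas of labeled and unlabeled examples per batch, while uniform sampling draws from the pooled data so labeled examples appear only in proportion to their share. The quotas and pool sizes below are made up for the example:

```python
import random

def oversampled_batch(labeled, unlabeled, n_lab, n_unlab, rng):
    """Common practice: a fixed quota of labeled examples per mini-batch."""
    return rng.sample(labeled, n_lab) + rng.sample(unlabeled, n_unlab)

def uniform_batch(labeled, unlabeled, batch_size, rng):
    """Alternative studied in the paper: sample uniformly from all data."""
    return rng.sample(labeled + unlabeled, batch_size)

rng = random.Random(0)
labeled = [("x%d" % i, "y%d" % i) for i in range(40)]  # 40 labeled pairs
unlabeled = [("u%d" % i, None) for i in range(4000)]   # 4000 unlabeled inputs
batch = oversampled_batch(labeled, unlabeled, n_lab=32, n_unlab=32, rng=rng)
# Over-sampling: 50% of this batch is labeled even though labeled data
# is ~1% of the training set; a uniform batch would be ~1% labeled.
```

The question the paper studies is whether that deliberate imbalance in batch composition actually helps learning compared with the uniform alternative.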
arXiv Detail & Related papers (2022-01-03T12:22:26Z)
- Learning to Downsample for Segmentation of Ultra-High Resolution Images [6.432524678252553]
We show that learning the spatially varying downsampling strategy jointly with segmentation offers advantages in segmenting large images with limited computational budget.
Our method adapts the sampling density over different locations so that more samples are collected from the small important regions and less from the others.
We show on two public and one local high-resolution datasets that our method consistently learns sampling locations preserving more information and boosting segmentation accuracy over baseline methods.
arXiv Detail & Related papers (2021-09-22T23:04:59Z)
- Region-level Active Learning for Cluttered Scenes [60.93811392293329]
We introduce a new strategy that subsumes previous Image-level and Object-level approaches into a generalized, Region-level approach.
We show that this approach significantly decreases labeling effort and improves rare object search on realistic data with inherent class-imbalance and cluttered scenes.
arXiv Detail & Related papers (2021-08-20T14:02:38Z)
- Semi-supervised Semantic Segmentation with Directional Context-aware Consistency [66.49995436833667]
We focus on the semi-supervised segmentation problem where only a small set of labeled data is provided with a much larger collection of totally unlabeled images.
A preferred high-level representation should capture the contextual information while not losing self-awareness.
We present the Directional Contrastive Loss (DC Loss) to accomplish the consistency in a pixel-to-pixel manner.
arXiv Detail & Related papers (2021-06-27T03:42:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.