Learning from Partially Overlapping Labels: Image Segmentation under
Annotation Shift
- URL: http://arxiv.org/abs/2107.05938v1
- Date: Tue, 13 Jul 2021 09:22:24 GMT
- Title: Learning from Partially Overlapping Labels: Image Segmentation under
Annotation Shift
- Authors: Gregory Filbrandt, Konstantinos Kamnitsas, David Bernstein, Alexandra
Taylor, Ben Glocker
- Abstract summary: We propose several strategies for learning from partially overlapping labels in the context of abdominal organ segmentation.
We find that combining a semi-supervised approach with an adaptive cross entropy loss can successfully exploit heterogeneously annotated data.
- Score: 68.6874404805223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scarcity of high quality annotated images remains a limiting factor for
training accurate image segmentation models. While more and more annotated
datasets become publicly available, the number of samples in each individual
database is often small. Combining different databases to create larger amounts
of training data is appealing yet challenging due to the heterogeneity that results from differences in data acquisition and annotation processes, often
yielding incompatible or even conflicting information. In this paper, we
investigate and propose several strategies for learning from partially
overlapping labels in the context of abdominal organ segmentation. We find that
combining a semi-supervised approach with an adaptive cross entropy loss can
successfully exploit heterogeneously annotated data and substantially improve
segmentation accuracy compared to baseline and alternative approaches.
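The abstract does not spell out the loss, so the following is only a minimal sketch of one plausible "adaptive" cross-entropy for partially overlapping labels: per sample, the predicted probability of any class that the source database did not annotate is folded into the background before the loss is computed, so the model is not penalised for segmenting organs the annotators ignored. The function name, tensor shapes, and the per-sample class mask are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed formulation, not the paper's code) of an adaptive
# cross-entropy for partially overlapping labels in multi-organ segmentation.
import torch
import torch.nn.functional as F

def adaptive_cross_entropy(logits, target, annotated):
    """logits: (B, C, H, W) raw network scores, channel 0 = background.
    target: (B, H, W) integer labels; organs a database did not annotate
            appear as background (0) in its reference segmentations.
    annotated: (B, C) boolean mask, True where class c was labelled for that sample."""
    probs = F.softmax(logits, dim=1)
    mask = annotated.float()[:, :, None, None]               # broadcast over pixels
    # Fold the probability mass of un-annotated foreground classes into background,
    # so predicting such an organ on a nominally "background" pixel is not penalised.
    merged_bg = probs[:, :1] + (probs[:, 1:] * (1.0 - mask[:, 1:])).sum(dim=1, keepdim=True)
    merged = torch.cat([merged_bg, probs[:, 1:] * mask[:, 1:]], dim=1)
    return F.nll_loss(torch.log(merged.clamp_min(1e-7)), target)
```

With fully annotated data the mask is all True and this reduces to standard cross-entropy, which is the behaviour one would expect from such an adaptive term.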
Related papers
- Guidelines for Cerebrovascular Segmentation: Managing Imperfect Annotations in the context of Semi-Supervised Learning [3.231698506153459]
Supervised learning methods achieve excellent performance when fed with a sufficient amount of labeled data.
Such labels are typically highly time-consuming, error-prone and expensive to produce.
Semi-supervised learning approaches leverage both labeled and unlabeled data, and are very useful when only a small fraction of the dataset is labeled.
arXiv Detail & Related papers (2024-04-02T09:31:06Z)
- Self-similarity Driven Scale-invariant Learning for Weakly Supervised Person Search [66.95134080902717]
We propose a novel one-step framework, named Self-similarity driven Scale-invariant Learning (SSL).
We introduce a Multi-scale Exemplar Branch to guide the network in concentrating on the foreground and learning scale-invariant features.
Experiments on PRW and CUHK-SYSU databases demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2023-02-25T04:48:11Z)
- Consistency Regularisation in Varying Contexts and Feature Perturbations for Semi-Supervised Semantic Segmentation of Histology Images [14.005379068469361]
We present a consistency-based semi-supervised learning (SSL) approach that can help mitigate this challenge.
SSL models might also be susceptible to changing contexts and feature perturbations, exhibiting poor generalisation due to the limited training data.
We show that cross-consistency training makes the encoder features invariant to different perturbations and improves the prediction confidence.
arXiv Detail & Related papers (2023-01-30T18:21:57Z)
- Dense FixMatch: a simple semi-supervised learning method for pixel-wise prediction tasks [68.36996813591425]
We propose Dense FixMatch, a simple method for online semi-supervised learning of dense and structured prediction tasks.
We enable the application of FixMatch in semi-supervised learning problems beyond image classification by adding a matching operation on the pseudo-labels.
Dense FixMatch significantly improves results compared to supervised learning using only labeled data, approaching its performance with 1/4 of the labeled samples (a minimal pseudo-labelling sketch in this spirit is given after this list).
arXiv Detail & Related papers (2022-10-18T15:02:51Z)
- SLRNet: Semi-Supervised Semantic Segmentation Via Label Reuse for Human Decomposition Images [5.560471251954644]
We propose a semi-supervised method that reuses available labels for unlabeled images of a dataset by exploiting existing similarities.
We evaluate our method on a large dataset of human decomposition images and find that, while conceptually simple, it outperforms state-of-the-art consistency-based methods.
arXiv Detail & Related papers (2022-02-24T04:58:02Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Semi-weakly Supervised Contrastive Representation Learning for Retinal Fundus Images [0.2538209532048867]
We propose a semi-weakly supervised contrastive learning framework for representation learning using semi-weakly annotated images.
We empirically validate the transfer learning performance of SWCL on seven public retinal fundus datasets.
arXiv Detail & Related papers (2021-08-04T15:50:09Z)
- Semi-supervised Semantic Segmentation with Directional Context-aware Consistency [66.49995436833667]
We focus on the semi-supervised segmentation problem where only a small set of labeled data is provided with a much larger collection of totally unlabeled images.
A preferred high-level representation should capture the contextual information while not losing self-awareness.
We present the Directional Contrastive Loss (DC Loss) to accomplish the consistency in a pixel-to-pixel manner.
arXiv Detail & Related papers (2021-06-27T03:42:40Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
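Several of the related approaches above (most explicitly Dense FixMatch) build on pseudo-labelling with a confidence threshold, which is also the kind of semi-supervised component the main paper combines with its adaptive loss. The sketch below is an assumed, minimal per-pixel variant, not taken from any of the referenced implementations: predictions on a weakly augmented unlabelled image serve as pseudo-labels for a strongly augmented view, and only confident pixels contribute. It assumes photometric-only strong augmentation so pixels stay aligned; geometric augmentations would need the matching operation Dense FixMatch describes, which is not reproduced here.

```python
# Minimal sketch (assumption, not any of the referenced implementations) of
# FixMatch-style per-pixel pseudo-labelling for semi-supervised segmentation.
import torch
import torch.nn.functional as F

def pixel_pseudo_label_loss(model, weak, strong, threshold=0.95):
    """weak, strong: two augmented views of the same unlabelled batch, (B, 3, H, W).
    Assumes the strong augmentation is photometric only, so pixels remain aligned."""
    with torch.no_grad():
        probs = F.softmax(model(weak), dim=1)        # (B, C, H, W)
        conf, pseudo = probs.max(dim=1)              # per-pixel confidence and hard label
    logits_strong = model(strong)                    # gradients flow through this branch only
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")   # (B, H, W)
    return (loss * (conf >= threshold).float()).mean()
```

In practice such an unsupervised term is added, with a weighting factor, to the supervised loss computed on the labelled portion of the batch.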
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.