Learning to segment images with classification labels
- URL: http://arxiv.org/abs/1912.12533v2
- Date: Sun, 29 Nov 2020 19:56:32 GMT
- Title: Learning to segment images with classification labels
- Authors: Ozan Ciga, Anne L. Martel
- Abstract summary: We propose an architecture that can alleviate the requirements for segmentation-level ground truth by making use of image-level labels.
In our experiments, we show that, using only one segmentation-level annotation per class, we can achieve performance comparable to that of a fully annotated dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Two of the most common tasks in medical imaging are classification and
segmentation. Either task requires labeled data annotated by experts, which is
scarce and expensive to collect. Annotating data for segmentation is generally
considered to be more laborious as the annotator has to draw around the
boundaries of regions of interest, as opposed to assigning image patches a
class label. Furthermore, in tasks such as breast cancer histopathology, any
realistic clinical application often includes working with whole slide images,
whereas most publicly available training data are in the form of image patches,
which are given a class label. We propose an architecture that can alleviate
the requirements for segmentation-level ground truth by making use of
image-level labels to reduce the amount of time spent on data curation. In
addition, this architecture can help unlock the potential of previously
acquired image-level datasets on segmentation tasks by annotating a small
number of regions of interest. In our experiments, we show that, using only
one segmentation-level annotation per class, we can achieve performance
comparable to that of a fully annotated dataset.
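The core idea of mixing abundant image-level labels with a handful of pixel-level masks can be expressed as a combined training objective: samples with a mask contribute a per-pixel segmentation loss, while samples with only a class label contribute an image-level classification loss. The sketch below is an illustrative numpy toy, not the paper's actual architecture; the function names, field names, and loss weights are all hypothetical.

```python
import numpy as np

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the target class."""
    return -np.log(probs[target_idx] + 1e-12)

def combined_loss(batch, seg_weight=1.0, cls_weight=1.0):
    """Mix pixel-level and image-level supervision in one objective.

    Each sample is a dict with 'cls_probs' (predicted image-level class
    probabilities) and 'label' (image-level class index); samples that
    carry a pixel-wise annotation additionally provide 'seg_probs'
    (class x H x W predicted probabilities) and 'mask' (H x W ground
    truth class indices).
    """
    total, n = 0.0, 0
    for sample in batch:
        if sample.get("mask") is not None:
            # Dense supervision: average per-pixel cross-entropy.
            seg_probs, mask = sample["seg_probs"], sample["mask"]
            pixel_losses = [
                cross_entropy(seg_probs[:, i, j], mask[i, j])
                for i in range(mask.shape[0])
                for j in range(mask.shape[1])
            ]
            total += seg_weight * float(np.mean(pixel_losses))
        else:
            # Weak supervision: image-level cross-entropy only.
            total += cls_weight * cross_entropy(sample["cls_probs"], sample["label"])
        n += 1
    return total / n
```

In this setup, a dataset with many image-level labels and as little as one mask per class still provides a gradient signal on both heads of a shared network, which is the regime the abstract describes.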
Related papers
- Auxiliary Tasks Enhanced Dual-affinity Learning for Weakly Supervised
Semantic Segmentation [79.05949524349005]
We propose AuxSegNet+, a weakly supervised auxiliary learning framework that exploits the rich information in saliency maps.
We also propose a cross-task affinity learning mechanism to learn pixel-level affinities from the saliency and segmentation feature maps.
arXiv Detail & Related papers (2024-03-02T10:03:21Z) - Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm for training semantic segmentation models from query-point annotations.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z) - Open-world Semantic Segmentation via Contrasting and Clustering
Vision-Language Embedding [95.78002228538841]
We propose a new open-world semantic segmentation pipeline that makes the first attempt to learn to segment semantic objects of various open-world categories without any dense annotation effort.
Our method can directly segment objects of arbitrary categories, outperforming zero-shot segmentation methods that require data labeling on three benchmark datasets.
arXiv Detail & Related papers (2022-07-18T09:20:04Z) - Semantic Segmentation In-the-Wild Without Seeing Any Segmentation
Examples [34.97652735163338]
We propose a novel approach for creating semantic segmentation masks for every object.
Our method takes as input the image-level labels of the class categories present in the image.
The output of this stage provides pixel-level pseudo-labels, instead of the manual pixel-level labels required by supervised methods.
arXiv Detail & Related papers (2021-12-06T17:32:38Z) - Open-World Entity Segmentation [70.41548013910402]
We introduce a new image segmentation task, termed Entity Segmentation (ES), with the aim of segmenting all visual entities in an image without considering semantic category labels.
All semantically-meaningful segments are equally treated as categoryless entities and there is no thing-stuff distinction.
ES enables the following: (1) merging multiple datasets to form a large training set without the need to resolve label conflicts; (2) any model trained on one dataset can generalize exceptionally well to other datasets with unseen domains.
arXiv Detail & Related papers (2021-07-29T17:59:05Z) - Hierarchical Semantic Segmentation using Psychometric Learning [17.417302703539367]
We develop a novel approach to collect segmentation annotations from experts based on psychometric testing.
Our method consists of the psychometric testing procedure, active query selection, query enhancement, and a deep metric learning model.
We show the merits of our method with evaluations on synthetically generated images, aerial images, and histology images.
arXiv Detail & Related papers (2021-07-07T13:38:33Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z) - Manifold-driven Attention Maps for Weakly Supervised Segmentation [9.289524646688244]
We propose a manifold-driven attention-based network to enhance visually salient regions.
Our method generates superior attention maps directly during inference without the need of extra computations.
arXiv Detail & Related papers (2020-04-07T00:03:28Z) - Realizing Pixel-Level Semantic Learning in Complex Driving Scenes based
on Only One Annotated Pixel per Class [17.481116352112682]
We propose a new semantic segmentation task for complex driving scenes under a weakly supervised condition.
A three-step process is built for pseudo-label generation, which progressively implements an optimal feature representation for each category.
Experiments on the Cityscapes dataset demonstrate that the proposed method provides a feasible way to solve the weakly supervised semantic segmentation task.
arXiv Detail & Related papers (2020-03-10T12:57:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.