Deep Active Learning for Joint Classification & Segmentation with Weak
Annotator
- URL: http://arxiv.org/abs/2010.04889v2
- Date: Sat, 14 Nov 2020 04:09:55 GMT
- Title: Deep Active Learning for Joint Classification & Segmentation with Weak
Annotator
- Authors: Soufiane Belharbi, Ismail Ben Ayed, Luke McCaffrey, Eric Granger
- Abstract summary: CNN visualization and interpretation methods, like class-activation maps (CAMs), are typically used to highlight the image regions linked to class predictions.
We propose an active learning framework, which progressively integrates pixel-level annotations during training.
Our results indicate that, by simply using random sample selection, the proposed approach can significantly outperform state-of-the-art CAMs and AL methods.
- Score: 22.271760669551817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: CNN visualization and interpretation methods, like class-activation maps
(CAMs), are typically used to highlight the image regions linked to class
predictions. These models make it possible to simultaneously classify images and extract
class-dependent saliency maps, without the need for costly pixel-level
annotations. However, they typically yield segmentations with high
false-positive rates and, therefore, coarse visualizations, especially when
processing challenging images, as encountered in histology. To mitigate this
issue, we propose an active learning (AL) framework, which progressively
integrates pixel-level annotations during training. Given training data with
global image-level labels, our deep weakly-supervised learning model jointly
performs supervised image-level classification and active learning for
segmentation, integrating pixel annotations by an oracle. Unlike standard AL
methods that focus on sample selection, we also leverage large numbers of
unlabeled images via pseudo-segmentations (i.e., self-learning at the pixel
level), and integrate them with the oracle-annotated samples during training.
We report extensive experiments over two challenging benchmarks --
high-resolution medical images (histology GlaS data for colon cancer) and
natural images (CUB-200-2011 for bird species). Our results indicate that, by
simply using random sample selection, the proposed approach can significantly
outperform state-of-the-art CAMs and AL methods, with an identical
oracle-supervision budget. Our code is publicly available.
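The training loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: function and variable names are invented, and the paper's confidence scoring and pseudo-segmentation details differ.

```python
import random

def active_learning_round(unlabeled, cam_scores, budget, threshold=0.9):
    """One round of the AL loop sketched in the abstract:
    - randomly select `budget` samples for oracle pixel-level annotation
      (the paper reports strong results with plain random selection);
    - pseudo-label remaining samples whose CAM-based confidence is high
      (self-learning at the pixel level); the rest stay unlabeled."""
    pool = list(unlabeled)
    random.shuffle(pool)
    oracle_batch = set(pool[:budget])        # sent to the oracle for masks
    remaining = pool[budget:]
    pseudo = {s for s in remaining if cam_scores[s] >= threshold}
    still_unlabeled = set(remaining) - pseudo
    return oracle_batch, pseudo, still_unlabeled
```

Each round, the segmentation branch would then be trained on the union of oracle-annotated and pseudo-segmented samples, while the classifier keeps training on the global image-level labels.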
Related papers
- Multilevel Saliency-Guided Self-Supervised Learning for Image Anomaly
Detection [15.212031255539022]
Anomaly detection (AD) is a fundamental task in computer vision.
We propose CutSwap, which leverages saliency guidance to incorporate semantic cues for augmentation.
CutSwap achieves state-of-the-art AD performance on two mainstream AD benchmark datasets.
arXiv Detail & Related papers (2023-11-30T08:03:53Z)
- CSP: Self-Supervised Contrastive Spatial Pre-Training for
Geospatial-Visual Representations [90.50864830038202]
We present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images.
CSP significantly boosts model performance, achieving 10-34% relative improvement across various labeled-training-data sampling ratios.
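A dual-encoder contrastive setup like the one described can be illustrated with a symmetric InfoNCE objective, where matching (image, location) embedding pairs are positives and all other pairs in the batch are negatives. This is a generic numpy sketch, not CSP's actual loss, whose objectives differ in detail:

```python
import numpy as np

def info_nce(img_emb, loc_emb, temperature=0.1):
    """Symmetric InfoNCE between L2-normalised image and location embeddings.
    Positives sit on the diagonal of the (N, N) similarity matrix."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    loc = loc_emb / np.linalg.norm(loc_emb, axis=1, keepdims=True)
    logits = img @ loc.T / temperature
    idx = np.arange(len(img))                # index of each positive pair

    def xent(lg):
        # numerically stable cross-entropy against the diagonal targets
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the image->location and location->image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimising this pulls each image embedding toward its own geo-location embedding and away from the other locations in the batch.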
arXiv Detail & Related papers (2023-05-01T23:11:18Z)
- High-fidelity Pseudo-labels for Boosting Weakly-Supervised Segmentation [17.804090651425955]
Image-level weakly-supervised segmentation (WSSS) reduces the usually vast cost of data annotation by using surrogate segmentation masks during training.
Our work is based on two techniques for improving CAMs: importance sampling, which is a substitute for global average pooling (GAP), and the feature similarity loss.
We reformulate both techniques based on binomial posteriors of multiple independent binary problems.
This has two benefits: their performance is improved, and they become more general, resulting in an add-on method that can boost virtually any WSSS method.
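The contrast between GAP and an importance-style weighted pooling can be illustrated as below. This is a generic stand-in, not the paper's binomial-posterior formulation: the weights here are just a softmax over the CAM, so high-activation regions dominate the pooled descriptor.

```python
import numpy as np

def gap_pool(features):
    """Global average pooling: every spatial location weighted equally."""
    return features.mean(axis=(1, 2))            # (C, H, W) -> (C,)

def importance_pool(features, cam, temperature=1.0):
    """Weighted pooling with spatial weights from a softmax over the
    class-activation map (illustrative stand-in for importance sampling)."""
    w = np.exp(cam / temperature)
    w = w / w.sum()                              # (H, W) weights, sum to 1
    return (features * w).sum(axis=(1, 2))       # (C,)
```

With a flat CAM the two reduce to the same thing; with a sharply peaked CAM, the pooled feature collapses toward the descriptor at the peak.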
arXiv Detail & Related papers (2023-04-05T17:43:57Z)
- MoBYv2AL: Self-supervised Active Learning for Image Classification [57.4372176671293]
We present MoBYv2AL, a novel self-supervised active learning framework for image classification.
Our contribution lies in lifting MoBY, one of the most successful self-supervised learning algorithms, to the AL pipeline.
We achieve state-of-the-art results when compared to recent AL methods.
arXiv Detail & Related papers (2023-01-04T10:52:02Z)
- A Pixel-Level Meta-Learner for Weakly Supervised Few-Shot Semantic
Segmentation [40.27705176115985]
Few-shot semantic segmentation addresses the learning task in which only few images with ground truth pixel-level labels are available for the novel classes of interest.
We propose a novel meta-learning framework, which predicts pseudo pixel-level segmentation masks from a limited amount of data and their semantic labels.
Our proposed learning model can be viewed as a pixel-level meta-learner.
arXiv Detail & Related papers (2021-11-02T08:28:11Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z)
- Efficient Full Image Interactive Segmentation by Leveraging Within-image
Appearance Similarity [39.17599924322882]
We propose a new approach to interactive full-image semantic segmentation.
We leverage a key observation: propagation from labeled to unlabeled pixels does not necessarily require class-specific knowledge.
We build on this observation and propose an approach capable of jointly propagating pixel labels from multiple classes.
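Class-agnostic propagation of this kind can be sketched as nearest-neighbour assignment in appearance space: each unlabeled pixel takes the label of the most similar labeled pixel, regardless of class. This is an illustrative simplification, not the paper's actual propagation scheme.

```python
import numpy as np

def propagate_labels(features, labeled_idx, labels):
    """Assign each pixel the label of its nearest labeled pixel in feature
    (appearance) space. `features` is (N, D) pixel descriptors,
    `labeled_idx` indexes the labeled pixels, `labels` their labels."""
    labeled_feats = features[labeled_idx]        # (M, D)
    # Euclidean distance from every pixel to every labeled pixel: (N, M)
    d = np.linalg.norm(features[:, None, :] - labeled_feats[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                   # index into the labeled set
    return np.asarray(labels)[nearest]           # (N,) propagated labels
```

Because the rule only compares appearance, labels from multiple classes propagate jointly with no class-specific knowledge, matching the observation above.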
arXiv Detail & Related papers (2020-07-16T08:21:59Z)
- SCAN: Learning to Classify Images without Labels [73.69513783788622]
We advocate a two-step approach where feature learning and clustering are decoupled.
A self-supervised task from representation learning is employed to obtain semantically meaningful features.
We obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime.
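The decoupling described above can be conveyed with a minimal sketch: features come from a frozen self-supervised first step, and a separate clustering step groups them. Plain k-means stands in here for SCAN's clustering stage; the actual method trains a clustering head with a nearest-neighbour consistency loss rather than running k-means.

```python
import numpy as np

def cluster_features(feats, k=2, iters=20):
    """Minimal k-means over pre-learned features (stand-in for the second,
    clustering step of a two-step pipeline). Uses farthest-point
    initialisation to keep the sketch deterministic."""
    centers = [feats[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(feats - c, axis=1) for c in centers], axis=0)
        centers.append(feats[d.argmax()])        # farthest point so far
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(feats[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)                # nearest-center assignment
        for j in range(k):
            if (assign == j).any():
                centers[j] = feats[assign == j].mean(axis=0)
    return assign
```

The point of the two-step design is that the clustering step never touches raw pixels: it operates purely on semantically meaningful features produced by the frozen representation-learning step.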
arXiv Detail & Related papers (2020-05-25T18:12:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.