Semantic Segmentation with Active Semi-Supervised Learning
- URL: http://arxiv.org/abs/2203.10730v1
- Date: Mon, 21 Mar 2022 04:16:25 GMT
- Title: Semantic Segmentation with Active Semi-Supervised Learning
- Authors: Aneesh Rangnekar, Christopher Kanan, Matthew Hoffman
- Abstract summary: We propose a novel algorithm that combines active learning and semi-supervised learning.
Our method obtains over 95% of the network's performance on the full training set.
- Score: 23.79742108127707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Using deep learning, we now have the ability to create exceptionally good
semantic segmentation systems; however, collecting the prerequisite pixel-wise
annotations for training images remains expensive and time-consuming.
Therefore, it would be ideal to minimize the number of human annotations needed
when creating a new dataset. Here, we address this problem by proposing a novel
algorithm that combines active learning and semi-supervised learning. Active
learning is an approach for identifying the best unlabeled samples to annotate.
While there has been work on active learning for segmentation, most methods
require pixel-level annotation of every object in each image, rather than only the
most informative regions. We argue that this is inefficient. Instead, our active
learning approach aims to minimize the number of annotations per image. Our
method is enriched with semi-supervised learning, where we use pseudo labels
generated with a teacher-student framework to identify image regions that help
disambiguate confused classes. We also integrate mechanisms that enable better
performance on imbalanced label distributions, which have not been studied
previously for active learning in semantic segmentation. In experiments on the
CamVid and CityScapes datasets, our method obtains over 95% of the network's
performance on the full training set using less than 19% of the training data,
whereas the previous state of the art required 40% of the training data.
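The abstract outlines three ingredients: region-level (rather than image-level) acquisition, teacher pseudo-labels to find regions where confused classes need disambiguation, and class balancing for imbalanced label distributions. The snippet below is a minimal, hedged sketch of how such region-level scoring might be wired together, assuming a teacher network's softmax output is available for each image; the margin-based uncertainty, the fixed square regions, and every function and parameter name are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch (not the authors' code): region-level acquisition for active
# learning in semantic segmentation. Assumes a teacher network has already
# produced per-pixel class probabilities; all names below are illustrative.
import numpy as np


def region_scores(probs, region=32, class_weights=None):
    """Score fixed-size square regions of one image by mean pixel uncertainty.

    probs: (C, H, W) softmax output of a teacher model for one image.
    region: side length of the candidate regions offered for annotation.
    class_weights: optional (C,) weights (e.g. inverse class frequency) that
        upweight regions whose teacher pseudo-labels belong to rare classes.
    Returns a dict mapping a region's top-left (row, col) corner to its score.
    """
    _, h, w = probs.shape
    # Margin uncertainty: a small gap between the top-2 classes marks pixels
    # where two classes are confused -- exactly the regions worth annotating.
    sorted_p = np.sort(probs, axis=0)
    uncertainty = 1.0 - (sorted_p[-1] - sorted_p[-2])
    if class_weights is not None:
        pseudo = probs.argmax(axis=0)            # teacher pseudo-labels
        uncertainty = uncertainty * class_weights[pseudo]
    scores = {}
    for r in range(0, h - region + 1, region):
        for c in range(0, w - region + 1, region):
            scores[(r, c)] = float(uncertainty[r:r + region, c:c + region].mean())
    return scores


def select_regions(probs, budget=4, region=32, class_weights=None):
    """Pick the `budget` most informative regions of one image to annotate."""
    scores = region_scores(probs, region, class_weights)
    return sorted(scores, key=scores.get, reverse=True)[:budget]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(11, 128, 128))     # e.g. 11 CamVid classes
    probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    weights = np.ones(11)
    weights[5] = 3.0                             # upweight one rare class
    print(select_regions(probs, budget=4, class_weights=weights))
```

A top-1 minus top-2 margin is used here because it directly targets pixels where two classes compete; the paper's actual acquisition score, region definition, and class-balancing mechanism may differ.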
Related papers
- Two-Step Active Learning for Instance Segmentation with Uncertainty and Diversity Sampling [20.982992381790034]
We propose a post-hoc active learning algorithm that integrates uncertainty-based sampling with diversity-based sampling.
Our proposed algorithm is not only simple and easy to implement, but it also delivers superior performance on various datasets.
arXiv Detail & Related papers (2023-09-28T03:40:30Z)
- Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm for training semantic segmentation models.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z)
- Semantic Segmentation with Active Semi-Supervised Representation Learning [23.79742108127707]
We train an effective semantic segmentation algorithm with significantly less labeled data.
We extend the prior state-of-the-art S4AL algorithm by replacing its mean teacher approach for semi-supervised learning with a self-training approach.
We evaluate our method on the CamVid and CityScapes datasets, the de facto standards for active learning in semantic segmentation.
arXiv Detail & Related papers (2022-10-16T00:21:43Z)
- A Pixel-Level Meta-Learner for Weakly Supervised Few-Shot Semantic Segmentation [40.27705176115985]
Few-shot semantic segmentation addresses the learning task in which only a few images with ground-truth pixel-level labels are available for the novel classes of interest.
We propose a novel meta-learning framework, which predicts pseudo pixel-level segmentation masks from a limited amount of data and their semantic labels.
Our proposed learning model can be viewed as a pixel-level meta-learner.
arXiv Detail & Related papers (2021-11-02T08:28:11Z)
- AugNet: End-to-End Unsupervised Visual Representation Learning with Image Augmentation [3.6790362352712873]
We propose AugNet, a new deep learning training paradigm to learn image features from a collection of unlabeled pictures.
Our experiments demonstrate that the method is able to represent images in a low-dimensional space.
Unlike many deep-learning-based image retrieval algorithms, our approach does not require access to external annotated datasets.
arXiv Detail & Related papers (2021-06-11T09:02:30Z)
- Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets [90.61266099147053]
We investigate efficient annotation strategies for collecting multi-class classification labels for a large collection of images.
We propose modifications and best practices aimed at minimizing human labeling effort.
Simulated experiments on a 125k-image subset of ImageNet100 show that it can be annotated to 80% top-1 accuracy with 0.35 annotations per image on average.
arXiv Detail & Related papers (2021-04-26T16:29:32Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersections of the pseudo-labels generated from different augmentations of the same image.
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
- Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z)
- SCAN: Learning to Classify Images without Labels [73.69513783788622]
We advocate a two-step approach where feature learning and clustering are decoupled.
A self-supervised task from representation learning is employed to obtain semantically meaningful features.
We obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime.
arXiv Detail & Related papers (2020-05-25T18:12:33Z)
- Naive-Student: Leveraging Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation [57.68890534164427]
In this work, we ask whether we can leverage semi-supervised learning on unlabeled video sequences and extra images to improve performance on urban scene segmentation.
We simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data (a minimal sketch of this loop appears below).
Our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results on all three Cityscapes benchmarks.
arXiv Detail & Related papers (2020-05-20T18:00:05Z)
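To close the list, here is a minimal sketch of the iterative pseudo-labeling loop described in the Naive-Student entry above: train a teacher on human labels, pseudo-label the unlabeled frames, then retrain a student on the union and repeat. The `train` and `predict_masks` callables and all argument names are placeholders, not the paper's API.

```python
# Hedged sketch of the iterative pseudo-labeling loop described in the
# Naive-Student entry above. `train`, `predict_masks`, and the dataset
# arguments are placeholders, not the paper's actual API.
def naive_student_loop(labeled, unlabeled, train, predict_masks, iterations=3):
    """Return the final student after `iterations` rounds of self-training.

    labeled:       list of (image, mask) pairs with human annotations.
    unlabeled:     list of images (e.g. extra video frames) without labels.
    train:         callable(pairs) -> model, trains a segmentation network.
    predict_masks: callable(model, images) -> list of predicted masks.
    """
    model = train(labeled)                  # teacher trained on human labels only
    for _ in range(iterations):
        pseudo_masks = predict_masks(model, unlabeled)
        pseudo_pairs = list(zip(unlabeled, pseudo_masks))
        # The next student sees both human-annotated and pseudo-labeled data.
        model = train(labeled + pseudo_pairs)
    return model
```

In the actual system, training and pseudo-label generation involve a full segmentation network and test-time refinements; they are kept abstract here so the iterative structure stays in focus.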
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.