Revisiting Deep Active Learning for Semantic Segmentation
- URL: http://arxiv.org/abs/2302.04075v1
- Date: Wed, 8 Feb 2023 14:23:37 GMT
- Title: Revisiting Deep Active Learning for Semantic Segmentation
- Authors: Sudhanshu Mittal, Joshua Niemeijer, Jörg P. Schäfer, Thomas Brox
- Abstract summary: We show that the data distribution is decisive for the performance of the various active learning objectives proposed in the literature.
We demonstrate that the integration of semi-supervised learning with active learning can improve performance when the two objectives are aligned.
- Score: 37.3546941940388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning automatically selects samples for annotation from a data pool
to achieve maximum performance with minimum annotation cost. This is
particularly critical for semantic segmentation, where annotations are costly.
In this work, we show in the context of semantic segmentation that the data
distribution is decisive for the performance of the various active learning
objectives proposed in the literature. Particularly, redundancy in the data, as
it appears in most driving scenarios and video datasets, plays a large role. We
demonstrate that the integration of semi-supervised learning with active
learning can improve performance when the two objectives are aligned. Our
experimental study shows that current active learning benchmarks for
segmentation in driving scenarios are not realistic since they operate on data
that is already curated for maximum diversity. Accordingly, we propose a more
realistic evaluation scheme in which the value of active learning becomes
clearly visible, both by itself and in combination with semi-supervised
learning.
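The pool-based selection loop the abstract describes can be sketched as a minimal uncertainty-sampling step. This is an illustrative sketch only, not the paper's method: the toy "model" is just a mapping from sample ids to mock confidence scores, and all names are hypothetical.

```python
def select_for_annotation(confidence, unlabeled_pool, budget):
    """One active learning step: from the unlabeled pool, pick the
    `budget` samples the model is least confident about, i.e. the
    ones whose annotation is expected to help most.

    `confidence` is a hypothetical stand-in for a segmentation
    model's per-image confidence (e.g. mean max-softmax over pixels),
    here just a dict from sample id to a score in [0, 1].
    """
    # Rank by ascending confidence: most uncertain samples first.
    ranked = sorted(unlabeled_pool, key=lambda s: confidence.get(s, 0.5))
    return ranked[:budget]

# Toy pool of image ids with mock confidence scores.
confidence = {"img_a": 0.95, "img_b": 0.30, "img_c": 0.55, "img_d": 0.10}
pool = ["img_a", "img_b", "img_c", "img_d"]

# Select the 2 most uncertain images to send for annotation.
picked = select_for_annotation(confidence, pool, budget=2)
```

In a real setup, the selected samples would be annotated and moved to the labeled set, the model retrained, and the loop repeated; the paper's point is that which acquisition objective works best depends strongly on how redundant the underlying data pool is.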
Related papers
- Active Prompt Learning with Vision-Language Model Priors [9.173468790066956]
We introduce a class-guided clustering that leverages the pre-trained image and text encoders of vision-language models.
We propose a budget-saving selective querying based on adaptive class-wise thresholds.
arXiv Detail & Related papers (2024-11-23T02:34:33Z)
- Vocabulary-Defined Semantics: Latent Space Clustering for Improving In-Context Learning [32.178931149612644]
In-context learning enables language models to adapt to downstream data or new tasks using a few samples as demonstrations within the prompts.
However, the performance of in-context learning can be unstable depending on the quality, format, or order of demonstrations.
We propose a novel approach, "vocabulary-defined semantics".
arXiv Detail & Related papers (2024-01-29T14:29:48Z)
- BAL: Balancing Diversity and Novelty for Active Learning [53.289700543331925]
We introduce a novel framework, Balancing Active Learning (BAL), which constructs adaptive sub-pools to balance diverse and uncertain data.
Our approach outperforms all established active learning methods on widely recognized benchmarks by 1.20%.
arXiv Detail & Related papers (2023-12-26T08:14:46Z)
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
- Reinforcement Learning from Passive Data via Latent Intentions [86.4969514480008]
We show that passive data can still be used to learn features that accelerate downstream RL.
Our approach learns from passive data by modeling intentions.
Our experiments demonstrate the ability to learn from many forms of passive data, including cross-embodiment video data and YouTube videos.
arXiv Detail & Related papers (2023-04-10T17:59:05Z)
- Learning Context-aware Classifier for Semantic Segmentation [88.88198210948426]
In this paper, contextual hints are exploited via learning a context-aware classifier.
Our method is model-agnostic and can be easily applied to generic segmentation models.
With only negligible additional parameters and +2% inference time, a decent performance gain is achieved on both small and large models.
arXiv Detail & Related papers (2023-03-21T07:00:35Z)
- Semantic Segmentation with Active Semi-Supervised Representation Learning [23.79742108127707]
We train an effective semantic segmentation algorithm with significantly less labeled data.
We extend the prior state-of-the-art S4AL algorithm by replacing its mean teacher approach for semi-supervised learning with a self-training approach.
We evaluate our method on the CamVid and CityScapes datasets, the de facto standards for active learning in semantic segmentation.
arXiv Detail & Related papers (2022-10-16T00:21:43Z)
- Active Pointly-Supervised Instance Segmentation [106.38955769817747]
We present an economic active learning setting, named active pointly-supervised instance segmentation (APIS).
APIS starts with box-level annotations and iteratively samples a point within the box and asks if it falls on the object.
The model developed with these strategies yields consistent performance gain on the challenging MS-COCO dataset.
arXiv Detail & Related papers (2022-07-23T11:25:24Z)
- SIMILAR: Submodular Information Measures Based Active Learning In Realistic Scenarios [1.911678487931003]
SIMILAR is a unified active learning framework using recently proposed submodular information measures (SIM) as acquisition functions.
We show that SIMILAR significantly outperforms existing active learning algorithms by as much as 5% - 18% in the case of rare classes and 5% - 10% in the case of out-of-distribution data.
arXiv Detail & Related papers (2021-07-01T19:49:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.