DEAL: Difficulty-aware Active Learning for Semantic Segmentation
- URL: http://arxiv.org/abs/2010.08705v1
- Date: Sat, 17 Oct 2020 03:25:25 GMT
- Title: DEAL: Difficulty-aware Active Learning for Semantic Segmentation
- Authors: Shuai Xie, Zunlei Feng, Ying Chen, Songtao Sun, Chao Ma and Mingli Song
- Abstract summary: Active learning aims to address the paucity of labeled data by finding the most informative samples.
We propose a semantic Difficulty-awarE Active Learning (DEAL) network composed of two branches: the common segmentation branch and the semantic difficulty branch.
In the latter branch, supervised by the segmentation error between the segmentation result and the ground truth (GT), a pixel-wise probability attention module is introduced to learn semantic difficulty scores for different semantic areas.
Two acquisition functions are devised to select the most valuable samples with semantic difficulty.
- Score: 33.96850316081623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning aims to address the paucity of labeled data by finding the
most informative samples. However, when applied to semantic segmentation,
existing methods ignore the segmentation difficulty of different semantic
areas, which leads to poor performance on those hard semantic areas such as
tiny or slender objects. To deal with this problem, we propose a semantic
Difficulty-awarE Active Learning (DEAL) network composed of two branches: the
common segmentation branch and the semantic difficulty branch. For the latter
branch, supervised by the segmentation error between the segmentation result
and the ground truth (GT), a pixel-wise probability attention module is introduced to learn
the semantic difficulty scores for different semantic areas. Finally, two
acquisition functions are devised to select the most valuable samples with
semantic difficulty. Competitive results on semantic segmentation benchmarks
demonstrate that DEAL achieves state-of-the-art active learning performance and
improves the performance of the hard semantic areas in particular.
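As a rough illustration of the two-branch design and a difficulty-weighted acquisition score described in the abstract, here is a minimal PyTorch-style sketch; the attention form, the difficulty head, and the acquisition function below are plausible stand-ins under stated assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class DifficultyAwareSegNet(nn.Module):
    """Two-branch sketch: a common segmentation branch plus a semantic
    difficulty branch supervised by the per-pixel segmentation error."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                      # any dense feature extractor
        self.seg_head = nn.Conv2d(feat_dim, num_classes, 1)
        # Assumed form of the probability attention: project the class
        # probability map back to feature space and use it as a gate.
        self.att_proj = nn.Conv2d(num_classes, feat_dim, 1)
        self.diff_head = nn.Conv2d(feat_dim, 1, 1)

    def forward(self, x):
        feats = self.backbone(x)                      # (B, D, H, W)
        seg_logits = self.seg_head(feats)             # (B, C, H, W)
        probs = seg_logits.softmax(dim=1)
        attended = feats * torch.sigmoid(self.att_proj(probs))
        difficulty = torch.sigmoid(self.diff_head(attended))  # (B, 1, H, W)
        return seg_logits, difficulty


def difficulty_target(seg_logits, gt):
    """Per-pixel error map (1 where the prediction disagrees with GT);
    usable e.g. with a BCE loss to supervise the difficulty branch."""
    pred = seg_logits.argmax(dim=1)
    return (pred != gt).float().unsqueeze(1)


def acquisition_score(seg_logits, difficulty):
    """One plausible acquisition function: difficulty-weighted predictive
    entropy, giving one score per unlabeled image."""
    probs = seg_logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1, keepdim=True)
    return (entropy * difficulty).mean(dim=(1, 2, 3))
```

In an active-learning round, every unlabeled image would be scored this way and the top-ranked images sent for annotation.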
Related papers
- Frequency-based Matcher for Long-tailed Semantic Segmentation [22.199174076366003]
We focus on a relatively under-explored task setting, long-tailed semantic segmentation (LTSS).
We propose a dual-metric evaluation system and construct the LTSS benchmark to demonstrate the performance of semantic segmentation methods and long-tailed solutions.
We also propose a transformer-based algorithm to improve LTSS, frequency-based matcher, which solves the oversuppression problem by one-to-many matching.
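A minimal sketch of the one-to-many matching idea mentioned above, under stated assumptions: the cost matrix, the greedy assignment, and the frequency-based match count `k` are illustrative choices, not the paper's exact matcher.

```python
import numpy as np


def one_to_many_match(cost: np.ndarray, gt_classes: np.ndarray,
                      class_freq: dict, max_k: int = 4):
    """Greedy one-to-many matching sketch.

    cost:       (num_preds, num_gt) matching cost between predictions and GT masks.
    gt_classes: (num_gt,) class id of each GT mask.
    class_freq: class id -> frequency in [0, 1]; rarer (tail) classes receive
                more matches so their predictions are not over-suppressed.
    Returns a list of (pred_idx, gt_idx) pairs.
    """
    assigned = set()
    pairs = []
    for g in range(cost.shape[1]):
        # Assumed rule: the match count grows as the class frequency shrinks.
        k = max(1, min(max_k, round(max_k * (1.0 - class_freq[int(gt_classes[g])]))))
        taken = 0
        for p in np.argsort(cost[:, g]):              # cheapest predictions first
            if taken >= k:
                break
            if int(p) not in assigned:
                assigned.add(int(p))
                pairs.append((int(p), g))
                taken += 1
    return pairs
```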
arXiv Detail & Related papers (2024-06-06T09:57:56Z)
- Auxiliary Tasks Enhanced Dual-affinity Learning for Weakly Supervised Semantic Segmentation [79.05949524349005]
We propose AuxSegNet+, a weakly supervised auxiliary learning framework to explore the rich information from saliency maps.
We also propose a cross-task affinity learning mechanism to learn pixel-level affinities from the saliency and segmentation feature maps.
arXiv Detail & Related papers (2024-03-02T10:03:21Z) - Semantic Connectivity-Driven Pseudo-labeling for Cross-domain
Segmentation [89.41179071022121]
Self-training is a prevailing approach in cross-domain semantic segmentation.
We propose a novel approach called Semantic Connectivity-driven pseudo-labeling.
This approach formulates pseudo-labels at the connectivity level and thus can facilitate learning structured and low-noise semantics.
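A rough sketch of pseudo-labeling at the connectivity level, assuming connected components of the predicted label map as the connectivity unit; the confidence threshold and size filter are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np
from scipy import ndimage


def connectivity_pseudo_labels(probs, conf_thr=0.9, min_size=64, ignore_index=255):
    """Assign pseudo-labels per connected component instead of per pixel.

    probs: (C, H, W) softmax output for one unlabeled target image.
    A component is kept only if it is large enough and its mean confidence is
    high enough; everything else is set to ignore_index, which yields the
    structured, low-noise pseudo-labels described above.
    """
    pred = probs.argmax(axis=0)
    conf = probs.max(axis=0)
    pseudo = np.full(pred.shape, ignore_index, dtype=np.int64)
    for c in np.unique(pred):
        components, n = ndimage.label(pred == c)      # 4-connected regions
        for comp_id in range(1, n + 1):
            comp = components == comp_id
            if comp.sum() >= min_size and conf[comp].mean() >= conf_thr:
                pseudo[comp] = c
    return pseudo
```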
arXiv Detail & Related papers (2023-12-11T12:29:51Z)
- Learning Context-aware Classifier for Semantic Segmentation [88.88198210948426]
In this paper, contextual hints are exploited via learning a context-aware classifier.
Our method is model-agnostic and can be easily applied to generic segmentation models.
With only negligible additional parameters and +2% inference time, a decent performance gain is achieved on both small and large models.
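One plausible reading of a context-aware classifier, sketched below, conditions the classifier on per-image class prototypes pooled from the feature map; this specific construction is an assumption for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextAwareClassifier(nn.Module):
    """Sketch: blend a static classifier with per-image contextual prototypes.

    A coarse prediction pools confidence-weighted class prototypes from the
    current feature map; cosine similarity to these prototypes provides the
    image-conditioned ("context-aware") part of the logits.
    """

    def __init__(self, feat_dim: int, num_classes: int, alpha: float = 0.5):
        super().__init__()
        self.static = nn.Conv2d(feat_dim, num_classes, 1, bias=False)
        self.alpha = alpha

    def forward(self, feats):                         # feats: (B, D, H, W)
        b, d, h, w = feats.shape
        static_logits = self.static(feats)            # (B, C, H, W)
        attn = static_logits.softmax(dim=1).flatten(2)          # (B, C, HW)
        f = feats.flatten(2)                                     # (B, D, HW)
        protos = torch.einsum('bcn,bdn->bcd', attn, f)
        protos = protos / (attn.sum(-1, keepdim=True) + 1e-6)    # (B, C, D)
        ctx_logits = torch.einsum('bcd,bdn->bcn',
                                  F.normalize(protos, dim=-1),
                                  F.normalize(f, dim=1)).reshape(b, -1, h, w)
        return self.alpha * static_logits + (1 - self.alpha) * ctx_logits
```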
arXiv Detail & Related papers (2023-03-21T07:00:35Z)
- SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation [111.61261419566908]
Deep neural networks (DNNs) are usually trained on a closed set of semantic classes.
They are ill-equipped to handle previously-unseen objects.
Detecting and localizing such objects is crucial for safety-critical applications such as perception for automated driving.
arXiv Detail & Related papers (2021-04-30T07:58:19Z)
- Exposing Semantic Segmentation Failures via Maximum Discrepancy Competition [102.75463782627791]
We take steps toward answering the question by exposing failures of existing semantic segmentation methods in the open visual world.
Inspired by previous research on model falsification, we start from an arbitrarily large image set, and automatically sample a small image set by MAximizing the Discrepancy (MAD) between two segmentation methods.
The selected images have the greatest potential in falsifying either (or both) of the two methods.
A segmentation method whose failures are harder to expose in the MAD competition is considered better.
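A minimal sketch of the MAD-style selection loop described above; the per-pixel disagreement rate used as the discrepancy measure here is one simple choice and may differ from the paper's formulation.

```python
import numpy as np


def mad_select(image_set, model_a, model_b, budget):
    """Pick the images on which two segmentation methods disagree the most.

    model_a / model_b: callables mapping an image to an (H, W) label map.
    Discrepancy is measured as the fraction of pixels with different labels;
    the top-`budget` images are the strongest candidates for falsifying one
    (or both) of the two methods.
    """
    scores = [float(np.mean(model_a(img) != model_b(img))) for img in image_set]
    order = np.argsort(scores)[::-1]                  # most discrepant first
    return [image_set[i] for i in order[:budget]]
```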
arXiv Detail & Related papers (2021-02-27T16:06:25Z)
- Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation [90.87105131054419]
We present a framework for semi-supervised semantic segmentation, which is enhanced by self-supervised monocular depth estimation from unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset, where all three modules demonstrate significant performance gains.
arXiv Detail & Related papers (2020-12-19T21:18:03Z)
- Lookahead Adversarial Learning for Near Real-Time Semantic Segmentation [2.538209532048867]
We build a conditional adversarial network with a state-of-the-art segmentation model (DeepLabv3+) at its core.
We focus on semantic segmentation models that run fast at inference for near real-time field applications.
arXiv Detail & Related papers (2020-06-19T17:04:38Z)
- Unsupervised segmentation via semantic-apparent feature fusion [21.75371777263847]
This research proposes an unsupervised foreground segmentation method based on semantic-apparent feature fusion (SAFF).
Semantic features respond accurately to the key regions of the foreground object, while apparent features provide richer detail.
By fusing semantic and apparent features, and by cascading the modules for intra-image adaptive feature weight learning and inter-image common feature learning, the method achieves performance that significantly exceeds the baselines.
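A rough sketch of the intra-image fusion step under stated assumptions: the normalization and the adaptive weight below are illustrative, and the inter-image common feature learning stage is not shown.

```python
import numpy as np


def fuse_semantic_apparent(semantic, apparent, eps=1e-6):
    """Fuse a semantic response map with a low-level appearance map.

    semantic: (H, W) map locating the foreground object (e.g. a CAM-style map).
    apparent: (H, W) low-level map adding detail (e.g. contrast or edge energy).
    The per-image weight is adapted from the relative strength of the two maps,
    a stand-in for the intra-image adaptive feature weighting described above.
    """
    s = (semantic - semantic.min()) / (np.ptp(semantic) + eps)
    a = (apparent - apparent.min()) / (np.ptp(apparent) + eps)
    w = s.mean() / (s.mean() + a.mean() + eps)        # assumed adaptive weight
    fused = w * s + (1.0 - w) * a
    return (fused > fused.mean()).astype(np.uint8)    # coarse foreground mask
```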
arXiv Detail & Related papers (2020-05-21T08:28:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content it hosts (including all information) and is not responsible for any consequences of its use.