Maturity-Aware Active Learning for Semantic Segmentation with
Hierarchically-Adaptive Sample Assessment
- URL: http://arxiv.org/abs/2308.14904v1
- Date: Mon, 28 Aug 2023 21:13:04 GMT
- Title: Maturity-Aware Active Learning for Semantic Segmentation with
Hierarchically-Adaptive Sample Assessment
- Authors: Amirsaeed Yazdani, Xuelu Li, and Vishal Monga
- Abstract summary: "Maturity-Aware Distribution Breakdown-based Active Learning" (MADBAL) is an AL method that takes the different "sample" definitions into account jointly.
MADBAL makes significant performance gains even in the early AL stage, substantially reducing the training burden.
It outperforms state-of-the-art methods on Cityscapes and PASCAL VOC datasets as verified in our experiments.
- Score: 18.65352271757926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Active Learning (AL) for semantic segmentation is challenging due to heavy
class imbalance and different ways of defining "sample" (pixels, areas, etc.),
leaving the interpretation of the data distribution ambiguous. We propose
"Maturity-Aware Distribution Breakdown-based Active Learning" (MADBAL), an AL
method that builds a multi-view data distribution hierarchically, taking the
different "sample" definitions into account jointly so that the most impactful
pixels can be selected with a comprehensive view of the data. MADBAL also
features a novel uncertainty formulation in which auxiliary AL modules sense
feature maturity, whose weighted influence continuously informs uncertainty
estimation. As a result, MADBAL achieves large performance gains even in the
early AL stages, substantially reducing the training burden. It outperforms
state-of-the-art methods on the Cityscapes and PASCAL VOC datasets, as
verified in our extensive experiments.
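The abstract's maturity-weighted uncertainty can be caricatured in a few lines. The sketch below is hypothetical: the function names, the entropy cue, and the linear weighting scheme are our assumptions, not the paper's actual formulation. It fuses per-view uncertainty scores (e.g. pixel-, region-, image-level) with weights reflecting how mature each supporting module's features are, then queries the top-scoring samples.

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability vector (one uncertainty cue)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def maturity_weighted_uncertainty(view_uncertainties, maturity_weights):
    """Fuse per-view uncertainty scores with weights reflecting how 'mature'
    each supporting module's features are; weights are normalized so the
    fused score stays on a comparable scale."""
    total = sum(maturity_weights)
    return sum((w / total) * u for u, w in zip(view_uncertainties, maturity_weights))

def select_top_k(samples, scores, k):
    """Pick the k samples with the highest fused uncertainty for annotation."""
    ranked = sorted(zip(samples, scores), key=lambda t: t[1], reverse=True)
    return [s for s, _ in ranked[:k]]
```

As modules mature, their weights would grow, letting their uncertainty estimates dominate the fused score.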
Related papers
- Annotation-Efficient Polyp Segmentation via Active Learning [45.59503015577479]
We propose a deep active learning framework for annotation-efficient polyp segmentation.
In practice, we measure the uncertainty of each sample by examining the similarity between features masked by the prediction map of the polyp and the background area.
We show that our proposed method achieved state-of-the-art performance compared to other competitors on both a public dataset and a large-scale in-house dataset.
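The masked-feature similarity described above can be sketched as follows. This is a simplified stand-in, not the paper's implementation: all names are hypothetical, and we reduce each region to its mean feature vector. A sample whose predicted-polyp features look like its background features is hard to separate and scores high.

```python
def masked_mean(features, mask):
    """Average the feature vectors at positions where mask is 1."""
    selected = [f for f, m in zip(features, mask) if m]
    if not selected:
        return [0.0] * len(features[0])
    dim = len(selected[0])
    return [sum(f[d] for f in selected) / len(selected) for d in range(dim)]

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def foreground_background_uncertainty(features, pred_mask):
    """Score a sample by how similar its predicted-polyp (foreground)
    features are to its background features; ambiguous samples score high."""
    fg = masked_mean(features, pred_mask)
    bg = masked_mean(features, [1 - m for m in pred_mask])
    return cosine(fg, bg)
```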
arXiv Detail & Related papers (2024-03-21T12:25:17Z) - Variational Self-Supervised Contrastive Learning Using Beta Divergence [0.0]
We present a contrastive self-supervised learning method which is robust to data noise, grounded in the domain of variational methods.
We demonstrate the effectiveness of the proposed method through rigorous experiments including linear evaluation and fine-tuning scenarios with multi-label datasets in the face understanding domain.
arXiv Detail & Related papers (2023-09-05T17:21:38Z) - Uncertainty Estimation by Fisher Information-based Evidential Deep
Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
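The reweighting idea can be illustrated with a toy sketch. This is our own simplification, not the $\mathcal{I}$-EDL objective: we use the diagonal Fisher information of a categorical distribution ($I_{kk} = 1/p_k$) as a stand-in informativeness measure and scale per-sample loss terms by its normalized trace.

```python
def fisher_information_diag(probs, eps=1e-8):
    """Diagonal of the Fisher information for a categorical distribution:
    I_kk = 1 / p_k (a simple stand-in for the FIM used in the paper)."""
    return [1.0 / max(p, eps) for p in probs]

def reweighted_loss(per_sample_losses, per_sample_probs):
    """Scale each sample's loss by its normalized Fisher-information trace,
    so uncertain, information-rich samples dominate the objective."""
    traces = [sum(fisher_information_diag(p)) for p in per_sample_probs]
    total = sum(traces)
    weights = [t / total for t in traces]
    return sum(w * l for w, l in zip(weights, per_sample_losses))
```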
arXiv Detail & Related papers (2023-03-03T16:12:59Z) - Probing Contextual Diversity for Dense Out-of-Distribution Detection [33.95082228484776]
Detection of out-of-distribution (OoD) samples in the context of image classification has recently become an area of interest and active study.
We introduce MOoSe, an efficient strategy to leverage the various levels of context represented within semantic segmentation models.
We show that even a simple aggregation of multi-scale representations has consistently positive effects on OoD detection and uncertainty estimation.
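A minimal version of that multi-scale aggregation might look like the sketch below, assuming predictive entropy as the per-scale score and a plain mean as the aggregator; both choices, and the function names, are our assumptions rather than MOoSe's exact design.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Predictive entropy of a class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def multiscale_ood_score(logits_per_scale):
    """Average the predictive entropy of the logits produced at several
    scales of a segmentation model; a higher mean flags likely OoD pixels."""
    scores = [entropy(softmax(l)) for l in logits_per_scale]
    return sum(scores) / len(scores)
```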
arXiv Detail & Related papers (2022-08-30T12:10:30Z) - Meta-Causal Feature Learning for Out-of-Distribution Generalization [71.38239243414091]
This paper presents a balanced meta-causal learner (BMCL), which includes a balanced task generation module (BTG) and a meta-causal feature learning module (MCFL).
BMCL effectively identifies the class-invariant visual regions for classification and may serve as a general framework to improve the performance of the state-of-the-art methods.
arXiv Detail & Related papers (2022-08-22T09:07:02Z) - Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z) - MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
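The discrepancy-based acquisition can be sketched in a few lines. This is a hedged illustration, not MCDAL itself: we assume the two auxiliary heads output class probabilities, use their L1 distance as the disagreement score, and query the most-contested samples.

```python
def classifier_discrepancy(probs_a, probs_b):
    """L1 distance between the class-probability outputs of the two
    auxiliary classification heads; large disagreement marks an
    informative sample near a decision boundary."""
    return sum(abs(a - b) for a, b in zip(probs_a, probs_b))

def acquire(unlabeled, budget):
    """Rank unlabeled samples by head disagreement and take the top `budget`.
    `unlabeled` is a list of (sample_id, probs_head_a, probs_head_b)."""
    scored = [(sid, classifier_discrepancy(pa, pb)) for sid, pa, pb in unlabeled]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [sid for sid, _ in scored[:budget]]
```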
arXiv Detail & Related papers (2021-07-23T06:57:08Z) - Active Learning under Label Shift [80.65643075952639]
We introduce a "medial distribution" to incorporate a tradeoff between importance and class-balanced sampling.
We prove sample complexity and generalization guarantees for Mediated Active Learning under Label Shift (MALLS).
We empirically demonstrate MALLS scales to high-dimensional datasets and can reduce the sample complexity of active learning by 60% in deep active learning tasks.
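One way to read the "medial distribution" tradeoff is as a blend of two sampling weightings. The sketch below is purely illustrative and not the MALLS construction: we linearly interpolate importance weights and class-balanced weights with a hypothetical knob `alpha`, then renormalize.

```python
def medial_weights(importance_w, balanced_w, alpha=0.5):
    """Blend importance-sampling weights with class-balanced weights and
    renormalize; alpha=1 recovers pure importance sampling, alpha=0 pure
    class balancing (a sketch of a 'medial distribution')."""
    mixed = [alpha * i + (1 - alpha) * b for i, b in zip(importance_w, balanced_w)]
    total = sum(mixed)
    return [m / total for m in mixed]
```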
arXiv Detail & Related papers (2020-07-16T17:30:02Z) - Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.