A survey of active learning algorithms for supervised remote sensing
image classification
- URL: http://arxiv.org/abs/2104.07784v1
- Date: Thu, 15 Apr 2021 21:36:59 GMT
- Title: A survey of active learning algorithms for supervised remote sensing
image classification
- Authors: Devis Tuia, Michele Volpi, Loris Copa, Mikhail Kanevski, Jordi
Munoz-Mari
- Abstract summary: Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines.
Active learning aims at building efficient training sets by iteratively improving the model performance through sampling.
This paper reviews and tests the main families of active learning algorithms: committee, large margin and posterior probability-based.
- Score: 5.384800591054857
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Defining an efficient training set is one of the most delicate phases for the
success of remote sensing image classification routines. The complexity of the
problem, the limited temporal and financial resources, as well as the high
intraclass variance can make an algorithm fail if it is trained with a
suboptimal dataset. Active learning aims at building efficient training sets by
iteratively improving the model performance through sampling. A user-defined
heuristic ranks the unlabeled pixels according to a function of the uncertainty
of their class membership and then the user is asked to provide labels for the
most uncertain pixels. This paper reviews and tests the main families of active
learning algorithms: committee, large margin and posterior probability-based.
For each of them, the most recent advances in the remote sensing community are
discussed and some heuristics are detailed and tested. Several challenging
remote sensing scenarios are considered, including very high spatial resolution
and hyperspectral image classification. Finally, guidelines for choosing the
right architecture are provided for new and/or inexperienced users.
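As a concrete illustration of the posterior probability-based family reviewed in the survey, the sketch below runs a breaking-ties uncertainty heuristic with scikit-learn. The synthetic data, the logistic-regression classifier, the batch size of 10 and the number of iterations are illustrative assumptions, not the survey's exact experimental setup; in remote sensing each sample would be a pixel's spectral feature vector.

```python
# Minimal sketch of posterior probability-based active learning
# (breaking-ties heuristic). All modelling choices here are assumptions
# for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)

labeled = rng.choice(len(X), size=20, replace=False)    # small seed training set
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)    # candidate pool
model = LogisticRegression(max_iter=1000)

for it in range(10):
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[unlabeled])
    top2 = np.sort(proba, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]          # breaking ties: gap between the two
                                              # most probable classes (small = uncertain)
    query = unlabeled[np.argsort(margin)[:10]]            # 10 most uncertain pixels
    print(f"iter {it}: |train|={len(labeled)}, "
          f"pool accuracy={model.score(X[unlabeled], y[unlabeled]):.3f}")
    # the user (oracle) labels the queried pixels; here the ground truth stands in
    labeled = np.concatenate([labeled, query])
    unlabeled = np.setdiff1d(unlabeled, query)
```

Committee- and large-margin-based heuristics follow the same loop; only the ranking score changes (e.g., disagreement among committee members, or distance to the SVM decision boundary).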
Related papers
- DiverseNet: Decision Diversified Semi-supervised Semantic Segmentation Networks for Remote Sensing Imagery [17.690698736544626]
We propose DiverseNet which explores multi-head and multi-model semi-supervised learning algorithms by simultaneously enhancing precision and diversity during training.
The two proposed methods in the DiverseNet family, namely DiverseHead and DiverseModel, both achieve better semantic segmentation performance on four widely utilised remote sensing imagery datasets.
arXiv Detail & Related papers (2023-11-22T22:20:10Z) - Few-shot Image Classification based on Gradual Machine Learning [6.935034849731568]
Few-shot image classification aims to accurately classify unlabeled images using only a few labeled samples.
We propose a novel approach based on the non-i.i.d. paradigm of gradual machine learning (GML).
We show that the proposed approach can improve the SOTA performance by 1-5% in terms of accuracy.
arXiv Detail & Related papers (2023-07-28T12:30:41Z) - Towards Diverse Evaluation of Class Incremental Learning: A Representation Learning Perspective [67.45111837188685]
Class incremental learning (CIL) algorithms aim to continually learn new object classes from incrementally arriving data.
We experimentally analyze neural network models trained by CIL algorithms using various evaluation protocols in representation learning.
arXiv Detail & Related papers (2022-06-16T11:44:11Z) - Hybrid Optimized Deep Convolution Neural Network based Learning Model
for Object Detection [0.0]
Object identification is one of the most fundamental and difficult issues in computer vision.
In recent years, deep learning-based object detection techniques have grabbed the public's interest.
In this study, a unique deep learning classification technique is used to create an autonomous object detection system.
The suggested framework achieves a detection accuracy of 0.9864, which is higher than that of existing techniques.
arXiv Detail & Related papers (2022-03-02T04:39:37Z) - Learning Representations for Pixel-based Control: What Matters and Why? [22.177382138487566]
We present a simple baseline approach that can learn meaningful representations with no metric-based learning, no data augmentations, no world-model learning, and no contrastive learning.
Our results show that finer categorization of benchmarks on the basis of characteristics like density of reward, planning horizon of the problem, presence of task-irrelevant components, etc., is crucial in evaluating algorithms.
arXiv Detail & Related papers (2021-11-15T14:16:28Z) - Low-Regret Active learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z) - Hyperspherical embedding for novel class classification [1.5952956981784217]
We present a constraint-based approach applied to representations in the latent space under the normalized softmax loss.
We experimentally validate the proposed approach for the classification of unseen classes on different datasets using both metric learning and the normalized softmax loss.
Our results show that our proposed strategy not only can be trained efficiently on a larger set of classes, since it does not require pairwise learning, but also achieves better classification results than the metric learning strategies.
arXiv Detail & Related papers (2021-02-05T15:42:13Z) - Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z) - Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z) - Expert Training: Task Hardness Aware Meta-Learning for Few-Shot
Classification [62.10696018098057]
We propose an easy-to-hard expert meta-training strategy to arrange the training tasks properly.
A task hardness aware module is designed and integrated into the training procedure to estimate the hardness of a task.
Experimental results on the miniImageNet and tieredImageNetSketch datasets show that the meta-learners can obtain better results with our expert training strategy.
arXiv Detail & Related papers (2020-07-13T08:49:00Z) - SCAN: Learning to Classify Images without Labels [73.69513783788622]
We advocate a two-step approach where feature learning and clustering are decoupled.
A self-supervised task from representation learning is employed to obtain semantically meaningful features.
We obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime.
arXiv Detail & Related papers (2020-05-25T18:12:33Z)
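The decoupled two-step structure named in the SCAN entry above can be illustrated with a short, hedged sketch: features assumed to come from a self-supervised encoder (random vectors stand in here) are clustered with k-means, and clusters are mapped to classes with a Hungarian assignment for evaluation. This only illustrates the "features first, clustering second" idea, not SCAN's actual semantic clustering loss.

```python
# Hedged sketch of a decoupled pipeline: fixed (e.g. self-supervised)
# features, then clustering as a separate step. Not SCAN's method itself.
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def cluster_pretrained_features(features, n_classes, labels=None):
    """Cluster fixed features; optionally report accuracy via an optimal
    cluster-to-class assignment (Hungarian algorithm)."""
    assignments = KMeans(n_clusters=n_classes, n_init=10,
                         random_state=0).fit_predict(features)
    if labels is None:
        return assignments
    # confusion matrix between clusters and ground-truth classes
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for a, l in zip(assignments, labels):
        cm[a, l] += 1
    row, col = linear_sum_assignment(-cm)   # maximise matched counts
    acc = cm[row, col].sum() / len(labels)
    return assignments, acc

# toy usage: random "features" stand in for encoder outputs
feats = np.random.default_rng(0).normal(size=(500, 64))
fake_labels = np.random.default_rng(1).integers(0, 10, size=500)
_, acc = cluster_pretrained_features(feats, n_classes=10, labels=fake_labels)
print(f"cluster accuracy on random features: {acc:.3f}")
```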
This list is automatically generated from the titles and abstracts of the papers in this site.