Current Trends in Deep Learning for Earth Observation: An Open-source
Benchmark Arena for Image Classification
- URL: http://arxiv.org/abs/2207.07189v1
- Date: Thu, 14 Jul 2022 20:18:58 GMT
- Title: Current Trends in Deep Learning for Earth Observation: An Open-source
Benchmark Arena for Image Classification
- Authors: Ivica Dimitrovski, Ivan Kitanovski, Dragi Kocev, Nikola Simidjievski
- Abstract summary: 'AiTLAS: Benchmark Arena' is an open-source benchmark framework for evaluating state-of-the-art deep learning approaches for image classification.
We present a comprehensive comparative analysis of more than 400 models derived from nine different state-of-the-art architectures.
- Score: 7.511257876007757
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present 'AiTLAS: Benchmark Arena' -- an open-source benchmark framework
for evaluating state-of-the-art deep learning approaches for image
classification in Earth Observation (EO). To this end, we present a
comprehensive comparative analysis of more than 400 models derived from nine
different state-of-the-art architectures, and evaluate them on a variety of
multi-class and multi-label classification tasks from 22 datasets with
different sizes and properties. In addition to models trained entirely on these
datasets, we also benchmark models trained in the context of transfer learning,
leveraging pre-trained model variants, as is typically done in
practice. All presented approaches are general and can be easily extended to
many other remote sensing image classification tasks not considered in this
study. To ensure reproducibility and facilitate better usability and further
developments, all of the experimental resources including the trained models,
model configurations and processing details of the datasets (with their
corresponding splits used for training and evaluating the models) are publicly
available on the repository: https://github.com/biasvariancelabs/aitlas-arena.
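As a concrete illustration of the transfer-learning setup described above, here is a minimal sketch of fine-tuning an ImageNet-pretrained backbone for multi-label EO classification in PyTorch. This is the generic recipe, not code from the AiTLAS repository; the label count, learning rate, and the recent-torchvision weights API are placeholders and assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 17  # placeholder; set to the dataset's actual label count

# Start from an ImageNet-pretrained backbone (torchvision >= 0.13 weights
# API assumed) and swap the classification head for the EO label set.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_LABELS)

criterion = nn.BCEWithLogitsLoss()  # multi-label: one sigmoid per label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, targets):
    """One optimisation step; targets are {0,1} vectors of length NUM_LABELS."""
    optimizer.zero_grad()
    loss = criterion(model(images), targets.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```

For multi-class (single-label) datasets the same skeleton applies with `nn.CrossEntropyLoss` and integer class targets in place of the binary label vectors.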
Related papers
- Hierarchical Multi-Label Classification with Missing Information for Benthic Habitat Imagery [1.6492989697868894]
We show the capacity to conduct hierarchical multi-label (HML) training in scenarios with multiple levels of missing annotation information.
We find that, when using smaller one-hot image label datasets typical of local or regional scale benthic science projects, models pre-trained with self-supervision on a larger collection of in-domain benthic data outperform models pre-trained on ImageNet.
arXiv Detail & Related papers (2024-09-10T16:15:01Z)
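A minimal sketch of one way to train under missing label information, as the entry above describes: mask unobserved labels out of a binary cross-entropy loss. The convention that -1 marks a missing label is an assumption for illustration, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def masked_bce(logits, targets):
    """BCE over observed labels only; targets use -1 for 'missing'
    (a convention assumed here, not the paper's)."""
    observed = targets >= 0  # boolean mask of annotated label slots
    return F.binary_cross_entropy_with_logits(
        logits[observed], targets[observed].float())

logits = torch.randn(4, 10)              # batch of 4 items, 10 labels
targets = torch.randint(-1, 2, (4, 10))  # -1 = missing, 0/1 = observed
print(masked_bce(logits, targets))
```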
- Investigating Self-Supervised Methods for Label-Efficient Learning [27.029542823306866]
We study different self-supervised pretext tasks, namely contrastive learning, clustering, and masked image modelling, for their low-shot capabilities.
We introduce a framework involving both masked image modelling and clustering as pretext tasks, which performs better across all low-shot downstream tasks.
When testing the model on full scale datasets, we show performance gains in multi-class classification, multi-label classification and semantic segmentation.
arXiv Detail & Related papers (2024-06-25T10:56:03Z)
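As a toy illustration of the masked-image-modelling pretext task mentioned above: hide random patches and train a network to reconstruct them, scoring the loss only on the masked pixels. The tiny convolutional network below is a stand-in for a real encoder such as a ViT, and the patch size and mask ratio are arbitrary choices.

```python
import torch
import torch.nn as nn

def random_patch_mask(images, patch=16, ratio=0.6):
    """Hide a random subset of non-overlapping patches (mask=1 where hidden)."""
    b, c, h, w = images.shape
    hide = torch.rand(b, 1, h // patch, w // patch) < ratio
    mask = hide.repeat_interleave(patch, 2).repeat_interleave(patch, 3).float()
    return images * (1 - mask), mask

encoder_decoder = nn.Sequential(  # stand-in for a real MAE-style model
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1))

images = torch.randn(8, 3, 64, 64)
masked, mask = random_patch_mask(images)
recon = encoder_decoder(masked)
loss = ((recon - images) ** 2 * mask).sum() / mask.sum()  # masked pixels only
```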
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
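The fine-tuning step of the counterfactual pipeline above amounts to mixing the generated images into the training set. A sketch of just that step follows, with random tensors standing in for real and generated images; the language-guided generation itself is outside this snippet.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-in datasets: (image, label) pairs for the original training set
# and for the generated counterfactual images.
original = TensorDataset(torch.randn(100, 3, 224, 224),
                         torch.randint(0, 10, (100,)))
counterfactual = TensorDataset(torch.randn(20, 3, 224, 224),
                               torch.randint(0, 10, (20,)))

# Fine-tune the classifier on the union of both sets.
loader = DataLoader(ConcatDataset([original, counterfactual]),
                    batch_size=16, shuffle=True)
```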
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
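A plain logit-distillation step is the simplest instance of transferring knowledge between two pretrained models, sketched below. The paper's actual transfer method may well differ; the model pairing, temperature, and recent-torchvision weights API are assumptions here.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Two independently pretrained ImageNet classifiers (same label space).
teacher = models.resnet34(
    weights=models.ResNet34_Weights.IMAGENET1K_V1).eval()
student = models.mobilenet_v3_small(
    weights=models.MobileNet_V3_Small_Weights.IMAGENET1K_V1)

def distill_loss(x, T=2.0):
    """KL between temperature-softened teacher and student logits."""
    with torch.no_grad():
        t = teacher(x) / T
    s = student(x) / T
    return F.kl_div(F.log_softmax(s, -1), F.softmax(t, -1),
                    reduction="batchmean") * T * T
```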
- Fine-Grained ImageNet Classification in the Wild [0.0]
Robustness tests can uncover several vulnerabilities and biases which go unnoticed during the typical model evaluation stage.
In our work, we perform fine-grained classification on closely related categories, which are identified with the help of hierarchical knowledge.
arXiv Detail & Related papers (2023-03-04T12:25:07Z)
- Improving Label Quality by Jointly Modeling Items and Annotators [68.8204255655161]
We propose a fully Bayesian framework for learning ground truth labels from noisy annotators.
Our framework ensures scalability by factoring a generative, Bayesian soft clustering model over label distributions into the classic Dawid and Skene joint annotator-data model.
arXiv Detail & Related papers (2021-06-20T02:15:20Z)
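For reference, a compact EM implementation of the classic Dawid and Skene annotator model that the entry above builds on: alternately estimate each annotator's confusion matrix and each item's true-label posterior. This is the textbook algorithm, not the paper's Bayesian extension.

```python
import numpy as np

def dawid_skene(votes, K, iters=20):
    """votes[i, j] = annotator j's label for item i (-1 if unlabelled);
    K = number of classes. Returns the inferred label per item."""
    n, m = votes.shape
    # Init: per-item label distribution from (smoothed) majority vote.
    q = np.ones((n, K)) / K
    for i in range(n):
        for j in range(m):
            if votes[i, j] >= 0:
                q[i, votes[i, j]] += 1
    q /= q.sum(1, keepdims=True)
    for _ in range(iters):
        # M-step: per-annotator confusion matrices and class prior.
        pi = np.full((m, K, K), 1e-2)  # additive smoothing
        for i in range(n):
            for j in range(m):
                if votes[i, j] >= 0:
                    pi[j, :, votes[i, j]] += q[i]
        pi /= pi.sum(2, keepdims=True)
        prior = q.mean(0)
        # E-step: posterior over each item's true label.
        logq = np.log(prior)[None, :].repeat(n, 0)
        for i in range(n):
            for j in range(m):
                if votes[i, j] >= 0:
                    logq[i] += np.log(pi[j, :, votes[i, j]])
        q = np.exp(logq - logq.max(1, keepdims=True))
        q /= q.sum(1, keepdims=True)
    return q.argmax(1)

votes = np.array([[0, 0, 1], [1, 1, 1], [0, -1, 0]])  # 3 items, 3 annotators
print(dawid_skene(votes, K=2))
```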
- Benchmarking Representation Learning for Natural World Image Collections [13.918304838054846]
We present two new natural world visual classification datasets, iNat2021 and NeWT.
The former consists of 2.7M images from 10k different species uploaded by users of the citizen science application iNaturalist.
We benchmark the performance of representation learning algorithms on a suite of challenging natural world binary classification tasks that go beyond standard species classification.
We provide a comprehensive analysis of feature extractors trained with and without supervision on ImageNet and iNat2021, shedding light on the strengths and weaknesses of different learned features across a diverse set of tasks.
arXiv Detail & Related papers (2021-03-30T16:41:49Z)
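Analyses of feature extractors like the one above are often run as a linear probe: freeze the pretrained backbone, extract features, and fit a linear classifier on top. A sketch with placeholder data follows; whether the paper uses exactly this protocol is not stated in the summary.

```python
import torch
from torchvision import models
from sklearn.linear_model import LogisticRegression

# Freeze a pretrained backbone and expose its 2048-d pooled features.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

images = torch.randn(64, 3, 224, 224)        # placeholder images
labels = torch.randint(0, 2, (64,)).numpy()  # placeholder binary task

with torch.no_grad():
    feats = backbone(images).numpy()

probe = LogisticRegression(max_iter=1000).fit(feats, labels)
print(probe.score(feats, labels))
```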
- Deep Semi-Supervised Learning for Time Series Classification [1.096924880299061]
We investigate the transferability of state-of-the-art deep semi-supervised models from image to time series classification.
We show that these transferred semi-supervised models show significant performance gains over strong supervised, semi-supervised and self-supervised alternatives.
arXiv Detail & Related papers (2021-02-06T17:40:56Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We achieve new state-of-the-art results in both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- Improving QA Generalization by Concurrent Modeling of Multiple Biases [61.597362592536896]
Existing NLP datasets contain various biases that models can easily exploit to achieve high performance on the corresponding evaluation sets.
We propose a general framework for improving the performance on both in-domain and out-of-domain datasets by concurrent modeling of multiple biases in the training data.
We extensively evaluate our framework on extractive question answering with training data from various domains with multiple biases of different strengths.
arXiv Detail & Related papers (2020-10-07T11:18:49Z)
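One generic way to model several biases concurrently, sketched below, is to downweight training examples that bias-only models already answer confidently. This is a common debiasing recipe and an assumption here, not necessarily the paper's exact formulation.

```python
import torch

def debiased_loss(main_logits, bias_probs_list, targets):
    """bias_probs_list: per-bias-model probability assigned to the gold
    label. Examples that biases answer confidently get smaller weights."""
    ce = torch.nn.functional.cross_entropy(main_logits, targets,
                                           reduction="none")
    w = torch.ones_like(ce)
    for p_gold in bias_probs_list:   # product over all modelled biases
        w = w * (1.0 - p_gold)
    return (w * ce).mean()

main_logits = torch.randn(8, 4)
targets = torch.randint(0, 4, (8,))
bias_conf = [torch.rand(8)]          # one bias model's P(gold answer)
print(debiased_loss(main_logits, bias_conf, targets))
```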
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification aims to train models for new classes from only a limited number of labeled examples.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.