SatImNet: Structured and Harmonised Training Data for Enhanced Satellite Imagery Classification
- URL: http://arxiv.org/abs/2006.10623v2
- Date: Tue, 3 Nov 2020 22:44:00 GMT
- Title: SatImNet: Structured and Harmonised Training Data for Enhanced Satellite Imagery Classification
- Authors: Vasileios Syrris, Ondrej Pesek, Pierre Soille
- Abstract summary: We describe procedures of open-source training data management, integration, and data retrieval.
We propose SatImNet, a collection of open training data, structured and harmonized according to specific rules.
Two modelling approaches based on convolutional neural networks have been designed and configured to deal with satellite image classification and segmentation.
- Score: 0.32228025627337864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic supervised classification with complex modelling such as deep
neural networks requires the availability of representative training data sets.
While there exists a plethora of data sets that can be used for this purpose,
they are usually very heterogeneous and not interoperable. In this context, the
present work has a twofold objective: i) to describe procedures of open-source
training data management, integration, and data retrieval, and ii) to
demonstrate the practical use of varying source training data for remote
sensing image classification. For the former, we propose SatImNet, a collection
of open training data, structured and harmonized according to specific rules.
For the latter, two modelling approaches based on convolutional neural networks
have been designed and configured to deal with satellite image classification
and segmentation.
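The abstract names convolutional neural networks for classification and segmentation but does not spell out the architectures. As a point of reference only, the following minimal sketch (assuming PyTorch, four-band 64x64 patches and a generic number of land-cover classes, none of which are specified by the paper) shows the kind of CNN patch classifier such a setup typically involves; it is illustrative and not the authors' SatImNet models.

    # Minimal sketch of a CNN patch classifier for multispectral satellite tiles.
    # Assumptions (not from the paper): PyTorch, 4 input bands, 64x64 patches,
    # and a generic set of land-cover classes.
    import torch
    import torch.nn as nn

    class PatchClassifier(nn.Module):
        def __init__(self, in_bands: int = 4, n_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 64x64 -> 32x32
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 32x32 -> 16x16
                nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),              # global average pooling
            )
            self.head = nn.Linear(128, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(x).flatten(1))

    if __name__ == "__main__":
        model = PatchClassifier()
        patches = torch.randn(8, 4, 64, 64)   # a batch of 8 harmonised 4-band patches
        print(model(patches).shape)           # torch.Size([8, 10])

A segmentation variant would replace the global pooling and linear head with a decoder that upsamples back to the input resolution and predicts a class per pixel.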
Related papers
- Example-Based Explainable AI and its Application for Remote Sensing Image Classification [0.0]
We show, as an explanation, an instance from the training dataset that is similar to the input data to be inferred.
The concept was successfully demonstrated using a remote sensing image dataset from the Sentinel-2 satellite.
arXiv Detail & Related papers (2023-02-03T03:48:43Z)
- Self-supervised Pre-training for Semantic Segmentation in an Indoor Scene [8.357801312689622]
We propose RegConsist, a method for self-supervised pre-training of a semantic segmentation model.
We use a variant of contrastive learning to train a DCNN model for predicting semantic segmentation from RGB views in the target environment.
The proposed method outperforms models pre-trained on ImageNet and achieves competitive performance when using models that are trained for exactly the same task but on a different dataset.
arXiv Detail & Related papers (2022-10-04T20:10:14Z)
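The entry above says only that RegConsist uses "a variant of contrastive learning". For orientation, here is a minimal sketch of a generic InfoNCE-style contrastive loss between two batches of paired embeddings, assuming PyTorch; it illustrates the general idea rather than the exact RegConsist objective.

    # Generic InfoNCE-style contrastive loss between two batches of embeddings
    # (e.g. two views of the same scene). Illustrative only; not the exact
    # objective used by RegConsist.
    import torch
    import torch.nn.functional as F

    def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
        # z1, z2: (N, D) embeddings where z1[i] and z2[i] are two views of the same point.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature           # (N, N) scaled cosine similarities
        targets = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, targets)      # diagonal pairs are the positives

    if __name__ == "__main__":
        a, b = torch.randn(16, 128), torch.randn(16, 128)
        print(info_nce(a, b).item())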
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- Self-supervised Audiovisual Representation Learning for Remote Sensing Data [96.23611272637943]
We propose a self-supervised approach for pre-training deep neural networks in remote sensing.
This is done in a completely label-free manner by exploiting the correspondence between geo-tagged audio recordings and remote sensing imagery.
We show that our approach outperforms existing pre-training strategies for remote sensing imagery.
arXiv Detail & Related papers (2021-08-02T07:50:50Z)
- DeepSatData: Building large scale datasets of satellite images for training machine learning models [77.17638664503215]
This report presents design considerations for automatically generating satellite imagery datasets for training machine learning models.
We discuss issues faced from the point of view of deep neural network training and evaluation.
arXiv Detail & Related papers (2021-04-28T15:13:12Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on the evaluated downstream tasks across four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing for a neural network to learn both its size and topology during the course of a gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
- Weakly-supervised land classification for coastal zone based on deep convolutional neural networks by incorporating dual-polarimetric characteristics into training dataset [1.0494061710470493]
We explore the performance of DCNNs on semantic segmentation using spaceborne polarimetric synthetic aperture radar (PolSAR) datasets.
The semantic segmentation task using PolSAR data can be categorized as weakly supervised learning when the characteristics of SAR data and data annotating procedures are factored in.
Three DCNN models, namely SegNet, U-Net, and LinkNet, are implemented and evaluated.
arXiv Detail & Related papers (2020-03-30T17:32:49Z)
- SemI2I: Semantically Consistent Image-to-Image Translation for Domain Adaptation of Remote Sensing Data [7.577893526158495]
We propose a new data augmentation approach that transfers the style of test data to training data using generative adversarial networks.
Our semantic segmentation framework consists of first training a U-net on the real training data and then fine-tuning it on fake training data that has been stylized to match the test data by the proposed approach.
arXiv Detail & Related papers (2020-02-14T09:07:09Z)
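The SemI2I entry above describes a two-stage scheme: train a U-net on the real training data, then fine-tune it on fake training data restyled to look like the test data. The skeleton below sketches only that training schedule, assuming PyTorch, an existing segmentation model and pre-built real_loader / stylized_loader data loaders; the GAN-based style transfer itself is not shown.

    # Skeleton of the two-stage scheme described in the SemI2I entry:
    # (1) train a segmentation network on real training data,
    # (2) fine-tune it on fake training data re-styled to look like the test data.
    # The style-transfer step, the model and the data loaders are assumed to exist.
    import torch

    def run_stage(model, loader, optimizer, epochs, device="cpu"):
        # One ordinary supervised training stage with a pixel-wise cross-entropy loss.
        criterion = torch.nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for images, masks in loader:             # images: (B, C, H, W), masks: (B, H, W)
                images, masks = images.to(device), masks.to(device)
                optimizer.zero_grad()
                loss = criterion(model(images), masks)
                loss.backward()
                optimizer.step()

    def train_two_stage(model, real_loader, stylized_loader, device="cpu"):
        # Stage 1: supervised training on the real source-domain training data.
        run_stage(model, real_loader,
                  torch.optim.Adam(model.parameters(), lr=1e-3), epochs=50, device=device)
        # Stage 2: fine-tuning on the test-stylized fake training data,
        # typically with a lower learning rate.
        run_stage(model, stylized_loader,
                  torch.optim.Adam(model.parameters(), lr=1e-4), epochs=10, device=device)
        return model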
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.