Weakly-supervised land classification for coastal zone based on deep convolutional neural networks by incorporating dual-polarimetric characteristics into training dataset
- URL: http://arxiv.org/abs/2003.13648v4
- Date: Tue, 19 Nov 2024 03:30:22 GMT
- Title: Weakly-supervised land classification for coastal zone based on deep convolutional neural networks by incorporating dual-polarimetric characteristics into training dataset
- Authors: Sheng Sun, Armando Marino, Wenze Shui, Zhongwen Hu
- Abstract summary: We explore the performance of DCNNs on semantic segmentation using spaceborne polarimetric synthetic aperture radar (PolSAR) datasets.
The semantic segmentation task using PolSAR data can be categorized as weakly supervised learning when the characteristics of SAR data and data annotating procedures are factored in.
Three DCNN models, SegNet, U-Net, and LinkNet, are then implemented.
- Score: 1.0494061710470493
- License:
- Abstract: In this work we explore the performance of DCNNs on semantic segmentation using spaceborne polarimetric synthetic aperture radar (PolSAR) datasets. The semantic segmentation task using PolSAR data can be categorized as weakly supervised learning when the characteristics of SAR data and the data annotating procedures are factored in. Candidate datasets are first analyzed to select feasible pre-training images, and the differences between spaceborne and airborne datasets are examined in terms of spatial resolution and viewing geometry. In this study we used two dual-polarimetric images acquired by DLR's TerraSAR-X. A novel method for producing a training dataset with richer supervised information is developed: a series of typical classified images, together with intensity images, serve as training data. A field survey covering an area of about 20 square kilometers provides the ground truth dataset used for accuracy evaluation. Several transfer learning strategies are devised for the aforementioned training datasets and combined in a practicable order. Three DCNN models, SegNet, U-Net, and LinkNet, are then implemented.
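The abstract describes the pipeline only in prose. Below is a minimal, illustrative sketch of how one of the three DCNNs (a small U-Net-style network) could be set up for dual-polarimetric input with a staged transfer-learning step. The two-channel input, the five land-cover classes, the frozen-encoder fine-tuning, and the checkpoint name are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a small U-Net-style DCNN for
# dual-polarimetric SAR semantic segmentation. The 2-channel input
# (e.g. VV/VH intensities), 5 land-cover classes, and the staged
# transfer-learning step are illustrative assumptions only.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=2, n_classes=5):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

model = TinyUNet(in_ch=2, n_classes=5)

# Hypothetical transfer-learning stage: load weights pre-trained on another
# (e.g. airborne PolSAR) dataset, then fine-tune only the decoder and head.
# state = torch.load("pretrained_polsar_unet.pt")   # assumed checkpoint name
# model.load_state_dict(state, strict=False)
for p in model.enc1.parameters():
    p.requires_grad = False
for p in model.enc2.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss(ignore_index=255)  # 255 = unlabeled pixels in weak labels

dummy = torch.randn(1, 2, 256, 256)                # [batch, dual-pol channels, H, W]
logits = model(dummy)                              # [1, n_classes, 256, 256]
loss = criterion(logits, torch.zeros(1, 256, 256, dtype=torch.long))
loss.backward()
```

The `ignore_index` in the loss is one common way to handle sparsely annotated (weak) labels of the kind the abstract describes, since unlabeled pixels can simply be masked out of the objective.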
Related papers
- LiOn-XA: Unsupervised Domain Adaptation via LiDAR-Only Cross-Modal Adversarial Training [61.26381389532653]
LiOn-XA is an unsupervised domain adaptation (UDA) approach that combines LiDAR-Only Cross-Modal (X) learning with Adversarial training for 3D LiDAR point cloud semantic segmentation.
Our experiments on 3 real-to-real adaptation scenarios demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-10-21T09:50:17Z)
- Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large amounts of unlabelled data.
In this paper, we revisit transformer pre-training and leverage multi-scale information that is effectively utilized across multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z)
- An evaluation of Deep Learning based stereo dense matching dataset shift from aerial images and a large scale stereo dataset [2.048226951354646]
We present a method for generating ground-truth disparity maps directly from Light Detection and Ranging (LiDAR) and images.
We evaluate 11 dense matching methods across datasets with diverse scene types, image resolutions, and geometric configurations.
arXiv Detail & Related papers (2024-02-19T20:33:46Z)
- CSP: Self-Supervised Contrastive Spatial Pre-Training for Geospatial-Visual Representations [90.50864830038202]
We present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images.
CSP significantly boosts model performance, with 10-34% relative improvement across various labeled training data sampling ratios.
arXiv Detail & Related papers (2023-05-01T23:11:18Z)
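For the CSP entry above, the following is a minimal sketch (not the authors' implementation) of a dual-encoder contrastive objective: one encoder embeds images, another embeds (longitude, latitude) pairs, and an InfoNCE-style loss aligns matching pairs. All module names, dimensions, and the temperature value are illustrative assumptions.

```python
# Minimal sketch of a dual-encoder contrastive objective in the spirit of the
# CSP entry above (not the authors' implementation): one encoder embeds images,
# another embeds (lon, lat) coordinates, and an InfoNCE-style loss pulls
# matching pairs together. All dimensions and module names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class LocationEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, dim))
    def forward(self, lonlat):
        return F.normalize(self.net(lonlat), dim=-1)

def contrastive_loss(img_emb, loc_emb, temperature=0.07):
    # Similarity of every image against every location in the batch;
    # the diagonal holds the true (image, location) pairs.
    logits = img_emb @ loc_emb.t() / temperature
    targets = torch.arange(img_emb.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

images = torch.randn(8, 3, 64, 64)          # geo-tagged image patches
lonlat = torch.rand(8, 2) * 2 - 1           # coordinates scaled to [-1, 1]
loss = contrastive_loss(ImageEncoder()(images), LocationEncoder()(lonlat))
loss.backward()
```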
- MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based Self-Supervised Pre-Training [58.07391711548269]
Masked Voxel Jigsaw and Reconstruction (MV-JAR) is a method for LiDAR-based self-supervised pre-training.
arXiv Detail & Related papers (2023-03-23T17:59:02Z)
- Navya3DSeg -- Navya 3D Semantic Segmentation Dataset & split generation for autonomous vehicles [63.20765930558542]
3D semantic data are useful for core perception tasks such as obstacle detection and ego-vehicle localization.
We propose a new dataset, Navya 3D Segmentation (Navya3DSeg), with a diverse label space corresponding to a large-scale, production-grade operational domain.
It contains 23 labeled sequences and 25 supplementary sequences without labels, designed to explore self-supervised and semi-supervised semantic segmentation benchmarks on point clouds.
arXiv Detail & Related papers (2023-02-16T13:41:19Z)
- A benchmark dataset for deep learning-based airplane detection: HRPlanes [3.5297361401370044]
We create a novel airplane detection dataset called High Resolution Planes (HRPlanes) using images from Google Earth (GE).
HRPlanes includes GE images of several different airports across the world, representing a variety of landscape, seasonal, and satellite geometry conditions obtained from different satellites.
Our preliminary results show that the proposed dataset can be a valuable data source and benchmark dataset for future applications.
arXiv Detail & Related papers (2022-04-22T23:49:44Z)
- Active Learning for Improved Semi-Supervised Semantic Segmentation in Satellite Images [1.0152838128195467]
Semi-supervised techniques generate pseudo-labels from a small set of labeled examples.
We propose to use an active learning-based sampling strategy to select a highly representative set of labeled training data.
We report a 27% improvement in mIoU with as little as 2% labeled data using active learning sampling strategies.
arXiv Detail & Related papers (2021-10-15T00:29:31Z)
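For the active-learning entry above, here is a minimal sketch (not the authors' method) of one common sampling strategy: score unlabeled tiles by the mean entropy of the model's pixel-wise predictions and send the most uncertain tiles for annotation. The model, the tile pool, and the budget are placeholders.

```python
# Minimal sketch of an uncertainty-based active-learning selection step in the
# spirit of the entry above (not the authors' method): score unlabeled tiles by
# mean prediction entropy and pick the most uncertain ones for annotation.
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_for_labeling(model, unlabeled_tiles, budget):
    """Return indices of the `budget` tiles the model is least certain about."""
    scores = []
    for tile in unlabeled_tiles:                                  # tile: [C, H, W]
        probs = F.softmax(model(tile.unsqueeze(0)), dim=1)        # [1, n_classes, H, W]
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)  # [1, H, W]
        scores.append(entropy.mean().item())                      # mean pixel-wise entropy
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:budget]

# Usage with the hypothetical TinyUNet sketched earlier and random 2-channel tiles:
# pool = [torch.randn(2, 256, 256) for _ in range(100)]
# to_annotate = select_for_labeling(model, pool, budget=5)
```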
- DeepSatData: Building large scale datasets of satellite images for training machine learning models [77.17638664503215]
This report presents design considerations for automatically generating satellite imagery datasets for training machine learning models.
We discuss issues faced from the point of view of deep neural network training and evaluation.
arXiv Detail & Related papers (2021-04-28T15:13:12Z)
- Sparse Signal Models for Data Augmentation in Deep Learning ATR [0.8999056386710496]
We propose a data augmentation approach to incorporate domain knowledge and improve the generalization power of a data-intensive learning algorithm.
We exploit the sparsity of the scattering centers in the spatial domain and the smoothly-varying structure of the scattering coefficients in the azimuthal domain to solve the ill-posed problem of over-parametrized model fitting.
arXiv Detail & Related papers (2020-12-16T21:46:33Z)
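To make the sparse-model idea in the entry above concrete, here is a toy sketch only, not the paper's attributed-scattering-center model: a SAR chip is crudely approximated by its k strongest complex pixels (a stand-in for sparse scattering centers), and augmented copies are generated by jittering the phases of those retained scatterers. The chip size, k, and jitter scale are all illustrative assumptions.

```python
# Toy sketch only, not the paper's method: approximate a complex SAR chip by
# its k largest-magnitude pixels (a crude sparse model of scattering centers)
# and create augmented copies by perturbing the phases of those scatterers.
import numpy as np

def sparse_approximation(chip, k=50):
    """Keep the k largest-magnitude complex pixels, zero the rest."""
    flat = np.abs(chip).ravel()
    keep = np.argpartition(flat, -k)[-k:]
    approx = np.zeros_like(chip)
    approx.ravel()[keep] = chip.ravel()[keep]
    return approx

def augment(chip, k=50, n_copies=4, phase_jitter=0.3, rng=None):
    """Generate augmented chips by perturbing phases of the sparse scatterers."""
    rng = np.random.default_rng() if rng is None else rng
    base = sparse_approximation(chip, k)
    residual = chip - base                      # everything not modeled as a scatterer
    copies = []
    for _ in range(n_copies):
        jitter = np.exp(1j * rng.normal(0.0, phase_jitter, size=base.shape))
        copies.append(base * jitter + residual)
    return copies

# Usage with a random complex-valued chip standing in for real SAR data:
rng = np.random.default_rng(0)
chip = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
augmented = augment(chip, k=50, n_copies=4)
```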
- SatImNet: Structured and Harmonised Training Data for Enhanced Satellite Imagery Classification [0.32228025627337864]
We describe procedures for open-source training data management, integration, and retrieval.
We propose SatImNet, a collection of open training data, structured and harmonized according to specific rules.
Two modelling approaches based on convolutional neural networks have been designed and configured to deal with satellite image classification and segmentation.
arXiv Detail & Related papers (2020-06-18T15:46:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.