LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic
Segmentation
- URL: http://arxiv.org/abs/2110.08733v3
- Date: Thu, 21 Oct 2021 01:26:31 GMT
- Title: LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic
Segmentation
- Authors: Junjue Wang, Zhuo Zheng, Ailong Ma, Xiaoyan Lu and Yanfei Zhong
- Abstract summary: The LoveDA dataset contains 5987 HSR images with 166768 annotated objects from three different cities.
It is suitable for both land-cover semantic segmentation and unsupervised domain adaptation (UDA) tasks.
- Score: 7.629717457706323
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep learning approaches have shown promising results in remote sensing high
spatial resolution (HSR) land-cover mapping. However, urban and rural scenes
can show completely different geographical landscapes, and the inadequate
generalizability of these algorithms hinders city-level or national-level
mapping. Most of the existing HSR land-cover datasets mainly promote the
research of learning semantic representation, thereby ignoring the model
transferability. In this paper, we introduce the Land-cOVEr Domain Adaptive
semantic segmentation (LoveDA) dataset to advance semantic and transferable
learning. The LoveDA dataset contains 5987 HSR images with 166768 annotated
objects from three different cities. Compared to the existing datasets, the
LoveDA dataset encompasses two domains (urban and rural), which brings
considerable challenges due to the: 1) multi-scale objects; 2) complex
background samples; and 3) inconsistent class distributions. The LoveDA dataset
is suitable for both land-cover semantic segmentation and unsupervised domain
adaptation (UDA) tasks. Accordingly, we benchmarked the LoveDA dataset on
eleven semantic segmentation methods and eight UDA methods. Some exploratory
studies including multi-scale architectures and strategies, additional
background supervision, and pseudo-label analysis were also carried out to
address these challenges. The code and data are available at
https://github.com/Junjue-Wang/LoveDA.
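Among the exploratory studies, "pseudo-label analysis" refers to the common UDA self-training recipe: a model trained on the labeled source domain predicts labels on unlabeled target images, and only sufficiently confident predictions are reused as training targets. A minimal sketch of the thresholding step is shown below; the `pseudo_labels` function and the threshold value are illustrative and not taken from the paper's implementation.

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9):
    """Confidence-thresholded pseudo-labels for self-training UDA.

    probs: (N, C) softmax outputs of a source-trained model on N
    unlabeled target pixels over C classes. Returns (labels, mask),
    where mask marks predictions confident enough to train on.
    """
    conf = probs.max(axis=1)       # per-pixel top confidence
    labels = probs.argmax(axis=1)  # predicted class per pixel
    mask = conf >= threshold       # keep only confident pixels
    return labels, mask

# Toy example: 3 target pixels, 2 classes.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.10, 0.90]])
labels, mask = pseudo_labels(probs, threshold=0.9)
# The second pixel (confidence 0.60) is rejected by the mask.
```

Pixels rejected by the mask are simply excluded from the self-training loss, which limits the propagation of noisy target-domain labels.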
Related papers
- T-UDA: Temporal Unsupervised Domain Adaptation in Sequential Point
Clouds [2.5291108878852864]
Unsupervised domain adaptation (UDA) methods adapt models trained on one (source) domain with annotations available to another (target) domain for which only unannotated data are available.
We introduce a novel domain adaptation method that leverages the best of both trends. Dubbed T-UDA for "temporal UDA", such a combination yields massive performance gains for the task of 3D semantic segmentation of driving scenes.
arXiv Detail & Related papers (2023-09-15T10:47:12Z)
- GeoMultiTaskNet: remote sensing unsupervised domain adaptation using
geographical coordinates [6.575290987792054]
Land cover maps are a pivotal element in a wide range of Earth Observation (EO) applications.
Unsupervised Domain Adaptation (UDA) could tackle these issues by adapting a model trained on a source domain, where labels are available, to a target domain, without annotations.
We propose a new lightweight model, GeoMultiTaskNet, to adapt the semantic segmentation loss to the frequency of classes.
arXiv Detail & Related papers (2023-04-16T11:00:43Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Navya3DSeg -- Navya 3D Semantic Segmentation Dataset & split generation
for autonomous vehicles [63.20765930558542]
3D semantic data are useful for core perception tasks such as obstacle detection and ego-vehicle localization.
We propose a new dataset, Navya 3D Segmentation (Navya3DSeg), with a diverse label space corresponding to a large-scale, production-grade operational domain.
It contains 23 labeled sequences and 25 supplementary sequences without labels, designed to explore self-supervised and semi-supervised semantic segmentation benchmarks on point clouds.
arXiv Detail & Related papers (2023-02-16T13:41:19Z)
- DeepAstroUDA: Semi-Supervised Universal Domain Adaptation for
Cross-Survey Galaxy Morphology Classification and Anomaly Detection [0.0]
We present a universal domain adaptation method, DeepAstroUDA, as an approach to overcome this challenge.
DeepAstroUDA is capable of bridging the gap between two astronomical surveys, increasing classification accuracy in both domains.
Our method also performs well as an anomaly detection algorithm and successfully clusters unknown class samples even in the unlabeled target dataset.
arXiv Detail & Related papers (2023-02-03T21:20:58Z)
- Tackling Long-Tailed Category Distribution Under Domain Shifts [50.21255304847395]
Existing approaches cannot handle the scenario where both long-tailed class distributions and domain shifts exist.
We designed three novel core functional blocks including Distribution Calibrated Classification Loss, Visual-Semantic Mapping and Semantic-Similarity Guided Augmentation.
Two new datasets were proposed for this problem, named AWA2-LTS and ImageNet-LTS.
arXiv Detail & Related papers (2022-07-20T19:07:46Z)
- Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with
Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z)
- Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical
Understanding of Outdoor Scene [76.4183572058063]
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been point-wisely annotated with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measurement evaluating consistency across various hierarchies.
arXiv Detail & Related papers (2020-08-11T19:10:32Z)
- Very High Resolution Land Cover Mapping of Urban Areas at Global Scale
with Convolutional Neural Networks [0.0]
This paper describes a methodology to produce a 7-classes land cover map of urban areas from very high resolution images and limited noisy labeled data.
We created a training dataset on a few areas of interest aggregating databases, semi-automatic classification, and manual annotation to get a complete ground truth in each class.
The final product is a highly valuable land cover map computed from model predictions stitched together, binarized, and refined before vectorization.
arXiv Detail & Related papers (2020-05-12T10:03:20Z)
- Grounded Situation Recognition [56.18102368133022]
We introduce Grounded Situation Recognition (GSR), a task that requires producing structured semantic summaries of images.
GSR presents important technical challenges: identifying semantic saliency, categorizing and localizing a large and diverse set of entities.
We show initial findings on three exciting future directions enabled by our models: conditional querying, visual chaining, and grounded semantic aware image retrieval.
arXiv Detail & Related papers (2020-03-26T17:57:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.