DynamicEarthNet: Daily Multi-Spectral Satellite Dataset for Semantic
Change Segmentation
- URL: http://arxiv.org/abs/2203.12560v1
- Date: Wed, 23 Mar 2022 17:22:22 GMT
- Title: DynamicEarthNet: Daily Multi-Spectral Satellite Dataset for Semantic
Change Segmentation
- Authors: Aysim Toker, Lukas Kondmann, Mark Weber, Marvin Eisenberger, Andrés
Camero, Jingliang Hu, Ariadna Pregel Hoderlein, Çağlar Şenaras, Timothy
Davis, Daniel Cremers, Giovanni Marchisio, Xiao Xiang Zhu, Laura Leal-Taixé
- Abstract summary: We propose the DynamicEarthNet dataset that consists of daily, multi-spectral satellite observations of 75 selected areas of interest.
These observations are paired with pixel-wise monthly semantic segmentation labels of 7 land use and land cover classes.
- Score: 43.72597365517224
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Earth observation is a fundamental tool for monitoring the evolution of land
use in specific areas of interest. Observing and precisely defining change, in
this context, requires both time-series data and pixel-wise segmentations. To
that end, we propose the DynamicEarthNet dataset that consists of daily,
multi-spectral satellite observations of 75 selected areas of interest
distributed over the globe with imagery from Planet Labs. These observations
are paired with pixel-wise monthly semantic segmentation labels of 7 land use
and land cover (LULC) classes. DynamicEarthNet is the first dataset that
provides this unique combination of daily measurements and high-quality labels.
In our experiments, we compare several established baselines that either
utilize the daily observations as additional training data (semi-supervised
learning) or multiple observations at once (spatio-temporal learning) as a
point of reference for future research. Finally, we propose a new evaluation
metric, SCS, that addresses the specific challenges associated with time-series
semantic change segmentation. The data is available at:
https://mediatum.ub.tum.de/1650201.
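The paper defines SCS precisely; purely as a hedged illustration, the sketch below assumes SCS balances (1) detecting where change happens and (2) labeling the changed pixels correctly, averaging a binary-change IoU with a class-wise IoU restricted to changed pixels. The function name and the exact combination are assumptions, not the paper's definition.

```python
import numpy as np

def scs_sketch(pred_t0, pred_t1, gt_t0, gt_t1, num_classes=7):
    """Illustrative semantic-change-segmentation score (NOT the paper's
    exact SCS). Inputs are HxW integer label maps for two consecutive
    months: model predictions (pred_*) and ground truth (gt_*)."""
    # Binary change masks: did the label of a pixel change between dates?
    pred_change = pred_t0 != pred_t1
    gt_change = gt_t0 != gt_t1

    # (1) Binary change score: IoU between predicted and true change masks.
    union = np.logical_or(pred_change, gt_change).sum()
    bc = np.logical_and(pred_change, gt_change).sum() / union if union else 1.0

    # (2) Semantic score: mean IoU over classes, restricted to changed pixels.
    ious = []
    for c in range(num_classes):
        p = (pred_t1 == c) & gt_change
        g = (gt_t1 == c) & gt_change
        u = np.logical_or(p, g).sum()
        if u:
            ious.append(np.logical_and(p, g).sum() / u)
    sc = float(np.mean(ious)) if ious else 1.0

    return 0.5 * (bc + sc)
```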
Related papers
- Deep Multimodal Fusion for Semantic Segmentation of Remote Sensing Earth Observation Data [0.08192907805418582]
This paper proposes a late fusion deep learning model (LF-DLM) for semantic segmentation.
One branch integrates detailed textures from aerial imagery using a UNetFormer with a Multi-Axis Vision Transformer (MaxViT) backbone.
The other branch captures complex spatio-temporal dynamics from the Sentinel-2 satellite image time series using a U-Net with Temporal Attention Encoder (U-TAE).
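Late fusion means the two branches are combined only at the prediction level. A minimal sketch, assuming a simple weighted average of per-pixel class logits (the weight and exact fusion rule are assumptions, not LF-DLM's published design):

```python
import torch

def late_fusion(logits_aerial: torch.Tensor,
                logits_s2: torch.Tensor,
                w: float = 0.5) -> torch.Tensor:
    """Combine per-pixel class logits from the aerial-texture branch and the
    Sentinel-2 time-series branch, both shaped (B, num_classes, H, W) and
    aligned to the same spatial grid."""
    return w * logits_aerial + (1.0 - w) * logits_s2
```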
arXiv Detail & Related papers (2024-10-01T07:50:37Z)
- Diffusion Models as Data Mining Tools [87.77999285241219]
This paper demonstrates how to use generative models trained for image synthesis as tools for visual data mining.
We show that after finetuning conditional diffusion models to synthesize images from a specific dataset, we can use these models to define a typicality measure.
This measure assesses how typical visual elements are for different data labels, such as geographic location, time stamps, semantic labels, or even the presence of a disease.
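The precise typicality definition is in the paper; a minimal sketch of the general idea, assuming typicality is scored as the drop in denoising error when the finetuned model is given the conditioning label versus no condition (signature, schedule handling, and averaging are illustrative assumptions):

```python
import torch

@torch.no_grad()
def typicality(model, x0, cond, timesteps, alphas_cumprod):
    """model(x_t, t, cond) predicts the noise; cond=None is unconditional.
    A larger score means the label makes the image easier to denoise,
    i.e. the image is more 'typical' of that label."""
    scores = []
    for t in timesteps:
        noise = torch.randn_like(x0)
        a = alphas_cumprod[t]
        x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise  # forward diffusion
        err_uncond = (model(x_t, t, None) - noise).pow(2).mean()
        err_cond = (model(x_t, t, cond) - noise).pow(2).mean()
        scores.append(err_uncond - err_cond)
    return torch.stack(scores).mean()
```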
arXiv Detail & Related papers (2024-07-20T17:14:31Z)
- Learning from Unlabelled Data with Transformers: Domain Adaptation for Semantic Segmentation of High Resolution Aerial Images [30.324252605889356]
We develop a new model for semantic segmentation of unlabelled images.
NEOS performs domain adaptation because the target domain lacks ground-truth semantic segmentation masks.
The distribution inconsistencies between the target and source domains stem from differences in acquisition scenes, environmental conditions, sensors, and acquisition times.
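The abstract does not specify NEOS's adaptation objective; purely as a generic illustration of unsupervised domain alignment when the target has no masks, a minimal sketch that penalizes the distance between mean feature statistics of the two domains:

```python
import torch

def feature_alignment_loss(f_src: torch.Tensor, f_tgt: torch.Tensor) -> torch.Tensor:
    """f_src, f_tgt: (batch, dim) encoder features from labelled source and
    unlabelled target batches. Matching first-order feature statistics is one
    of the simplest ways to shrink the domain gap; NEOS's actual loss may
    differ."""
    return (f_src.mean(dim=0) - f_tgt.mean(dim=0)).pow(2).sum()
```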
arXiv Detail & Related papers (2024-04-17T12:12:48Z)
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
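The abstract does not detail the formulation; one common recipe for joint image-mask generation, given here only as a hedged sketch, is to stack the image and a continuous encoding of the label mask as extra channels and denoise them together:

```python
import torch

def joint_denoise_step(model, x_img, x_mask, t):
    """Sketch of joint image+mask denoising (a generic recipe, not
    necessarily SatSynth's design). x_img: (B, C, H, W) noisy image;
    x_mask: (B, K, H, W) noisy one-hot-style mask encoding."""
    x = torch.cat([x_img, x_mask], dim=1)  # denoise all channels jointly
    eps = model(x, t)                      # predicted noise, same shape as x
    return eps[:, :x_img.size(1)], eps[:, x_img.size(1):]
```

Sampling from such a model yields a coherent (image, mask) pair that can be added to the segmentation training set.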
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
- Ben-ge: Extending BigEarthNet with Geographical and Environmental Data [1.1377027568901037]
We present the ben-ge dataset, which supplements the BigEarthNet-MM dataset by compiling freely and globally available geographical and environmental data.
Based on this dataset, we showcase the value of combining different data modalities for the downstream tasks of patch-based land-use/land-cover classification and land-use/land-cover segmentation.
arXiv Detail & Related papers (2023-07-04T14:17:54Z)
- Embedding Earth: Self-supervised contrastive pre-training for dense land cover classification [61.44538721707377]
We present Embedding Earth, a self-supervised contrastive pre-training method that leverages the large availability of satellite imagery.
We observe significant improvements of up to 25% absolute mIoU when models are pre-trained with our proposed method.
We find that the learnt features generalize between disparate regions, opening up the possibility of reusing the proposed pre-training scheme.
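The abstract does not spell out the loss; as a generic illustration of contrastive pre-training of this kind, a minimal InfoNCE sketch (function name and temperature are assumptions):

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    """z1, z2: (N, D) embeddings of two views of the same N satellite tiles
    (e.g. two augmentations or two acquisition dates). Matching rows are
    positives; every other pair in the batch acts as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```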
arXiv Detail & Related papers (2022-03-11T16:14:14Z)
- Segmentation of VHR EO Images using Unsupervised Learning [19.00071868539993]
We propose an unsupervised semantic segmentation method that can be trained using just a single unlabeled scene.
Remote sensing scenes are typically very large; the proposed method exploits this property to sample many smaller patches from a single scene.
After unsupervised training on the target image/scene, the model automatically segregates the major classes present in the scene and produces the segmentation map.
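A minimal sketch of the patch-sampling step described above (patch size, count, and names are illustrative):

```python
import numpy as np

def sample_patches(scene: np.ndarray, patch: int = 256, n: int = 64,
                   rng: np.random.Generator | None = None) -> np.ndarray:
    """Draw n random patch x patch crops from one large unlabeled scene of
    shape (H, W, C); these crops form the unsupervised training set."""
    rng = rng or np.random.default_rng()
    h, w = scene.shape[:2]
    ys = rng.integers(0, h - patch + 1, size=n)
    xs = rng.integers(0, w - patch + 1, size=n)
    return np.stack([scene[y:y + patch, x:x + patch] for y, x in zip(ys, xs)])
```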
arXiv Detail & Related papers (2021-07-09T11:42:48Z)
- STEP: Segmenting and Tracking Every Pixel [107.23184053133636]
We present a new benchmark: Segmenting and Tracking Every Pixel (STEP).
Our work is the first that targets this task in a real-world setting requiring dense interpretation in both spatial and temporal domains.
To measure performance, we propose a novel evaluation metric, Segmentation and Tracking Quality (STQ).
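For reference, the STEP paper defines STQ as the geometric mean of an Association Quality (AQ) term, which scores pixel-level track association, and a Segmentation Quality (SQ) term, which is essentially class-level mean IoU. A minimal sketch of the SQ term and the combination (the AQ computation is involved and omitted here):

```python
import numpy as np

def segmentation_quality(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """SQ term: mean IoU over semantic classes on HxW label maps."""
    ious = []
    for c in range(num_classes):
        union = np.logical_or(pred == c, gt == c).sum()
        if union:
            ious.append(np.logical_and(pred == c, gt == c).sum() / union)
    return float(np.mean(ious)) if ious else 1.0

def stq(aq: float, sq: float) -> float:
    # Geometric mean: a method must do well on BOTH association and segmentation.
    return (aq * sq) ** 0.5
```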
arXiv Detail & Related papers (2021-02-23T18:43:02Z)
- Geography-Aware Self-Supervised Learning [79.4009241781968]
We show that due to their different characteristics, a non-trivial gap persists between contrastive and supervised learning on standard benchmarks.
We propose novel training methods that exploit the spatially aligned structure of remote sensing data.
Our experiments show that our proposed method closes the gap between contrastive and supervised learning on image classification, object detection and semantic segmentation for remote sensing.
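A key ingredient is building positive pairs from spatially aligned images taken at different times, so the encoder learns invariance to seasonal and temporal change. A minimal sketch of that sampling step (data layout and names are illustrative; the resulting pair would feed a contrastive loss such as the InfoNCE sketch above):

```python
import random

def temporal_positive_pair(tiles_by_location: dict):
    """tiles_by_location maps a location id to a list of >= 2 images of that
    exact location acquired at different times. Two acquisitions of the SAME
    place form a positive pair for contrastive learning."""
    loc = random.choice(list(tiles_by_location))
    t1, t2 = random.sample(tiles_by_location[loc], 2)
    return t1, t2
```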
arXiv Detail & Related papers (2020-11-19T17:29:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and accepts no responsibility for any consequences arising from its use.