An Aligned Multi-Temporal Multi-Resolution Satellite Image Dataset for
Change Detection Research
- URL: http://arxiv.org/abs/2302.12301v1
- Date: Thu, 23 Feb 2023 19:43:20 GMT
- Title: An Aligned Multi-Temporal Multi-Resolution Satellite Image Dataset for
Change Detection Research
- Authors: Rahul Deshmukh, Constantine J. Roros, Amith Kashyap, Avinash C. Kak
- Abstract summary: This paper presents an aligned multi-temporal and multi-resolution satellite image dataset for research in change detection.
The dataset was created by augmenting the SpaceNet-7 dataset with temporally parallel stacks of Landsat and Sentinel images.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents an aligned multi-temporal and multi-resolution satellite
image dataset for research in change detection. We expect our dataset to be
useful to researchers who want to fuse information from multiple satellites for
detecting changes on the surface of the earth that may not be fully visible in
any single satellite. The dataset we present was created by augmenting the
SpaceNet-7 dataset with temporally parallel stacks of Landsat and Sentinel
images. The SpaceNet-7 dataset consists of time-sequenced Planet images
recorded over 101 AOIs (Areas-of-Interest). In our dataset, for each of the 60
AOIs that are meant for training, we augment the Planet datacube with
temporally parallel datacubes of Landsat and Sentinel images. The temporal
alignments between the high-res Planet images, on the one hand, and the Landsat
and Sentinel images, on the other, are approximate since the temporal
resolution for the Planet images is one month -- each image being a mosaic of
the best data collected over a month. Whenever we have a choice regarding which
Landsat and Sentinel images to pair up with the Planet images, we have chosen
those that had the least cloud cover. A particularly important feature of our
dataset is that the high-res and the low-res images are spatially aligned
together with our MuRA framework presented in this paper. Foundational to the
alignment calculation is the modeling of inter-satellite misalignment errors
with polynomials as in NASA's AROP algorithm. We have named our dataset MuRA-T
for the MuRA framework that is used for aligning the cross-satellite images and
"T" for the temporal dimension in the dataset.
Related papers
- Deep Multimodal Fusion for Semantic Segmentation of Remote Sensing Earth Observation Data [0.08192907805418582]
This paper proposes a late fusion deep learning model (LF-DLM) for semantic segmentation.
One branch integrates detailed textures from aerial imagery using a UNetFormer with a Multi-Axis Vision Transformer (MaxViT) backbone.
The other branch captures complex spatio-temporal dynamics from the Sentinel-2 satellite image time series using a U-Net with Temporal Attention Encoder (U-TAE).
arXiv Detail & Related papers (2024-10-01T07:50:37Z) - SeeFar: Satellite Agnostic Multi-Resolution Dataset for Geospatial Foundation Models [0.0873811641236639]
SeeFar is an evolving collection of multi-resolution satellite images from public and commercial satellites.
We curated this dataset for training geospatial foundation models, unconstrained by satellite type.
arXiv Detail & Related papers (2024-06-10T20:24:14Z) - Getting it Right: Improving Spatial Consistency in Text-to-Image Models [103.52640413616436]
One of the key shortcomings in current text-to-image (T2I) models is their inability to consistently generate images which faithfully follow the spatial relationships specified in the text prompt.
We create SPRIGHT, the first spatially focused, large-scale dataset, by re-captioning 6 million images from 4 widely used vision datasets.
We find that training on images containing a larger number of objects leads to substantial improvements in spatial consistency, including state-of-the-art results on T2I-CompBench with a spatial score of 0.2133, by fine-tuning on 500 images.
arXiv Detail & Related papers (2024-04-01T15:55:25Z) - DiffusionSat: A Generative Foundation Model for Satellite Imagery [63.2807119794691]
We present DiffusionSat, to date the largest generative foundation model trained on a collection of publicly available large, high-resolution remote sensing datasets.
Our method produces realistic samples and can be used to solve multiple generative tasks including temporal generation, superresolution given multi-spectral inputs and in-painting.
arXiv Detail & Related papers (2023-12-06T16:53:17Z) - Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z) - Convolutional Neural Processes for Inpainting Satellite Images [56.032183666893246]
Inpainting involves predicting what is missing based on the known pixels and is an old problem in image processing.
We show ConvNPs can outperform classical methods and state-of-the-art deep learning inpainting models on a scanline inpainting problem for LANDSAT 7 satellite images.
arXiv Detail & Related papers (2022-05-24T23:29:04Z) - Satellite Image Time Series Analysis for Big Earth Observation Data [50.591267188664666]
This paper describes sits, an open-source R package for satellite image time series analysis using machine learning.
We show that this approach produces high accuracy for land use and land cover maps through a case study in the Cerrado biome.
arXiv Detail & Related papers (2022-04-24T15:23:25Z) - A benchmark dataset for deep learning-based airplane detection: HRPlanes [3.5297361401370044]
We create a novel airplane detection dataset called High Resolution Planes (HRPlanes) by using images from Google Earth (GE)
HRPlanes include GE images of several different airports across the world to represent a variety of landscape, seasonal and satellite geometry conditions obtained from different satellites.
Our preliminary results show that the proposed dataset can be a valuable data source and benchmark data set for future applications.
arXiv Detail & Related papers (2022-04-22T23:49:44Z) - DynamicEarthNet: Daily Multi-Spectral Satellite Dataset for Semantic
Change Segmentation [43.72597365517224]
We propose the DynamicEarthNet dataset that consists of daily, multi-spectral satellite observations of 75 selected areas of interest.
These observations are paired with pixel-wise monthly semantic segmentation labels of 7 land use and land cover classes.
arXiv Detail & Related papers (2022-03-23T17:22:22Z) - Salient Objects in Clutter [130.63976772770368]
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z) - Attentive Weakly Supervised land cover mapping for object-based
satellite image time series data with spatial interpretation [4.549831511476249]
We propose a new deep learning framework, named TASSEL, that is able to intelligently exploit the weak supervision provided by the coarse granularity labels.
Our framework also produces an additional side-information that supports the model interpretability with the aim to make the black box gray.
arXiv Detail & Related papers (2020-04-30T10:23:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.