The SpaceNet Multi-Temporal Urban Development Challenge
- URL: http://arxiv.org/abs/2102.11958v1
- Date: Tue, 23 Feb 2021 22:01:22 GMT
- Title: The SpaceNet Multi-Temporal Urban Development Challenge
- Authors: Adam Van Etten, Daniel Hogan
- Abstract summary: Building footprints provide a useful proxy for a great many humanitarian applications.
In this paper we discuss efforts to develop techniques for precise building footprint localization, tracking, and change detection.
The competition centered around a brand new open source dataset of Planet Labs satellite imagery mosaics at 4m resolution.
Winning participants demonstrated impressive performance with the newly developed SpaceNet Change and Object Tracking (SCOT) metric.
- Score: 5.191792224645409
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Building footprints provide a useful proxy for a great many humanitarian
applications. For example, building footprints are useful for high fidelity
population estimates, and quantifying population statistics is fundamental to
~1/4 of the United Nations Sustainable Development Goals Indicators. In this
paper we (the SpaceNet Partners) discuss efforts to develop techniques for
precise building footprint localization, tracking, and change detection via the
SpaceNet Multi-Temporal Urban Development Challenge (also known as SpaceNet 7).
In this NeurIPS 2020 competition, participants were asked to identify and track
buildings in satellite imagery time series collected over rapidly urbanizing
areas. The competition centered around a brand new open source dataset of
Planet Labs satellite imagery mosaics at 4m resolution, which includes 24
images (one per month) covering ~100 unique geographies. Tracking individual
buildings at this resolution is quite challenging, yet the winning participants
demonstrated impressive performance with the newly developed SpaceNet Change
and Object Tracking (SCOT) metric. This paper details the top-5 winning
approaches, as well as analysis of results that yielded a handful of
interesting anecdotes such as decreasing performance with latitude.
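To make the evaluation concrete, here is an illustrative sketch of a SCOT-style tracking score. This is NOT the official SCOT implementation (SCOT combines a tracking term with a change-detection term); the sketch below shows only a simplified tracking F1, with footprints reduced to axis-aligned boxes and identifiers paired greedily by IoU.

```python
# Illustrative sketch only, not the official SCOT metric. Footprints are
# simplified to axis-aligned boxes (xmin, ymin, xmax, ymax).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def tracking_f1(gt, prop, thresh=0.25):
    """gt, prop: dict mapping timestep -> {track_id: box}.
    A detection counts as a true positive only when the paired ids overlap
    above thresh, so identifier switches are penalized even if every
    individual footprint is localized perfectly."""
    pairing = {}  # gt_id -> prop_id, fixed once established
    tp = 0
    for t in sorted(gt):
        for gid, gbox in gt[t].items():
            if gid not in pairing:
                # Pair with the best-overlapping, still-unclaimed proposal id.
                cands = [(iou(gbox, pbox), pid)
                         for pid, pbox in prop.get(t, {}).items()
                         if pid not in pairing.values()]
                score, pid = max(cands, default=(0.0, None))
                if score >= thresh:
                    pairing[gid] = pid
            pid = pairing.get(gid)
            if pid in prop.get(t, {}) and iou(gbox, prop[t][pid]) >= thresh:
                tp += 1
    n_gt = sum(len(v) for v in gt.values())
    n_prop = sum(len(v) for v in prop.values())
    precision = tp / n_prop if n_prop else 0.0
    recall = tp / n_gt if n_gt else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

A proposal that keeps the correct footprint but swaps its identifier between months loses credit for the mismatched frames, which is the behavior that makes tracking at 4 m resolution so demanding.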
Related papers
- AIM 2024 Sparse Neural Rendering Challenge: Methods and Results [64.19942455360068]
This paper reviews the challenge on Sparse Neural Rendering that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2024.
The challenge aims at producing novel camera view synthesis of diverse scenes from sparse image observations.
Participants are asked to optimise objective fidelity to the ground-truth images as measured via the Peak Signal-to-Noise Ratio (PSNR) metric.
arXiv Detail & Related papers (2024-09-23T14:17:40Z)
- Cross Pseudo Supervision Framework for Sparsely Labelled Geospatial Images [0.0]
Land Use Land Cover (LULC) mapping is a vital tool for urban and resource planning.
This study introduces a semi-supervised segmentation model for LULC prediction using high-resolution satellite images.
We propose a modified Cross Pseudo Supervision framework to train image segmentation models on sparsely labelled data.
arXiv Detail & Related papers (2024-08-05T11:14:23Z)
- EarthLoc: Astronaut Photography Localization by Indexing Earth from Space [22.398824732314015]
Astronaut photography presents a unique Earth observations dataset with immense value for both scientific research and disaster response.
Current manual localization efforts are time-consuming, motivating the need for automated solutions.
We propose a novel approach - leveraging image retrieval - to address this challenge efficiently.
arXiv Detail & Related papers (2024-03-11T14:30:51Z)
- Cross-City Matters: A Multimodal Remote Sensing Benchmark Dataset for Cross-City Semantic Segmentation using High-Resolution Domain Adaptation Networks [82.82866901799565]
We build a new set of multimodal remote sensing benchmark datasets (including hyperspectral, multispectral, SAR) for the study purpose of the cross-city semantic segmentation task.
Beyond the single city, we propose a high-resolution domain adaptation network, HighDAN, to promote the AI model's generalization ability from the multi-city environments.
HighDAN is capable of retaining the spatially topological structure of the studied urban scene well in a parallel high-to-low resolution fusion fashion.
arXiv Detail & Related papers (2023-09-26T23:55:39Z)
- Unified Data Management and Comprehensive Performance Evaluation for Urban Spatial-Temporal Prediction [Experiment, Analysis & Benchmark] [78.05103666987655]
This work addresses challenges in accessing and utilizing diverse urban spatial-temporal datasets.
We introduce atomic files, a unified storage format designed for urban spatial-temporal big data, and validate its effectiveness on 40 diverse datasets.
We conduct extensive experiments using diverse models and datasets, establishing a performance leaderboard and identifying promising research directions.
arXiv Detail & Related papers (2023-08-24T16:20:00Z)
- A Satellite Imagery Dataset for Long-Term Sustainable Development in United States Cities [15.862784224905095]
We develop a satellite imagery dataset using deep learning models for five sustainable development indicators.
The proposed dataset covers the 100 most populated U.S. cities and corresponding Census Block Groups from 2014 to 2023.
arXiv Detail & Related papers (2023-08-01T11:40:19Z)
- Continental-Scale Building Detection from High Resolution Satellite Imagery [5.56205296867374]
We study variations in architecture, loss functions, regularization, pre-training, self-training and post-processing that increase instance segmentation performance.
Experiments were carried out using a dataset of 100k satellite images across Africa containing 1.75M manually labelled building instances.
We report novel methods for improving performance of building detection with this type of model, including the use of mixup.
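The mixup technique mentioned above is a standard augmentation; the following is a minimal sketch of it applied to (image, mask) pairs. The function name and default `alpha` are assumptions for illustration, not the paper's actual training configuration.

```python
# Minimal sketch of mixup augmentation for segmentation-style training.
# Two samples are blended with a Beta-distributed weight; the same weight
# is applied to the labels so targets stay consistent with the inputs.
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two (image, label) pairs with weight lam ~ Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam
```

Because the blend is convex, mixed images remain in the valid pixel range, which makes the trick easy to drop into an existing training loop.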
arXiv Detail & Related papers (2021-07-26T15:48:14Z)
- Detection, Tracking, and Counting Meets Drones in Crowds: A Benchmark [97.07865343576361]
We construct a benchmark with a new drone-captured largescale dataset, named as DroneCrowd.
We annotate 20,800 people trajectories with 4.8 million heads and several video-level attributes.
We design the Space-Time Neighbor-Aware Network (STNNet) as a strong baseline to solve object detection, tracking and counting jointly in dense crowds.
arXiv Detail & Related papers (2021-05-06T04:46:14Z)
- The Multi-Temporal Urban Development SpaceNet Dataset [7.606927524074595]
We present the Multi-Temporal Urban Development SpaceNet (MUDS) dataset.
This open source dataset consists of medium resolution (4.0m) satellite imagery mosaics.
Each building is assigned a unique identifier (i.e. address), which permits tracking of individual objects over time.
We demonstrate methods to track building footprint construction (or demolition) over time, thereby directly assessing urbanization.
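Once each building carries a persistent identifier, construction and demolition counts fall out of simple set differences between consecutive mosaics. A hypothetical sketch (the function name and data layout are assumptions, not the MUDS tooling):

```python
# Hypothetical sketch: per-month urbanization counts from building-id sets.

def footprint_changes(months):
    """months: list of sets of building ids, one set per monthly mosaic.
    Returns a (constructed, demolished) count for each month-to-month step."""
    return [(len(curr - prev), len(prev - curr))
            for prev, curr in zip(months, months[1:])]
```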
arXiv Detail & Related papers (2021-02-08T18:28:52Z)
- The MineRL 2020 Competition on Sample Efficient Reinforcement Learning using Human Priors [62.9301667732188]
We propose a second iteration of the MineRL Competition.
The primary goal of the competition is to foster the development of algorithms which can efficiently leverage human demonstrations.
The competition is structured into two rounds in which competitors are provided several paired versions of the dataset and environment.
At the end of each round, competitors submit containerized versions of their learning algorithms to the AIcrowd platform.
arXiv Detail & Related papers (2021-01-26T20:32:30Z)
- Batch Exploration with Examples for Scalable Robotic Reinforcement Learning [63.552788688544254]
Batch Exploration with Examples (BEE) explores relevant regions of the state-space guided by a modest number of human provided images of important states.
BEE is able to tackle challenging vision-based manipulation tasks both in simulation and on a real Franka robot.
arXiv Detail & Related papers (2020-10-22T17:49:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.