Hi-UCD: A Large-scale Dataset for Urban Semantic Change Detection in
Remote Sensing Imagery
- URL: http://arxiv.org/abs/2011.03247v7
- Date: Mon, 28 Dec 2020 01:47:48 GMT
- Title: Hi-UCD: A Large-scale Dataset for Urban Semantic Change Detection in
Remote Sensing Imagery
- Authors: Shiqi Tian, Ailong Ma, Zhuo Zheng, Yanfei Zhong
- Abstract summary: Hi-UCD is a large-scale benchmark dataset for urban change detection.
It can be used for detecting and analyzing fine-grained urban changes.
We benchmark our dataset using several classic methods for binary and multi-class change detection.
- Score: 5.151973524974052
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the acceleration of urban expansion, urban change detection (UCD) is a significant and effective approach for providing change information about geospatial objects for dynamic urban analysis. However, existing datasets suffer from three bottlenecks: (1) a lack of high-spatial-resolution images; (2) a lack of semantic annotation; and (3) a lack of long-range multi-temporal images. In this paper, we propose a large-scale benchmark dataset, termed Hi-UCD. The dataset uses aerial images with a spatial resolution of 0.1 m provided by the Estonian Land Board, covers three temporal phases, and is semantically annotated with nine land-cover classes so that the direction of change of ground objects can be obtained. It can be used for detecting and analyzing fine-grained urban changes. We benchmark our dataset using several classic methods for binary and multi-class change detection. Experimental results show that Hi-UCD is challenging yet useful. We hope Hi-UCD can become a strong benchmark that accelerates future research.
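Because Hi-UCD pairs per-pixel land-cover labels across acquisition dates, the semantic change at a pixel can be read as an ordered pair of classes (the "direction" of change). The sketch below is not taken from the paper; the array shapes, class indexing, and from-to encoding are illustrative assumptions. It shows one common way to derive a binary change mask, an encoded from-to change map, and a class-transition matrix from two co-registered label rasters.

```python
import numpy as np

NUM_CLASSES = 9  # Hi-UCD annotates nine land-cover classes; indices 0..8 here are illustrative.

def semantic_change_labels(labels_t1: np.ndarray, labels_t2: np.ndarray):
    """Derive change products from two co-registered label rasters of equal shape.

    labels_t1, labels_t2: integer arrays (H, W) with values in [0, NUM_CLASSES).
    Returns a binary change mask, an encoded from-to map, and a transition matrix.
    """
    assert labels_t1.shape == labels_t2.shape

    # Binary change detection: a pixel changed if its class differs between dates.
    change_mask = labels_t1 != labels_t2

    # Multi-class (semantic) change: encode each (from, to) pair as a single id,
    # so the direction of change is preserved. Unchanged pixels get -1.
    from_to = labels_t1.astype(np.int64) * NUM_CLASSES + labels_t2
    from_to[~change_mask] = -1

    # Class-transition matrix: counts of pixels moving from class i to class j.
    transitions = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)
    np.add.at(transitions, (labels_t1[change_mask], labels_t2[change_mask]), 1)

    return change_mask, from_to, transitions

# Toy usage with random 4x4 label maps standing in for real Hi-UCD tiles.
rng = np.random.default_rng(0)
t1 = rng.integers(0, NUM_CLASSES, size=(4, 4))
t2 = rng.integers(0, NUM_CLASSES, size=(4, 4))
mask, from_to, trans = semantic_change_labels(t1, t2)
print(mask.sum(), "changed pixels;", trans.sum(), "transitions counted")
```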
Related papers
- Continuous Urban Change Detection from Satellite Image Time Series with Temporal Feature Refinement and Multi-Task Integration [5.095834019284525]
Urbanization advances at unprecedented rates, resulting in negative effects on the environment and human well-being.
Deep learning-based methods have achieved promising urban change detection results from optical satellite image pairs.
We propose a continuous urban change detection method that identifies changes in each consecutive image pair of a satellite image time series.
arXiv Detail & Related papers (2024-06-25T10:53:57Z)
- FUSU: A Multi-temporal-source Land Use Change Segmentation Dataset for Fine-grained Urban Semantic Understanding [6.833536116934201]
We introduce FUSU, the first fine-grained land use change segmentation dataset for Fine-grained Urban Semantic Understanding.
FUSU features the most detailed land use classification system to date, with 17 classes and 30 billion pixels of annotations.
It includes bi-temporal high-resolution satellite images with a 0.2-0.5 m ground sample distance and monthly optical and radar satellite time series, covering 847 km2 across five urban areas in southern and northern China.
arXiv Detail & Related papers (2024-05-29T12:56:11Z)
- Advancing Applications of Satellite Photogrammetry: Novel Approaches for Built-up Area Modeling and Natural Environment Monitoring using Stereo/Multi-view Satellite Image-derived 3D Data [0.0]
This dissertation explores several novel approaches based on stereo and multi-view satellite image-derived 3D geospatial data.
It introduces four novel approaches that address the spatial and temporal challenges of working with satellite-derived 3D data.
Overall, this dissertation demonstrates the extensive potential of satellite photogrammetry applications in addressing urban and environmental challenges.
arXiv Detail & Related papers (2024-04-18T20:02:52Z)
- Semi-supervised Learning from Street-View Images and OpenStreetMap for Automatic Building Height Estimation [59.6553058160943]
We propose a semi-supervised learning (SSL) method of automatically estimating building height from Mapillary SVI and OpenStreetMap data.
The proposed method leads to a clear performance boost, estimating building heights with a mean absolute error (MAE) of around 2.1 meters.
The preliminary results are promising and motivate future work on scaling up the proposed method using low-cost VGI data.
arXiv Detail & Related papers (2023-07-05T18:16:30Z)
- Vision Transformers, a new approach for high-resolution and large-scale mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with both a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolution-based approaches (ConvNets) optimized with only a continuous loss function. (A sketch of such a combined objective appears after this entry.)
arXiv Detail & Related papers (2023-04-22T22:39:03Z)
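As a rough illustration of pairing a discrete and a continuous objective for height mapping (this is not the authors' implementation; the bin edges, loss weight, and output shapes are assumptions), a per-pixel height-bin classification loss can simply be summed with a weighted regression loss:

```python
import torch
import torch.nn as nn

# Assumed setup: the model outputs, per pixel, logits over discrete height bins
# plus a continuous height prediction; ground truth is a height map in metres.
NUM_BINS = 32
BIN_EDGES = torch.linspace(0.0, 60.0, NUM_BINS + 1)  # illustrative 0-60 m range
ALPHA = 0.5  # illustrative weight between the two terms

ce_loss = nn.CrossEntropyLoss()   # discrete (classification) term
l1_loss = nn.SmoothL1Loss()       # continuous (regression) term

def combined_height_loss(bin_logits, height_pred, height_true):
    """bin_logits: (N, NUM_BINS, H, W); height_pred, height_true: (N, H, W)."""
    # Turn the continuous ground truth into a bin index for the discrete term.
    bin_target = torch.bucketize(height_true, BIN_EDGES[1:-1])
    return ce_loss(bin_logits, bin_target) + ALPHA * l1_loss(height_pred, height_true)

# Toy usage with random tensors standing in for model outputs and labels.
logits = torch.randn(2, NUM_BINS, 8, 8)
pred = torch.rand(2, 8, 8) * 60.0
true = torch.rand(2, 8, 8) * 60.0
print(combined_height_loss(logits, pred, true).item())
```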
- 3D Data Augmentation for Driving Scenes on Camera [50.41413053812315]
We propose a 3D data augmentation approach termed Drive-3DAug, aimed at augmenting camera-based driving scenes in 3D space.
We first utilize Neural Radiance Field (NeRF) to reconstruct the 3D models of background and foreground objects.
Then, augmented driving scenes can be obtained by placing the 3D objects, with adapted locations and orientations, in pre-defined valid regions of the backgrounds.
arXiv Detail & Related papers (2023-03-18T05:51:05Z)
- Towards Model Generalization for Monocular 3D Object Detection [57.25828870799331]
We present an effective unified camera-generalized paradigm (CGP) for Mono3D object detection.
We also propose the 2D-3D geometry-consistent object scaling strategy (GCOS) to bridge the gap via instance-level augmentation.
Our method called DGMono3D achieves remarkable performance on all evaluated datasets and surpasses the SoTA unsupervised domain adaptation scheme.
arXiv Detail & Related papers (2022-05-23T23:05:07Z)
- SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds [52.624157840253204]
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km2.
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset three times the size of the largest previously existing photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z)
- UrbanNet: Leveraging Urban Maps for Long Range 3D Object Detection [0.0]
UrbanNet is a modular architecture for long range monocular 3D object detection with static cameras.
Our proposed system combines commonly available urban maps along with a mature 2D object detector and an efficient 3D object descriptor.
We evaluate UrbanNet on a novel, challenging synthetic dataset and highlight the advantages of its design for traffic detection on roads with changing slope.
arXiv Detail & Related papers (2021-10-11T19:03:20Z)
- Deep Learning Framework for Detecting Ground Deformation in the Built Environment using Satellite InSAR data [7.503635457124339]
We adapt a pre-trained convolutional neural network (CNN) to detect deformation in a national-scale velocity field.
We focus on the UK where previously identified deformation is associated with coal-mining, ground water withdrawal, landslides and tunnelling.
The results demonstrate the potential applicability of the proposed framework to the development of automated ground motion analysis systems.
arXiv Detail & Related papers (2020-05-07T03:14:00Z)
- Quantifying Data Augmentation for LiDAR based 3D Object Detection [139.64869289514525]
In this work, we shed light on different data augmentation techniques commonly used in Light Detection and Ranging (LiDAR) based 3D Object Detection.
We investigate a variety of global and local augmentation techniques, where global augmentation techniques are applied to the entire point cloud of a scene and local augmentation techniques are only applied to points belonging to individual objects in the scene.
Our findings show that both types of data augmentation can lead to performance increases, but it also turns out that some techniques, such as individual object translation, can be counterproductive and hurt the overall performance.
arXiv Detail & Related papers (2020-04-03T16:09:14Z)
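To make the global/local distinction in the preceding entry concrete, here is a minimal sketch (not taken from the paper; the point-cloud layout, box format, and parameter ranges are assumptions) in which a global augmentation rotates the whole scene about the z-axis, while a local augmentation translates only the points inside one object's bounding box:

```python
import numpy as np

def global_rotate_z(points: np.ndarray, max_angle: float = np.pi / 4) -> np.ndarray:
    """Global augmentation: rotate the entire (N, 3) point cloud about the z-axis."""
    angle = np.random.uniform(-max_angle, max_angle)
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T

def local_translate_object(points: np.ndarray, box_min: np.ndarray, box_max: np.ndarray,
                           max_shift: float = 0.25) -> np.ndarray:
    """Local augmentation: translate only the points inside one axis-aligned object box."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    shift = np.random.uniform(-max_shift, max_shift, size=3)
    out = points.copy()
    out[inside] += shift  # points outside the object are left untouched
    return out

# Toy usage: a random scene with one "object" near the origin.
scene = np.random.uniform(-10, 10, size=(1000, 3))
augmented = global_rotate_z(scene)
augmented = local_translate_object(augmented,
                                   np.array([-1.0, -1.0, -1.0]),
                                   np.array([1.0, 1.0, 1.0]))
print(augmented.shape)
```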
This list is automatically generated from the titles and abstracts of the papers on this site.