Identifying every building's function in large-scale urban areas with multi-modality remote-sensing data
- URL: http://arxiv.org/abs/2405.05133v1
- Date: Wed, 8 May 2024 15:32:20 GMT
- Title: Identifying every building's function in large-scale urban areas with multi-modality remote-sensing data
- Authors: Zhuohong Li, Wei He, Jiepan Li, Hongyan Zhang,
- Abstract summary: This study proposes a semi-supervised framework to identify every building's function in large-scale urban areas.
Optical images, building height, and nighttime-light data are collected to describe the morphological attributes of buildings.
Results are evaluated against 20,000 validation points and statistical survey reports from the government.
- Score: 5.18540804614798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Buildings, as fundamental man-made structures in urban environments, serve as crucial indicators for understanding various city function zones. Rapid urbanization has raised an urgent need for efficiently surveying building footprints and functions. In this study, we propose a semi-supervised framework to identify every building's function in large-scale urban areas with multi-modality remote-sensing data. Specifically, optical images, building height, and nighttime-light data are collected to describe the morphological attributes of buildings. Then, the area of interest (AOI) and building masks from volunteered geographic information (VGI) data are collected to form sparsely labeled samples. Furthermore, the multi-modality data and weak labels are used to train a segmentation model with a semi-supervised strategy. Finally, results are evaluated against 20,000 validation points and statistical survey reports from the government. The evaluations reveal that the produced function maps achieve an overall accuracy (OA) of 82% and a Kappa coefficient of 71% across 1,616,796 buildings in Shanghai, China. This study has the potential to support large-scale urban management and sustainable urban development. All collected data and produced maps are openly accessible at https://github.com/LiZhuoHong/BuildingMap.
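The abstract outlines a pipeline in which optical imagery, building height, and nighttime-light data are stacked as model inputs and a segmentation network is trained on sparse, VGI-derived labels. The paper's actual architecture and semi-supervised strategy are not given here, so the following is only a minimal, hypothetical sketch of that general pattern; the network, channel counts, class count, and the ignore-index convention are illustrative assumptions.

```python
# A minimal sketch (not the authors' implementation): multi-modality inputs are
# stacked as channels and a segmentation network is trained with sparse "weak"
# labels, where pixels without a VGI-derived label carry an ignore index.
import torch
import torch.nn as nn
import torch.nn.functional as F

IGNORE = 255  # hypothetical marker for pixels without a VGI-derived label

class TinySegNet(nn.Module):
    """Stand-in segmentation model; inputs: RGB + height + nightlight = 5 channels."""
    def __init__(self, in_ch=5, n_classes=6):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),
        )
    def forward(self, x):
        return self.body(x)

def training_step(model, optical, height, nightlight, sparse_labels, opt):
    """One weakly supervised step on sparsely labeled pixels."""
    x = torch.cat([optical, height, nightlight], dim=1)      # (B, 5, H, W)
    logits = model(x)
    # Supervised loss only where VGI labels exist; other pixels are ignored.
    loss = F.cross_entropy(logits, sparse_labels, ignore_index=IGNORE)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    model = TinySegNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    optical = torch.rand(2, 3, 64, 64)        # RGB patch
    height = torch.rand(2, 1, 64, 64)         # normalized building height
    nightlight = torch.rand(2, 1, 64, 64)     # normalized nighttime light
    labels = torch.full((2, 64, 64), IGNORE, dtype=torch.long)
    labels[:, 20:40, 20:40] = 2               # one sparsely labeled building block
    print(training_step(model, optical, height, nightlight, labels, opt))
```

A full semi-supervised setup would add an unsupervised term (e.g., consistency or pseudo-labeling) on the pixels marked IGNORE; only the weakly supervised part is shown here.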
Related papers
- ControlCity: A Multimodal Diffusion Model Based Approach for Accurate Geospatial Data Generation and Urban Morphology Analysis [6.600555803960957]
We propose a multi-source geographic data transformation solution, utilizing accessible and complete VGI data to assist in generating urban building footprint data.
We then present ControlCity, a geographic data transformation method based on a multimodal diffusion model.
Experiments across 22 global cities demonstrate that ControlCity successfully simulates real urban building patterns.
arXiv Detail & Related papers (2024-09-25T16:03:33Z)
- UrBench: A Comprehensive Benchmark for Evaluating Large Multimodal Models in Multi-View Urban Scenarios [60.492736455572015]
We present UrBench, a benchmark designed for evaluating LMMs in complex multi-view urban scenarios.
UrBench contains 11.6K meticulously curated questions at both region-level and role-level.
Our evaluations of 21 LMMs show that current LMMs struggle in urban environments in several respects.
arXiv Detail & Related papers (2024-08-30T13:13:35Z)
- Explainable Hierarchical Urban Representation Learning for Commuting Flow Prediction [1.5156879440024378]
Commuting flow prediction is an essential task for municipal operations in the real world.
We develop a heterogeneous graph-based model to generate meaningful region embeddings for predicting different types of inter-level OD flows.
Our proposed model outperforms existing models in terms of a uniform urban structure.
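The summary above mentions generating region embeddings and decoding them into inter-level OD flows. As a rough, hypothetical illustration of only the downstream step (the heterogeneous-graph encoder itself is not described here), region embeddings can be paired and scored by a small regressor:

```python
# Hypothetical sketch: given region embeddings (the paper derives them from a
# heterogeneous graph), an origin-destination (OD) flow is scored per region pair.
import torch
import torch.nn as nn

class ODDecoder(nn.Module):
    def __init__(self, n_regions=50, dim=32):
        super().__init__()
        self.emb = nn.Embedding(n_regions, dim)  # stand-in for learned region embeddings
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, origin_ids, dest_ids):
        z = torch.cat([self.emb(origin_ids), self.emb(dest_ids)], dim=-1)
        return self.mlp(z).squeeze(-1)           # predicted commuting volume

model = ODDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
origins = torch.randint(0, 50, (256,))
dests = torch.randint(0, 50, (256,))
observed_flow = torch.rand(256) * 100            # placeholder commuting counts
opt.zero_grad()
loss = nn.functional.mse_loss(model(origins, dests), observed_flow)
loss.backward()
opt.step()
```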
arXiv Detail & Related papers (2024-08-27T03:30:01Z)
- CMAB: A First National-Scale Multi-Attribute Building Dataset in China Derived from Open Source Data and GeoAI [1.3586572110652484]
This paper presents the first national-scale Multi-Attribute Building dataset (CMAB) covering 3,667 spatial cities, 29 million buildings, and 21.3 billion square meters of rooftops.
Using billions of high-resolution Google Earth images and 60 million street view images (SVIs), we generated rooftop, height, function, age, and quality attributes for each building.
Our dataset and results are crucial for global SDGs and urban planning.
arXiv Detail & Related papers (2024-08-12T02:09:25Z)
- City Foundation Models for Learning General Purpose Representations from OpenStreetMap [17.577683270277173]
We present CityFM, a framework to train a foundation model within a selected geographical area of interest, such as a city.
CityFM relies solely on open data from OpenStreetMap and produces multimodal representations of entities of different types, combining spatial, visual, and textual information.
In all the experiments, CityFM achieves performance superior to, or on par with, the baselines.
arXiv Detail & Related papers (2023-10-01T05:55:30Z)
- Cross-City Matters: A Multimodal Remote Sensing Benchmark Dataset for Cross-City Semantic Segmentation using High-Resolution Domain Adaptation Networks [82.82866901799565]
We build a new set of multimodal remote sensing benchmark datasets (including hyperspectral, multispectral, and SAR) for the study of the cross-city semantic segmentation task.
Beyond the single city, we propose a high-resolution domain adaptation network, HighDAN, to promote the AI model's generalization ability across multi-city environments.
HighDAN retains the spatial-topological structure of the studied urban scene well through parallel high-to-low resolution fusion.
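The phrase "parallel high-to-low resolution fusion" suggests keeping a full-resolution branch alongside a coarser context branch and merging them. The sketch below is a generic, hypothetical illustration of that idea, not HighDAN's actual architecture:

```python
# Rough sketch of parallel high-to-low resolution fusion (not HighDAN itself):
# a high-resolution branch preserves spatial detail while a downsampled branch adds
# context, and the two are fused after upsampling the coarse features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelResolutionFusion(nn.Module):
    def __init__(self, in_ch=4, mid=16, n_classes=7):
        super().__init__()
        self.high = nn.Conv2d(in_ch, mid, 3, padding=1)   # full-resolution branch
        self.low = nn.Conv2d(in_ch, mid, 3, padding=1)    # coarse-resolution branch
        self.head = nn.Conv2d(2 * mid, n_classes, 1)
    def forward(self, x):
        h = F.relu(self.high(x))
        l = F.relu(self.low(F.avg_pool2d(x, 4)))          # context at 1/4 resolution
        l = F.interpolate(l, size=h.shape[-2:], mode="bilinear", align_corners=False)
        return self.head(torch.cat([h, l], dim=1))        # fused prediction

x = torch.rand(1, 4, 128, 128)                            # e.g. a multispectral patch
print(ParallelResolutionFusion()(x).shape)                # torch.Size([1, 7, 128, 128])
```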
arXiv Detail & Related papers (2023-09-26T23:55:39Z)
- Unified Data Management and Comprehensive Performance Evaluation for Urban Spatial-Temporal Prediction [Experiment, Analysis & Benchmark] [78.05103666987655]
This work addresses challenges in accessing and utilizing diverse urban spatial-temporal datasets.
We introduce atomic files, a unified storage format designed for urban spatial-temporal big data, and validate its effectiveness on 40 diverse datasets.
We conduct extensive experiments using diverse models and datasets, establishing a performance leaderboard and identifying promising research directions.
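As a hedged illustration of what a unified, table-based storage layout for urban spatial-temporal data might look like (the column names and file layout below are assumptions for illustration, not the paper's exact atomic-file schema):

```python
# Hypothetical illustration of a unified storage layout: one table for static
# geometry and one for time-stamped dynamic records, joinable by an entity id.
import io
import pandas as pd

geo_csv = io.StringIO(
    "geo_id,type,coordinates\n"
    "0,Point,\"[121.47, 31.23]\"\n"
    "1,Point,\"[121.50, 31.22]\"\n"
)
dyna_csv = io.StringIO(
    "dyna_id,time,entity_id,flow\n"
    "0,2023-01-01T00:00:00Z,0,120\n"
    "1,2023-01-01T00:00:00Z,1,87\n"
)

geo = pd.read_csv(geo_csv)                           # static spatial entities
dyna = pd.read_csv(dyna_csv, parse_dates=["time"])   # time-stamped observations
joined = dyna.merge(geo, left_on="entity_id", right_on="geo_id")
print(joined[["time", "entity_id", "flow", "coordinates"]])
```

The point of such a format is that static geometry and dynamic observations can be joined by a shared entity identifier regardless of which source dataset they came from.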
arXiv Detail & Related papers (2023-08-24T16:20:00Z)
- Semi-supervised Learning from Street-View Images and OpenStreetMap for Automatic Building Height Estimation [59.6553058160943]
We propose a semi-supervised learning (SSL) method of automatically estimating building height from Mapillary SVI and OpenStreetMap data.
The proposed method leads to a clear performance boost in estimating building heights, with a Mean Absolute Error (MAE) of around 2.1 meters.
The preliminary result is promising and motivates our future work in scaling up the proposed method based on low-cost VGI data.
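A common way to realize this kind of SSL for height estimation is self-training: fit on the few labeled buildings, pseudo-label the rest, and refit. The sketch below shows that generic pattern with synthetic features; it is an assumption, not the paper's exact pipeline:

```python
# Generic self-training sketch for building-height regression: a model fit on the
# few OSM-labeled buildings pseudo-labels the unlabeled ones, then is refit.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((500, 8))                              # e.g. SVI-derived facade features
y = 30.0 * X[:, 0] + rng.normal(0, 1.0, 500)          # synthetic "true" heights (m)
labeled = rng.random(500) < 0.2                       # only ~20% carry OSM height tags

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[labeled], y[labeled])
pseudo = model.predict(X[~labeled])                   # pseudo-labels for unlabeled buildings
model.fit(np.vstack([X[labeled], X[~labeled]]),
          np.concatenate([y[labeled], pseudo]))       # retrain on labeled + pseudo-labeled
print("MAE (m):", round(mean_absolute_error(y, model.predict(X)), 2))
```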
arXiv Detail & Related papers (2023-07-05T18:16:30Z)
- Building Floorspace in China: A Dataset and Learning Pipeline [0.32228025627337864]
This paper provides a first milestone in measuring the floorspace of buildings in 40 major Chinese cities.
We use Sentinel-1 and -2 satellite images as our main data source.
We provide a detailed description of our data, algorithms, and evaluations.
arXiv Detail & Related papers (2023-03-03T21:45:36Z)
- Building Coverage Estimation with Low-resolution Remote Sensing Imagery [65.95520230761544]
We propose a method for estimating building coverage using only publicly available low-resolution satellite imagery.
Our model achieves a coefficient of determination (R^2) as high as 0.968 when predicting building coverage in regions at different levels of development around the world.
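The coefficient of determination quoted above compares predicted coverage with reference coverage; a minimal illustration with synthetic numbers:

```python
# R^2 between reference and predicted building-coverage fractions; the values
# below are invented purely for illustration.
import numpy as np
from sklearn.metrics import r2_score

reference = np.array([0.10, 0.35, 0.62, 0.80, 0.05])   # fraction of each region covered
predicted = np.array([0.12, 0.33, 0.60, 0.78, 0.07])
print("R^2:", round(r2_score(reference, predicted), 3))
```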
arXiv Detail & Related papers (2023-01-04T05:19:33Z)
- Methodological Foundation of a Numerical Taxonomy of Urban Form [62.997667081978825]
We present a method for numerical taxonomy of urban form derived from biological systematics.
We derive homogeneous urban tissue types and, by determining overall morphological similarity between them, generate a hierarchical classification of urban form.
After framing and presenting the method, we test it on two cities - Prague and Amsterdam.
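The core computational step of such a numerical taxonomy is hierarchical clustering of morphometric characters measured per urban tissue cell. A hedged sketch, with invented features, of how such a hierarchy and tissue types might be derived:

```python
# Hedged sketch of the numerical-taxonomy step: agglomerative (Ward) clustering over
# per-tissue morphometric characters yields a hierarchy that can be cut into types.
# Feature values are invented for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Rows: urban tissue cells; columns: standardized morphometric characters
# (e.g. building area, coverage ratio, street width, ...).
characters = rng.normal(size=(200, 6))

tree = linkage(characters, method="ward")                 # hierarchy of morphological similarity
tissue_types = fcluster(tree, t=8, criterion="maxclust")  # cut into 8 tissue types
print(np.bincount(tissue_types)[1:])                      # number of cells per derived type
```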
arXiv Detail & Related papers (2021-04-30T12:47:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.