Coverage and Bias of Street View Imagery in Mapping the Urban Environment
- URL: http://arxiv.org/abs/2409.15386v3
- Date: Fri, 24 Jan 2025 12:06:47 GMT
- Title: Coverage and Bias of Street View Imagery in Mapping the Urban Environment
- Authors: Zicheng Fan, Chen-Chieh Feng, Filip Biljecki
- Abstract summary: Street View Imagery (SVI) has emerged as a valuable data form in urban studies, enabling new ways to map and sense urban environments.
However, fundamental concerns regarding the representativeness, quality, and reliability of SVI remain underexplored.
This research proposes a novel and effective method to estimate SVI's element-level coverage in the urban environment.
- Abstract: Street View Imagery (SVI) has emerged as a valuable data form in urban studies, enabling new ways to map and sense urban environments. However, fundamental concerns regarding the representativeness, quality, and reliability of SVI remain underexplored, e.g. to what extent can cities be captured by such data and do data gaps result in bias. This research, positioned at the intersection of spatial data quality and urban analytics, addresses these concerns by proposing a novel and effective method to estimate SVI's element-level coverage in the urban environment. The method integrates the positional relationships between SVI and target elements, as well as the impact of physical obstructions. Expanding the domain of data quality to SVI, we introduce an indicator system that evaluates the extent of coverage, focusing on the completeness and frequency dimensions. Taking London as a case study, three experiments are conducted to identify potential biases in SVI's ability to cover and represent urban environmental elements, using building facades as an example. It is found that despite their high availability along urban road networks, Google Street View covers only 62.4 % of buildings in the case study area. The average facade coverage per building is 12.4 %. SVI tends to over-represent non-residential buildings, thus possibly resulting in biased analyses, and its coverage of environmental elements is position-dependent. The research also highlights the variability of SVI coverage under different data acquisition practices and proposes an optimal sampling interval range of 50-60 m for SVI collection. The findings suggest that while SVI offers valuable insights, it is no panacea - its application in urban research requires careful consideration of data coverage and element-level representativeness to ensure reliable results.
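The abstract describes the coverage-estimation method only at a high level; as a rough illustration (not the authors' released code), the following Python sketch shows how element-level facade coverage could be approximated with shapely, assuming building footprints and SVI capture points are available as planar geometries: a facade segment counts as covered if an unobstructed sight line from a sufficiently close capture point reaches it, and the completeness-style indicator is the covered share of the facade length.

```python
# Minimal sketch (not the paper's implementation) of element-level facade
# coverage: a facade segment is "covered" if an unobstructed sight line from
# at least one SVI capture point within range reaches its midpoint.
from shapely.geometry import LineString, Point, Polygon

MAX_RANGE_M = 60.0  # assumed usable capture distance in metres (hypothetical)

def facade_segments(footprint: Polygon, step: float = 5.0):
    """Split a building footprint's exterior ring into short facade segments."""
    ring = footprint.exterior
    n = max(1, int(ring.length // step))
    points = [ring.interpolate(i / n, normalized=True) for i in range(n + 1)]
    return [LineString([points[i], points[i + 1]]) for i in range(n)]

def segment_covered(segment, capture_points, footprints):
    """True if some capture point within range sees the segment midpoint."""
    midpoint = segment.interpolate(0.5, normalized=True)
    for cam in capture_points:
        if cam.distance(midpoint) > MAX_RANGE_M:
            continue
        sight_line = LineString([cam, midpoint])
        # A footprint blocks the view only if the sight line passes through its
        # interior; merely ending on the target's own wall does not count, so
        # the target building itself also handles self-occlusion here.
        if not any(sight_line.crosses(fp) for fp in footprints):
            return True
    return False

def facade_coverage(footprint, capture_points, all_footprints):
    """Covered share of a building's facade length, in [0, 1]."""
    segments = facade_segments(footprint)
    covered = sum(s.length for s in segments
                  if segment_covered(s, capture_points, all_footprints))
    total = sum(s.length for s in segments)
    return covered / total if total else 0.0

if __name__ == "__main__":
    # Toy example: one target building, one occluding neighbour, two cameras.
    target = Polygon([(0, 0), (20, 0), (20, 10), (0, 10)])
    occluder = Polygon([(25, -5), (35, -5), (35, 15), (25, 15)])
    cameras = [Point(10, -15), Point(40, 5)]  # the second view is blocked
    print(f"coverage: {facade_coverage(target, cameras, [target, occluder]):.2f}")
```

In the toy example only the south facade is visible from the nearer capture point, so roughly a third of the facade length is reported as covered; the paper's full indicator system additionally considers the frequency dimension, i.e. how often each element is captured.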
Related papers
- Urban Flood Mapping Using Satellite Synthetic Aperture Radar Data: A Review of Characteristics, Approaches and Datasets [17.621744717937993]
This study focuses on the challenges and advancements in SAR-based urban flood mapping.
It specifically addresses the limitations of spatial and temporal resolution in SAR data and discusses the essential pre-processing steps.
It highlights a lack of open-access SAR datasets for urban flood mapping, which hinders the development of advanced deep learning-based methods.
arXiv Detail & Related papers (2024-11-06T09:30:13Z) - UrBench: A Comprehensive Benchmark for Evaluating Large Multimodal Models in Multi-View Urban Scenarios [60.492736455572015]
We present UrBench, a benchmark designed for evaluating LMMs in complex multi-view urban scenarios.
UrBench contains 11.6K meticulously curated questions at both region-level and role-level.
Our evaluations of 21 LMMs show that current LMMs struggle in urban environments in several respects.
arXiv Detail & Related papers (2024-08-30T13:13:35Z) - Open-source data pipeline for street-view images: a case study on community mobility during COVID-19 pandemic [0.9423257767158634]
Street View Images (SVI) are a common source of valuable data for researchers.
Google Street View images are collected infrequently, making temporal analysis challenging.
This study demonstrates the feasibility and value of collecting and using SVI for research purposes beyond what is possible with currently available SVI data.
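The pipeline described in that entry collects fresh imagery rather than relying on Google's archive; purely as an illustrative aside (not part of the paper), the sketch below shows one way to audit how old the existing Google Street View coverage is at a location, using the public Street View Static API metadata endpoint. Parameter and field names follow Google's documented API, but quota, billing, and response details should be verified against the current documentation.

```python
# Hedged sketch: check when Google Street View imagery was last captured near
# a coordinate via the Street View Static API *metadata* endpoint.
import requests

METADATA_URL = "https://maps.googleapis.com/maps/api/streetview/metadata"

def capture_date(lat: float, lng: float, api_key: str, radius_m: int = 50):
    """Return (pano_id, 'YYYY-MM') for the nearest panorama, or None if absent."""
    params = {
        "location": f"{lat},{lng}",
        "radius": radius_m,       # search radius around the point, in metres
        "source": "outdoor",      # prefer official car-mounted captures
        "key": api_key,
    }
    resp = requests.get(METADATA_URL, params=params, timeout=10)
    resp.raise_for_status()
    meta = resp.json()
    if meta.get("status") != "OK":
        return None               # e.g. ZERO_RESULTS: no panorama nearby
    return meta.get("pano_id"), meta.get("date")

# Example (requires a valid key): sample points along a street and inspect how
# old the available imagery is before relying on it for temporal analysis.
# print(capture_date(51.5074, -0.1278, api_key="YOUR_API_KEY"))
```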
arXiv Detail & Related papers (2024-01-23T20:56:16Z) - Cross-City Matters: A Multimodal Remote Sensing Benchmark Dataset for Cross-City Semantic Segmentation using High-Resolution Domain Adaptation Networks [82.82866901799565]
We build a new set of multimodal remote sensing benchmark datasets (including hyperspectral, multispectral, and SAR data) for studying the cross-city semantic segmentation task.
Beyond a single city, we propose a high-resolution domain adaptation network, HighDAN, to improve the AI model's ability to generalize across multi-city environments.
HighDAN retains the spatial topological structure of the studied urban scene well through a parallel high-to-low resolution fusion scheme.
arXiv Detail & Related papers (2023-09-26T23:55:39Z) - A Contextual Master-Slave Framework on Urban Region Graph for Urban Village Detection [68.84486900183853]
We build an urban region graph (URG) to model the urban area in a hierarchically structured way.
Then, we design a novel contextual master-slave framework to effectively detect urban villages from the URG.
The proposed framework learns to balance generality and specificity for urban village (UV) detection within an urban area.
arXiv Detail & Related papers (2022-11-26T18:17:39Z) - SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds [52.624157840253204]
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km2.
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset three times the size of the previously largest photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z) - CitySurfaces: City-Scale Semantic Segmentation of Sidewalk Materials [6.573006589628846]
Most cities lack a spatial catalog of their surfaces due to the cost-prohibitive and time-consuming nature of data collection.
Recent advancements in computer vision, together with the availability of street-level images, provide new opportunities for cities to extract large-scale built environment data.
We propose CitySurfaces, an active learning-based framework that leverages computer vision techniques for classifying sidewalk materials.
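CitySurfaces' own training pipeline is not reproduced here; the snippet below is only a generic sketch of the active-learning idea the entry names, using least-confident (uncertainty) sampling with a placeholder scikit-learn classifier and random features standing in for real image descriptors.

```python
# Generic uncertainty-sampling loop (an illustration of the active-learning
# idea named by CitySurfaces, not the authors' pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_round(X_labelled, y_labelled, X_pool, batch_size=10):
    """Fit on the labelled set and pick the pool items the model is least sure of."""
    clf = LogisticRegression(max_iter=1000).fit(X_labelled, y_labelled)
    proba = clf.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)        # least-confident sampling
    query_idx = np.argsort(uncertainty)[-batch_size:]
    return clf, query_idx                        # send query_idx to annotators

# Toy usage with random features standing in for per-image descriptors.
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(40, 8)), rng.integers(0, 3, size=40)
X_pool = rng.normal(size=(500, 8))
model, to_label = active_learning_round(X_lab, y_lab, X_pool)
```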
arXiv Detail & Related papers (2022-01-06T21:58:37Z) - An Experimental Urban Case Study with Various Data Sources and a Model for Traffic Estimation [65.28133251370055]
We organize an experimental campaign with video measurement in an area within the urban network of Zurich, Switzerland.
We focus on capturing the traffic state in terms of traffic flow and travel times, drawing on measurements from established thermal cameras.
We propose a simple yet efficient Multiple Linear Regression (MLR) model that estimates travel times by fusing various data sources.
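The summary does not list the model's actual inputs; as a hedged illustration of what an MLR travel-time estimator fusing several data sources can look like, the sketch below fits scikit-learn's LinearRegression on synthetic placeholder features (flow, occupancy, and a historical travel-time prior), none of which are the paper's variables.

```python
# Minimal sketch of a multiple-linear-regression travel-time estimator with
# fused inputs; features and target are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.uniform(100, 900, n),    # loop-detector flow [veh/h]
    rng.uniform(0.05, 0.6, n),   # thermal-camera occupancy [-]
    rng.uniform(40, 120, n),     # historical travel time for the link [s]
])
# Synthetic target, only to make the example runnable end to end.
y = 0.02 * X[:, 0] + 80 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(0, 5, n)

model = LinearRegression().fit(X[:150], y[:150])
pred = model.predict(X[150:])
print("MAE on held-out samples [s]:", round(mean_absolute_error(y[150:], pred), 2))
```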
arXiv Detail & Related papers (2021-08-02T08:13:57Z) - Assessing bikeability with street view imagery and computer vision [0.0]
We develop an exhaustive index of bikeability composed of 34 indicators.
SVI-derived indicators are found to outperform their non-SVI counterparts by a wide margin, making them superior for assessing urban bikeability.
arXiv Detail & Related papers (2021-05-13T14:08:58Z) - Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges [52.624157840253204]
We present an urban-scale photogrammetric point cloud dataset with nearly three billion richly annotated points.
Our dataset consists of large areas from three UK cities, covering about 7.6 km2 of the city landscape.
We evaluate the performance of state-of-the-art algorithms on our dataset and provide a comprehensive analysis of the results.
arXiv Detail & Related papers (2020-09-07T14:47:07Z) - Standardized Green View Index and Quantification of Different Metrics of Urban Green Vegetation [0.0]
This study proposes an improved indicator of greenery visibility for analytical use (standardized GVI; sGVI).
It is shown that the sGVI, a weighted form of GVI aggregated to an area, mitigates the bias of densely located measurement sites.
Also, by comparing sGVI and NDVI at city block level, we found that sGVI captures the presence of vegetation better in the city center, whereas NDVI is better in capturing vegetation in parks and forests.
arXiv Detail & Related papers (2020-08-01T09:58:22Z)
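To make the aggregation idea behind the sGVI entry above concrete, here is a small sketch in which each Green View Index measurement is weighted by the street length it represents before averaging over an area. The length-based weighting is an assumption chosen for illustration and may differ from the paper's exact definition, but it shows how a weighted aggregate resists the bias of densely clustered measurement points.

```python
# Hedged sketch of weighted GVI aggregation: a plain mean over-weights dense
# clusters of measurement points; weighting by represented street length does
# not. The weighting scheme here is an assumption, not the paper's definition.
from dataclasses import dataclass

@dataclass
class GviSample:
    gvi: float              # greenery fraction visible at this point, in [0, 1]
    street_length_m: float  # length of street this measurement point represents

def plain_mean_gvi(samples):
    return sum(s.gvi for s in samples) / len(samples)

def weighted_gvi(samples):
    total_length = sum(s.street_length_m for s in samples)
    return sum(s.gvi * s.street_length_m for s in samples) / total_length

# Ten closely spaced points on a short leafy street vs. two points on a long
# bare arterial: the plain mean is pulled up by the dense cluster, the
# weighted value is not.
samples = [GviSample(0.45, 10.0)] * 10 + [GviSample(0.10, 400.0)] * 2
print(f"plain mean GVI: {plain_mean_gvi(samples):.3f}")
print(f"weighted GVI:   {weighted_gvi(samples):.3f}")
```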
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.