Coverage and Bias of Street View Imagery in Mapping the Urban Environment
- URL: http://arxiv.org/abs/2409.15386v1
- Date: Sun, 22 Sep 2024 02:58:43 GMT
- Title: Coverage and Bias of Street View Imagery in Mapping the Urban Environment
- Authors: Zicheng Fan, Chen-Chieh Feng, Filip Biljecki
- Abstract summary: Street View Imagery (SVI) has emerged as a valuable data form in urban studies, enabling new ways to map and sense urban environments.
This research proposes a novel workflow to estimate SVI's feature-level coverage of the urban environment.
Using London as a case study, three experiments are conducted to identify potential biases in SVI's ability to cover and represent urban features.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Street View Imagery (SVI) has emerged as a valuable data form in urban studies, enabling new ways to map and sense urban environments. However, fundamental concerns regarding the representativeness, quality, and reliability of SVI remain underexplored, e.g., to what extent can cities be captured by such data, and do data gaps result in bias? This research, positioned at the intersection of spatial data quality and urban analytics, addresses these concerns by proposing a novel workflow to estimate SVI's feature-level coverage of the urban environment. The workflow integrates the positional relationships between SVI and target features, as well as the impact of environmental obstructions. Expanding the domain of data quality to SVI, we introduce an indicator system that evaluates the extent of coverage, focusing on the completeness and frequency dimensions. Using London as a case study, three experiments are conducted to identify potential biases in SVI's ability to cover and represent urban features, with a focus on building facades. The research highlights the limitations of traditional spatial data quality metrics in assessing SVI, and the variability of SVI coverage under different data acquisition practices. Tailored approaches that consider the unique metadata and horizontal perspective of SVI are also underscored. The findings suggest that while SVI offers valuable insights, it is no panacea: its application in urban research requires careful consideration of data coverage and feature-level representativeness to ensure reliable results.
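The workflow described in the abstract, relating SVI capture points to target features and accounting for environmental obstructions, then scoring coverage along completeness and frequency dimensions, can be illustrated with a toy 2D model. Everything below (the 50 m visibility range, the midpoint line-of-sight test, and the data layout) is an illustrative assumption, not the paper's actual method or parameters:

```python
# Toy sketch of feature-level SVI coverage. Assumptions: facades are 2D line
# segments, SVI capture points are (x, y) tuples, and obstructions are blocking
# segments. A facade is "captured" by a point if the point lies within MAX_RANGE
# of the facade midpoint and no obstruction crosses the line of sight.
from math import hypot

MAX_RANGE = 50.0  # assumed maximum distance at which a facade is legible in SVI


def segments_intersect(p1, p2, q1, q2):
    """Return True if segment p1-p2 properly crosses segment q1-q2."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)


def facade_captures(facade, svi_points, obstructions):
    """Count SVI points with an unobstructed line of sight to the facade midpoint."""
    (x1, y1), (x2, y2) = facade
    mid = ((x1 + x2) / 2, (y1 + y2) / 2)
    count = 0
    for pt in svi_points:
        if hypot(pt[0] - mid[0], pt[1] - mid[1]) > MAX_RANGE:
            continue  # too far away to resolve the facade
        if any(segments_intersect(pt, mid, o[0], o[1]) for o in obstructions):
            continue  # line of sight blocked, e.g. by vegetation or street furniture
        count += 1
    return count


def coverage_indicators(facades, svi_points, obstructions):
    """Completeness: share of facades seen at least once. Frequency: mean captures."""
    counts = [facade_captures(f, svi_points, obstructions) for f in facades]
    completeness = sum(c > 0 for c in counts) / len(counts)
    frequency = sum(counts) / len(counts)
    return completeness, frequency
```

In a real setting the facades, capture points, and obstruction geometries would come from building footprints and SVI metadata, and the visibility test would use the full camera position and heading rather than a simple midpoint check.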
Related papers
- Urban Flood Mapping Using Satellite Synthetic Aperture Radar Data: A Review of Characteristics, Approaches and Datasets
This study focuses on the challenges and advancements in SAR-based urban flood mapping.
It specifically addresses the limitations of spatial and temporal resolution in SAR data and discusses the essential pre-processing steps.
It highlights a lack of open-access SAR datasets for urban flood mapping, hindering development in advanced deep learning-based methods.
arXiv Detail & Related papers (2024-11-06T09:30:13Z)
- The State of the Art in Visual Analytics for 3D Urban Data
Urbanization has amplified the importance of three-dimensional structures in urban environments.
With the growing availability of 3D urban data, numerous studies have focused on developing visual analysis techniques tailored to the unique characteristics of urban environments.
Incorporating the third dimension into visual analytics introduces additional challenges in designing effective visual tools to tackle the diverse complexities of urban data.
arXiv Detail & Related papers (2024-04-24T16:50:42Z)
- AVIBench: Towards Evaluating the Robustness of Large Vision-Language Model on Adversarial Visual-Instructions
Large Vision-Language Models (LVLMs) have shown significant progress in responding well to visual instructions from users.
Despite the critical importance of LVLMs' robustness against such threats, current research in this area remains limited.
We introduce AVIBench, a framework designed to analyze the robustness of LVLMs when facing various adversarial visual-instructions.
arXiv Detail & Related papers (2024-03-14T12:51:07Z)
- Open-source data pipeline for street-view images: a case study on community mobility during COVID-19 pandemic
Street View Images (SVI) are a common source of valuable data for researchers.
Google Street View images are collected infrequently, making temporal analysis challenging.
This study demonstrates the feasibility and value of collecting and using SVI for research purposes beyond what is possible with currently available SVI data.
arXiv Detail & Related papers (2024-01-23T20:56:16Z)
- RadOcc: Learning Cross-Modality Occupancy Knowledge through Rendering Assisted Distillation
3D occupancy prediction is an emerging task that aims to estimate the occupancy states and semantics of 3D scenes using multi-view images.
We propose RadOcc, a rendering-assisted distillation paradigm for 3D occupancy prediction.
arXiv Detail & Related papers (2023-12-19T03:39:56Z)
- Cross-City Matters: A Multimodal Remote Sensing Benchmark Dataset for Cross-City Semantic Segmentation using High-Resolution Domain Adaptation Networks
We build a new set of multimodal remote sensing benchmark datasets (including hyperspectral, multispectral, and SAR data) for the study of the cross-city semantic segmentation task.
Beyond the single city, we propose a high-resolution domain adaptation network, HighDAN, to promote the AI model's generalization ability across multi-city environments.
HighDAN is capable of retaining the spatially topological structure of the studied urban scene well in a parallel high-to-low resolution fusion fashion.
arXiv Detail & Related papers (2023-09-26T23:55:39Z)
- SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds
We introduce SensatUrban, an urban-scale UAV photogrammetric point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km².
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset three times the size of the previously largest existing photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z)
- Off-policy Imitation Learning from Visual Inputs
We propose OPIfVI, which combines off-policy learning, data augmentation, and encoder techniques.
We show that OPIfVI is able to achieve expert-level performance and outperform existing baselines.
arXiv Detail & Related papers (2021-11-08T09:06:12Z)
- Assessing bikeability with street view imagery and computer vision
We develop an exhaustive index of bikeability composed of 34 indicators.
SVI-based indicators outperformed their non-SVI counterparts by a wide margin and are thus found to be superior for assessing urban bikeability.
arXiv Detail & Related papers (2021-05-13T14:08:58Z)
- Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges
We present an urban-scale photogrammetric point cloud dataset with nearly three billion richly annotated points.
Our dataset consists of large areas from three UK cities, covering about 7.6 km² of the city landscape.
We evaluate the performance of state-of-the-art algorithms on our dataset and provide a comprehensive analysis of the results.
arXiv Detail & Related papers (2020-09-07T14:47:07Z)
- Standardized Green View Index and Quantification of Different Metrics of Urban Green Vegetation
This study proposes an improved indicator of greenery visibility for analytical use (standardized GVI; sGVI).
It is shown that the sGVI, a weighted form of GVI aggregated to an area, mitigates the bias of densely located measurement sites.
Also, by comparing sGVI and NDVI at city block level, we found that sGVI captures the presence of vegetation better in the city center, whereas NDVI is better in capturing vegetation in parks and forests.
arXiv Detail & Related papers (2020-08-01T09:58:22Z)
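The sGVI entry above describes a weighted aggregation of point-level GVI that mitigates the bias of densely located measurement sites. The paper's exact weighting is not reproduced here; the sketch below uses one plausible scheme, down-weighting each site by the number of sites in its neighbourhood so that a tight cluster contributes roughly as much as one isolated site. The 25 m radius and the inverse-count weights are illustrative assumptions:

```python
# Hedged sketch of a density-weighted greenery index. Assumption: each site's
# weight is the inverse of the number of sites within RADIUS of it (itself
# included). This is an illustrative scheme, not the paper's exact sGVI
# definition.
from math import hypot

RADIUS = 25.0  # assumed neighbourhood radius in metres


def weighted_gvi(sites):
    """sites: list of ((x, y), gvi) pairs; returns (plain mean, weighted mean)."""
    plain = sum(g for _, g in sites) / len(sites)
    weights = []
    for p, _ in sites:
        n_near = sum(1 for q, _ in sites
                     if hypot(p[0] - q[0], p[1] - q[1]) <= RADIUS)
        weights.append(1.0 / n_near)  # clustered sites share one site's influence
    weighted = sum(w * g for w, (_, g) in zip(weights, sites)) / sum(weights)
    return plain, weighted
```

With three clustered high-GVI sites and one distant low-GVI site, the plain mean is dominated by the cluster, while the weighted mean treats the cluster and the isolated site roughly equally, which is the kind of density-bias mitigation the sGVI entry describes.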
This list is automatically generated from the titles and abstracts of the papers in this site.