Assessing bikeability with street view imagery and computer vision
- URL: http://arxiv.org/abs/2105.08499v1
- Date: Thu, 13 May 2021 14:08:58 GMT
- Title: Assessing bikeability with street view imagery and computer vision
- Authors: Koichi Ito, Filip Biljecki
- Abstract summary: We develop an exhaustive index of bikeability composed of 34 indicators.
SVI indicators outperformed their non-SVI counterparts by a wide margin and are found to be superior for assessing urban bikeability.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Studies evaluating bikeability usually compute spatial indicators shaping
cycling conditions and combine them into a quantitative index. Much of this research
relies on site visits or conventional geospatial approaches, and few studies have
leveraged street view imagery (SVI) for virtual audits. Existing virtual audits have
assessed a limited range of aspects, have not always been automated with computer
vision (CV), and have not thoroughly gauged the usability of these technologies. We
investigate, with experiments at a fine spatial scale and across multiple geographies
(Singapore and Tokyo), whether SVI and CV can be used to assess bikeability
comprehensively. Extending related work, we develop an exhaustive bikeability index
composed of 34 indicators. The results suggest that SVI and CV are adequate for
evaluating bikeability in cities comprehensively. SVI-based indicators outperformed
their non-SVI counterparts by a wide margin, indicating that they are superior for
assessing urban bikeability and could potentially be used independently, replacing
traditional techniques. However, the paper also exposes some limitations, suggesting
that the best way forward is to combine SVI and non-SVI approaches. The new
bikeability index is a contribution to transportation and urban analytics and is
scalable for assessing cycling appeal widely.
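The abstract does not spell out how the 34 indicators are combined. As a hedged illustration of how composite indices of this kind are typically assembled, the sketch below min-max normalizes each indicator and takes a weighted sum; the indicator set, weights, and 0-100 scaling are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def composite_index(indicators: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Combine per-segment indicator scores into one index in [0, 100].

    indicators: (n_segments, n_indicators) raw indicator values.
    weights:    (n_indicators,) non-negative importance weights.
    Hypothetical scheme: min-max normalize each indicator column,
    then take the weighted mean and rescale to 0-100.
    """
    lo = indicators.min(axis=0)
    hi = indicators.max(axis=0)
    normalized = (indicators - lo) / np.where(hi > lo, hi - lo, 1.0)
    w = weights / weights.sum()
    return 100.0 * normalized @ w

# Toy example: 3 street segments, 4 hypothetical indicators
# (e.g., greenery share, lane presence, traffic volume*, slope*).
# *Lower-is-better indicators are inverted before normalizing.
scores = np.array([
    [0.35, 1.0, 120.0, 2.1],
    [0.10, 0.0, 800.0, 5.5],
    [0.55, 1.0,  60.0, 1.0],
])
scores[:, 2:] = -scores[:, 2:]  # invert lower-is-better columns
print(composite_index(scores, weights=np.array([0.3, 0.3, 0.2, 0.2])))
```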
Related papers
- Evaluating the effects of Data Sparsity on the Link-level Bicycling Volume Estimation: A Graph Convolutional Neural Network Approach [54.84957282120537]
We present the first study to utilize a Graph Convolutional Network architecture to model link-level bicycling volumes.
We estimate the Annual Average Daily Bicycle (AADB) counts across the City of Melbourne, Australia using Strava Metro bicycling count data.
Our results show that the GCN model outperforms traditional machine-learning baselines in predicting AADB counts; the basic graph-convolution update is sketched below.
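As a minimal sketch of the modeling idea (assuming a standard Kipf-and-Welling-style layer, not the paper's exact architecture), one graph-convolution update propagates link-level features over the road network's adjacency structure:

```python
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    A: (n, n) adjacency of the link graph (road links as nodes).
    H: (n, f_in) per-link features (e.g., sparse counts, road attributes).
    W: (f_in, f_out) weight matrix (random here for illustration).
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy link graph: 4 road links, 3 input features, 2 output features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = rng.random((4, 3))
print(gcn_layer(A, H, rng.random((3, 2))))
```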
arXiv Detail & Related papers (2024-10-11T04:53:18Z)
- Coverage and Bias of Street View Imagery in Mapping the Urban Environment [0.0]
Street View Imagery (SVI) has emerged as a valuable data form in urban studies, enabling new ways to map and sense urban environments.
This research proposes a novel workflow to estimate SVI's feature-level coverage of the urban environment.
Using London as a case study, three experiments are conducted to identify potential biases in SVI's ability to cover and represent urban features; a toy version of such a coverage measure is sketched below.
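As an illustration only (the paper's workflow is more involved), the sketch below measures coverage as the share of urban features falling within a fixed radius of at least one SVI capture point; the radius and the distance approximation are assumptions.

```python
import math

def feature_coverage(features, svi_points, radius_m=50.0):
    """Share of urban features lying within `radius_m` of at least one
    SVI capture point (a simple proxy for feature-level coverage).
    Points are (lat, lon); distances use an equirectangular approximation."""
    def dist_m(a, b):
        lat = math.radians((a[0] + b[0]) / 2)
        dx = (b[1] - a[1]) * 111_320 * math.cos(lat)
        dy = (b[0] - a[0]) * 110_540
        return math.hypot(dx, dy)
    covered = sum(
        any(dist_m(f, p) <= radius_m for p in svi_points) for f in features
    )
    return covered / len(features) if features else 0.0

# Toy example: 3 features, 2 SVI capture points (third feature uncovered).
features = [(51.5074, -0.1278), (51.5080, -0.1270), (51.5200, -0.1000)]
svi_points = [(51.5075, -0.1277), (51.5081, -0.1269)]
print(f"coverage = {feature_coverage(features, svi_points):.2f}")
```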
arXiv Detail & Related papers (2024-09-22T02:58:43Z)
- RoadBEV: Road Surface Reconstruction in Bird's Eye View [55.0558717607946]
Road surface conditions, especially geometry profiles, greatly affect the driving performance of autonomous vehicles. Vision-based online road reconstruction promisingly captures road information in advance.
Bird's-Eye-View (BEV) perception offers great potential for more reliable and accurate reconstruction.
This paper proposes two simple yet effective models for road elevation reconstruction in BEV, named RoadBEV-mono and RoadBEV-stereo.
arXiv Detail & Related papers (2024-04-09T20:24:29Z)
- HoloVIC: Large-scale Dataset and Benchmark for Multi-Sensor Holographic Intersection and Vehicle-Infrastructure Cooperative [23.293162454592544]
We constructed holographic intersections with various layouts to build a large-scale multi-sensor holographic vehicle-infrastructure cooperation dataset, called HoloVIC.
Our dataset includes 3 different types of sensors (Camera, Lidar, Fisheye) and employs 4 sensor layouts based on the different intersections.
arXiv Detail & Related papers (2024-03-05T04:08:19Z)
- Open-source data pipeline for street-view images: a case study on community mobility during COVID-19 pandemic [0.9423257767158634]
Street View Images (SVI) are a common source of valuable data for researchers.
Google Street View images are collected infrequently, making temporal analysis challenging.
This study demonstrates the feasibility and value of collecting and using SVI for research beyond what currently available SVI data allows; a minimal sketch of the temporal-grouping idea is given below.
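The paper's actual pipeline and data sources are not reproduced here. As a minimal, self-contained sketch of the temporal-analysis idea, the code below groups repeated SVI captures into small spatial cells and orders each cell by capture date; all class and function names, and the grid-cell size, are illustrative assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Capture:
    """One street-view capture: where it was taken and when."""
    lat: float
    lon: float
    taken: date
    image_path: str  # local path to the downloaded image

def group_by_cell(captures, cell_deg=0.0005):
    """Bucket captures into ~50 m grid cells and sort each bucket by date,
    so the same spot can be compared across collection rounds."""
    cells = defaultdict(list)
    for c in captures:
        key = (round(c.lat / cell_deg), round(c.lon / cell_deg))
        cells[key].append(c)
    for series in cells.values():
        series.sort(key=lambda c: c.taken)
    return cells

# Toy example: two collection rounds at roughly the same street corner.
captures = [
    Capture(1.3521, 103.8198, date(2020, 3, 1), "round1/img001.jpg"),
    Capture(1.3522, 103.8199, date(2021, 3, 1), "round2/img094.jpg"),
]
for key, series in group_by_cell(captures).items():
    print(key, [c.taken.isoformat() for c in series])
```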
arXiv Detail & Related papers (2024-01-23T20:56:16Z)
- SSCBench: A Large-Scale 3D Semantic Scene Completion Benchmark for Autonomous Driving [87.8761593366609]
SSCBench is a benchmark that integrates scenes from widely used automotive datasets.
We benchmark models using monocular, trinocular, and point cloud input to assess the performance gap.
We have unified semantic labels across diverse datasets to simplify cross-domain generalization testing.
arXiv Detail & Related papers (2023-06-15T09:56:33Z)
- OpenLane-V2: A Topology Reasoning Benchmark for Unified 3D HD Mapping [84.65114565766596]
We present OpenLane-V2, the first dataset on topology reasoning for traffic scene structure.
OpenLane-V2 consists of 2,000 annotated road scenes that describe traffic elements and their correlation to the lanes.
We evaluate various state-of-the-art methods, and present their quantitative and qualitative results on OpenLane-V2 to indicate future avenues for investigating topology reasoning in traffic scenes.
arXiv Detail & Related papers (2023-04-20T16:31:22Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- 4Seasons: Benchmarking Visual SLAM and Long-Term Localization for Autonomous Driving in Challenging Conditions [54.59279160621111]
We present a novel visual SLAM and long-term localization benchmark for autonomous driving in challenging conditions based on the large-scale 4Seasons dataset.
The proposed benchmark provides drastic appearance variations caused by seasonal changes and diverse weather and illumination conditions.
We introduce a new unified benchmark for jointly evaluating visual odometry, global place recognition, and map-based visual localization performance.
arXiv Detail & Related papers (2022-12-31T13:52:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.