Assessing bikeability with street view imagery and computer vision
- URL: http://arxiv.org/abs/2105.08499v1
- Date: Thu, 13 May 2021 14:08:58 GMT
- Title: Assessing bikeability with street view imagery and computer vision
- Authors: Koichi Ito, Filip Biljecki
- Abstract summary: We develop an exhaustive index of bikeability composed of 34 indicators.
SVI indicators are found to be superior in assessing urban bikeability, outperforming their non-SVI counterparts by a wide margin.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Studies evaluating bikeability usually compute spatial indicators shaping
cycling conditions and conflate them in a quantitative index. Much research
involves site visits or conventional geospatial approaches, and few studies
have leveraged street view imagery (SVI) for conducting virtual audits. These
have assessed a limited range of aspects, and not all have been automated using
computer vision (CV). Furthermore, studies have not yet zeroed in on gauging
the usability of these technologies thoroughly. We investigate, with
experiments at a fine spatial scale and across multiple geographies (Singapore
and Tokyo), whether we can use SVI and CV to assess bikeability
comprehensively. Extending related work, we develop an exhaustive index of
bikeability composed of 34 indicators. The results suggest that SVI and CV are
adequate to evaluate bikeability in cities comprehensively. As they
outperformed non-SVI counterparts by a wide margin, SVI indicators are also
found to be superior in assessing urban bikeability, and potentially can be
used independently, replacing traditional techniques. However, the paper
exposes some limitations, suggesting that the best way forward is combining
both SVI and non-SVI approaches. The new bikeability index presents a
contribution in transportation and urban analytics, and it is scalable to
assess cycling appeal widely.
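The workflow the abstract describes, deriving street-level indicators from SVI (for example, pixel shares from semantic segmentation or object detections) and conflating them into a single quantitative index, can be sketched roughly as below. This is a minimal illustration assuming equal weights and min-max scaling; the indicator names, values, and aggregation scheme are placeholders, not the paper's 34-indicator specification.

```python
# Minimal sketch: conflating street-level indicators into a composite
# bikeability index. Indicator names, values, and equal weights are
# illustrative placeholders, not the paper's 34-indicator scheme.
import pandas as pd

def min_max(series: pd.Series) -> pd.Series:
    """Rescale an indicator to [0, 1]; constant columns map to 0."""
    rng = series.max() - series.min()
    return (series - series.min()) / rng if rng else series * 0.0

def bikeability_index(indicators: pd.DataFrame,
                      negative: tuple[str, ...] = ()) -> pd.Series:
    """Equal-weight composite score per street segment.

    `negative` lists indicators where higher values mean worse cycling
    conditions (e.g., motor traffic); they are inverted before averaging.
    """
    scaled = indicators.apply(min_max)
    for col in negative:
        scaled[col] = 1.0 - scaled[col]
    return scaled.mean(axis=1)

# Hypothetical SVI-derived indicators for three street segments
df = pd.DataFrame({
    "greenery_ratio": [0.32, 0.11, 0.25],   # pixel share of vegetation
    "sidewalk_ratio": [0.08, 0.02, 0.05],   # pixel share of sidewalk
    "vehicle_count":  [4, 11, 7],           # detected motor vehicles
}, index=["segment_a", "segment_b", "segment_c"])

print(bikeability_index(df, negative=("vehicle_count",)))
```

The paper's index combines far more indicators and a specific weighting; the sketch only shows the normalize, invert, and aggregate pattern such a composite index follows.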
Related papers
- Which cycling environment appears safer? Learning cycling safety perceptions from pairwise image comparisons [2.3900828891729784]
Cycling is critical for cities to transition to more sustainable transport modes. Yet, safety concerns remain a major deterrent for individuals to cycle.
In this study, we tackle the problem of capturing and understanding how individuals perceive cycling risk.
We base our approach on using pairwise comparisons of real-world images, repeatedly presenting respondents with pairs of road environments.
We ask them to select the one they perceive as safer for cycling, if any.
Using the collected data, we train a siamese convolutional neural network with a multi-loss framework that learns preferences directly from images and from individuals' responses.
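A pairwise-comparison model of this kind can be approximated with a shared-weight image scorer trained with a ranking loss. The sketch below is a rough PyTorch illustration under assumptions: a ResNet-18 backbone and a single margin ranking loss stand in for the paper's actual architecture and multi-loss framework.

```python
# Sketch of a siamese ranking model for pairwise "which looks safer?"
# comparisons. Backbone, loss, and sizes are assumptions for
# illustration, not the paper's exact setup.
import torch
import torch.nn as nn
import torchvision.models as models

class SafetyScorer(nn.Module):
    """Scores a single street image; shared weights give the siamese pair."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x).squeeze(-1)  # one scalar score per image

scorer = SafetyScorer()
criterion = nn.MarginRankingLoss(margin=0.1)

# left/right: batches of image tensors; target = +1 if the left image
# was judged safer, -1 if the right one was.
left = torch.randn(4, 3, 224, 224)
right = torch.randn(4, 3, 224, 224)
target = torch.tensor([1.0, -1.0, 1.0, 1.0])

loss = criterion(scorer(left), scorer(right), target)
loss.backward()
```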
arXiv Detail & Related papers (2024-12-13T03:56:40Z) - Extrapolated Urban View Synthesis Benchmark [53.657271730352214]
Photorealistic simulators are essential for the training and evaluation of vision-centric autonomous vehicles (AVs).
At their core is Novel View Synthesis (NVS), a crucial capability that generates diverse unseen viewpoints to accommodate the broad and continuous pose distribution of AVs.
Recent advances in radiance fields, such as 3D Gaussian Splatting, achieve photorealistic rendering at real-time speeds and have been widely used in modeling large-scale driving scenes.
We have released our data to help advance self-driving and urban robotics simulation technology.
arXiv Detail & Related papers (2024-12-06T18:41:39Z) - Evaluating the effects of Data Sparsity on the Link-level Bicycling Volume Estimation: A Graph Convolutional Neural Network Approach [54.84957282120537]
We present the first study to utilize a Graph Convolutional Network (GCN) architecture to model link-level bicycling volumes.
We estimate Annual Average Daily Bicycle (AADB) counts across the City of Melbourne, Australia, using Strava Metro bicycling count data.
Our results show that the GCN model performs better than traditional models in predicting AADB counts.
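A link-level GCN regressor of the kind described can be sketched with PyTorch Geometric as below; the feature set, two-layer design, and toy graph are illustrative assumptions rather than the study's reported configuration.

```python
# Sketch of a GCN regressor for link-level AADB counts. Road links are
# treated as graph nodes, connected where they share a junction; the
# features and layer sizes are placeholders.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class AADBRegressor(nn.Module):
    def __init__(self, in_features: int, hidden: int = 32):
        super().__init__()
        self.conv1 = GCNConv(in_features, hidden)
        self.conv2 = GCNConv(hidden, 1)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index).squeeze(-1)  # one count per link

# Toy graph: 4 road links with 3 features each (e.g., Strava counts,
# speed limit, lane count).
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
observed = torch.tensor([120.0, 85.0, 40.0, 15.0])  # AADB at counters

model = AADBRegressor(in_features=3)
loss = nn.functional.mse_loss(model(x, edge_index), observed)
loss.backward()
```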
arXiv Detail & Related papers (2024-10-11T04:53:18Z) - Coverage and Bias of Street View Imagery in Mapping the Urban Environment [0.0]
Street View Imagery (SVI) has emerged as a valuable data form in urban studies, enabling new ways to map and sense urban environments.
However, fundamental concerns regarding the representativeness, quality, and reliability of SVI remain underexplored.
This research proposes a novel and effective method to estimate SVI's element-level coverage in the urban environment.
arXiv Detail & Related papers (2024-09-22T02:58:43Z) - RoadBEV: Road Surface Reconstruction in Bird's Eye View [55.0558717607946]
Road surface conditions, especially geometry profiles, strongly affect the driving performance of autonomous vehicles. Vision-based online road reconstruction is a promising way to capture road information in advance.
Bird's-Eye-View (BEV) perception offers immense potential for more reliable and accurate reconstruction.
This paper proposes two simple yet effective models for road elevation reconstruction in BEV, named RoadBEV-mono and RoadBEV-stereo.
arXiv Detail & Related papers (2024-04-09T20:24:29Z) - Open-source data pipeline for street-view images: a case study on community mobility during COVID-19 pandemic [0.9423257767158634]
Street View Images (SVI) are a common source of valuable data for researchers.
Google Street View images are collected infrequently, making temporal analysis challenging.
This study demonstrates the feasibility and value of collecting and using SVI for research purposes beyond what is possible with currently available SVI data.
arXiv Detail & Related papers (2024-01-23T20:56:16Z) - OpenLane-V2: A Topology Reasoning Benchmark for Unified 3D HD Mapping [84.65114565766596]
We present OpenLane-V2, the first dataset on topology reasoning for traffic scene structure.
OpenLane-V2 consists of 2,000 annotated road scenes that describe traffic elements and their correlation to the lanes.
We evaluate various state-of-the-art methods, and present their quantitative and qualitative results on OpenLane-V2 to indicate future avenues for investigating topology reasoning in traffic scenes.
arXiv Detail & Related papers (2023-04-20T16:31:22Z) - Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z) - 4Seasons: Benchmarking Visual SLAM and Long-Term Localization for Autonomous Driving in Challenging Conditions [54.59279160621111]
We present a novel visual SLAM and long-term localization benchmark for autonomous driving in challenging conditions based on the large-scale 4Seasons dataset.
The proposed benchmark provides drastic appearance variations caused by seasonal changes and diverse weather and illumination conditions.
We introduce a new unified benchmark for jointly evaluating visual odometry, global place recognition, and map-based visual localization performance.
arXiv Detail & Related papers (2022-12-31T13:52:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.