Green View Index Analysis and Optimal Green View Index Path Based on
Street View and Deep Learning
- URL: http://arxiv.org/abs/2104.12627v1
- Date: Mon, 26 Apr 2021 14:53:21 GMT
- Title: Green View Index Analysis and Optimal Green View Index Path Based on
Street View and Deep Learning
- Authors: Anqi Hu, Jiahao Zhang and Hiroyuki Kaga
- Abstract summary: In this paper, we used the Google API to obtain street view images of Osaka City.
PSPNet is used to segment the Osaka City street view images and analyse Green View Index (GVI) data for the Osaka area.
Three methods, namely corridor analysis, geometric network analysis and a combination of the two, were then used to calculate the optimal GVI paths.
- Score: 2.8682942808330703
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Streetscapes are an important part of the urban landscape, and
analysing and studying them can deepen our understanding of a city's
infrastructure, which in turn supports better planning and design of the urban
living environment. In this paper, we used the Google API to obtain street
view images of Osaka City. The semantic segmentation model PSPNet was used to
segment these images and to derive Green View Index (GVI) data for the Osaka
area. Based on the GVI data, three methods, namely corridor analysis,
geometric network analysis and a combination of the two, were then used to
calculate the optimal GVI paths in Osaka City. The corridor analysis and
geometric network methods allow the optimal GVI path to be delineated in
increasing detail, from general areas down to specific routes. Our analysis
not only yields specific routes for the optimal GVI paths but also enables the
visualisation and integration of neighbourhood landscape data. By summarising
all the data, a more specific and objective analysis of the landscape in the
study area can be carried out, and on this basis the available natural
resources can be put to fullest use for a better living environment.
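To make the GVI step concrete, it can be sketched in a few lines. The example below is a minimal illustration, not the paper's code: it assumes a PSPNet-style model has already produced a per-pixel class map, and the `VEGETATION_IDS` label set is a placeholder for whatever vegetation classes the segmentation scheme defines. The GVI of one image is simply the fraction of pixels labelled as vegetation.

```python
import numpy as np

# Placeholder label IDs for vegetation classes (e.g. tree, grass) in a
# PSPNet-style per-pixel class map; the paper's exact mapping is not given.
VEGETATION_IDS = [4, 9]

def green_view_index(class_map: np.ndarray) -> float:
    """Return the fraction of pixels labelled as vegetation, i.e. the
    GVI of one segmented street view image."""
    vegetation_mask = np.isin(class_map, VEGETATION_IDS)
    return float(vegetation_mask.mean())

# Toy check: a 512x512 class map with roughly 30% vegetation pixels.
rng = np.random.default_rng(0)
fake_map = rng.choice([0, 4, 9], size=(512, 512), p=[0.7, 0.2, 0.1])
print(f"GVI = {green_view_index(fake_map):.3f}")  # about 0.300
```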
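The route-finding step can be sketched similarly. Below is a minimal illustration using networkx, again an assumption rather than the paper's implementation: the toy road network, the per-edge GVI values, and the cost function length × (1 − GVI) are all invented for the example. The idea is only that greener streets become cheaper to traverse, so a shortest-path search returns a high-GVI route.

```python
import networkx as nx

# Toy road network: nodes are intersections; each edge carries a street
# length in metres and an assumed mean GVI for that street segment.
G = nx.Graph()
edges = [
    ("A", "B", 200, 0.35), ("B", "C", 150, 0.10),
    ("A", "D", 250, 0.40), ("D", "C", 180, 0.45),
]
for u, v, length, gvi in edges:
    # Greener streets get a lower traversal cost; the paper's exact
    # weighting scheme is not specified, so this is one plausible choice.
    G.add_edge(u, v, length=length, gvi=gvi, cost=length * (1.0 - gvi))

path = nx.shortest_path(G, "A", "C", weight="cost")
print(path)  # ['A', 'D', 'C'] -- the greener detour beats the direct route
```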
Related papers
- Image-based Visibility Analysis Replacing Line-of-Sight Simulation: An Urban Landmark Perspective [2.3315115235829342]
The study challenges the traditional LoS-based approaches by introducing a new, image-based visibility analysis method.
In the first case study, the method proves its reliability in detecting the visibility of six tall landmark constructions in global cities, with an overall accuracy of 87%.
In the second case, the proposed visibility graph uncovers the form and strength of connections for multiple landmarks along the River Thames in London.
arXiv Detail & Related papers (2025-05-17T03:41:45Z)
- Unsupervised Urban Land Use Mapping with Street View Contrastive Clustering and a Geographical Prior [16.334202302817783]
This study introduces an unsupervised contrastive clustering model for street view images with a built-in geographical prior.
We experimentally show that our method can generate land use maps from geotagged street view image datasets of two cities.
arXiv Detail & Related papers (2025-04-24T13:41:27Z)
- Streetscape Analysis with Generative AI (SAGAI): Vision-Language Assessment and Mapping of Urban Scenes [0.9208007322096533]
This paper introduces SAGAI: Streetscape Analysis with Generative Artificial Intelligence.
It is a modular workflow for scoring street-level urban scenes using open-access data and vision-language models.
It operates without task-specific training or proprietary software dependencies.
arXiv Detail & Related papers (2025-04-23T09:08:06Z)
- Parking Space Detection in the City of Granada [0.0]
This paper addresses the challenge of parking space detection in urban areas, focusing on the city of Granada.
We develop and apply semantic segmentation techniques to accurately identify parked cars, moving cars and roads.
We employ Fully Convolutional Networks, Pyramid Networks and Dilated Convolutions, demonstrating their effectiveness in urban semantic segmentation.
arXiv Detail & Related papers (2025-01-11T22:29:12Z)
- StreetViewLLM: Extracting Geographic Information Using a Chain-of-Thought Multimodal Large Language Model [12.789465279993864]
Geospatial predictions are crucial for diverse fields such as disaster management, urban planning, and public health.
We propose StreetViewLLM, a novel framework that integrates a large language model with chain-of-thought reasoning and multimodal data sources.
The model has been applied to seven global cities, including Hong Kong, Tokyo, Singapore, Los Angeles, New York, London, and Paris.
arXiv Detail & Related papers (2024-11-19T05:15:19Z)
- BuildingView: Constructing Urban Building Exteriors Databases with Street View Imagery and Multimodal Large Language Model [1.0937094979510213]
Building exteriors are increasingly important in urban analytics, driven by advancements in Street View Imagery and its integration with urban research.
We propose BuildingView, a novel approach that integrates high-resolution visual data from Google Street View with spatial information from OpenStreetMap via the Overpass API.
This research improves the accuracy of urban building exterior data, identifies key sustainability and design indicators, and develops a framework for their extraction and categorization.
arXiv Detail & Related papers (2024-09-29T03:00:16Z)
- Effective Urban Region Representation Learning Using Heterogeneous Urban Graph Attention Network (HUGAT) [0.0]
We propose the heterogeneous urban graph attention network (HUGAT) for learning representations of urban regions.
In our experiments on NYC data, HUGAT outperformed all the state-of-the-art models.
arXiv Detail & Related papers (2022-02-18T04:59:20Z)
- SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds [52.624157840253204]
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km².
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset that is three times the size of the previous existing largest photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z)
- Sidewalk Measurements from Satellite Images: Preliminary Findings [10.870041943009722]
We train a computer vision model to detect sidewalks, roads, and buildings from remote-sensing imagery.
We apply shape analysis techniques to study different attributes of the extracted sidewalks.
arXiv Detail & Related papers (2021-12-12T02:22:46Z)
- Semantic Segmentation on Swiss3DCities: A Benchmark Study on Aerial Photogrammetric 3D Pointcloud Dataset [67.44497676652173]
We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 km², sampled from three Swiss cities.
The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
arXiv Detail & Related papers (2020-12-23T21:48:47Z)
- Road Scene Graph: A Semantic Graph-Based Scene Representation Dataset for Intelligent Vehicles [72.04891523115535]
We propose the road scene graph, a special scene graph for intelligent vehicles.
It provides not only object proposals but also their pair-wise relationships.
Organized in a topological graph, these data are explainable, fully connected, and can be easily processed by GCNs.
arXiv Detail & Related papers (2020-11-27T07:33:11Z)
- GINet: Graph Interaction Network for Scene Parsing [58.394591509215005]
We propose a Graph Interaction unit (GI unit) and a Semantic Context Loss (SC-loss) to promote context reasoning over image regions.
The proposed GINet outperforms the state-of-the-art approaches on the popular benchmarks, including Pascal-Context and COCO Stuff.
arXiv Detail & Related papers (2020-09-14T02:52:45Z)
- Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges [52.624157840253204]
We present an urban-scale photogrammetric point cloud dataset with nearly three billion richly annotated points.
Our dataset consists of large areas from three UK cities, covering about 7.6 km² of the city landscape.
We evaluate the performance of state-of-the-art algorithms on our dataset and provide a comprehensive analysis of the results.
arXiv Detail & Related papers (2020-09-07T14:47:07Z)
- Learning Physical Graph Representations from Visual Scenes [56.7938395379406]
Physical Scene Graphs (PSGs) represent scenes as hierarchical graphs with nodes corresponding intuitively to object parts at different scales, and edges to physical connections between parts.
PSGNet augments standard CNNs by including recurrent feedback connections that combine low- and high-level image information, and graph pooling and vectorization operations that convert spatially uniform feature maps into object-centric graph structures.
We show that PSGNet outperforms alternative self-supervised scene representation algorithms at scene segmentation tasks.
arXiv Detail & Related papers (2020-06-22T16:10:26Z)
- Urban2Vec: Incorporating Street View Imagery and POIs for Multi-Modal Urban Neighborhood Embedding [8.396746290518102]
Urban2Vec is an unsupervised multi-modal framework which incorporates both street view imagery and point-of-interest data.
We show that Urban2Vec can achieve performances better than baseline models and comparable to fully-supervised methods in downstream prediction tasks.
arXiv Detail & Related papers (2020-01-29T21:30:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.