Green View Index Analysis and Optimal Green View Index Path Based on
Street View and Deep Learning
- URL: http://arxiv.org/abs/2104.12627v1
- Date: Mon, 26 Apr 2021 14:53:21 GMT
- Title: Green View Index Analysis and Optimal Green View Index Path Based on
Street View and Deep Learning
- Authors: Anqi Hu, Jiahao Zhang and Hiroyuki Kaga
- Abstract summary: In this paper, we used the Google API to obtain street view images of Osaka City.
PSPNet is used to segment the Osaka City street view images and to analyse Green View Index (GVI) data for the Osaka area.
Three methods, namely corridor analysis, geometric network analysis and a combination of the two, were then used to calculate the optimal GVI paths.
- Score: 2.8682942808330703
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Streetscapes are an important part of the urban landscape; analysing and
studying them can deepen the understanding of a city's infrastructure,
which can lead to better planning and design of the urban living environment.
In this paper, we used the Google API to obtain street view images of Osaka City.
The semantic segmentation model PSPNet was used to segment the Osaka City street
view images and to analyse Green View Index (GVI) data for the Osaka area. Based on
the GVI data, three methods, namely corridor analysis, geometric network analysis
and a combination of the two, were then used to calculate the optimal GVI paths in
Osaka City. The corridor analysis and geometric network methods allow for a more
detailed delineation of the optimal GVI path, from general areas down to specific
routes. Our analysis not only yields specific routes for the optimal GVI paths,
but also allows for the visualisation and integration of neighbourhood landscape
data. By summarising all the data, a more specific and objective analysis of the
landscape in the study area can be carried out, and on this basis the available
natural resources can be maximised for a better life.
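The GVI underlying this analysis is simply the share of vegetation pixels in a segmented street view image. A minimal pure-Python sketch of that computation follows; the vegetation class id and the averaging over several camera headings per site are assumptions for illustration (Cityscapes-style label maps, as commonly used with PSPNet, assign id 8 to vegetation), not the paper's exact pipeline:

```python
# Sketch: Green View Index (GVI) of one street view image, computed as the
# fraction of pixels labelled as vegetation in its segmentation mask.
# VEGETATION_CLASS = 8 is an assumption (the Cityscapes vegetation id).

VEGETATION_CLASS = 8

def green_view_index(mask):
    """mask: 2-D list of per-pixel class ids from a segmentation model."""
    total = 0
    green = 0
    for row in mask:
        for label in row:
            total += 1
            if label == VEGETATION_CLASS:
                green += 1
    return green / total if total else 0.0

def site_gvi(masks):
    """Average GVI over several images of one sampling point
    (e.g. headings 0/90/180/270 degrees), a common convention."""
    return sum(green_view_index(m) for m in masks) / len(masks)
```

With this per-site value in hand, every sampling point along the street network can be scored and mapped before any path optimisation is attempted.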
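The paper performs the corridor and geometric network analyses in a GIS environment; as a hedged sketch of the underlying idea only, the geometric network step can be modelled as a shortest-path search in which each street segment is weighted by length times (1 - GVI), so that greener streets are cheaper and the minimum-cost route is the maximum-greenery path. The graph, edge lengths, and GVI values below are illustrative, not from the paper:

```python
import heapq

def optimal_gvi_path(edges, start, goal):
    """edges: list of (u, v, length, gvi) undirected street segments.
    Edge cost = length * (1 - gvi): greener segments cost less, so the
    cheapest route approximates the optimal GVI path."""
    graph = {}
    for u, v, length, gvi in edges:
        cost = length * (1.0 - gvi)
        graph.setdefault(u, []).append((v, cost))
        graph.setdefault(v, []).append((u, cost))

    # Dijkstra's algorithm over the weighted street network.
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))

    # Walk predecessors back from goal to start to recover the route.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

For example, a longer detour through high-GVI streets beats a direct low-GVI route whenever its accumulated (1 - GVI)-weighted cost is lower, which is exactly the trade-off the corridor analysis narrows down spatially.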
Related papers
- Geo-located Aspect Based Sentiment Analysis (ABSA) for Crowdsourced
Evaluation of Urban Environments [0.0]
We develop an ABSA model capable of extracting urban aspects contained within geo-located textual urban appraisals, along with corresponding aspect sentiment classification.
Our model achieves significant improvement in prediction accuracy on urban reviews, for both Aspect Term Extraction (ATE) and Aspect Sentiment Classification (ASC) tasks.
For demonstrative analysis, positive and negative urban aspects across Boston are spatially visualized.
arXiv Detail & Related papers (2023-12-19T15:37:27Z)
- Effective Urban Region Representation Learning Using Heterogeneous Urban Graph Attention Network (HUGAT) [0.0]
We propose heterogeneous urban graph attention network (HUGAT) for learning the representations of urban regions.
In our experiments on NYC data, HUGAT outperformed all the state-of-the-art models.
arXiv Detail & Related papers (2022-02-18T04:59:20Z) - SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point
Clouds [52.624157840253204]
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km2.
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset three times the size of the largest existing photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z)
- Sidewalk Measurements from Satellite Images: Preliminary Findings [10.870041943009722]
We train a computer vision model to detect sidewalks, roads, and buildings from remote-sensing imagery.
We apply shape analysis techniques to study different attributes of the extracted sidewalks.
arXiv Detail & Related papers (2021-12-12T02:22:46Z)
- Semantic Segmentation on Swiss3DCities: A Benchmark Study on Aerial Photogrammetric 3D Pointcloud Dataset [67.44497676652173]
We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 km2, sampled from three Swiss cities.
The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
arXiv Detail & Related papers (2020-12-23T21:48:47Z)
- Road Scene Graph: A Semantic Graph-Based Scene Representation Dataset for Intelligent Vehicles [72.04891523115535]
We propose the road scene graph, a special scene graph for intelligent vehicles.
It provides not only object proposals but also their pair-wise relationships.
By organizing them in a topological graph, these data are explainable, fully-connected, and could be easily processed by GCNs.
arXiv Detail & Related papers (2020-11-27T07:33:11Z)
- GINet: Graph Interaction Network for Scene Parsing [58.394591509215005]
We propose a Graph Interaction unit (GI unit) and a Semantic Context Loss (SC-loss) to promote context reasoning over image regions.
The proposed GINet outperforms the state-of-the-art approaches on the popular benchmarks, including Pascal-Context and COCO Stuff.
arXiv Detail & Related papers (2020-09-14T02:52:45Z)
- Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges [52.624157840253204]
We present an urban-scale photogrammetric point cloud dataset with nearly three billion richly annotated points.
Our dataset consists of large areas from three UK cities, covering about 7.6 km2 of the city landscape.
We evaluate the performance of state-of-the-art algorithms on our dataset and provide a comprehensive analysis of the results.
arXiv Detail & Related papers (2020-09-07T14:47:07Z)
- Learning Physical Graph Representations from Visual Scenes [56.7938395379406]
Physical Scene Graphs (PSGs) represent scenes as hierarchical graphs with nodes corresponding intuitively to object parts at different scales, and edges to physical connections between parts.
PSGNet augments standard CNNs by including: recurrent feedback connections to combine low and high-level image information; graph pooling and vectorization operations that convert spatially-uniform feature maps into object-centric graph structures.
We show that PSGNet outperforms alternative self-supervised scene representation algorithms at scene segmentation tasks.
arXiv Detail & Related papers (2020-06-22T16:10:26Z)
- Urban2Vec: Incorporating Street View Imagery and POIs for Multi-Modal Urban Neighborhood Embedding [8.396746290518102]
Urban2Vec is an unsupervised multi-modal framework which incorporates both street view imagery and point-of-interest data.
We show that Urban2Vec can achieve performances better than baseline models and comparable to fully-supervised methods in downstream prediction tasks.
arXiv Detail & Related papers (2020-01-29T21:30:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.