Urban Mosaic: Visual Exploration of Streetscapes Using Large-Scale Image Data
- URL: http://arxiv.org/abs/2008.13321v1
- Date: Mon, 31 Aug 2020 02:23:12 GMT
- Title: Urban Mosaic: Visual Exploration of Streetscapes Using Large-Scale Image Data
- Authors: Fabio Miranda, Maryam Hosseini, Marcos Lage, Harish Doraiswamy, Graham Dove, Claudio T. Silva
- Abstract summary: Urban Mosaic is a tool for exploring the urban fabric through a spatially and temporally dense data set of 7.7 million street-level images from New York City.
- Score: 13.01318877814786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Urban planning is increasingly data driven, yet the challenge of designing
with data at a city scale and remaining sensitive to the impact at a human
scale is as important today as it was for Jane Jacobs. We address this
challenge with Urban Mosaic, a tool for exploring the urban fabric through a
spatially and temporally dense data set of 7.7 million street-level images from
New York City, captured over the period of a year. Working in collaboration
with professional practitioners, we use Urban Mosaic to investigate questions
of accessibility and mobility, and preservation and retrofitting. In doing so,
we demonstrate how tools such as this might provide a bridge between the city
and the street, by supporting activities such as visual comparison of
geographically distant neighborhoods, and temporal analysis of unfolding urban
development.
Related papers
- Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion [61.929653153389964]
We present a method for generating Streetscapes: long sequences of views through an on-the-fly synthesized city-scale scene.
Our method can scale to much longer-range camera trajectories, spanning several city blocks, while maintaining visual quality and consistency.
arXiv Detail & Related papers (2024-07-18T17:56:30Z)
- MetaUrban: A Simulation Platform for Embodied AI in Urban Spaces [52.0930915607703]
Recent advances in robotics and embodied AI mean public urban spaces are no longer exclusive to humans.
Food delivery bots and electric wheelchairs have started sharing sidewalks with pedestrians, while diverse robot dogs and humanoids have recently emerged on the street.
Ensuring the generalizability and safety of these forthcoming mobile machines is crucial when navigating the bustling streets of urban spaces.
We present MetaUrban, a compositional simulation platform for Embodied AI research in urban spaces.
arXiv Detail & Related papers (2024-07-11T17:56:49Z)
- Eyes on the Streets: Leveraging Street-Level Imaging to Model Urban Crime Dynamics [0.0]
This study addresses the challenge of urban safety in New York City by examining the relationship between the built environment and crime rates.
We aim to identify how urban landscapes correlate with crime statistics, focusing on the characteristics of street views and their association with crime rates.
arXiv Detail & Related papers (2024-04-15T21:33:45Z)
- CityPulse: Fine-Grained Assessment of Urban Change with Street View Time Series [12.621355888239359]
Urban transformations have a profound societal impact on both individuals and communities at large.
We propose an end-to-end change detection model to effectively capture physical alterations in the built environment at scale.
Our approach has the potential to supplement existing datasets and serve as a fine-grained and accurate assessment of urban change.
arXiv Detail & Related papers (2024-01-02T08:57:09Z)
- Unified Data Management and Comprehensive Performance Evaluation for Urban Spatial-Temporal Prediction [Experiment, Analysis & Benchmark] [78.05103666987655]
This work addresses challenges in accessing and utilizing diverse urban spatial-temporal datasets.
We introduce atomic files, a unified storage format designed for urban spatial-temporal big data, and validate its effectiveness on 40 diverse datasets.
We conduct extensive experiments using diverse models and datasets, establishing a performance leaderboard and identifying promising research directions.
arXiv Detail & Related papers (2023-08-24T16:20:00Z)
- The Urban Toolkit: A Grammar-based Framework for Urban Visual Analytics [5.674216760436341]
The complex nature of urban issues and the overwhelming amount of available data have posed significant challenges in translating these efforts into actionable insights.
When analyzing a feature of interest, an urban expert must transform, integrate, and visualize different thematic (e.g., sunlight access, demographics) and physical (e.g., buildings, street networks) data layers.
This makes visual data exploration and system implementation difficult for programmers and sets a high entry barrier for urban experts outside of computer science.
arXiv Detail & Related papers (2023-08-15T13:43:04Z)
- UrbanBIS: a Large-scale Benchmark for Fine-grained Urban Building Instance Segmentation [50.52615875873055]
UrbanBIS comprises six real urban scenes, with 2.5 billion points, covering a vast area of 10.78 square kilometers.
UrbanBIS provides semantic-level annotations on a rich set of urban objects, including buildings, vehicles, vegetation, roads, and bridges.
UrbanBIS is the first 3D dataset that introduces fine-grained building sub-categories.
arXiv Detail & Related papers (2023-05-04T08:01:38Z)
- A Contextual Master-Slave Framework on Urban Region Graph for Urban Village Detection [68.84486900183853]
We build an urban region graph (URG) to model the urban area in a hierarchically structured way.
We then design a novel contextual master-slave framework to effectively detect urban villages from the URG.
The proposed framework can learn to balance generality and specificity for urban village (UV) detection in an urban area.
arXiv Detail & Related papers (2022-11-26T18:17:39Z)
- Urban form and COVID-19 cases and deaths in Greater London: an urban morphometric approach [63.29165619502806]
The COVID-19 pandemic generated considerable debate in relation to urban density.
This is an old debate, originating in mid-19th-century England with the emergence of the public health and urban planning disciplines.
We describe urban form at the individual building level and then aggregate information for official neighbourhoods.
arXiv Detail & Related papers (2022-10-16T10:01:10Z)
- Effective Urban Region Representation Learning Using Heterogeneous Urban Graph Attention Network (HUGAT) [0.0]
We propose a heterogeneous urban graph attention network (HUGAT) for learning the representations of urban regions.
In our experiments on NYC data, HUGAT outperformed all the state-of-the-art models.
arXiv Detail & Related papers (2022-02-18T04:59:20Z)
- CitySurfaces: City-Scale Semantic Segmentation of Sidewalk Materials [6.573006589628846]
Most cities lack a spatial catalog of their surfaces due to the cost-prohibitive and time-consuming nature of data collection.
Recent advancements in computer vision, together with the availability of street-level images, provide new opportunities for cities to extract large-scale built environment data.
We propose CitySurfaces, an active learning-based framework that leverages computer vision techniques for classifying sidewalk materials.
arXiv Detail & Related papers (2022-01-06T21:58:37Z)