Self-supervised learning unveils change in urban housing from street-level images
- URL: http://arxiv.org/abs/2309.11354v2
- Date: Thu, 21 Sep 2023 13:18:35 GMT
- Title: Self-supervised learning unveils change in urban housing from street-level images
- Authors: Steven Stalder, Michele Volpi, Nicolas Büttner, Stephen Law, Kenneth Harttgen, Esra Suel
- Abstract summary: Street2Vec embeds urban structure while being invariant to seasonal and daily changes without manual annotations.
It identified point-level change in London's housing supply from street-level images, and distinguished between major and minor change.
This capability can provide timely information for urban planning and policy decisions toward more liveable, equitable, and sustainable cities.
- Score: 2.0971479389679337
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cities around the world face a critical shortage of affordable and decent
housing. Despite its critical importance for policy, our ability to effectively
monitor and track progress in urban housing is limited. Deep learning-based
computer vision methods applied to street-level images have been successful in
the measurement of socioeconomic and environmental inequalities but did not
fully utilize temporal images to track urban change as time-varying labels are
often unavailable. We used self-supervised methods to measure change in London
using 15 million street images taken between 2008 and 2021. Our novel
adaptation of Barlow Twins, Street2Vec, embeds urban structure while being
invariant to seasonal and daily changes without manual annotations. It
outperformed generic embeddings, successfully identified point-level change in
London's housing supply from street-level images, and distinguished between
major and minor change. This capability can provide timely information for
urban planning and policy decisions toward more liveable, equitable, and
sustainable cities.
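The core idea behind a Barlow Twins-style objective, as adapted here, can be sketched briefly. Two embeddings of the same location (e.g. street images taken in different seasons) are pushed to carry the same information by driving their cross-correlation matrix toward the identity. The sketch below is a minimal NumPy illustration of the generic Barlow Twins loss, not the paper's actual Street2Vec implementation; the function name, batch/embedding sizes, and the redundancy weight `lam` are assumptions for illustration only.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Generic Barlow Twins objective: push the cross-correlation
    matrix of two views' embeddings toward the identity matrix.

    z_a, z_b: (N, D) arrays of embeddings for two views of the same
    scenes (here, e.g., images of one location at different times).
    lam: weight on the redundancy-reduction (off-diagonal) term.
    """
    n, d = z_a.shape
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(axis=0)) / (z_a.std(axis=0) + 1e-8)
    z_b = (z_b - z_b.mean(axis=0)) / (z_b.std(axis=0) + 1e-8)
    # Empirical (D x D) cross-correlation matrix between the views.
    c = z_a.T @ z_b / n
    # Invariance term: diagonal entries should be 1.
    on_diag = ((1.0 - np.diag(c)) ** 2).sum()
    # Redundancy term: off-diagonal entries should be 0.
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag

# Toy check: two views carrying the same information yield a much
# smaller loss than two unrelated embeddings.
rng = np.random.default_rng(0)
z = rng.normal(size=(256, 32))
print(barlow_twins_loss(z, z))                             # small
print(barlow_twins_loss(z, rng.normal(size=(256, 32))))    # large
```

In this formulation, invariance to seasonal and daily appearance changes falls out of the diagonal term: if the two views differ only in lighting or foliage, an embedding that ignores those factors scores a lower loss.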
Related papers
- Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion [61.929653153389964]
We present a method for generating Streetscapes: long sequences of views through an on-the-fly synthesized, city-scale scene.
Our method can scale to much longer-range camera trajectories, spanning several city blocks, while maintaining visual quality and consistency.
arXiv Detail & Related papers (2024-07-18T17:56:30Z)
- MetaUrban: A Simulation Platform for Embodied AI in Urban Spaces [52.0930915607703]
Recent advances in Robotics and Embodied AI make public urban spaces no longer exclusive to humans.
Food delivery bots and electric wheelchairs have started sharing sidewalks with pedestrians, while diverse robot dogs and humanoids have recently begun to appear on the streets.
Ensuring the generalizability and safety of these forthcoming mobile machines is crucial when navigating through the bustling streets in urban spaces.
We present MetaUrban, a compositional simulation platform for Embodied AI research in urban spaces.
arXiv Detail & Related papers (2024-07-11T17:56:49Z)
- Eyes on the Streets: Leveraging Street-Level Imaging to Model Urban Crime Dynamics [0.0]
This study addresses the challenge of urban safety in New York City by examining the relationship between the built environment and crime rates.
We aim to identify how urban landscapes correlate with crime statistics, focusing on the characteristics of street views and their association with crime rates.
arXiv Detail & Related papers (2024-04-15T21:33:45Z)
- CityPulse: Fine-Grained Assessment of Urban Change with Street View Time Series [12.621355888239359]
Urban transformations have profound societal impact on both individuals and communities at large.
We propose an end-to-end change detection model to effectively capture physical alterations in the built environment at scale.
Our approach has the potential to supplement existing datasets and serve as a fine-grained and accurate assessment of urban change.
arXiv Detail & Related papers (2024-01-02T08:57:09Z)
- UrbanBIS: a Large-scale Benchmark for Fine-grained Urban Building Instance Segmentation [50.52615875873055]
UrbanBIS comprises six real urban scenes, with 2.5 billion points, covering a vast area of 10.78 square kilometers.
UrbanBIS provides semantic-level annotations on a rich set of urban objects, including buildings, vehicles, vegetation, roads, and bridges.
UrbanBIS is the first 3D dataset that introduces fine-grained building sub-categories.
arXiv Detail & Related papers (2023-05-04T08:01:38Z)
- City-Wide Perceptions of Neighbourhood Quality using Street View Images [5.340189314359048]
This paper describes our methodology, based in London, including collection of images and ratings, web development, model training and mapping.
Perceived neighbourhood quality is a core component of urban vitality, influencing social cohesion, sense of community, safety, activity and mental health of residents.
arXiv Detail & Related papers (2022-11-22T10:16:35Z)
- Urban form and COVID-19 cases and deaths in Greater London: an urban morphometric approach [63.29165619502806]
The COVID-19 pandemic generated a considerable debate in relation to urban density.
This is an old debate, originating in mid-19th-century England with the emergence of the public health and urban planning disciplines.
We describe urban form at individual building level and then aggregate information for official neighbourhoods.
arXiv Detail & Related papers (2022-10-16T10:01:10Z)
- CitySurfaces: City-Scale Semantic Segmentation of Sidewalk Materials [6.573006589628846]
Most cities lack a spatial catalog of their surfaces due to the cost-prohibitive and time-consuming nature of data collection.
Recent advancements in computer vision, together with the availability of street-level images, provide new opportunities for cities to extract large-scale built environment data.
We propose CitySurfaces, an active learning-based framework that leverages computer vision techniques for classifying sidewalk materials.
arXiv Detail & Related papers (2022-01-06T21:58:37Z)
- Modeling Fashion Influence from Photos [108.58097776743331]
We explore fashion influence along two channels: geolocation and fashion brands.
We leverage public large-scale datasets of 7.7M Instagram photos from 44 major world cities.
Our results indicate the advantage of grounding visual style evolution both spatially and temporally.
arXiv Detail & Related papers (2020-11-17T20:24:03Z)
- Urban Mosaic: Visual Exploration of Streetscapes Using Large-Scale Image Data [13.01318877814786]
Urban Mosaic is a tool for exploring the urban fabric through a spatially and temporally dense data set of 7.7 million street-level images from New York City.
arXiv Detail & Related papers (2020-08-31T02:23:12Z)
- Learning to Factorize and Relight a City [70.81496092672421]
We propose a learning-based framework for disentangling outdoor scenes into temporally-varying illumination and permanent scene factors.
We show that our learned disentangled factors can be used to manipulate novel images in realistic ways, such as changing lighting effects and scene geometry.
arXiv Detail & Related papers (2020-08-06T17:59:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.