COHO: Context-Sensitive City-Scale Hierarchical Urban Layout Generation
- URL: http://arxiv.org/abs/2407.11294v1
- Date: Tue, 16 Jul 2024 00:49:53 GMT
- Title: COHO: Context-Sensitive City-Scale Hierarchical Urban Layout Generation
- Authors: Liu He, Daniel Aliaga
- Abstract summary: We introduce a novel graph-based masked autoencoder (GMAE) for city-scale urban layout generation.
The method encodes attributed buildings, city blocks, communities and cities into a unified graph structure.
Our approach achieves good realism, semantic consistency, and correctness across the heterogeneous urban styles in 330 US cities.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The generation of large-scale urban layouts has garnered substantial interest across various disciplines. Prior methods have relied on procedural generation, which requires manual rule coding, or on deep learning, which needs abundant data. However, prior approaches have not considered the context-sensitive nature of urban layout generation. Our approach addresses this gap by leveraging a canonical graph representation for the entire city, which facilitates scalability and captures the multi-layer semantics inherent in urban layouts. We introduce a novel graph-based masked autoencoder (GMAE) for city-scale urban layout generation. The method encodes attributed buildings, city blocks, communities, and cities into a unified graph structure, enabling self-supervised masked training of the graph autoencoder. Additionally, we employ scheduled iterative sampling for 2.5D layout generation, prioritizing the generation of important city blocks and buildings. Our approach achieves good realism, semantic consistency, and correctness across the heterogeneous urban styles in 330 US cities. Code and datasets are released at https://github.com/Arking1995/COHO.
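The scheduled iterative sampling described in the abstract can be illustrated with a minimal sketch: nodes (buildings/blocks) start fully masked, and each round the decoder predicts the masked attributes and commits the next slice of the schedule, highest-importance nodes first. All names here (`decode`, `scheduled_iterative_sampling`, the mean-filling decoder) are hypothetical stand-ins for illustration, not the COHO codebase's actual API.

```python
# Hedged sketch of scheduled iterative masked sampling over graph nodes.
# The real GMAE decoder is replaced by a trivial stand-in that fills
# masked node attributes with the mean of already-generated ones.
import numpy as np

def decode(generated, mask):
    """Stand-in decoder: predict masked node features from visible ones."""
    visible = generated[~mask]
    fill = visible.mean(axis=0) if visible.size else np.zeros(generated.shape[1])
    pred = generated.copy()
    pred[mask] = fill
    return pred

def scheduled_iterative_sampling(n_nodes, n_feats, importance, steps=4):
    """Generate node attributes over `steps` rounds, important nodes first."""
    mask = np.ones(n_nodes, dtype=bool)       # everything starts masked
    generated = np.zeros((n_nodes, n_feats))
    order = np.argsort(-importance)           # decode important blocks/buildings first
    for s in range(steps):
        pred = decode(generated, mask)
        # commit predictions for the next slice of the schedule
        commit = order[: int(np.ceil((s + 1) / steps * n_nodes))]
        generated[commit] = pred[commit]
        mask[commit] = False
    return generated

rng = np.random.default_rng(0)
importance = rng.random(8)                    # e.g. block/building importance scores
out = scheduled_iterative_sampling(8, 3, importance)
print(out.shape)
```

With the trivial decoder the output is uninteresting, but the control flow mirrors the idea: an importance-ordered schedule decides which masked nodes are generated and frozen at each iteration.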
Related papers
- CityX: Controllable Procedural Content Generation for Unbounded 3D Cities
We propose a novel multi-modal controllable procedural content generation method, named CityX.
It enhances realistic, unbounded 3D city generation guided by multiple layout conditions, including OSM, semantic maps, and satellite images.
Through this effective framework, CityX shows the potential to build an innovative ecosystem for 3D scene generation.
arXiv Detail & Related papers (2024-07-24T18:05:13Z)
- Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion
We present a method for generating Streetscapes, long sequences of views through an on-the-fly synthesized city-scale scene.
Our method can scale to much longer-range camera trajectories, spanning several city blocks, while maintaining visual quality and consistency.
arXiv Detail & Related papers (2024-07-18T17:56:30Z)
- CityCraft: A Real Crafter for 3D City Generation
CityCraft is an innovative framework designed to enhance both the diversity and quality of urban scene generation.
Our approach integrates three key stages: initially, a diffusion transformer (DiT) model is deployed to generate diverse and controllable 2D city layouts.
Based on the generated layout and city plan, we utilize the asset retrieval module and Blender for precise asset placement and scene construction.
arXiv Detail & Related papers (2024-06-07T14:49:00Z)
- Urban Architect: Steerable 3D Urban Scene Generation with Layout Prior
We introduce a compositional 3D layout representation into text-to-3D paradigm, serving as an additional prior.
It comprises a set of semantic primitives with simple geometric structures and explicit arrangement relationships.
We also present various scene editing demonstrations, showing the powers of steerable urban scene generation.
arXiv Detail & Related papers (2024-04-10T06:41:30Z)
- Urban Scene Diffusion through Semantic Occupancy Map
UrbanDiffusion is a 3D diffusion model conditioned on a Bird's-Eye View (BEV) map.
Our model learns the data distribution of scene-level structures within a latent space.
After training on real-world driving datasets, our model can generate a wide range of diverse urban scenes.
arXiv Detail & Related papers (2024-03-18T11:54:35Z)
- CityGen: Infinite and Controllable 3D City Layout Generation
CityGen is a novel end-to-end framework for infinite, diverse and controllable 3D city layout generation.
CityGen achieves state-of-the-art (SOTA) performance under FID and KID in generating an infinite and controllable 3D city layout.
arXiv Detail & Related papers (2023-12-03T21:16:37Z)
- GlobalMapper: Arbitrary-Shaped Urban Layout Generation
A building layout consists of a set of buildings in city blocks defined by a network of roads.
We propose a fully automatic approach to building layout generation using graph attention networks.
Our results, including user study, demonstrate superior performance as compared to prior layout generation networks.
arXiv Detail & Related papers (2023-07-19T00:36:05Z)
- UrbanBIS: a Large-scale Benchmark for Fine-grained Urban Building Instance Segmentation
UrbanBIS comprises six real urban scenes, with 2.5 billion points, covering a vast area of 10.78 square kilometers.
UrbanBIS provides semantic-level annotations on a rich set of urban objects, including buildings, vehicles, vegetation, roads, and bridges.
UrbanBIS is the first 3D dataset that introduces fine-grained building sub-categories.
arXiv Detail & Related papers (2023-05-04T08:01:38Z)
- A Contextual Master-Slave Framework on Urban Region Graph for Urban Village Detection
We build an urban region graph (URG) to model the urban area in a hierarchically structured way.
Then, we design a novel contextual master-slave framework to effectively detect the urban village from the URG.
The proposed framework can learn to balance the generality and specificity for UV detection in an urban area.
arXiv Detail & Related papers (2022-11-26T18:17:39Z)
- Effective Urban Region Representation Learning Using Heterogeneous Urban Graph Attention Network (HUGAT)
We propose heterogeneous urban graph attention network (HUGAT) for learning the representations of urban regions.
In our experiments on NYC data, HUGAT outperformed all the state-of-the-art models.
arXiv Detail & Related papers (2022-02-18T04:59:20Z)
- Methodological Foundation of a Numerical Taxonomy of Urban Form
We present a method for numerical taxonomy of urban form derived from biological systematics.
We derive homogeneous urban tissue types and, by determining overall morphological similarity between them, generate a hierarchical classification of urban form.
After framing and presenting the method, we test it on two cities, Prague and Amsterdam.
arXiv Detail & Related papers (2021-04-30T12:47:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.