Human-instructed Deep Hierarchical Generative Learning for Automated
Urban Planning
- URL: http://arxiv.org/abs/2212.00904v1
- Date: Thu, 1 Dec 2022 23:06:41 GMT
- Title: Human-instructed Deep Hierarchical Generative Learning for Automated
Urban Planning
- Authors: Dongjie Wang, Lingfei Wu, Denghui Zhang, Jingbo Zhou, Leilei Sun, and
Yanjie Fu
- Abstract summary: We develop a novel human-instructed deep hierarchical generative model to generate optimal urban plans.
The first stage is to label the grids of a target area with latent functionalities to discover functional zones.
The second stage is to perceive the planning requirements to form urban functionality projections.
- The third stage is to leverage multi-attention to model the zone-zone peer dependencies of the functionality projections and generate grid-level land-use configurations.
- Score: 57.91323079939641
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The essential task of urban planning is to generate the optimal land-use
configuration of a target area. However, traditional urban planning is time-consuming and labor-intensive. Deep generative learning offers the hope of automating this planning process and producing ideal urban plans.
While remarkable achievements have been made, existing methods lack awareness of: 1) the hierarchical dependencies between functional zones and spatial grids; 2) the peer dependencies among functional zones; and 3) the human regulations that ensure the usability of generated configurations. To address these limitations, we develop a novel
human-instructed deep hierarchical generative model. We rethink the urban
planning generative task from a unique functionality perspective, where we
summarize planning requirements into different functionality projections for
better urban plan generation. To this end, we develop a three-stage generation
process from a target area to zones to grids. The first stage is to label the
grids of a target area with latent functionalities to discover functional
zones. The second stage is to perceive the planning requirements to form urban
functionality projections. We propose a novel module, the functionalizer, which projects the embedding of human instructions and geospatial contexts onto the zone-level plan to obtain such projections. Each projection includes the information of land-use portfolios and the structural dependencies across spatial grids with respect to a specific urban function. The third stage is to leverage multi-attention to model the zone-zone peer dependencies of the functionality projections and generate grid-level land-use configurations. Finally, we present extensive experiments to demonstrate the effectiveness of our framework.
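To make the three-stage design concrete, below is a minimal PyTorch sketch of stages two and three. The module names (Functionalizer, ZonePeerAttention), the single attention layer, the linear grid decoder, and all dimensions are illustrative assumptions, not the authors' implementation; stage one (labeling grids with latent functionalities) is omitted.

```python
# A minimal sketch of stages two and three from the abstract. All module
# names, tensor shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class Functionalizer(nn.Module):
    """Stage 2 (sketch): fuse human-instruction and geospatial-context
    embeddings into per-zone functionality projections."""

    def __init__(self, instr_dim: int, ctx_dim: int, zone_dim: int, n_zones: int):
        super().__init__()
        self.n_zones, self.zone_dim = n_zones, zone_dim
        self.proj = nn.Linear(instr_dim + ctx_dim, n_zones * zone_dim)

    def forward(self, instr_emb: torch.Tensor, ctx_emb: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([instr_emb, ctx_emb], dim=-1)           # (B, instr+ctx)
        return self.proj(fused).view(-1, self.n_zones, self.zone_dim)  # (B, Z, D)


class ZonePeerAttention(nn.Module):
    """Stage 3 (sketch): model zone-zone peer dependencies with multi-head
    self-attention, then decode each zone to grid-level land-use logits."""

    def __init__(self, zone_dim: int, n_heads: int, grid_cells: int, n_land_uses: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(zone_dim, n_heads, batch_first=True)
        self.decoder = nn.Linear(zone_dim, grid_cells * n_land_uses)
        self.grid_cells, self.n_land_uses = grid_cells, n_land_uses

    def forward(self, zones: torch.Tensor) -> torch.Tensor:
        peers, _ = self.attn(zones, zones, zones)                 # (B, Z, D)
        logits = self.decoder(peers)                              # (B, Z, G*L)
        return logits.view(zones.size(0), -1, self.grid_cells, self.n_land_uses)


# Usage with made-up sizes: 8 zones, 100 grid cells per zone, 12 land-use types.
functionalizer = Functionalizer(instr_dim=64, ctx_dim=64, zone_dim=128, n_zones=8)
stage3 = ZonePeerAttention(zone_dim=128, n_heads=4, grid_cells=100, n_land_uses=12)
instr, ctx = torch.randn(2, 64), torch.randn(2, 64)
config = stage3(functionalizer(instr, ctx))                       # (2, 8, 100, 12)
```

A faithful implementation would likely stack several attention layers and decode with a spatial architecture rather than a single linear head.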
Related papers
- AgentGen: Enhancing Planning Abilities for Large Language Model based Agent via Environment and Task Generation [89.68433168477227]
Large Language Model (LLM) based agents have garnered significant attention and are becoming increasingly popular.
This paper investigates enhancing the planning abilities of LLMs through instruction tuning.
To this end, it explores the automated synthesis of diverse environments and of planning tasks with gradually increasing difficulty.
arXiv Detail & Related papers (2024-08-01T17:59:46Z)
- Dual-stage Flows-based Generative Modeling for Traceable Urban Planning [33.03616838528995]
We propose a novel generative framework based on normalizing flows, namely the Dual-stage Urban Flows framework.
We employ an Information Fusion Module to capture the relationships among functional zones and fuse information from different aspects.
Our framework outperforms other generative models on the urban planning task. (See the sketch after this entry.)
arXiv Detail & Related papers (2023-10-03T21:49:49Z)
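For readers unfamiliar with normalizing flows, the sketch below shows one conditional affine coupling layer trained by exact maximum likelihood, with the conditioning vector standing in for fused zone information. It is a generic flow sketch under made-up shapes, not the Dual-stage Urban Flows architecture.

```python
# One conditional affine coupling layer: the scale/shift for half of the
# dimensions are predicted from the other half plus a conditioning vector.
import torch
import torch.nn as nn


class ConditionalAffineCoupling(nn.Module):
    def __init__(self, dim: int, cond_dim: int, hidden: int = 128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x: torch.Tensor, cond: torch.Tensor):
        """Map x -> z; returns z and the log-determinant of the Jacobian."""
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_scale, shift = self.net(torch.cat([x1, cond], dim=-1)).chunk(2, dim=-1)
        log_scale = torch.tanh(log_scale)           # keep scales well-behaved
        z2 = x2 * torch.exp(log_scale) + shift
        return torch.cat([x1, z2], dim=-1), log_scale.sum(dim=-1)


# Train by maximizing exact log-likelihood under a standard-normal base.
flow = ConditionalAffineCoupling(dim=16, cond_dim=32)
x = torch.randn(4, 16)                              # land-use feature vectors
zone_cond = torch.randn(4, 32)                      # fused zone information
z, log_det = flow(x, zone_cond)
base = torch.distributions.Normal(0.0, 1.0)
log_prob = base.log_prob(z).sum(dim=-1) + log_det   # change of variables
loss = -log_prob.mean()
```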
- Compositional Foundation Models for Hierarchical Planning [52.18904315515153]
We propose a foundation model that leverages expert foundation models, trained individually on language, vision, and action data, jointly to solve long-horizon tasks.
We use a large language model to construct symbolic plans that are grounded in the environment through a large video diffusion model.
Generated video plans are then grounded in visuo-motor control through an inverse dynamics model that infers actions from the generated videos. (See the sketch after this entry.)
arXiv Detail & Related papers (2023-09-15T17:44:05Z)
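The pipeline above composes three pretrained models. The following schematic wires hypothetical stubs (llm_symbolic_plan, video_diffusion_rollout, inverse_dynamics) in that order; every function is a placeholder standing in for a large model, and nothing here reflects the paper's actual interfaces.

```python
# Schematic: LLM drafts a symbolic plan, a video model imagines each step,
# and an inverse dynamics model extracts actions. All functions are stubs.
from typing import List


def llm_symbolic_plan(task: str) -> List[str]:
    """Stub: a language model decomposes the task into sub-goals."""
    return [f"step {i}: part of '{task}'" for i in range(3)]


def video_diffusion_rollout(subgoal: str, current_frame: str) -> List[str]:
    """Stub: a video model imagines frames that achieve the sub-goal."""
    return [f"{current_frame}->frame_for({subgoal})_{t}" for t in range(2)]


def inverse_dynamics(frame_a: str, frame_b: str) -> str:
    """Stub: infer the action that moves the agent between two frames."""
    return f"action({frame_a} -> {frame_b})"


def hierarchical_plan(task: str, start_frame: str) -> List[str]:
    actions, frame = [], start_frame
    for subgoal in llm_symbolic_plan(task):               # symbolic level
        for nxt in video_diffusion_rollout(subgoal, frame):  # visual grounding
            actions.append(inverse_dynamics(frame, nxt))  # visuo-motor level
            frame = nxt
    return actions


print(hierarchical_plan("stack the red block on the blue block", "frame_0"))
```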
- Automated Urban Planning aware Spatial Hierarchies and Human Instructions [33.06221365923015]
We propose a novel deep human-instructed urban planner based on generative adversarial networks (GANs).
The GANs build urban functional zones based on information from human instructions and surrounding contexts. (See the sketch after this entry.)
We conduct extensive experiments to validate the efficacy of our work.
arXiv Detail & Related papers (2022-09-26T20:37:02Z)
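As a rough illustration of the conditional adversarial setup, the sketch below trains a generator and a discriminator that both receive an instruction-plus-context embedding. The architectures, sizes, and loss are assumptions, not the paper's model.

```python
# Minimal conditional GAN step: both networks see the conditioning vector.
import torch
import torch.nn as nn

# Generator maps (noise, condition) -> flattened zone features.
gen = nn.Sequential(nn.Linear(32 + 64, 256), nn.ReLU(), nn.Linear(256, 128))
# Discriminator scores (zone features, condition) pairs.
disc = nn.Sequential(nn.Linear(128 + 64, 256), nn.ReLU(), nn.Linear(256, 1))

cond = torch.randn(4, 64)                  # instruction + context embedding
fake = gen(torch.cat([torch.randn(4, 32), cond], dim=-1))
real = torch.randn(4, 128)                 # placeholder for real zone data

bce = nn.BCEWithLogitsLoss()
# Discriminator: real pairs -> 1, generated pairs -> 0.
d_loss = bce(disc(torch.cat([real, cond], -1)), torch.ones(4, 1)) + \
         bce(disc(torch.cat([fake.detach(), cond], -1)), torch.zeros(4, 1))
# Generator: fool the discriminator into scoring fakes as real.
g_loss = bce(disc(torch.cat([fake, cond], -1)), torch.ones(4, 1))
```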
- Automated Urban Planning for Reimagining City Configuration via Adversarial Learning: Quantification, Generation, and Evaluation [30.48671788567521]
Urban planning refers to the effort of designing land-use configurations for a given region.
To obtain effective urban plans, urban experts must spend considerable time and effort analyzing sophisticated planning constraints.
We formulate automated urban planning as a deep generative learning task.
arXiv Detail & Related papers (2021-12-26T00:59:35Z)
- Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning [54.378444600773875]
We introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments.
SFL drives exploration by estimating state novelty and enables high-level planning by abstracting the state space as a non-parametric landmark-based graph. (See the sketch after this entry.)
We show in our experiments on MiniGrid and ViZDoom that SFL enables efficient exploration of large, high-dimensional state spaces.
arXiv Detail & Related papers (2021-11-18T18:36:05Z)
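The landmark-graph idea can be illustrated with a toy example. The sketch below connects landmarks whose feature distance falls under a threshold and plans with Dijkstra; real SFL builds the graph from learned successor features, so the plain Euclidean vectors and threshold here are stand-ins.

```python
# Toy landmark graph: edges join nearby landmarks, plans are shortest paths.
import heapq
import itertools
import math

landmarks = {                       # landmark id -> feature vector (made up)
    "A": (0.0, 0.0), "B": (1.0, 0.0), "C": (1.0, 1.0), "D": (2.0, 1.0),
}

def dist(u, v):
    return math.dist(landmarks[u], landmarks[v])

# Connect landmarks within an assumed reachability threshold.
THRESHOLD = 1.2
graph = {u: [] for u in landmarks}
for u, v in itertools.combinations(landmarks, 2):
    if dist(u, v) <= THRESHOLD:
        graph[u].append(v)
        graph[v].append(u)

def shortest_path(start, goal):
    """Dijkstra over the landmark graph; returns a landmark-level plan."""
    frontier, seen = [(0.0, start, [start])], set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph[node]:
            heapq.heappush(frontier, (cost + dist(node, nxt), nxt, path + [nxt]))
    return None

print(shortest_path("A", "D"))      # -> ['A', 'B', 'C', 'D']
```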
- Deep Human-guided Conditional Variational Generative Modeling for Automated Urban Planning [30.614010268762115]
Urban planning designs land-use configurations and can help build livable, sustainable, and safe communities.
Inspired by image generation, deep urban planning aims to leverage deep learning to generate land-use configurations.
This paper studies a novel deep human-guided urban planning method that jointly addresses the challenges involved. (See the sketch after this entry.)
arXiv Detail & Related papers (2021-10-12T15:45:38Z)
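A conditional VAE of this kind can be sketched compactly. Below, both encoder and decoder receive a human-guidance embedding, and training minimizes the usual ELBO (reconstruction plus KL); layer sizes and variable names are illustrative assumptions, not the paper's model.

```python
# Minimal conditional VAE: encoder and decoder both see the guidance vector.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CVAE(nn.Module):
    def __init__(self, x_dim: int, cond_dim: int, z_dim: int):
        super().__init__()
        self.enc = nn.Linear(x_dim + cond_dim, 2 * z_dim)   # -> (mu, logvar)
        self.dec = nn.Linear(z_dim + cond_dim, x_dim)

    def forward(self, x, cond):
        mu, logvar = self.enc(torch.cat([x, cond], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon = self.dec(torch.cat([z, cond], -1))
        return recon, mu, logvar


model = CVAE(x_dim=100, cond_dim=32, z_dim=16)
x = torch.randn(8, 100)            # flattened land-use configuration
guidance = torch.randn(8, 32)      # human-guidance embedding
recon, mu, logvar = model(x, guidance)
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
loss = F.mse_loss(recon, x) + kl   # ELBO: reconstruction + KL divergence
```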
- Reimagining City Configuration: Automated Urban Planning via Adversarial Learning [28.930624100994514]
Urban planning refers to the effort of designing land-use configurations.
Recent advances in deep learning motivate us to ask: can machines learn, at a human level of capability, to automatically and quickly compute land-use configurations?
arXiv Detail & Related papers (2020-08-22T21:15:39Z)
- Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors [124.30562402952319]
The ability to predict and plan into the future is fundamental for agents acting in the world.
Current learning approaches to visual prediction and planning fail on long-horizon tasks.
We propose a framework for visual prediction and planning that overcomes this limitation.
arXiv Detail & Related papers (2020-06-23T17:58:56Z)
- Plan2Vec: Unsupervised Representation Learning by Latent Plans [106.37274654231659]
We introduce plan2vec, an unsupervised representation learning approach that is inspired by reinforcement learning.
Plan2vec constructs a weighted graph on an image dataset using near-neighbor distances, and then extrapolates this local metric to a global embedding by distilling path integrals over planned paths. (See the sketch after this entry.)
We demonstrate the effectiveness of plan2vec on one simulated and two challenging real-world image datasets.
arXiv Detail & Related papers (2020-05-07T17:52:23Z)
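As a loose analogue of the plan2vec recipe, the toy sketch below builds a k-nearest-neighbor graph, computes geodesic distances with Dijkstra (standing in for planned paths), and fits an embedding whose Euclidean metric matches those geodesics. Plan2vec itself plans with a learned local metric and distills the result; the random data and training loop here are made up.

```python
# Toy analogue: local kNN metric -> geodesic distances -> distilled embedding.
import heapq
import torch

torch.manual_seed(0)
X = torch.randn(50, 8)                         # stand-in for image features
D = torch.cdist(X, X)                          # pairwise local distances
K = 5
knn = D.topk(K + 1, largest=False).indices[:, 1:]   # k nearest neighbors

def geodesic(src):
    """Dijkstra from src over the kNN graph -> global distances."""
    dist = [float("inf")] * len(X)
    dist[src], pq = 0.0, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v in knn[u].tolist():
            nd = d + D[u, v].item()
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return torch.tensor(dist)

G = torch.stack([geodesic(i) for i in range(len(X))])   # geodesic matrix
G[torch.isinf(G)] = G[~torch.isinf(G)].max()            # cap unreachable pairs

emb = torch.nn.Parameter(torch.randn(50, 2))            # 2-D global embedding
opt = torch.optim.Adam([emb], lr=0.05)
for _ in range(200):                                    # distill geodesics
    loss = ((torch.cdist(emb, emb) - G) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```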
This list is automatically generated from the titles and abstracts of the papers on this site.