AutoEncoding Tree for City Generation and Applications
- URL: http://arxiv.org/abs/2309.15941v1
- Date: Wed, 27 Sep 2023 18:36:56 GMT
- Title: AutoEncoding Tree for City Generation and Applications
- Authors: Wenyu Han, Congcong Wen, Lazarus Chok, Yan Liang Tan, Sheung Lung
Chan, Hang Zhao, Chen Feng
- Abstract summary: Huge volumes of spatial data in cities pose a challenge to generative models.
The scarcity of publicly available 3D real-world city datasets also hinders the development of methods for city generation.
We propose AETree, a tree-structured auto-encoder neural network, for city generation.
- Score: 33.364915512018364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: City modeling and generation have attracted increasing interest in various
applications, including gaming, urban planning, and autonomous driving. Unlike
previous works focused on the generation of single objects or indoor scenes,
the huge volumes of spatial data in cities pose a challenge to generative
models. Furthermore, the scarcity of publicly available 3D real-world city
datasets hinders the development of methods for city generation. In this paper, we first
collect over 3,000,000 geo-referenced objects from New York, Zurich,
Tokyo, Berlin, Boston, and several other large cities. Based on this dataset, we
propose AETree, a tree-structured auto-encoder neural network, for city
generation. Specifically, we first propose a novel Spatial-Geometric Distance
(SGD) metric to measure the similarity between building layouts and then
construct a binary tree over the raw geometric data of buildings based on the
SGD metric. Next, we present a tree-structured network whose encoder learns to
extract and merge spatial information iteratively in a bottom-up manner. The resulting
global representation is then decoded in reverse for reconstruction or generation. To
address the issue of long-range dependencies as the level of the tree increases, a Long
Short-Term Memory (LSTM) Cell is employed as a basic network element of the
proposed AETree. Moreover, we introduce a novel metric, Overlapping Area Ratio
(OAR), to quantitatively evaluate the generation results. Experiments on the
collected dataset demonstrate the effectiveness of the proposed model on 2D and
3D city generation. Furthermore, the latent features learned by AETree can
serve downstream urban planning applications.
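The abstract introduces the Overlapping Area Ratio (OAR) to score generated layouts but does not give its formula. A minimal sketch of one plausible reading, assuming OAR is the total pairwise overlap area of generated building footprints divided by their total footprint area (with footprints simplified to axis-aligned rectangles), is:

```python
from itertools import combinations

def rect_intersection_area(a, b):
    # Rectangles are (xmin, ymin, xmax, ymax); non-overlapping pairs contribute 0.
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def rect_area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def overlapping_area_ratio(rects):
    """Hypothetical OAR: summed pairwise overlap / summed footprint area.

    Lower is better; 0.0 means no two generated buildings intersect.
    This is an illustrative reconstruction, not the paper's exact definition.
    """
    total = sum(rect_area(r) for r in rects)
    if total == 0:
        return 0.0
    overlap = sum(rect_intersection_area(a, b)
                  for a, b in combinations(rects, 2))
    return overlap / total

# Example: two unit squares offset by 0.5 overlap on a 0.5 x 0.5 patch,
# so OAR = 0.25 / 2.0 = 0.125.
print(overlapping_area_ratio([(0, 0, 1, 1), (0.5, 0.5, 1.5, 1.5)]))
```

Real building footprints are polygons, so an actual implementation would compute polygon intersections (e.g. with a computational-geometry library) rather than rectangle overlaps.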
Related papers
- COHO: Context-Sensitive City-Scale Hierarchical Urban Layout Generation [1.5745692520785073]
We introduce a novel graph-based masked autoencoder (GMAE) for city-scale urban layout generation.
The method encodes attributed buildings, city blocks, communities and cities into a unified graph structure.
Our approach achieves good realism, semantic consistency, and correctness across the heterogeneous urban styles in 330 US cities.
arXiv Detail & Related papers (2024-07-16T00:49:53Z)
- Outdoor Scene Extrapolation with Hierarchical Generative Cellular Automata [70.9375320609781]
We aim to generate fine-grained 3D geometry from large-scale sparse LiDAR scans, which are abundantly captured by autonomous vehicles (AVs).
We propose hierarchical Generative Cellular Automata (hGCA), a spatially scalable 3D generative model that grows geometry with local kernels in a coarse-to-fine manner, equipped with a light-weight planner to induce global consistency.
arXiv Detail & Related papers (2024-06-12T14:56:56Z)
- Building3D: An Urban-Scale Dataset and Benchmarks for Learning Roof Structures from Point Clouds [4.38301148531795]
Existing datasets for 3D modeling mainly focus on common objects such as furniture or cars.
We present an urban-scale dataset consisting of more than 160 thousand buildings along with corresponding point clouds, mesh, and wire-frame models, covering 16 cities in Estonia over about 998 km2.
Experimental results indicate that Building3D poses the challenges of high intra-class variance, data imbalance, and large-scale noise.
arXiv Detail & Related papers (2023-07-21T21:38:57Z)
- Semi-supervised Learning from Street-View Images and OpenStreetMap for Automatic Building Height Estimation [59.6553058160943]
We propose a semi-supervised learning (SSL) method of automatically estimating building height from Mapillary SVI and OpenStreetMap data.
The proposed method leads to a clear performance boost, estimating building heights with a Mean Absolute Error (MAE) of around 2.1 meters.
The preliminary result is promising and motivates our future work in scaling up the proposed method based on low-cost VGI data.
arXiv Detail & Related papers (2023-07-05T18:16:30Z)
- DeepTree: Modeling Trees with Situated Latents [8.372189962601073]
We propose a novel method for modeling trees based on learning developmental rules for branching structures instead of manually defining them.
We call our deep neural model "situated latent" because its behavior is determined by its intrinsic state.
Our method enables generating a wide variety of tree shapes without the need to define intricate parameters.
arXiv Detail & Related papers (2023-05-09T03:33:14Z)
- SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds [52.624157840253204]
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km2.
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset three times the size of the previously largest existing photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z)
- BuildingNet: Learning to Label 3D Buildings [19.641000866952815]
BuildingNet comprises: (a) large-scale 3D building models whose exteriors are consistently labeled, and (b) a neural network that labels buildings by analyzing the spatial and structural relations of their geometric primitives.
The dataset covers categories such as houses, churches, skyscrapers, town halls, and castles.
arXiv Detail & Related papers (2021-10-11T01:45:26Z)
- Semantic Segmentation on Swiss3DCities: A Benchmark Study on Aerial Photogrammetric 3D Pointcloud Dataset [67.44497676652173]
We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 km2, sampled from three Swiss cities.
The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
arXiv Detail & Related papers (2020-12-23T21:48:47Z)
- PT2PC: Learning to Generate 3D Point Cloud Shapes from Part Tree Conditions [66.87405921626004]
This paper investigates the novel problem of generating 3D shape point cloud geometry from a symbolic part tree representation.
We propose a conditional GAN "part tree"-to-"point cloud" model (PT2PC) that disentangles the structural and geometric factors.
arXiv Detail & Related papers (2020-03-19T08:27:25Z)
- Building Footprint Generation by Integrating Convolution Neural Network with Feature Pairwise Conditional Random Field (FPCRF) [21.698236040666675]
Building footprint maps are vital to many remote sensing applications, such as 3D building modeling, urban planning, and disaster management.
In this work, an end-to-end building footprint generation approach that integrates a convolutional neural network (CNN) with a graph model is proposed.
arXiv Detail & Related papers (2020-02-11T18:51:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.