PolyGNN: Polyhedron-based Graph Neural Network for 3D Building
Reconstruction from Point Clouds
- URL: http://arxiv.org/abs/2307.08636v1
- Date: Mon, 17 Jul 2023 16:52:25 GMT
- Authors: Zhaiyu Chen, Yilei Shi, Liangliang Nan, Zhitong Xiong, Xiao Xiang Zhu
- Abstract summary: PolyGNN is a graph neural network for 3D building reconstruction from point clouds.
We learn to assemble primitives obtained by polyhedral decomposition via graph node classification.
We conduct a transferability analysis across cities and on real-world point clouds.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present PolyGNN, a polyhedron-based graph neural network for 3D building
reconstruction from point clouds. PolyGNN learns to assemble primitives
obtained by polyhedral decomposition via graph node classification, achieving a
watertight, compact, and weakly semantic reconstruction. To effectively
represent arbitrary-shaped polyhedra in the neural network, we propose three
different sampling strategies to select representative points as
polyhedron-wise queries, enabling efficient occupancy inference. Furthermore,
we incorporate the inter-polyhedron adjacency to enhance the classification of
the graph nodes. We also observe that existing city-building models are
abstractions of the underlying instances. To address this abstraction gap and
provide a fair evaluation of the proposed method, we develop our method on a
large-scale synthetic dataset covering 500k+ buildings with well-defined ground
truths of polyhedral class labels. We further conduct a transferability
analysis across cities and on real-world point clouds. Both qualitative and
quantitative results demonstrate the effectiveness of our method, particularly
its efficiency for large-scale reconstructions. The source code and data of our
work are available at https://github.com/chenzhaiyu/polygnn.
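The abstract's core idea, sampling representative query points per polyhedron and classifying each polyhedron (graph node) as occupied or empty with help from inter-polyhedron adjacency, can be sketched roughly as follows. This is an illustrative sketch, not the paper's implementation: the function names (`sample_polyhedron_queries`, `propagate_adjacency`), the convex-combination sampling strategy, and the simple neighbor-averaging scheme are all assumptions standing in for the learned network described in the paper.

```python
import numpy as np

def sample_polyhedron_queries(vertices, n_queries=8, rng=None):
    # One plausible sampling strategy (assumption, not necessarily one of
    # the paper's three): random convex combinations of the polyhedron's
    # vertices, which always yields points inside its convex hull.
    rng = np.random.default_rng(rng)
    w = rng.dirichlet(np.ones(len(vertices)), size=n_queries)
    return w @ vertices  # (n_queries, 3) polyhedron-wise query points

def propagate_adjacency(node_logits, adjacency, alpha=0.5):
    # Sketch of exploiting inter-polyhedron adjacency: each node's
    # occupancy logit is blended with the mean logit of its neighbors,
    # mimicking one round of message passing over the graph.
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    neighbor_mean = (adjacency @ node_logits) / deg
    return alpha * node_logits + (1 - alpha) * neighbor_mean

# Toy example: one tetrahedral cell and a two-node adjacency graph.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
queries = sample_polyhedron_queries(verts, n_queries=4, rng=0)
logits = np.array([[2.0], [-1.0]])       # raw per-polyhedron occupancy scores
adj = np.array([[0, 1], [1, 0]], float)  # adjacency matrix of the cell graph
smoothed = propagate_adjacency(logits, adj)
occupied = smoothed > 0                  # inside/outside label per polyhedron
```

Selecting the occupied polyhedra and taking the union of their boundary facets is what would yield the watertight, compact surface the abstract describes; that extraction step is omitted here.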
Related papers
- Learning Geometric Invariant Features for Classification of Vector Polygons with Graph Message-passing Neural Network
We propose a novel graph message-passing neural network (PolyMP) to learn the geometric-invariant features for shape classification of polygons.
We show that the proposed graph-based PolyMP network enables the learning of expressive geometric features invariant to geometric transformations of polygons.
arXiv Detail & Related papers (2024-07-05T08:19:36Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves the state-of-the-art performance, especially when compared in the case of using only a few propagation steps.
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
- Anisotropic Multi-Scale Graph Convolutional Network for Dense Shape Correspondence
This paper studies 3D dense shape correspondence, a key shape analysis application in computer vision and graphics.
We introduce a novel hybrid geometric deep learning-based model that learns geometrically meaningful and discretization-independent features.
The resulting correspondence maps show state-of-the-art performance on the benchmark datasets.
arXiv Detail & Related papers (2022-10-17T22:40:50Z)
- PolyWorld: Polygonal Building Extraction with Graph Neural Networks in Satellite Images
This paper introduces PolyWorld, a neural network that directly extracts building vertices from an image and connects them correctly to create precise polygons.
PolyWorld significantly outperforms the state-of-the-art in building polygonization.
arXiv Detail & Related papers (2021-11-30T15:23:17Z)
- PolyNet: Polynomial Neural Network for 3D Shape Recognition with PolyShape Representation
3D shape representation and its processing have substantial effects on 3D shape recognition.
We propose a deep neural network-based method (PolyNet) and a specific polygon representation (PolyShape).
Our experiments demonstrate the strength and the advantages of PolyNet on both 3D shape classification and retrieval tasks.
arXiv Detail & Related papers (2021-10-15T06:45:59Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- Exploiting Local Geometry for Feature and Graph Construction for Better 3D Point Cloud Processing with Graph Neural Networks
We propose improvements in point representations and local neighborhood graph construction within the general framework of graph neural networks.
We show that the proposed network achieves faster training convergence, i.e., 40% fewer epochs for classification.
arXiv Detail & Related papers (2021-03-28T21:34:59Z)
- Mix Dimension in Poincaré Geometry for 3D Skeleton-based Action Recognition
Graph Convolutional Networks (GCNs) have already demonstrated their powerful ability to model the irregular data.
We present a novel spatial-temporal GCN architecture which is defined via the Poincaré geometry.
We evaluate our method on two current largest scale 3D datasets.
arXiv Detail & Related papers (2020-07-30T18:23:18Z)
- Local Grid Rendering Networks for 3D Object Detection in Point Clouds
CNNs are powerful, but directly applying convolutions after voxelizing an entire point cloud into a dense regular 3D grid is computationally costly.
We propose a novel and principled Local Grid Rendering (LGR) operation to render the small neighborhood of a subset of input points into a low-resolution 3D grid independently.
We validate LGR-Net for 3D object detection on the challenging ScanNet and SUN RGB-D datasets.
arXiv Detail & Related papers (2020-07-04T13:57:43Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
- PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling
We propose a novel deep neural network based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.