BuildAnyPoint: 3D Building Structured Abstraction from Diverse Point Clouds
- URL: http://arxiv.org/abs/2602.23645v1
- Date: Fri, 27 Feb 2026 03:31:56 GMT
- Title: BuildAnyPoint: 3D Building Structured Abstraction from Diverse Point Clouds
- Authors: Tongyan Hua, Haoran Gong, Yuan Liu, Di Wang, Ying-Cong Chen, Wufan Zhao
- Abstract summary: We introduce BuildAnyPoint, a novel generative framework for structured 3D building reconstruction from point clouds with diverse distributions. We first formulate distribution recovery as a conditional generation task by training latent diffusion models conditioned on input point clouds. We then tailor a decoder-only transformer for conditional autoregressive mesh generation based on the recovered point clouds.
- Score: 35.066679627206526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce BuildAnyPoint, a novel generative framework for structured 3D building reconstruction from point clouds with diverse distributions, such as those captured by airborne LiDAR and Structure-from-Motion. To recover artist-created building abstraction in this highly underconstrained setting, we capitalize on the role of explicit 3D generative priors in autoregressive mesh generation. Specifically, we design a Loosely Cascaded Diffusion Transformer (Loca-DiT) that initially recovers the underlying distribution from noisy or sparse points, followed by autoregressively encapsulating them into compact meshes. We first formulate distribution recovery as a conditional generation task by training latent diffusion models conditioned on input point clouds, and then tailor a decoder-only transformer for conditional autoregressive mesh generation based on the recovered point clouds. Our method delivers substantial qualitative and quantitative improvements over prior building abstraction methods. Furthermore, the effectiveness of our approach is evidenced by the strong performance of its recovered point clouds on building point cloud completion benchmarks, which exhibit improved surface accuracy and distribution uniformity.
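The two-stage cascade described in the abstract (diffusion-based distribution recovery, followed by autoregressive mesh generation conditioned on the recovered points) can be illustrated with a minimal sketch. All function names, the centroid-shrink "denoiser", and the farthest-point token policy below are placeholder assumptions for illustration only; they are not the paper's actual Loca-DiT architecture.

```python
import numpy as np

def recover_distribution(noisy_points, steps=10):
    """Stage 1 (hypothetical sketch): iteratively refine a noisy/sparse point set.
    In the paper this is a latent diffusion model conditioned on the input cloud;
    here a centroid-shrink update stands in as a toy 'denoiser'."""
    x = noisy_points.copy()
    for t in range(steps, 0, -1):
        # A real DiT would predict per-step noise; we just pull points
        # slightly toward their centroid with a decreasing step size.
        x = x + (x.mean(axis=0) - x) * (1.0 / (t + 1))
    return x

def autoregressive_mesh_tokens(points, max_tokens=8):
    """Stage 2 (hypothetical sketch): emit mesh tokens one at a time,
    conditioned on the recovered points. The paper uses a decoder-only
    transformer; here a farthest-point policy mimics sequential decoding."""
    tokens = []
    for _ in range(max_tokens):
        if not tokens:
            idx = 0  # start from an arbitrary seed point
        else:
            chosen = points[tokens]
            # Next token = point farthest from everything already emitted.
            d = np.linalg.norm(points[:, None] - chosen[None], axis=-1).min(axis=1)
            idx = int(d.argmax())
        tokens.append(idx)
    return tokens

# Toy end-to-end run of the loosely cascaded pipeline.
pts = np.random.default_rng(1).normal(size=(64, 3))
recovered = recover_distribution(pts)
mesh_tokens = autoregressive_mesh_tokens(recovered)
```

The "loose" cascade means the two stages are trained as separate conditional generators rather than end-to-end, which is why the recovered point clouds can also be evaluated on their own against point cloud completion benchmarks.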
Related papers
- Adaptive Point-Prompt Tuning: Fine-Tuning Heterogeneous Foundation Models for 3D Point Cloud Analysis [51.37795317716487]
We propose the Adaptive Point-Prompt Tuning (APPT) method, which fine-tunes pre-trained models with a modest number of parameters. We convert raw point clouds into point embeddings by aggregating local geometry to capture spatial features, followed by linear layers. To calibrate self-attention across source domains of any modality to 3D, we introduce a prompt generator that shares weights with the point embedding module.
arXiv Detail & Related papers (2025-08-30T06:02:21Z) - 3D Point Cloud Generation via Autoregressive Up-sampling [60.05226063558296]
We introduce a pioneering autoregressive generative model for 3D point cloud generation. Inspired by visual autoregressive modeling, we conceptualize point cloud generation as an autoregressive up-sampling process. PointARU progressively refines 3D point clouds from coarse to fine scales.
arXiv Detail & Related papers (2025-03-11T16:30:45Z) - Hyperbolic-constraint Point Cloud Reconstruction from Single RGB-D Images [19.23499128175523]
We introduce hyperbolic space to 3D point cloud reconstruction, enabling the model to represent and understand complex hierarchical structures in point clouds with low distortion. Our model outperforms most existing models, and ablation studies demonstrate the significance of our model and its components.
arXiv Detail & Related papers (2024-12-12T08:27:39Z) - Rendering-Oriented 3D Point Cloud Attribute Compression using Sparse Tensor-based Transformer [52.40992954884257]
3D visualization techniques have fundamentally transformed how we interact with digital content. The massive data size of point clouds presents significant challenges for data compression. We propose an end-to-end deep learning framework that seamlessly integrates PCAC with differentiable rendering.
arXiv Detail & Related papers (2024-11-12T16:12:51Z) - Point2Building: Reconstructing Buildings from Airborne LiDAR Point Clouds [23.897507889025817]
We present a learning-based approach to reconstruct buildings as 3D polygonal meshes from airborne LiDAR point clouds.
Our model learns directly from the point cloud data, thereby reducing error propagation and increasing the fidelity of the reconstruction.
We experimentally validate our method on a collection of airborne LiDAR data of Zurich, Berlin and Tallinn.
arXiv Detail & Related papers (2024-03-04T15:46:50Z) - StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity, evenly distributed 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z) - AdaPoinTr: Diverse Point Cloud Completion with Adaptive Geometry-Aware
Transformers [94.11915008006483]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We design a new model, called PoinTr, which adopts a Transformer encoder-decoder architecture for point cloud completion.
Our method attains 6.53 CD on PCN, 0.81 CD on ShapeNet-55 and 0.392 MMD on real-world KITTI.
arXiv Detail & Related papers (2023-01-11T16:14:12Z) - SeedFormer: Patch Seeds based Point Cloud Completion with Upsample
Transformer [46.800630776714016]
We propose a novel SeedFormer to improve the ability of detail preservation and recovery in point cloud completion.
We introduce a new shape representation, namely Patch Seeds, which not only captures general structures from partial inputs but also preserves regional information of local patterns.
Our method outperforms state-of-the-art completion networks on several benchmark datasets.
arXiv Detail & Related papers (2022-07-21T06:15:59Z) - Autoregressive 3D Shape Generation via Canonical Mapping [92.91282602339398]
Transformers have shown remarkable performance in a variety of generative tasks such as image, audio, and text generation.
In this paper, we aim to further exploit the power of transformers and employ them for the task of 3D point cloud generation.
Our model can be easily extended to multi-modal shape completion as an application for conditional shape generation.
arXiv Detail & Related papers (2022-04-05T03:12:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.