DANCE: Density-agnostic and Class-aware Network for Point Cloud Completion
- URL: http://arxiv.org/abs/2511.07978v2
- Date: Mon, 17 Nov 2025 01:28:46 GMT
- Title: DANCE: Density-agnostic and Class-aware Network for Point Cloud Completion
- Authors: Da-Yeong Kim, Yeong-Jun Cho
- Abstract summary: Point cloud completion aims to recover missing geometric structures from incomplete 3D scans. DANCE is a novel framework that completes only the missing regions while preserving the observed geometry.
- Score: 1.7188280334580195
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Point cloud completion aims to recover missing geometric structures from incomplete 3D scans, which often suffer from occlusions or limited sensor viewpoints. Existing methods typically assume fixed input/output densities or rely on image-based representations, making them less suitable for real-world scenarios with variable sparsity and limited supervision. In this paper, we introduce Density-agnostic and Class-aware Network (DANCE), a novel framework that completes only the missing regions while preserving the observed geometry. DANCE generates candidate points via ray-based sampling from multiple viewpoints. A transformer decoder then refines their positions and predicts opacity scores, which determine the validity of each point for inclusion in the final surface. To incorporate semantic guidance, a lightweight classification head is trained directly on geometric features, enabling category-consistent completion without external image supervision. Extensive experiments on the PCN and MVP benchmarks show that DANCE outperforms state-of-the-art methods in accuracy and structural consistency, while remaining robust to varying input densities and noise levels.
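The pipeline described in the abstract (generate candidate points by ray-based sampling, then let a decoder predict an opacity score that gates each candidate's inclusion in the final surface) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' code: the function name, the random stand-ins for the sampled candidates and predicted scores, and the 0.5 threshold are all assumptions.

```python
import numpy as np

def complete_cloud(observed, candidates, opacity, threshold=0.5):
    """Keep the observed geometry intact and add only candidates whose
    predicted opacity exceeds the threshold (hypothetical cutoff)."""
    valid = candidates[opacity > threshold]  # opacity gates inclusion
    return np.concatenate([observed, valid], axis=0)

rng = np.random.default_rng(0)
observed = rng.standard_normal((100, 3))   # partial input scan, preserved as-is
candidates = rng.standard_normal((50, 3))  # stand-in for ray-sampled proposals
opacity = rng.random(50)                   # stand-in for decoder validity scores

completed = complete_cloud(observed, candidates, opacity)
```

The key property this sketch captures is that the observed points are never modified or discarded; only high-opacity candidates are appended to fill the missing regions.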
Related papers
- Neural Visibility of Point Sets [31.13434703858653]
We propose a novel approach to visibility determination in point clouds by formulating it as a binary classification task. Our network is trained end-to-end with ground-truth visibility labels generated from rendered 3D models. Our method significantly outperforms HPR in both accuracy and computational efficiency, achieving up to a 126 times speedup on large point clouds.
arXiv Detail & Related papers (2025-09-29T00:54:00Z)
- Cross-Modal Geometric Hierarchy Fusion: An Implicit-Submap Driven Framework for Resilient 3D Place Recognition [9.411542547451193]
We propose a novel framework that redefines 3D place recognition through density-agnostic geometric reasoning. Specifically, we introduce an implicit 3D representation based on elastic points, which is immune to interference from the original scene point cloud density. With the aid of these two types of information, we obtain descriptors that fuse geometric information from both bird's-eye-view and 3D segment perspectives.
arXiv Detail & Related papers (2025-06-17T07:04:07Z)
- CP-VoteNet: Contrastive Prototypical VoteNet for Few-Shot Point Cloud Object Detection [7.205000222081269]
Few-shot point cloud 3D object detection (FS3D) aims to identify and localise objects of novel classes from point clouds.
We introduce contrastive semantics mining, which enables the network to extract discriminative categorical features.
Through refined primitive geometric structures, the transferability of feature encoding from base to novel classes is significantly enhanced.
arXiv Detail & Related papers (2024-08-30T06:13:49Z)
- Unsupervised 3D Point Cloud Completion via Multi-view Adversarial Learning [61.14132533712537]
We propose MAL-UPC, a framework that effectively leverages both region-level and category-specific geometric similarities to complete missing structures. Our MAL-UPC does not require any complete 3D supervision and only necessitates single-view partial observations in the training set.
arXiv Detail & Related papers (2024-07-13T06:53:39Z)
- Improving Gaussian Splatting with Localized Points Management [52.009874685460694]
Localized Point Management (LPM) is capable of identifying the error-contributing zones in greatest need of both point addition and geometry calibration. LPM applies point densification in the identified zones and then resets the opacity of the points in front of these regions, creating a new opportunity to correct poorly conditioned points. Notably, LPM improves both static 3DGS and dynamic SpaceTimeGS to achieve state-of-the-art rendering quality while retaining real-time speeds.
arXiv Detail & Related papers (2024-06-06T16:55:07Z)
- Towards Better Gradient Consistency for Neural Signed Distance Functions via Level Set Alignment
We show that gradient consistency in the field, indicated by the parallelism of level sets, is the key factor affecting the inference accuracy.
We propose a level set alignment loss to evaluate the parallelism of level sets, which can be minimized to achieve better gradient consistency.
arXiv Detail & Related papers (2023-05-19T11:28:05Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z)
- ME-PCN: Point Completion Conditioned on Mask Emptiness [50.414383063838336]
Mainstream methods predict missing shapes by decoding a global feature learned from the input point cloud.
We present ME-PCN, a point completion network that leverages 'emptiness' in 3D shape space.
arXiv Detail & Related papers (2021-08-18T15:02:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.