Convolutional Neural Network-based Efficient Dense Point Cloud
Generation using Unsigned Distance Fields
- URL: http://arxiv.org/abs/2203.11537v3
- Date: Wed, 13 Mar 2024 11:11:33 GMT
- Title: Convolutional Neural Network-based Efficient Dense Point Cloud
Generation using Unsigned Distance Fields
- Authors: Abol Basher and Jani Boutellier
- Abstract summary: We propose a lightweight Convolutional Neural Network that learns and predicts the unsigned distance field for arbitrary 3D shapes.
Experiments demonstrate that the proposed architecture outperforms the state of the art with 7.8x fewer model parameters, 2.4x faster inference time, and up to 24.8% improved generation quality.
- Score: 3.198144010381572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dense point cloud generation from a sparse or incomplete point cloud is a
crucial and challenging problem in 3D computer vision and computer graphics. So
far, the existing methods are either computationally too expensive, suffer from
limited resolution, or both. In addition, some methods are strictly limited to
watertight surfaces -- another major obstacle for a number of applications. To
address these issues, we propose a lightweight Convolutional Neural Network
that learns and predicts the unsigned distance field for arbitrary 3D shapes
for dense point cloud generation using the recently emerged concept of implicit
function learning. Experiments demonstrate that the proposed architecture
outperforms the state of the art with 7.8x fewer model parameters, 2.4x faster
inference time, and up to 24.8% improved generation quality.
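The generation step described in the abstract, turning a predicted unsigned distance field (UDF) into a dense point cloud, can be sketched by projecting query points onto the zero level set along the negative UDF gradient. The sketch below is a toy illustration, not the paper's exact procedure: an analytic sphere UDF stands in for the learned CNN, and the projection rule follows the general implicit-function-learning recipe.

```python
import numpy as np

# Toy stand-in for the learned unsigned distance field (UDF): the exact
# UDF of the unit sphere. In the paper's setting, a CNN conditioned on
# the sparse input cloud would predict this value at each query point.
def udf(p):
    return np.abs(np.linalg.norm(p, axis=-1) - 1.0)

def udf_grad(p, eps=1e-4):
    """Central-difference gradient; a network would use autograd instead."""
    g = np.empty_like(p)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        g[:, i] = (udf(p + d) - udf(p - d)) / (2.0 * eps)
    return g

def densify(n_points=2048, n_steps=5, seed=0):
    """Dense generation from a UDF: sample random query points, then
    repeatedly project each one toward the surface by stepping its
    predicted distance against the normalized gradient:
    p <- p - udf(p) * g / |g|."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(-1.5, 1.5, size=(n_points, 3))
    for _ in range(n_steps):
        g = udf_grad(p)
        g /= np.linalg.norm(g, axis=1, keepdims=True) + 1e-12
        p = p - udf(p)[:, None] * g
    return p

dense = densify()
# After a few projection steps, all points lie numerically on the surface.
print(np.abs(np.linalg.norm(dense, axis=1) - 1.0).max())
```

Because the UDF is unsigned, this works for open, non-watertight surfaces too, which is the advantage the abstract highlights over occupancy- or SDF-based methods.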
Related papers
- Efficient and Scalable Point Cloud Generation with Sparse Point-Voxel Diffusion Models [6.795447206159906]
We propose a novel point cloud U-Net diffusion architecture for 3D generative modeling.
Our network employs a dual-branch architecture, combining the high-resolution representations of points with the computational efficiency of sparse voxels.
Our model excels in all tasks, establishing it as a state-of-the-art diffusion U-Net for point cloud generative modeling.
arXiv Detail & Related papers (2024-08-12T13:41:47Z)
- GridPull: Towards Scalability in Learning Implicit Representations from 3D Point Clouds [60.27217859189727]
We propose GridPull to improve the efficiency of learning implicit representations from large scale point clouds.
Our novelty lies in the fast inference of a discrete distance field defined on grids without using any neural components.
We use uniform grids for a fast grid search to localize sampled queries, and organize surface points in a tree structure to speed up the calculation of distances to the surface.
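GridPull's neural-free idea, a discrete distance field stored on a uniform grid and interpolated at query time, can be illustrated with a small sketch. Assumptions: random samples of a unit sphere stand in for a large scan, and a brute-force nearest-point computation replaces the paper's grid search and tree-based acceleration for clarity.

```python
import numpy as np

# Discrete unsigned distance field on a uniform grid (no neural
# components), queried by trilinear interpolation.
rng = np.random.default_rng(0)
surf = rng.normal(size=(2000, 3))
surf /= np.linalg.norm(surf, axis=1, keepdims=True)  # samples on unit sphere

R, lo, hi = 20, -1.5, 1.5                         # grid resolution and extent
axis = np.linspace(lo, hi, R)
gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
nodes = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)

# Unsigned distance from every grid node to its nearest surface sample
# (brute force here; a tree or grid search makes this scale).
d2 = ((nodes ** 2).sum(1)[:, None] + (surf ** 2).sum(1)[None, :]
      - 2.0 * nodes @ surf.T)
field = np.sqrt(np.maximum(d2, 0.0).min(axis=1)).reshape(R, R, R)

def query(p):
    """Trilinear interpolation of the discrete field at points p, shape (N, 3)."""
    t = (p - lo) / (hi - lo) * (R - 1)
    i0 = np.clip(np.floor(t).astype(int), 0, R - 2)
    f = t - i0
    out = np.zeros(len(p))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[:, 0] if dx else 1 - f[:, 0]) *
                     (f[:, 1] if dy else 1 - f[:, 1]) *
                     (f[:, 2] if dz else 1 - f[:, 2]))
                out += w * field[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
    return out

# Distance at the center (~1) and at a surface point (~0, up to grid error).
print(query(np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])))
```

Once the grid is filled, each query costs only an index computation and eight lookups, which is the scalability argument: inference needs no network evaluation at all.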
arXiv Detail & Related papers (2023-08-25T04:52:52Z)
- 4DSR-GCN: 4D Video Point Cloud Upsampling using Graph Convolutional Networks [29.615723135027096]
We propose a new solution for upscaling and restoration of time-varying 3D video point clouds after they have been compressed.
Our model consists of a specifically designed Graph Convolutional Network (GCN) that combines Dynamic Edge Convolution and Graph Attention Networks.
arXiv Detail & Related papers (2023-06-01T18:43:16Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to recovering complete 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating the need for kNN search.
The proposed framework, PointAttN, is simple, neat, and effective, and can precisely capture the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
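The Chamfer Distance (CD) loss mentioned above can be written down in a few lines. The sketch below uses the squared-distance variant with dense pairwise distances; conventions in the literature differ (some take square roots or average the two directional terms), so treat the exact normalization as an assumption.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3):
    mean squared distance from each point to its nearest neighbor in the
    other set, summed over both directions."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

a = np.zeros((1, 3))
b = np.ones((1, 3))
print(chamfer_distance(a, b))  # 3.0 + 3.0 = 6.0
```

CD only matches nearest neighbors, so it is insensitive to the local point distribution; that weakness is what motivates alternatives like the diffusion-refinement paradigm proposed in the paper above.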
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- Revisiting Point Cloud Simplification: A Learnable Feature Preserving Approach [57.67932970472768]
Mesh and Point Cloud simplification methods aim to reduce the complexity of 3D models while retaining visual quality and relevant salient features.
We propose a fast point cloud simplification method by learning to sample salient points.
The proposed method relies on a graph neural network architecture trained to select an arbitrary, user-defined number of points from the input space and to re-arrange their positions so as to minimize the visual perception error.
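For context, the classical non-learned baseline for selecting a fixed-size subset of a point cloud is farthest point sampling, which spreads points evenly but ignores salient features; the learned approach above replaces this heuristic with a trained selector. A minimal sketch of the baseline:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy farthest point sampling: repeatedly pick the point that is
    farthest from everything selected so far. A geometric heuristic, not
    the learned, feature-preserving selection of the paper above."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        idx = int(dist.argmax())                 # farthest remaining point
        chosen.append(idx)
        # Track each point's distance to its nearest selected point.
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return points[chosen]

cloud = np.random.default_rng(1).normal(size=(10000, 3))
sampled = farthest_point_sampling(cloud, 256)
```

Each iteration is O(n), so sampling k points costs O(nk); the result covers the shape uniformly but cannot prioritize perceptually important regions such as sharp edges.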
arXiv Detail & Related papers (2021-09-30T10:23:55Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.