Attention-based Transformation from Latent Features to Point Clouds
- URL: http://arxiv.org/abs/2112.05324v1
- Date: Fri, 10 Dec 2021 03:59:04 GMT
- Title: Attention-based Transformation from Latent Features to Point Clouds
- Authors: Kaiyi Zhang, Ximing Yang, Yuan Wu, Cheng Jin
- Abstract summary: AXform is an attention-based method to transform latent features to point clouds.
It takes both parameter sharing and data flow into account, which gives it fewer outliers, fewer network parameters, and faster convergence.
Points generated by AXform are not bound by the strong 2-manifold constraint, which improves the generation of non-smooth surfaces.
Extensive experiments on different datasets show that our methods achieve state-of-the-art results.
- Score: 6.547680885781582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In point cloud generation and completion, previous methods for
transforming latent features to point clouds are generally based on fully
connected layers (FC-based) or folding operations (folding-based). However,
point clouds generated by FC-based methods usually suffer from outliers and
rough surfaces. Folding-based methods involve a large data flow, converge
slowly, and struggle to generate non-smooth surfaces. In this work, we
propose AXform, an attention-based method to transform latent features to
point clouds. AXform first generates points in an interim space using a
fully connected layer. These interim points are then aggregated to generate
the target point cloud. AXform takes both parameter sharing and data flow
into account, which gives it fewer outliers, fewer network parameters, and
faster convergence. The points generated by AXform are not subject to the
strong 2-manifold constraint, which improves the generation of non-smooth
surfaces. When AXform is expanded to multiple branches for local generation,
the centripetal constraint gives it self-clustering and space-consistency
properties, which further enable unsupervised semantic segmentation. We also
adopt this scheme and design AXformNet for point cloud completion. Extensive
experiments on different datasets show that our methods achieve
state-of-the-art results.
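
The abstract describes a two-step decoder: a fully connected layer first lifts the latent feature to a set of interim points, and an attention step then aggregates those interim points into the target point cloud. The sketch below illustrates that idea in PyTorch; the layer sizes, the Conv1d-based score computation, and the names (AXformSketch, n_interim, n_out) are assumptions made for illustration, not the authors' released implementation.

```python
# Minimal sketch of an attention-style transform from a latent feature to a
# point cloud, in the spirit of the abstract above. All hyperparameters and
# the exact aggregation scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AXformSketch(nn.Module):
    def __init__(self, latent_dim=128, n_interim=256, interim_dim=16, n_out=2048):
        super().__init__()
        # Step 1: a fully connected layer generates interim points in a
        # learned interim feature space.
        self.to_interim = nn.Linear(latent_dim, n_interim * interim_dim)
        self.n_interim = n_interim
        self.interim_dim = interim_dim
        # Step 2: per-output-point attention scores over the interim points;
        # a shared 1x1 convolution keeps the parameter count small.
        self.to_scores = nn.Conv1d(interim_dim, n_out, kernel_size=1)
        # Step 3: project interim features to candidate 3D coordinates.
        self.to_xyz = nn.Conv1d(interim_dim, 3, kernel_size=1)

    def forward(self, z):
        # z: (B, latent_dim) latent feature
        B = z.shape[0]
        interim = self.to_interim(z).view(B, self.interim_dim, self.n_interim)  # (B, C, N)
        attn = F.softmax(self.to_scores(interim), dim=-1)  # (B, n_out, N)
        xyz = self.to_xyz(interim)                          # (B, 3, N)
        # Each output point is a weighted combination of the candidate
        # coordinates, so it is not tied to a folded 2-manifold grid.
        return torch.bmm(attn, xyz.transpose(1, 2))         # (B, n_out, 3)


if __name__ == "__main__":
    pts = AXformSketch()(torch.randn(4, 128))
    print(pts.shape)  # torch.Size([4, 2048, 3])
```

Because every output coordinate is a learned weighted combination of interim candidates rather than a deformed 2D grid point, this kind of decoder is not bound to 2-manifold surfaces, which is consistent with the abstract's claim about non-smooth geometry.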
Related papers
- Point Cloud Compression with Implicit Neural Representations: A Unified Framework [54.119415852585306]
We present a pioneering point cloud compression framework capable of handling both geometry and attribute components.
Our framework utilizes two coordinate-based neural networks to implicitly represent a voxelized point cloud.
Our method exhibits high universality compared with existing learning-based techniques.
arXiv Detail & Related papers (2024-05-19T09:19:40Z)
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif)
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- Patch-Wise Point Cloud Generation: A Divide-and-Conquer Approach [83.05340155068721]
We devise a new 3D point cloud generation framework using a divide-and-conquer approach.
All patch generators are based on learnable priors, which aim to capture the information of geometry primitives.
Experimental results on a variety of object categories from the most popular point cloud dataset, ShapeNet, show the effectiveness of the proposed patch-wise point cloud generation.
arXiv Detail & Related papers (2023-07-22T11:10:39Z)
- DualGenerator: Information Interaction-based Generative Network for Point Cloud Completion [25.194587599472147]
Point cloud completion estimates complete shapes from incomplete point clouds to obtain higher-quality point cloud data.
Most existing methods only consider global object features, ignoring spatial and semantic information of adjacent points.
We propose an information interaction-based generative network for point cloud completion.
arXiv Detail & Related papers (2023-05-16T03:25:38Z)
- Controllable Mesh Generation Through Sparse Latent Point Diffusion Models [105.83595545314334]
We design a novel sparse latent point diffusion model for mesh generation.
Our key insight is to regard point clouds as an intermediate representation of meshes, and model the distribution of point clouds instead.
Our proposed sparse latent point diffusion model achieves superior performance in terms of generation quality and controllability.
arXiv Detail & Related papers (2023-03-14T14:25:29Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use the Chamfer Distance (CD) loss for training; a minimal CD sketch appears after this list.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- Fast Point Voxel Convolution Neural Network with Selective Feature Fusion for Point Cloud Semantic Segmentation [7.557684072809662]
We present a novel lightweight convolutional neural network for point cloud analysis.
Our method operates on the entire point set without sampling and achieves good performance efficiently.
arXiv Detail & Related papers (2021-09-23T19:39:01Z)
- Representing Point Clouds with Generative Conditional Invertible Flow Networks [15.280751949071016]
We propose a simple yet effective method to represent point clouds as sets of samples drawn from a cloud-specific probability distribution.
We show that our method leverages generative invertible flow networks to learn embeddings as well as to generate point clouds.
Our model offers competitive or superior quantitative results on benchmark datasets.
arXiv Detail & Related papers (2020-10-07T18:30:47Z)
- DeepCLR: Correspondence-Less Architecture for Deep End-to-End Point Cloud Registration [12.471564670462344]
This work addresses the problem of point cloud registration using deep neural networks.
We propose an approach to predict the alignment between two point clouds with overlapping data content, but displaced origins.
Our approach achieves state-of-the-art accuracy and the lowest run-time among the compared methods.
arXiv Detail & Related papers (2020-07-22T08:20:57Z)
- Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance [30.863194319818223]
We propose to leverage the input point cloud as much as possible, by only adding connectivity information to existing points.
Our key innovation is a surrogate of local connectivity, calculated by comparing the intrinsic/extrinsic metrics.
We demonstrate that our method not only preserves details and handles ambiguous structures, but also generalizes well to unseen categories.
arXiv Detail & Related papers (2020-07-17T22:36:00Z)
- Point2Mesh: A Self-Prior for Deformable Meshes [83.31236364265403]
We introduce Point2Mesh, a technique for reconstructing a surface mesh from an input point cloud.
The self-prior encapsulates reoccurring geometric repetitions from a single shape within the weights of a deep neural network.
We show that Point2Mesh converges to a desirable solution, unlike a prescribed smoothness prior, which often becomes trapped in undesirable local minima.
arXiv Detail & Related papers (2020-05-22T10:01:04Z)
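
Several of the completion papers above (e.g. the Point Diffusion-Refinement entry) train with the Chamfer Distance loss mentioned earlier. Below is a minimal, illustrative PyTorch sketch of the symmetric squared Chamfer Distance between two point sets; the brute-force pairwise formulation and the name chamfer_distance are assumptions made for clarity, not code from any of the listed papers.

```python
# Minimal sketch of the symmetric (squared) Chamfer Distance between two
# point clouds, written for clarity rather than speed.
import torch


def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    # p: (B, N, 3), q: (B, M, 3)
    diff = p.unsqueeze(2) - q.unsqueeze(1)      # (B, N, M, 3) pairwise offsets
    dist = (diff ** 2).sum(-1)                  # (B, N, M) squared distances
    # Average nearest-neighbour distance, taken in both directions.
    return dist.min(dim=2).values.mean(dim=1) + dist.min(dim=1).values.mean(dim=1)


if __name__ == "__main__":
    a, b = torch.rand(2, 1024, 3), torch.rand(2, 2048, 3)
    print(chamfer_distance(a, b))  # one loss value per batch element
```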
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.