A Hybrid Generative and Discriminative PointNet on Unordered Point Sets
- URL: http://arxiv.org/abs/2404.12925v1
- Date: Fri, 19 Apr 2024 14:52:25 GMT
- Title: A Hybrid Generative and Discriminative PointNet on Unordered Point Sets
- Authors: Yang Ye, Shihao Ji
- Abstract summary: This paper proposes GDPNet, the first hybrid Generative and Discriminative PointNet.
Our GDPNet retains the strong discriminative power of modern PointNet classifiers, while generating point cloud samples that rival state-of-the-art generative approaches.
- Score: 6.282930329443868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As point clouds provide a natural and flexible representation usable in myriad applications (e.g., robotics and self-driving cars), the ability to synthesize point clouds for analysis becomes crucial. Recently, Xie et al. proposed a generative model for unordered point sets in the form of an energy-based model (EBM). Although the model achieves impressive performance for point cloud generation, a separate model must be trained for each category to capture the complex point set distributions. Moreover, their method cannot classify point clouds directly and requires additional fine-tuning for classification. One interesting question is: Can we train a single network as a hybrid generative and discriminative model of point clouds? A similar question has recently been answered in the affirmative for images, introducing the framework of the Joint Energy-based Model (JEM), which achieves high performance in image classification and generation simultaneously. This paper proposes GDPNet, the first hybrid Generative and Discriminative PointNet, which extends JEM to point cloud classification and generation. Our GDPNet retains the strong discriminative power of modern PointNet classifiers, while generating point cloud samples that rival state-of-the-art generative approaches.
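The JEM construction referenced in the abstract reuses a classifier's logits f(x)[y] to define an unnormalized joint density p(x, y) ∝ exp(f(x)[y]), which yields a per-input energy E(x) = -logsumexp_y f(x)[y] alongside the usual softmax p(y|x). A minimal NumPy sketch of this reinterpretation (the logits below are hypothetical placeholders, not outputs of a trained PointNet):

```python
import numpy as np

def logsumexp(v):
    # numerically stable log-sum-exp over a logit vector
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

def energy(logits):
    # JEM energy: E(x) = -logsumexp over class logits f(x)[y];
    # low energy corresponds to high unnormalized density p(x)
    return -logsumexp(logits)

def class_probs(logits):
    # the ordinary discriminative softmax p(y|x), unchanged by JEM
    z = np.exp(logits - logits.max())
    return z / z.sum()

logits = np.array([2.0, 0.5, -1.0])  # hypothetical classifier outputs
E = energy(logits)
p = class_probs(logits)
```

The same forward pass thus serves both roles: p(y|x) for classification and E(x) for generative training and sampling.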
Related papers
- Efficient and Scalable Point Cloud Generation with Sparse Point-Voxel Diffusion Models [6.795447206159906]
We propose a novel point cloud U-Net diffusion architecture for 3D generative modeling.
Our network employs a dual-branch architecture, combining the high-resolution representations of points with the computational efficiency of sparse voxels.
Our model excels in all tasks, establishing it as a state-of-the-art diffusion U-Net for point cloud generative modeling.
arXiv Detail & Related papers (2024-08-12T13:41:47Z)
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif).
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- General Point Model with Autoencoding and Autoregressive [55.051626723729896]
We propose a General Point Model that seamlessly integrates autoencoding and autoregressive tasks in a point cloud transformer.
This model is versatile, allowing fine-tuning for downstream point cloud representation tasks, as well as unconditional and conditional generation tasks.
arXiv Detail & Related papers (2023-10-25T06:08:24Z)
- Patch-Wise Point Cloud Generation: A Divide-and-Conquer Approach [83.05340155068721]
We devise a new 3D point cloud generation framework using a divide-and-conquer approach.
All patch generators are based on learnable priors, which aim to capture the information of geometry primitives.
Experimental results on a variety of object categories from the most popular point cloud dataset, ShapeNet, show the effectiveness of the proposed patch-wise point cloud generation.
arXiv Detail & Related papers (2023-07-22T11:10:39Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- Controllable Mesh Generation Through Sparse Latent Point Diffusion Models [105.83595545314334]
We design a novel sparse latent point diffusion model for mesh generation.
Our key insight is to regard point clouds as an intermediate representation of meshes, and model the distribution of point clouds instead.
Our proposed sparse latent point diffusion model achieves superior performance in terms of generation quality and controllability.
arXiv Detail & Related papers (2023-03-14T14:25:29Z)
- Representing Point Clouds with Generative Conditional Invertible Flow Networks [15.280751949071016]
We propose a simple yet effective method to represent point clouds as sets of samples drawn from a cloud-specific probability distribution.
Our method leverages generative invertible flow networks to learn embeddings as well as to generate point clouds.
Our model offers competitive or superior quantitative results on benchmark datasets.
arXiv Detail & Related papers (2020-10-07T18:30:47Z)
- Multi-scale Receptive Fields Graph Attention Network for Point Cloud Classification [35.88116404702807]
The proposed MRFGAT architecture is tested on ModelNet10 and ModelNet40 datasets.
Results show it achieves state-of-the-art performance in shape classification tasks.
arXiv Detail & Related papers (2020-09-28T13:01:28Z)
- Generative PointNet: Deep Energy-Based Learning on Unordered Point Sets for 3D Generation, Reconstruction and Classification [136.57669231704858]
We propose a generative model of unordered point sets, such as point clouds, in the form of an energy-based model.
We call our model the Generative PointNet because it can be derived from the discriminative PointNet.
arXiv Detail & Related papers (2020-04-02T23:08:10Z)
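The EBM generation scheme shared by Generative PointNet and GDPNet draws samples with Langevin dynamics, x_{k+1} = x_k - (s/2) ∇E(x_k) + √s · noise, where E is the learned energy. A toy NumPy sketch with a quadratic stand-in energy E(x) = ½‖x‖² (an illustrative assumption; the real models differentiate a trained network, not this closed-form gradient):

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_energy(x):
    # gradient of the toy quadratic energy E(x) = 0.5 * ||x||^2,
    # standing in for the learned EBM's gradient w.r.t. point coordinates
    return x

def langevin_sample(x, steps=200, step_size=0.1):
    # discretized Langevin dynamics: drift toward low energy plus noise
    for _ in range(steps):
        noise = rng.normal(size=x.shape)
        x = x - 0.5 * step_size * grad_energy(x) + np.sqrt(step_size) * noise
    return x

# a toy "point cloud" of 128 points in 3D, initialized far from the mode;
# under the quadratic energy the chain mixes toward a standard normal
cloud = langevin_sample(rng.normal(loc=5.0, size=(128, 3)))
```

With the quadratic energy the stationary distribution is approximately standard normal, so after enough steps the cloud's mean drifts to zero regardless of initialization; in the papers above, the same update is run with the network energy to synthesize point sets.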
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.