LPMNet: Latent Part Modification and Generation for 3D Point Clouds
- URL: http://arxiv.org/abs/2008.03560v3
- Date: Thu, 25 Feb 2021 15:44:08 GMT
- Title: LPMNet: Latent Part Modification and Generation for 3D Point Clouds
- Authors: Cihan Öngün, Alptekin Temizel
- Abstract summary: We propose a single end-to-end Autoencoder model that can handle generation and modification of both semantic parts and global shapes.
The proposed method supports part exchange between 3D point cloud models and composition of different parts to form new models by directly editing latent representations.
- Score: 3.04585143845864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we focus on latent modification and generation of 3D point
cloud object models with respect to their semantic parts. Different from
existing methods, which use separate networks for part generation and assembly,
we propose a single end-to-end Autoencoder model that can handle generation and
modification of both semantic parts and global shapes. The proposed method
supports part exchange between 3D point cloud models and composition of
different parts to form new models by directly editing latent representations.
This holistic approach does not need part-based training to learn part
representations and does not introduce any extra loss besides the standard
reconstruction loss. The experiments demonstrate the robustness of the proposed
method with different object categories and varying numbers of points. The
method can generate new models when integrated with generative models such as
GANs and VAEs, and can work with unannotated point clouds when combined with a
segmentation module.
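To make the latent part-editing idea concrete, the sketch below shows what such a workflow could look like. It is a minimal, hypothetical illustration in PyTorch, not the authors' released implementation: the class name PartAutoencoder, the layer sizes, and the max-pooling readout are all assumptions; the only point carried over from the abstract is that each semantic part gets its own latent vector, and that swapping those vectors between two shapes before decoding yields a new composed model.

```python
# Minimal, hypothetical sketch of latent part exchange (PyTorch).
# All names, layer sizes, and the pooling scheme are illustrative assumptions;
# only the idea of one latent vector per semantic part comes from the abstract.
import torch
import torch.nn as nn


class PartAutoencoder(nn.Module):
    def __init__(self, n_parts=4, points_per_part=512, latent_dim=128):
        super().__init__()
        self.n_parts = n_parts
        self.points_per_part = points_per_part
        # Shared point-wise encoder applied to every semantic part.
        self.part_encoder = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder maps the concatenated part latents back to a full point cloud.
        self.decoder = nn.Sequential(
            nn.Linear(n_parts * latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_parts * points_per_part * 3),
        )

    def encode(self, parts):
        # parts: (batch, n_parts, points_per_part, 3), i.e. pre-segmented input.
        feats = self.part_encoder(parts)           # (B, P, N, latent_dim)
        return feats.max(dim=2).values             # pool over points -> (B, P, latent_dim)

    def decode(self, part_latents):
        flat = part_latents.flatten(start_dim=1)   # (B, P * latent_dim)
        points = self.decoder(flat)
        return points.view(-1, self.n_parts * self.points_per_part, 3)


model = PartAutoencoder()
shape_a = torch.randn(1, 4, 512, 3)  # stand-ins for two segmented point clouds
shape_b = torch.randn(1, 4, 512, 3)

z_a, z_b = model.encode(shape_a), model.encode(shape_b)
z_mixed = z_a.clone()
z_mixed[:, 2] = z_b[:, 2]            # swap one part's latent (e.g. part index 2)
new_shape = model.decode(z_mixed)    # decode the edited latents into a composed model
```

Consistent with the abstract's claim that no extra losses are needed, training such a model would rely only on a standard reconstruction loss on the decoded point cloud (e.g. Chamfer distance, a common choice assumed here).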
Related papers
- Part123: Part-aware 3D Reconstruction from a Single-view Image [54.589723979757515]
Part123 is a novel framework for part-aware 3D reconstruction from a single-view image.
We introduce contrastive learning into a neural rendering framework to learn a part-aware feature space.
A clustering-based algorithm is also developed to automatically derive 3D part segmentation results from the reconstructed models.
arXiv Detail & Related papers (2024-05-27T07:10:21Z)
- DiffFacto: Controllable Part-Based 3D Point Cloud Generation with Cross Diffusion [68.39543754708124]
We introduce DiffFacto, a novel probabilistic generative model that learns the distribution of shapes with part-level control.
Experiments show that our method is able to generate novel shapes with multiple axes of control.
It achieves state-of-the-art part-level generation quality and generates plausible and coherent shapes.
arXiv Detail & Related papers (2023-05-03T06:38:35Z)
- Variational Relational Point Completion Network for Robust 3D Classification [59.80993960827833]
Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
This paper proposes a variational framework, the Variational Relational point Completion Network (VRCNet), with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2023-04-18T17:03:20Z)
- Number-Adaptive Prototype Learning for 3D Point Cloud Semantic Segmentation [46.610620464184926]
We propose to use an adaptive number of prototypes to dynamically describe the different point patterns within a semantic class.
Our method achieves a 2.3% mIoU improvement over the baseline model, which follows the point-wise classification paradigm.
arXiv Detail & Related papers (2022-10-18T15:57:20Z)
- EditVAE: Unsupervised Part-Aware Controllable 3D Point Cloud Shape Generation [19.817166425038753]
This paper tackles the problem of parts-aware point cloud generation.
A simple modification of the Variational Auto-Encoder yields a joint model of the point cloud itself.
In addition to the flexibility afforded by our disentangled representation, the inductive bias introduced by our joint modelling approach yields the state-of-the-art experimental results on the ShapeNet dataset.
arXiv Detail & Related papers (2021-10-13T12:38:01Z)
- SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation [50.53931728235875]
We present SP-GAN, a new unsupervised sphere-guided generative model for direct synthesis of 3D shapes in the form of point clouds.
Compared with existing models, SP-GAN is able to synthesize diverse and high-quality shapes with fine details.
arXiv Detail & Related papers (2021-08-10T06:49:45Z)
- 3D Object Classification on Partial Point Clouds: A Practical Perspective [91.81377258830703]
A point cloud is a popular shape representation adopted in 3D object classification.
This paper introduces a practical setting for classifying partial point clouds of object instances under arbitrary poses.
A novel algorithm that operates in an alignment-then-classification manner is proposed.
arXiv Detail & Related papers (2020-12-18T04:00:56Z)
- Discrete Point Flow Networks for Efficient Point Cloud Generation [36.03093265136374]
Generative models have proven effective at modeling 3D shapes and their statistical variations.
We introduce a latent variable model that builds on normalizing flows to generate 3D point clouds of an arbitrary size.
For single-view shape reconstruction we also obtain results on par with state-of-the-art voxel, point cloud, and mesh-based methods.
arXiv Detail & Related papers (2020-07-20T14:48:00Z)
- Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering [53.16864661460889]
Recent works succeed in regression-based methods which estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.