Autoregressive 3D Shape Generation via Canonical Mapping
- URL: http://arxiv.org/abs/2204.01955v1
- Date: Tue, 5 Apr 2022 03:12:29 GMT
- Title: Autoregressive 3D Shape Generation via Canonical Mapping
- Authors: An-Chieh Cheng, Xueting Li, Sifei Liu, Min Sun, Ming-Hsuan Yang
- Abstract summary: Transformers have shown remarkable performance in a variety of generative tasks such as image, audio, and text generation.
In this paper, we aim to further exploit the power of transformers and employ them for the task of 3D point cloud generation.
Our model can be easily extended to multi-modal shape completion as an application for conditional shape generation.
- Score: 92.91282602339398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the capacity of modeling long-range dependencies in sequential data,
transformers have shown remarkable performances in a variety of generative
tasks such as image, audio, and text generation. Yet, taming them in generating
less structured and voluminous data formats such as high-resolution point
clouds has seldom been explored due to ambiguous sequentialization processes
and infeasible computation burden. In this paper, we aim to further exploit the
power of transformers and employ them for the task of 3D point cloud
generation. The key idea is to decompose point clouds of one category into
semantically aligned sequences of shape compositions, via a learned canonical
space. These shape compositions can then be quantized and used to learn a
context-rich composition codebook for point cloud generation. Experimental
results on point cloud reconstruction and unconditional generation show that
our model performs favorably against state-of-the-art approaches. Furthermore,
our model can be easily extended to multi-modal shape completion as an
application for conditional shape generation.
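The pipeline the abstract describes (decompose a shape into a canonically ordered sequence of compositions, quantize each against a learned codebook, then model the discrete sequence autoregressively) can be sketched in toy form. Everything below is a hypothetical illustration, not the authors' implementation: the codebook is random, the nearest-neighbour quantizer is a generic VQ step, and a random bigram sampler stands in for the transformer decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 8, 4                           # codebook size, feature dimension (toy values)
codebook = rng.normal(size=(K, D))    # stand-in for a learned composition codebook

def quantize(features):
    """Map each composition feature to its nearest codebook index (L2)."""
    # (N, 1, D) - (K, D) broadcasts to (N, K, D); argmin over the K axis
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Encode a "shape": a sequence of composition features in canonical order
shape_features = rng.normal(size=(6, D))
tokens = quantize(shape_features)     # discrete token sequence for the AR model

def sample_autoregressive(length, temperature=1.0):
    """Toy autoregressive sampler: next-token logits depend only on the
    previous token via a random transition matrix, standing in for a
    transformer decoder over the composition tokens."""
    logits = rng.normal(size=(K, K))
    seq = [int(rng.integers(K))]
    for _ in range(length - 1):
        p = np.exp(logits[seq[-1]] / temperature)
        p /= p.sum()
        seq.append(int(rng.choice(K, p=p)))
    return seq

generated = sample_autoregressive(6)
decoded = codebook[generated]         # map sampled tokens back to composition features
```

In the actual method the quantizer and codebook would be trained jointly (VQ-VAE style) and the sampler would be a transformer, but the data flow, continuous features to discrete tokens to autoregressive sampling to decoded compositions, is the same.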
Related papers
- Patch-Wise Point Cloud Generation: A Divide-and-Conquer Approach [83.05340155068721]
We devise a new 3D point cloud generation framework using a divide-and-conquer approach.
All patch generators are based on learnable priors, which aim to capture the information of geometry primitives.
Experimental results on a variety of object categories from the most popular point cloud dataset, ShapeNet, show the effectiveness of the proposed patch-wise point cloud generation.
arXiv Detail & Related papers (2023-07-22T11:10:39Z)
- DiffFacto: Controllable Part-Based 3D Point Cloud Generation with Cross Diffusion [68.39543754708124]
We introduce DiffFacto, a novel probabilistic generative model that learns the distribution of shapes with part-level control.
Experiments show that our method is able to generate novel shapes with multiple axes of control.
It achieves state-of-the-art part-level generation quality and generates plausible and coherent shapes.
arXiv Detail & Related papers (2023-05-03T06:38:35Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
- Self-Supervised Learning for Multimodal Non-Rigid 3D Shape Matching [15.050801537501462]
We introduce a self-supervised multimodal learning strategy that combines mesh-based functional map regularisation with a contrastive loss that couples mesh and point cloud data.
Our shape matching approach obtains intramodal correspondences for triangle meshes, complete point clouds, and partially observed point clouds.
We demonstrate that our method achieves state-of-the-art results on several challenging benchmark datasets.
arXiv Detail & Related papers (2023-03-20T09:47:02Z)
- Point Cloud Generation with Continuous Conditioning [2.9238500578557303]
We propose a novel generative adversarial network (GAN) setup that generates 3D point cloud shapes conditioned on a continuous parameter.
In an exemplary application, we use this to guide the generative process to create a 3D object with a custom-fit shape.
arXiv Detail & Related papers (2022-02-17T09:05:10Z)
- EditVAE: Unsupervised Part-Aware Controllable 3D Point Cloud Shape Generation [19.817166425038753]
This paper tackles the problem of parts-aware point cloud generation.
A simple modification of the Variational Auto-Encoder yields a joint model of the point cloud itself.
In addition to the flexibility afforded by our disentangled representation, the inductive bias introduced by our joint modelling approach yields state-of-the-art experimental results on the ShapeNet dataset.
arXiv Detail & Related papers (2021-10-13T12:38:01Z)
- Differentiable Convolution Search for Point Cloud Processing [114.66038862207118]
We propose a novel differentiable convolution search paradigm on point clouds.
It can work in a purely data-driven manner and thus is capable of auto-creating a group of suitable convolutions for geometric shape modeling.
We also propose a joint optimization framework for simultaneous search of internal convolution and external architecture, and introduce an epsilon-greedy algorithm to alleviate the effect of discretization error.
arXiv Detail & Related papers (2021-08-29T14:42:03Z)
- Discrete Point Flow Networks for Efficient Point Cloud Generation [36.03093265136374]
Generative models have proven effective at modeling 3D shapes and their statistical variations.
We introduce a latent variable model that builds on normalizing flows to generate 3D point clouds of an arbitrary size.
For single-view shape reconstruction we also obtain results on par with state-of-the-art voxel, point cloud, and mesh-based methods.
arXiv Detail & Related papers (2020-07-20T14:48:00Z)
- Adversarial Generation of Continuous Implicit Shape Representations [9.478108870211365]
This work presents a generative adversarial architecture for generating 3D shapes based on signed distance representations.
We train our approach on the ShapeNet benchmark dataset and validate, both quantitatively and qualitatively, its performance in generating realistic 3D shapes.
arXiv Detail & Related papers (2020-02-02T08:20:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.