ShapeFormer: Transformer-based Shape Completion via Sparse
Representation
- URL: http://arxiv.org/abs/2201.10326v1
- Date: Tue, 25 Jan 2022 13:58:30 GMT
- Title: ShapeFormer: Transformer-based Shape Completion via Sparse
Representation
- Authors: Xingguang Yan, Liqiang Lin, Niloy J. Mitra, Dani Lischinski, Daniel
Cohen-Or, Hui Huang
- Abstract summary: We present ShapeFormer, a network that produces a distribution of object completions conditioned on incomplete, and possibly noisy, point clouds.
The resultant distribution can then be sampled to generate likely completions, each exhibiting plausible shape details while being faithful to the input.
- Score: 41.33457875133559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present ShapeFormer, a transformer-based network that produces a
distribution of object completions, conditioned on incomplete, and possibly
noisy, point clouds. The resultant distribution can then be sampled to generate
likely completions, each exhibiting plausible shape details while being
faithful to the input. To facilitate the use of transformers for 3D, we
introduce a compact 3D representation, vector quantized deep implicit function,
that utilizes spatial sparsity to represent a close approximation of a 3D shape
by a short sequence of discrete variables. Experiments demonstrate that
ShapeFormer outperforms prior art for shape completion from ambiguous partial
inputs in terms of both completion quality and diversity. We also show that our
approach effectively handles a variety of shape types, incomplete patterns, and
real-world scans.
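As a rough illustration of the sparse, discrete encoding idea (not the paper's actual VQDIF architecture), the sketch below keeps only occupied grid cells of a point cloud and quantizes each cell's local feature against a toy codebook, yielding a short sequence of (location, code) tokens; all names, shapes, and features here are hypothetical.

```python
import numpy as np

# Hypothetical sketch of a sparse, quantized shape representation: only
# non-empty grid cells are kept, and each cell's local feature is replaced
# by the index of its nearest codebook entry, giving a short sequence of
# discrete (location, code) tokens. Not the paper's actual VQDIF.

def sparse_quantize(points, codebook, res=16):
    # Map points in [-1, 1]^3 to integer grid coordinates.
    coords = np.clip(((points + 1.0) * 0.5 * res).astype(int), 0, res - 1)
    tokens = []
    for cell in np.unique(coords, axis=0):                   # occupied cells only
        mask = np.all(coords == cell, axis=1)
        feat = points[mask].mean(axis=0)                     # toy local feature
        code = np.argmin(np.linalg.norm(codebook - feat, axis=1))
        loc = cell[0] * res * res + cell[1] * res + cell[2]  # flatten location
        tokens.append((int(loc), int(code)))
    return sorted(tokens)                                    # canonical order

rng = np.random.default_rng(0)
cloud = rng.uniform(-1, 1, size=(512, 3))   # stand-in partial point cloud
codes = rng.normal(size=(64, 3))            # toy codebook of 64 entries
seq = sparse_quantize(cloud, codes)
print(len(seq), seq[:3])                    # short discrete sequence for a transformer
```

In the paper's pipeline, a sequence like this is what the transformer models autoregressively; here the feature and codebook are placeholders.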
Related papers
- Explorable Mesh Deformation Subspaces from Unstructured Generative
Models [53.23510438769862]
Deep generative models of 3D shapes often feature continuous latent spaces that can be used to explore potential variations.
We present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model.
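As a loose sketch of the exploration idea (the actual method constructs a learned mapping from landmark shapes; the bilinear scheme below is only illustrative), landmark latent codes can be pinned to the corners of a 2D square and intermediate positions interpolated:

```python
import numpy as np

# A minimal sketch of mapping a navigable 2D exploration space into a
# generator's latent space: landmark latents sit at the corners of the unit
# square and a (u, v) position is bilinearly interpolated between them.
# Hypothetical stand-in for the learned mapping in the paper.

def explore(u, v, corner_codes):
    z00, z10, z01, z11 = corner_codes  # landmark latents at the four corners
    return ((1 - u) * (1 - v) * z00 + u * (1 - v) * z10 +
            (1 - u) * v * z01 + u * v * z11)

rng = np.random.default_rng(1)
landmarks = [rng.normal(size=128) for _ in range(4)]  # stand-in latent codes
z = explore(0.3, 0.7, landmarks)                      # a point in the subspace
print(z.shape)  # z would be decoded by the pre-trained generative model
```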
arXiv Detail & Related papers (2023-10-11T18:53:57Z)
- DiffComplete: Diffusion-based Generative 3D Shape Completion [114.43353365917015]
We introduce a new diffusion-based approach for shape completion on 3D range scans.
We strike a balance between realism, multi-modality, and high fidelity.
DiffComplete sets a new SOTA performance on two large-scale 3D shape completion benchmarks.
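The following is a schematic of conditional reverse diffusion for completion, using generic DDPM machinery rather than DiffComplete's actual network or schedule; `eps_model` and all shapes are placeholders.

```python
import torch

# Schematic reverse-diffusion loop for conditional shape completion: a noisy
# volume is iteratively denoised while conditioned on the partial scan.
# Generic DDPM update; not DiffComplete's exact formulation.

T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
abar = torch.cumprod(alphas, dim=0)

def eps_model(x, cond, t):            # stand-in denoiser (would be a 3D UNet)
    return torch.zeros_like(x)

def complete(partial_vol, shape=(1, 1, 32, 32, 32)):
    x = torch.randn(shape)            # start from pure noise
    for t in reversed(range(T)):
        eps = eps_model(x, partial_vol, t)
        mean = (x - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise
    return x                          # completed occupancy/SDF volume

out = complete(torch.zeros(1, 1, 32, 32, 32))
print(out.shape)
```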
arXiv Detail & Related papers (2023-06-28T16:07:36Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and
Manipulation [54.09274684734721]
We present a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
We further jointly train an encoder network to learn a latent space for inverting shapes, enabling a rich variety of whole-shape and region-aware shape manipulations.
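A minimal sketch of the coarse/detail wavelet representation, assuming the PyWavelets library and using a toy sphere TSDF on a small grid (the grid size, truncation, and wavelet choice are illustrative only):

```python
import numpy as np
import pywt  # PyWavelets; assumed dependency for this sketch

# A toy truncated signed distance volume is decomposed into one coarse
# coefficient volume plus detail coefficient volumes with a biorthogonal
# wavelet, and reconstructed losslessly by the inverse transform.

res = 32
grid = np.mgrid[-1:1:res*1j, -1:1:res*1j, -1:1:res*1j]
sdf = np.linalg.norm(grid, axis=0) - 0.5        # sphere of radius 0.5
tsdf = np.clip(sdf, -0.1, 0.1)                  # truncate the distance field

coeffs = pywt.dwtn(tsdf, 'bior2.2')             # single-level 3D decomposition
coarse = coeffs['aaa']                          # low-frequency "coarse" volume
details = {k: v for k, v in coeffs.items() if k != 'aaa'}
print(coarse.shape, len(details))               # coarse volume + 7 detail volumes

recon = pywt.idwtn(coeffs, 'bior2.2')           # invert to recover the TSDF
print(recon.shape, np.abs(recon - tsdf).max())  # reconstruction error ~ 0
```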
arXiv Detail & Related papers (2023-02-01T02:47:53Z)
- Probabilistic Implicit Scene Completion [6.954686339092988]
We propose a probabilistic shape completion method extended to the continuous geometry of large-scale 3D scenes.
We employ Generative Cellular Automata, which learn the multi-modal distribution, and transform the formulation to process large-scale continuous geometry.
Experiments show that our model successfully generates diverse plausible scenes faithful to the input, especially when the input suffers from a significant amount of missing data.
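A toy illustration of the cellular-automata flavor of this approach: starting from cells seeded by the partial input, each step stochastically activates neighbors of occupied cells, so repeated sampling yields diverse completions. The uniform transition probability below is a placeholder for the learned model.

```python
import numpy as np

# Toy growth step in the spirit of Generative Cellular Automata: neighbors of
# occupied voxels are activated at random. A trained model would predict the
# per-neighbor activation probability instead of using a constant p.

def gca_step(occupied, res, p=0.5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    grown = set(occupied)
    for cell in occupied:
        for d in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nb = tuple(int(v) for v in np.clip(np.add(cell, d), 0, res - 1))
            if rng.random() < p:
                grown.add(nb)
    return grown

state = {(8, 8, 8)}              # cells seeded from the partial scan
for _ in range(5):               # a few growth steps
    state = gca_step(state, res=16)
print(len(state), "occupied cells")
```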
arXiv Detail & Related papers (2022-04-04T06:16:54Z)
- Unsupervised 3D Shape Completion through GAN Inversion [116.27680045885849]
We present ShapeInversion, which introduces Generative Adversarial Network (GAN) inversion to shape completion for the first time.
ShapeInversion uses a GAN pre-trained on complete shapes, searching for a latent code whose generated complete shape best fits the given partial input.
On the ShapeNet benchmark, the proposed ShapeInversion outperforms the SOTA unsupervised method, and is comparable to supervised methods trained on paired data.
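A minimal sketch of the inversion loop, assuming a frozen generator pre-trained on complete shapes; the linear "generator" and one-directional Chamfer loss below are placeholders for illustration:

```python
import torch

# GAN inversion for completion: optimize a latent code z so that the frozen
# generator's output covers the observed partial point cloud. The loss pulls
# each observed point toward its nearest generated point.

gen = torch.nn.Linear(128, 2048 * 3)     # stand-in for a pre-trained generator

def partial_chamfer(partial, full):      # partial -> nearest point in full
    d = torch.cdist(partial, full)       # pairwise distances (256 x 2048)
    return d.min(dim=1).values.mean()

partial = torch.rand(256, 3)             # the observed partial point cloud
z = torch.randn(128, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    full = gen(z).view(2048, 3)          # candidate complete shape
    loss = partial_chamfer(partial, full)
    loss.backward()
    opt.step()
print(float(loss))                       # z now decodes a shape fitting the input
```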
arXiv Detail & Related papers (2021-04-27T17:53:46Z)
- 3D Shape Generation and Completion through Point-Voxel Diffusion [24.824065748889048]
We propose a novel approach for probabilistic generative modeling of 3D shapes.
Point-Voxel Diffusion (PVD) is a unified, probabilistic formulation for unconditional shape generation and conditional, multimodal shape completion.
PVD can be viewed as a series of denoising steps, reversing the diffusion process from observed point cloud data to Gaussian noise, and is trained by optimizing a variational lower bound to the (conditional) likelihood function.
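A schematic of the training objective described above, under the usual simplification of the variational bound: noise a clean point cloud to a random timestep and train a denoiser to predict the injected noise. The linear denoiser is a placeholder; PVD's actual network mixes point and voxel features.

```python
import torch

# Simplified diffusion training step: forward-diffuse a clean point cloud to
# timestep t, then regress the injected Gaussian noise (the standard
# simplified variational-bound loss). Network and shapes are illustrative.

T = 1000
abar = torch.cumprod(1 - torch.linspace(1e-4, 0.02, T), dim=0)
denoiser = torch.nn.Linear(3, 3)                 # stand-in for the PVD network

def diffusion_loss(x0):
    t = torch.randint(0, T, (1,)).item()
    eps = torch.randn_like(x0)
    xt = abar[t].sqrt() * x0 + (1 - abar[t]).sqrt() * eps   # forward diffusion
    return torch.mean((denoiser(xt) - eps) ** 2)            # predict the noise

x0 = torch.rand(2048, 3)                         # a clean training shape
print(float(diffusion_loss(x0)))
```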
arXiv Detail & Related papers (2021-04-08T10:38:03Z)
- Deep Implicit Templates for 3D Shape Representation [70.9789507686618]
We propose a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations.
Our key idea is to formulate deep implicit functions (DIFs) as conditional deformations of a template implicit function.
We show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
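A minimal sketch of the "conditional deformation of a template" idea: a shared template implicit function T is queried at points warped by a per-shape deformation network W, so correspondences follow from the shared template. Both networks below are untrained placeholders.

```python
import torch

# Shape SDF as a deformation of a shared template: shape_sdf(x, z) = T(x + W(x, z)).
# W is a per-shape residual warping field conditioned on latent code z;
# T is a fixed toy template (a unit-sphere SDF) standing in for a learned one.

warp = torch.nn.Linear(3 + 64, 3)         # W(x, z): per-shape warping field

def template_sdf(p):                      # T(p): shared template (sphere, r=0.5)
    return p.norm(dim=-1, keepdim=True) - 0.5

def shape_sdf(x, z):
    z_tiled = z.expand(x.shape[0], -1)
    x_warped = x + warp(torch.cat([x, z_tiled], dim=-1))  # residual deformation
    return template_sdf(x_warped)

x = torch.rand(1024, 3) * 2 - 1           # query points in [-1, 1]^3
z = torch.randn(1, 64)                    # latent code for one shape
print(shape_sdf(x, z).shape)              # per-point signed distances
```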
arXiv Detail & Related papers (2020-11-30T06:01:49Z)