Generative Topology for Shape Synthesis
- URL: http://arxiv.org/abs/2410.18987v1
- Date: Wed, 09 Oct 2024 17:19:22 GMT
- Title: Generative Topology for Shape Synthesis
- Authors: Ernst Röell, Bastian Rieck
- Abstract summary: We develop a novel framework for shape generation tasks on point clouds.
Our model exhibits high quality in reconstruction and generation tasks, affords efficient latent-space interpolation, and is orders of magnitude faster than existing methods.
- Score: 13.608942872770855
- Abstract: The Euler Characteristic Transform (ECT) is a powerful invariant for assessing geometrical and topological characteristics of a large variety of objects, including graphs and embedded simplicial complexes. Although the ECT is invertible in theory, no explicit algorithm for general data sets exists. In this paper, we address this lack and demonstrate that it is possible to learn the inversion, permitting us to develop a novel framework for shape generation tasks on point clouds. Our model exhibits high quality in reconstruction and generation tasks, affords efficient latent-space interpolation, and is orders of magnitude faster than existing methods.
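To make the ECT concrete: for an embedded complex, each direction on the sphere induces a height filtration, and the transform records the Euler characteristic of each sublevel set. The following is a minimal sketch for a graph (vertices and edges) in the plane, where χ = #vertices − #edges; the `ect` helper is hypothetical and illustrates only the classical definition, not the authors' learned inversion framework.

```python
import numpy as np

def ect(points, edges, directions, thresholds):
    """Euler Characteristic Transform of an embedded graph.

    For each direction v and height t, count the vertices and edges
    whose filtration value <x, v> is at most t, and record
    chi = #vertices - #edges.  Returns an (n_dirs, n_thresholds) array.
    """
    heights = points @ directions.T  # (n_vertices, n_dirs)
    # an edge enters the sublevel set once its *higher* endpoint does
    edge_heights = np.maximum(heights[edges[:, 0]], heights[edges[:, 1]])
    out = np.empty((directions.shape[0], len(thresholds)), dtype=int)
    for i in range(directions.shape[0]):
        for j, t in enumerate(thresholds):
            n_v = np.sum(heights[:, i] <= t)
            n_e = np.sum(edge_heights[:, i] <= t)
            out[i, j] = n_v - n_e
    return out

# unit square with its four boundary edges (chi of the full cycle is 0)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0]])
dirs = np.array([[1.0, 0.0], [0.0, 1.0]])
ts = np.linspace(-0.5, 1.5, 5)
print(ect(pts, edges, dirs, ts))
```

Along direction (1, 0), for example, the sublevel set at t = 0 contains two vertices and the edge joining them (χ = 1), and at t = 1 the whole cycle (χ = 0); collecting these curves over many directions gives the invariant the paper learns to invert.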
Related papers
- Differentiable Euler Characteristic Transforms for Shape Classification [13.608942872770855]
The Euler Characteristic Transform (ECT) has proven to be a powerful representation, combining geometrical and topological characteristics of shapes and graphs.
We develop a novel computational layer that enables learning the ECT in an end-to-end fashion.
arXiv Detail & Related papers (2023-10-11T16:23:07Z) - Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning [77.1421343649344]
We propose a generalization of Transformers towards operating entirely on the product of constant curvature spaces.
We also provide a kernelized approach to non-Euclidean attention, which enables our model to run in time and memory cost linear to the number of nodes and edges.
arXiv Detail & Related papers (2023-09-08T02:44:37Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion [66.21290235237808]
We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
arXiv Detail & Related papers (2023-01-23T15:18:54Z) - Atomic structure generation from reconstructing structural fingerprints [1.2128971613239876]
We present an end-to-end structure generation approach using atom-centered symmetry functions as the representation and conditional variational autoencoders as the generative model.
We are able to successfully generate novel and valid atomic structures of sub-nanometer Pt nanoparticles as a proof of concept.
arXiv Detail & Related papers (2022-07-27T00:42:59Z) - ComplexGen: CAD Reconstruction by B-Rep Chain Complex Generation [28.445041795260906]
We view the reconstruction of CAD models in the boundary representation (B-Rep) as the detection of geometric primitives of different orders.
We show that by modeling such comprehensive structures, more complete and regularized reconstructions can be achieved.
arXiv Detail & Related papers (2022-05-29T05:30:33Z) - Autoregressive 3D Shape Generation via Canonical Mapping [92.91282602339398]
Transformers have shown remarkable performance in a variety of generative tasks such as image, audio, and text generation.
In this paper, we aim to further exploit the power of transformers and employ them for the task of 3D point cloud generation.
Our model can be easily extended to multi-modal shape completion as an application for conditional shape generation.
arXiv Detail & Related papers (2022-04-05T03:12:29Z) - Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high-dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
arXiv Detail & Related papers (2021-10-28T14:59:41Z) - Disentangling Geometric Deformation Spaces in Generative Latent Shape Models [5.582957809895198]
A complete representation of 3D objects requires characterizing the space of deformations in an interpretable manner.
We improve on a prior generative model of disentanglement for 3D shapes, wherein the space of object geometry is factorized into rigid orientation, non-rigid pose, and intrinsic shape.
The resulting model can be trained from raw 3D shapes, without correspondences, labels, or even rigid alignment.
arXiv Detail & Related papers (2021-02-27T06:54:31Z) - Dense Non-Rigid Structure from Motion: A Manifold Viewpoint [162.88686222340962]
The Non-Rigid Structure-from-Motion (NRSfM) problem aims to recover the 3D geometry of a deforming object from its 2D feature correspondences across multiple frames.
We show that our approach significantly improves accuracy, scalability, and robustness against noise.
arXiv Detail & Related papers (2020-06-15T09:15:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.