SolidGen: An Autoregressive Model for Direct B-rep Synthesis
- URL: http://arxiv.org/abs/2203.13944v1
- Date: Sat, 26 Mar 2022 00:00:45 GMT
- Title: SolidGen: An Autoregressive Model for Direct B-rep Synthesis
- Authors: Pradeep Kumar Jayaraman, Joseph G. Lambourne, Nishkrit Desai, Karl
D.D. Willis, Aditya Sanghi, Nigel J.W. Morris
- Abstract summary: The Boundary representation (B-rep) format is the de-facto shape representation in computer-aided design (CAD).
Recent approaches to generating CAD models have focused on learning sketch-and-extrude modeling sequences.
We present a new approach that enables learning from and synthesizing B-reps without supervision through CAD modeling sequence data.
- Score: 15.599363091502365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Boundary representation (B-rep) format is the de-facto shape
representation in computer-aided design (CAD) to model watertight solid
objects. Recent approaches to generating CAD models have focused on learning
sketch-and-extrude modeling sequences that are executed by a solid modeling
kernel in postprocess to recover a B-rep. In this paper we present a new
approach that enables learning from and synthesizing B-reps without the need
for supervision through CAD modeling sequence data. Our method, SolidGen, is an
autoregressive neural network that models the B-rep directly by predicting the
vertices, edges and faces using Transformer-based and pointer neural networks.
Key to achieving this is our Indexed Boundary Representation that references
B-rep vertices, edges and faces in a well-defined hierarchy to capture the
geometric and topological relations suitable for use with machine learning.
SolidGen can be easily conditioned on contexts, e.g., class labels, thanks to its
probabilistic modeling of the B-rep distribution. We demonstrate qualitatively,
quantitatively and through perceptual evaluation by human subjects that
SolidGen can produce high quality, realistic looking CAD models.
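The abstract's Indexed Boundary Representation references vertices, edges and faces in a hierarchy: edges are defined by indices into the vertex list, and faces by indices into the edge list. A minimal sketch of such a structure is below; the class and field names are illustrative assumptions, not SolidGen's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class IndexedBRep:
    """Hedged sketch of an indexed B-rep: geometry lives only in the
    vertex list; edges and faces are pure index references, mirroring
    the vertex -> edge -> face hierarchy described in the abstract."""
    vertices: List[Tuple[float, float, float]] = field(default_factory=list)
    edges: List[Tuple[int, int]] = field(default_factory=list)   # pairs of vertex indices
    faces: List[List[int]] = field(default_factory=list)         # lists of edge indices

    def add_vertex(self, x: float, y: float, z: float) -> int:
        self.vertices.append((x, y, z))
        return len(self.vertices) - 1

    def add_edge(self, v0: int, v1: int) -> int:
        self.edges.append((v0, v1))
        return len(self.edges) - 1

    def add_face(self, edge_indices: List[int]) -> int:
        self.faces.append(list(edge_indices))
        return len(self.faces) - 1

# Build one quadrilateral face (e.g., the bottom of a unit cube).
brep = IndexedBRep()
vs = [brep.add_vertex(*p) for p in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]]
es = [brep.add_edge(vs[i], vs[(i + 1) % 4]) for i in range(4)]
brep.add_face(es)
```

Because every entity above the vertex level is a list of integer indices, the whole shape flattens into token sequences that an autoregressive model with pointer mechanisms can predict one element at a time.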
Related papers
- Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective [52.778766190479374]
Latent-based image generative models have achieved notable success in image generation tasks.
Despite sharing the same latent space, autoregressive models significantly lag behind LDMs and MIMs in image generation.
We propose a simple but effective discrete image tokenizer to stabilize the latent space for image generative modeling.
arXiv Detail & Related papers (2024-10-16T12:13:17Z)
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- Split-and-Fit: Learning B-Reps via Structure-Aware Voronoi Partitioning [50.684254969269546]
We introduce a novel method for acquiring boundary representations (B-Reps) of 3D CAD models.
We apply a spatial partitioning to derive a single primitive within each partition.
We show that our network, coined NVD-Net for neural Voronoi diagrams, can effectively learn Voronoi partitions for CAD models from training data.
arXiv Detail & Related papers (2024-06-07T21:07:49Z)
- BrepGen: A B-rep Generative Diffusion Model with Structured Latent Geometry [24.779824909395245]
BrepGen is a diffusion-based generative approach that directly outputs a Boundary representation (B-rep) Computer-Aided Design (CAD) model.
BrepGen represents a B-rep model as a novel structured latent geometry in a hierarchical tree.
arXiv Detail & Related papers (2024-01-28T04:07:59Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
- SCGG: A Deep Structure-Conditioned Graph Generative Model [9.046174529859524]
A conditional deep graph generation method called SCGG considers a particular type of structural conditions.
The architecture of SCGG consists of a graph representation learning network and an autoregressive generative model, which is trained end-to-end.
Experimental results on both synthetic and real-world datasets demonstrate the superiority of our method compared with state-of-the-art baselines.
arXiv Detail & Related papers (2022-09-20T12:33:50Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- BRepNet: A topological message passing system for solid models [6.214548392474976]
Boundary representation (B-rep) models are the standard way 3D shapes are described in Computer-Aided Design (CAD) applications.
We introduce BRepNet, a neural network architecture designed to operate directly on B-rep data structures.
arXiv Detail & Related papers (2021-04-01T18:16:03Z)
- UV-Net: Learning from Boundary Representations [17.47054752280569]
We introduce UV-Net, a novel neural network architecture and representation designed to operate directly on Boundary representation (B-rep) data from 3D CAD models.
B-rep data presents some unique challenges when used with modern machine learning due to the complexity of the data structure and its support for both continuous non-Euclidean geometric entities and discrete topological entities.
arXiv Detail & Related papers (2020-06-18T00:12:52Z)
- CoSE: Compositional Stroke Embeddings [52.529172734044664]
We present a generative model for complex free-form structures such as stroke-based drawing tasks.
Our approach is suitable for interactive use cases such as auto-completing diagrams.
arXiv Detail & Related papers (2020-06-17T15:22:54Z)
- PolyGen: An Autoregressive Generative Model of 3D Meshes [22.860421649320287]
We present an approach which models the mesh directly using a Transformer-based architecture.
Our model can condition on a range of inputs, including object classes, voxels, and images.
We show that the model is capable of producing high-quality, usable meshes, and establish log-likelihood benchmarks for the mesh-modelling task.
arXiv Detail & Related papers (2020-02-23T17:16:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.