DeepCAD: A Deep Generative Network for Computer-Aided Design Models
- URL: http://arxiv.org/abs/2105.09492v1
- Date: Thu, 20 May 2021 03:29:18 GMT
- Title: DeepCAD: A Deep Generative Network for Computer-Aided Design Models
- Authors: Rundi Wu, Chang Xiao, Changxi Zheng
- Abstract summary: We present the first 3D generative model for a drastically different shape representation -- describing a shape as a sequence of computer-aided design (CAD) operations.
Drawing an analogy between CAD operations and natural language, we propose a CAD generative network based on the Transformer.
- Score: 37.655225142981564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep generative models of 3D shapes have received a great deal of research
interest. Yet, almost all of them generate discrete shape representations, such
as voxels, point clouds, and polygon meshes. We present the first 3D generative
model for a drastically different shape representation -- describing a shape as
a sequence of computer-aided design (CAD) operations. Unlike meshes and point
clouds, CAD models encode the user creation process of 3D shapes, widely used
in numerous industrial and engineering design tasks. However, the sequential
and irregular structure of CAD operations poses significant challenges for
existing 3D generative models. Drawing an analogy between CAD operations and
natural language, we propose a CAD generative network based on the Transformer.
We demonstrate the performance of our model for both shape autoencoding and
random shape generation. To train our network, we create a new CAD dataset
consisting of 179,133 models and their CAD construction sequences. We have made
this dataset publicly available to promote future research on this topic.
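The abstract's key idea, treating a CAD construction sequence as a sentence of discrete commands for a Transformer, can be sketched in a few lines. This is a hypothetical illustration rather than the paper's actual encoding: the command vocabulary, the fixed parameter-slot count, and the padding convention below are all assumptions.

```python
# Hypothetical sketch: a CAD model as a flat sequence of discrete command
# tokens, in the spirit of treating CAD operations like a "language".
COMMANDS = ["<SOS>", "Line", "Arc", "Circle", "Extrude", "<EOS>"]
CMD_TO_ID = {c: i for i, c in enumerate(COMMANDS)}
N_PARAMS = 5  # assumed fixed number of parameter slots per command

def encode_sequence(ops):
    """Flatten (command, params) pairs into (command_id, padded_params).

    Unused parameter slots are padded with -1 (an assumed convention),
    so every token has the same shape and can feed a sequence model.
    """
    tokens = []
    for cmd, params in ops:
        padded = list(params) + [-1] * (N_PARAMS - len(params))
        tokens.append((CMD_TO_ID[cmd], padded))
    return tokens

# A toy model: sketch a line and an arc, then extrude the profile.
ops = [
    ("<SOS>", []),
    ("Line", [32, 40]),        # quantized end-point coordinates
    ("Arc", [48, 40, 16, 1]),  # end point, sweep angle, direction flag
    ("Extrude", [10]),         # quantized extrusion depth
    ("<EOS>", []),
]
seq = encode_sequence(ops)
```

The resulting fixed-shape token sequence is what a Transformer decoder could be trained on autoregressively, predicting the next command token given the previous ones.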
Related papers
- OpenECAD: An Efficient Visual Language Model for Computer-Aided Design [1.481550828146527]
We fine-tuned pre-trained models to create OpenECAD, leveraging the visual, logical, coding, and general capabilities of visual language models.
OpenECAD can process images of 3D designs as input and generate highly structured 2D sketches and 3D construction commands.
arXiv Detail & Related papers (2024-06-14T10:47:52Z)
- PS-CAD: Local Geometry Guidance via Prompting and Selection for CAD Reconstruction [86.726941702182]
We introduce geometric guidance into the reconstruction network PS-CAD.
First, we provide, as a point cloud, the geometry of surfaces where the current reconstruction differs from the complete model.
Second, we use geometric analysis to extract a set of planar prompts that correspond to candidate surfaces.
arXiv Detail & Related papers (2024-05-24T03:43:55Z)
- Geometric Deep Learning for Computer-Aided Design: A Survey [85.79012726689511]
This survey offers a comprehensive overview of learning-based methods in computer-aided design.
It includes similarity analysis and retrieval, 2D and 3D CAD model synthesis, and CAD generation from point clouds.
It provides a complete list of benchmark datasets and their characteristics, along with open-source codes that have propelled research in this domain.
arXiv Detail & Related papers (2024-02-27T17:11:35Z)
- CAD-SIGNet: CAD Language Inference from Point Clouds using Layer-wise Sketch Instance Guided Attention [13.227571488321358]
We propose an end-to-end trainable and auto-regressive architecture to recover the design history of a CAD model.
Our model learns visual-language representations by layer-wise cross-attention between point cloud and CAD language embedding.
Thanks to its auto-regressive nature, CAD-SIGNet not only reconstructs a unique full design history of the corresponding CAD model given an input point cloud but also provides multiple plausible design choices.
arXiv Detail & Related papers (2024-02-27T16:53:16Z)
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to 3D domains, and seek a stronger ability of 3D shape generation by improving auto-regressive models at capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z)
- Hierarchical Neural Coding for Controllable CAD Model Generation [34.14256897199849]
This paper presents a novel generative model for Computer-Aided Design (CAD).
It represents high-level design concepts of a CAD model as a three-level hierarchical tree of neural codes.
It controls the generation or completion of CAD models by specifying the target design using a code tree.
arXiv Detail & Related papers (2023-06-30T21:49:41Z)
- Pushing the Limits of 3D Shape Generation at Scale [65.24420181727615]
We present a significant breakthrough in 3D shape generation by scaling it to unprecedented dimensions.
We have developed Argus-3D, a model with 3.6 billion trainable parameters, making it the largest 3D shape generation model to date.
arXiv Detail & Related papers (2023-06-20T13:01:19Z)
- SECAD-Net: Self-Supervised CAD Reconstruction by Learning Sketch-Extrude Operations [21.000539206470897]
SECAD-Net is an end-to-end neural network aimed at reconstructing compact and easy-to-edit CAD models.
We show superiority over state-of-the-art alternatives including the closely related method for supervised CAD reconstruction.
arXiv Detail & Related papers (2023-03-19T09:26:03Z)
- PvDeConv: Point-Voxel Deconvolution for Autoencoding CAD Construction in 3D [23.87757211847093]
We learn to synthesize high-resolution point clouds of 10k points that densely describe the underlying geometry of Computer Aided Design (CAD) models.
We introduce a new dedicated dataset, the CC3D, containing 50k+ pairs of CAD models and their corresponding 3D meshes.
This dataset is used to learn a convolutional autoencoder for point clouds sampled from pairs of 3D scans and CAD models.
arXiv Detail & Related papers (2021-01-12T14:14:13Z)
- Mask2CAD: 3D Shape Prediction by Learning to Segment and Retrieve [54.054575408582565]
We propose to leverage existing large-scale datasets of 3D models to understand the underlying 3D structure of objects seen in an image.
We present Mask2CAD, which jointly detects objects in real-world images and, for each detected object, optimizes for the most similar CAD model and its pose.
This produces a clean, lightweight representation of the objects in an image.
arXiv Detail & Related papers (2020-07-26T00:08:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.