GenCAD: Image-Conditioned Computer-Aided Design Generation with Transformer-Based Contrastive Representation and Diffusion Priors
- URL: http://arxiv.org/abs/2409.16294v2
- Date: Tue, 08 Apr 2025 18:30:54 GMT
- Title: GenCAD: Image-Conditioned Computer-Aided Design Generation with Transformer-Based Contrastive Representation and Diffusion Priors
- Authors: Md Ferdous Alam, Faez Ahmed
- Abstract summary: The creation of manufacturable and editable 3D shapes through Computer-Aided Design (CAD) remains a highly manual and time-consuming task. This paper introduces GenCAD, a generative model that employs autoregressive transformers with a contrastive learning framework and latent diffusion models to transform image inputs into parametric CAD command sequences.
- Score: 3.796768352477804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The creation of manufacturable and editable 3D shapes through Computer-Aided Design (CAD) remains a highly manual and time-consuming task, hampered by the complex topology of boundary representations of 3D solids and unintuitive design tools. While most work in the 3D shape generation literature focuses on representations like meshes, voxels, or point clouds, practical engineering applications demand the modifiability and manufacturability of CAD models and the ability for multi-modal conditional CAD model generation. This paper introduces GenCAD, a generative model that employs autoregressive transformers with a contrastive learning framework and latent diffusion models to transform image inputs into parametric CAD command sequences, resulting in editable 3D shape representations. Extensive evaluations demonstrate that GenCAD significantly outperforms existing state-of-the-art methods in terms of the unconditional and conditional generations of CAD models. Additionally, the contrastive learning framework of GenCAD facilitates the retrieval of CAD models using image queries from large CAD databases, which is a critical challenge within the CAD community. Our results provide a significant step forward in highlighting the potential of generative models to expedite the entire design-to-production pipeline and seamlessly integrate different design modalities.
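The abstract's central claim is that generating CAD models as parametric command sequences, rather than meshes or point clouds, keeps the output editable. A minimal sketch of what "editable command sequence" means in practice is below; the token vocabulary is illustrative (loosely following DeepCAD-style sketch-and-extrude commands) and is not GenCAD's actual schema:

```python
from dataclasses import dataclass, replace
from typing import List, Union

# Illustrative sketch-and-extrude command vocabulary; the real GenCAD
# vocabulary and parameterization differ.
@dataclass(frozen=True)
class Line:      # straight sketch segment ending at (x, y)
    x: float
    y: float

@dataclass(frozen=True)
class Circle:    # sketch circle centered at (x, y) with radius r
    x: float
    y: float
    r: float

@dataclass(frozen=True)
class Extrude:   # extrude the current sketch by `depth`
    depth: float

Command = Union[Line, Circle, Extrude]

def edit_extrude_depth(seq: List[Command], new_depth: float) -> List[Command]:
    """Editability in action: change every extrusion depth while
    leaving the rest of the parametric history untouched."""
    return [replace(c, depth=new_depth) if isinstance(c, Extrude) else c
            for c in seq]

# A tiny "generated" model: a circular boss extruded 5 units.
model = [Circle(0.0, 0.0, 2.0), Extrude(5.0)]
edited = edit_extrude_depth(model, 8.0)
```

Because the shape is stored as its construction history rather than as a tessellated surface, a single parameter edit regenerates a valid solid; this is the property that meshes, voxels, and point clouds lack.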
Related papers
- CADCrafter: Generating Computer-Aided Design Models from Unconstrained Images [69.7768227804928]
CADCrafter is an image-to-parametric CAD model generation framework that trains solely on synthetic textureless CAD data.
We introduce a geometry encoder to accurately capture diverse geometric features.
Our approach can robustly handle real unconstrained CAD images, and even generalize to unseen general objects.
arXiv Detail & Related papers (2025-04-07T06:01:35Z) - Image2CADSeq: Computer-Aided Design Sequence and Knowledge Inference from Product Images [0.7673339435080445]
In scenarios where digital CAD files are not accessible, reverse engineering (RE) has been used to reconstruct 3D CAD models.
Recent advances have seen the rise of data-driven approaches for RE, with a primary focus on converting 3D data, such as point clouds, into 3D models in boundary representation (B-rep) format.
Our research introduces a novel data-driven approach with an Image2CADSeq neural network model.
arXiv Detail & Related papers (2025-01-09T02:36:21Z) - Text2CAD: Text to 3D CAD Generation via Technical Drawings [45.3611544056261]
Text2CAD is a novel framework that employs stable diffusion models tailored to automate the generation process.
We show that Text2CAD effectively generates technical drawings that are accurately translated into high-quality 3D CAD models.
arXiv Detail & Related papers (2024-11-09T15:12:06Z) - Img2CAD: Reverse Engineering 3D CAD Models from Images through VLM-Assisted Conditional Factorization [12.12975824816803]
Reverse engineering 3D computer-aided design (CAD) models from images is an important task for many downstream applications.
In this work, we introduce a novel approach that conditionally factorizes the task into two sub-problems.
We propose TrAssembler, which, conditioned on the discrete structure with semantics, predicts the continuous attribute values.
arXiv Detail & Related papers (2024-07-19T06:53:30Z) - OpenECAD: An Efficient Visual Language Model for Editable 3D-CAD Design [1.481550828146527]
We fine-tune pre-trained models to create OpenECAD models (0.55B, 0.89B, 2.4B, and 3.1B parameters).
OpenECAD models can process images of 3D designs as input and generate highly structured 2D sketches and 3D construction commands.
These outputs can be directly used with existing CAD tools' APIs to generate project files.
arXiv Detail & Related papers (2024-06-14T10:47:52Z) - Geometric Deep Learning for Computer-Aided Design: A Survey [85.79012726689511]
This survey offers a comprehensive overview of learning-based methods in computer-aided design.
It includes similarity analysis and retrieval, 2D and 3D CAD model synthesis, and CAD generation from point clouds.
It provides a complete list of benchmark datasets and their characteristics, along with open-source codes that have propelled research in this domain.
arXiv Detail & Related papers (2024-02-27T17:11:35Z) - Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to 3D domains, and seek a stronger ability of 3D shape generation by improving auto-regressive models at capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z) - Pushing the Limits of 3D Shape Generation at Scale [65.24420181727615]
We present a significant breakthrough in 3D shape generation by scaling it to unprecedented dimensions.
We have developed a model with an astounding 3.6 billion trainable parameters, establishing it as the largest 3D shape generation model to date, named Argus-3D.
arXiv Detail & Related papers (2023-06-20T13:01:19Z) - Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z) - AutoCAD: Automatically Generating Counterfactuals for Mitigating Shortcut Learning [70.70393006697383]
We present AutoCAD, a fully automatic and task-agnostic CAD (counterfactually augmented data, not computer-aided design) generation framework.
arXiv Detail & Related papers (2022-11-29T13:39:53Z) - HybridSDF: Combining Free Form Shapes and Geometric Primitives for Effective Shape Manipulation [58.411259332760935]
Deep-learning based 3D surface modeling has opened new shape design avenues.
These advances have not yet been accepted by the CAD community because they cannot be integrated into existing engineering workflows.
We propose a novel approach that effectively combines geometric primitives with free-form surfaces, represented as implicit surfaces, for accurate modeling.
arXiv Detail & Related papers (2021-09-22T14:45:19Z) - DeepCAD: A Deep Generative Network for Computer-Aided Design Models [37.655225142981564]
We present the first 3D generative model for a drastically different shape representation -- describing a shape as a sequence of computer-aided design (CAD) operations.
Drawing an analogy between CAD operations and natural language, we propose a CAD generative network based on the Transformer.
arXiv Detail & Related papers (2021-05-20T03:29:18Z) - CAD-Deform: Deformable Fitting of CAD Models to 3D Scans [30.451330075135076]
We introduce CAD-Deform, a method which obtains more accurate CAD-to-scan fits by non-rigidly deforming retrieved CAD models.
A series of experiments demonstrate that our method achieves significantly tighter scan-to-CAD fits, allowing a more accurate digital replica of the scanned real-world environment.
arXiv Detail & Related papers (2020-07-23T12:30:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.