CAD-Tokenizer: Towards Text-based CAD Prototyping via Modality-Specific Tokenization
- URL: http://arxiv.org/abs/2509.21150v1
- Date: Thu, 25 Sep 2025 13:38:36 GMT
- Title: CAD-Tokenizer: Towards Text-based CAD Prototyping via Modality-Specific Tokenization
- Authors: Ruiyu Wang, Shizhao Sun, Weijian Ma, Jiang Bian
- Abstract summary: CAD-Tokenizer represents CAD data with modality-specific tokens using a sequence-based VQ-VAE with primitive-level pooling and constrained decoding. This design produces compact, primitive-aware representations that align with CAD's structural nature.
- Score: 16.26305802216836
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Computer-Aided Design (CAD) is a foundational component of industrial prototyping, where models are defined not by raw coordinates but by construction sequences such as sketches and extrusions. This sequential structure enables both efficient prototype initialization and subsequent editing. Text-guided CAD prototyping, which unifies Text-to-CAD generation and CAD editing, has the potential to streamline the entire design pipeline. However, prior work has not explored this setting, largely because standard large language model (LLM) tokenizers decompose CAD sequences into natural-language word pieces, failing to capture primitive-level CAD semantics and hindering attention modules from modeling geometric structure. We conjecture that a multimodal tokenization strategy, aligned with CAD's primitive and structural nature, can provide more effective representations. To this end, we propose CAD-Tokenizer, a framework that represents CAD data with modality-specific tokens using a sequence-based VQ-VAE with primitive-level pooling and constrained decoding. This design produces compact, primitive-aware representations that align with CAD's structural nature. Applied to unified text-guided CAD prototyping, CAD-Tokenizer significantly improves instruction following and generation quality, achieving better quantitative and qualitative performance over both general-purpose LLMs and task-specific baselines.
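The abstract describes tokenizing CAD construction sequences by pooling per-step embeddings into one vector per primitive and then vector-quantizing against a learned codebook. The minimal sketch below illustrates that general idea only; the function names, shapes, and the use of mean pooling with a random codebook are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of primitive-level pooling + vector quantization.
# All names, shapes, and the pooling choice are assumptions for clarity,
# not CAD-Tokenizer's real code.
import numpy as np

rng = np.random.default_rng(0)

def pool_primitives(embeddings, primitive_ids):
    """Mean-pool per-step embeddings into one vector per CAD primitive."""
    ids = np.unique(primitive_ids)
    return np.stack([embeddings[primitive_ids == i].mean(axis=0) for i in ids])

def quantize(pooled, codebook):
    """Assign each pooled primitive vector to its nearest codebook entry."""
    # Pairwise Euclidean distances: (num_primitives, codebook_size)
    d = np.linalg.norm(pooled[:, None, :] - codebook[None, :, :], axis=-1)
    return d.argmin(axis=1)  # one discrete token id per primitive

# Toy CAD sequence: 6 low-level steps grouped into 3 primitives
# (e.g. line, arc, extrude), each step embedded in 4-D.
emb = rng.normal(size=(6, 4))
prim_ids = np.array([0, 0, 1, 1, 1, 2])
codebook = rng.normal(size=(16, 4))  # 16 learned discrete tokens (untrained here)

pooled = pool_primitives(emb, prim_ids)
tokens = quantize(pooled, codebook)
print(pooled.shape, tokens.shape)  # (3, 4) (3,)
```

The point of the sketch is the compression the abstract emphasizes: six low-level sequence steps become three primitive-level discrete tokens, so attention operates over primitives rather than word pieces.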
Related papers
- Pointer-CAD: Unifying B-Rep and Command Sequences via Pointer-based Edges & Faces Selection [36.418031479264585]
Large Language Models (LLMs) have inspired LLM-based CAD generation by representing CAD as command sequences. We present Pointer-CAD, a novel LLM-based CAD generation framework that incorporates the geometric information of B-rep models into sequential modeling. Experiments demonstrate that Pointer-CAD effectively supports the generation of complex geometric structures and reduces segmentation error to an extremely low level.
arXiv Detail & Related papers (2026-03-04T17:55:01Z)
- CADKnitter: Compositional CAD Generation from Text and Geometry Guidance [8.644079160190175]
We propose CADKnitter, a compositional CAD generation framework with a geometry-guided diffusion sampling strategy. CADKnitter is able to generate a complementary CAD part that follows both the geometric constraints of the given CAD model and the semantic constraints of the desired design text prompt. We also curate a dataset, called KnitCAD, containing over 310,000 samples of CAD models along with textual prompts and assembly metadata.
arXiv Detail & Related papers (2025-12-12T01:06:38Z)
- HistCAD: Geometrically Constrained Parametric History-based CAD Dataset [7.7008607520955]
HistCAD is a large-scale dataset featuring constraint-aware modeling sequences. HistCAD provides a unified benchmark for advancing editable, constraint-aware, and semantically enriched generative CAD modeling.
arXiv Detail & Related papers (2025-12-08T05:52:14Z)
- From Intent to Execution: Multimodal Chain-of-Thought Reinforcement Learning for Precise CAD Code Generation [47.67703214044401]
We propose CAD-RL, a multimodal Chain-of-Thought guided reinforcement learning framework for CAD modeling code generation. Our method combines cold-start initialization with goal-driven reinforcement learning post-training using three task-specific rewards. Experiments demonstrate that CAD-RL achieves significant improvements in reasoning quality, output precision, and code executability.
arXiv Detail & Related papers (2025-08-13T18:30:49Z)
- CADmium: Fine-Tuning Code Language Models for Text-Driven Sequential CAD Design [10.105055422074734]
We introduce a new large-scale pipeline of more than 170k CAD models annotated with human-like descriptions. Our experiments and ablation studies on both synthetic and human-annotated data demonstrate that CADmium is able to automate CAD design.
arXiv Detail & Related papers (2025-07-13T21:11:53Z)
- CADCrafter: Generating Computer-Aided Design Models from Unconstrained Images [69.7768227804928]
CADCrafter is an image-to-parametric CAD model generation framework that trains solely on synthetic textureless CAD data. We introduce a geometry encoder to accurately capture diverse geometric features. Our approach can robustly handle real unconstrained CAD images, and even generalizes to unseen general objects.
arXiv Detail & Related papers (2025-04-07T06:01:35Z)
- PHT-CAD: Efficient CAD Parametric Primitive Analysis with Progressive Hierarchical Tuning [52.681829043446044]
ParaCAD comprises over 10 million annotated drawings for training and 3,000 real-world industrial drawings with complex topological structures and physical constraints for testing. PHT-CAD is a novel 2D PPA framework that harnesses the modality alignment and reasoning capabilities of Vision-Language Models.
arXiv Detail & Related papers (2025-03-23T17:24:32Z)
- CADSpotting: Robust Panoptic Symbol Spotting on Large-Scale CAD Drawings [56.05238657033198]
We introduce CADSpotting, an effective method for panoptic symbol spotting in large-scale architectural CAD drawings. We also propose a novel Sliding Window Aggregation (SWA) technique that combines weighted voting and Non-Maximum Suppression (NMS). Experiments on FloorPlanCAD and LS-CAD demonstrate that CADSpotting significantly outperforms existing methods.
arXiv Detail & Related papers (2024-12-10T10:22:17Z)
- CAD-MLLM: Unifying Multimodality-Conditioned CAD Generation With MLLM [39.113795259823476]
We introduce CAD-MLLM, the first system capable of generating parametric CAD models conditioned on multimodal input. We use advanced large language models (LLMs) to align the feature space across diverse multimodal data and CAD models' vectorized representations. Our resulting dataset, named Omni-CAD, is the first multimodal CAD dataset that contains a textual description, multi-view images, points, and a command sequence for each CAD model.
arXiv Detail & Related papers (2024-11-07T18:31:08Z)
- GenCAD: Image-Conditioned Computer-Aided Design Generation with Transformer-Based Contrastive Representation and Diffusion Priors [3.796768352477804]
The creation of manufacturable and editable 3D shapes through Computer-Aided Design (CAD) remains a highly manual and time-consuming task. This paper introduces GenCAD, a generative model that employs autoregressive transformers with a contrastive learning framework and latent diffusion models to transform image inputs into parametric CAD command sequences.
arXiv Detail & Related papers (2024-09-08T23:49:11Z)
- PS-CAD: Local Geometry Guidance via Prompting and Selection for CAD Reconstruction [86.726941702182]
We introduce geometric guidance into the reconstruction network PS-CAD. First, we provide the geometry of surfaces where the current reconstruction differs from the complete model as a point cloud. Second, we use geometric analysis to extract a set of planar prompts that correspond to candidate surfaces.
arXiv Detail & Related papers (2024-05-24T03:43:55Z)
- AutoCAD: Automatically Generating Counterfactuals for Mitigating Shortcut Learning [70.70393006697383]
We present AutoCAD, a fully automatic and task-agnostic CAD generation framework.
arXiv Detail & Related papers (2022-11-29T13:39:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.