CAD-Editor: A Locate-then-Infill Framework with Automated Training Data Synthesis for Text-Based CAD Editing
- URL: http://arxiv.org/abs/2502.03997v1
- Date: Thu, 06 Feb 2025 11:57:14 GMT
- Title: CAD-Editor: A Locate-then-Infill Framework with Automated Training Data Synthesis for Text-Based CAD Editing
- Authors: Yu Yuan, Shizhao Sun, Qi Liu, Jiang Bian
- Abstract summary: We introduce \emph{CAD-Editor}, the first framework for text-based CAD editing.
To tackle the composite nature of text-based CAD editing, we propose a locate-then-infill framework.
Experiments show that CAD-Editor achieves superior performance both quantitatively and qualitatively.
- Score: 12.277838798842689
- Abstract: Computer Aided Design (CAD) is indispensable across various industries. \emph{Text-based CAD editing}, which automates the modification of CAD models based on textual instructions, holds great potential but remains underexplored. Existing methods primarily focus on design variation generation or text-based CAD generation, either lacking support for text-based control or neglecting existing CAD models as constraints. We introduce \emph{CAD-Editor}, the first framework for text-based CAD editing. To address the challenge of demanding triplet data with accurate correspondence for training, we propose an automated data synthesis pipeline. This pipeline utilizes design variation models to generate pairs of original and edited CAD models and employs Large Vision-Language Models (LVLMs) to summarize their differences into editing instructions. To tackle the composite nature of text-based CAD editing, we propose a locate-then-infill framework that decomposes the task into two focused sub-tasks: locating regions requiring modification and infilling these regions with appropriate edits. Large Language Models (LLMs) serve as the backbone for both sub-tasks, leveraging their capabilities in natural language understanding and CAD knowledge. Experiments show that CAD-Editor achieves superior performance both quantitatively and qualitatively.
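The locate-then-infill decomposition described in the abstract can be sketched as two chained sub-tasks over a CAD command sequence. The sketch below is an illustrative assumption, not the paper's actual API: the `locate` and `infill` functions stand in for the two LLM-backed sub-tasks, and the mask-token convention is hypothetical.

```python
MASK = "<mask>"

def locate(cad_seq: list[str], instruction: str) -> list[int]:
    # Stand-in for the locating sub-task: an LLM would predict which
    # tokens of the CAD sequence the instruction targets. Here we mock
    # it by flagging tokens that appear verbatim in the instruction.
    return [i for i, tok in enumerate(cad_seq) if tok in instruction]

def infill(masked_seq: list[str], instruction: str, replacement: str) -> list[str]:
    # Stand-in for the infilling sub-task: an LLM would generate edited
    # tokens for each masked region, conditioned on the instruction.
    return [replacement if tok == MASK else tok for tok in masked_seq]

def locate_then_infill(cad_seq: list[str], instruction: str, replacement: str) -> list[str]:
    # Step 1: locate the regions requiring modification.
    regions = locate(cad_seq, instruction)
    # Step 2: mask those regions, then infill them with the edit.
    masked = [MASK if i in regions else tok for i, tok in enumerate(cad_seq)]
    return infill(masked, instruction, replacement)

# Example edit: "change the circle to a square"
edited = locate_then_infill(["line", "circle", "extrude"],
                            "change the circle to a square", "square")
print(edited)  # → ['line', 'square', 'extrude']
```

The point of the decomposition is that each sub-task sees a narrower problem: the locator only has to ground the instruction in the sequence, and the infiller only has to generate tokens for the masked spans rather than regenerate the whole model.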
Related papers
- K-Edit: Language Model Editing with Contextual Knowledge Awareness [71.73747181407323]
Knowledge-based model editing enables precise modifications to the weights of large language models.
We present K-Edit, an effective approach to generating contextually consistent knowledge edits.
arXiv Detail & Related papers (2025-02-15T01:35:13Z)
- Text2CAD: Text to 3D CAD Generation via Technical Drawings [45.3611544056261]
Text2CAD is a novel framework that employs stable diffusion models tailored to automate the generation process.
We show that Text2CAD effectively generates technical drawings that are accurately translated into high-quality 3D CAD models.
arXiv Detail & Related papers (2024-11-09T15:12:06Z)
- CAD-MLLM: Unifying Multimodality-Conditioned CAD Generation With MLLM [39.113795259823476]
We introduce the CAD-MLLM, the first system capable of generating parametric CAD models conditioned on the multimodal input.
We use advanced large language models (LLMs) to align the feature space across diverse multi-modality data and CAD models' vectorized representations.
Our resulting dataset, named Omni-CAD, is the first multimodal CAD dataset that contains textual description, multi-view images, points, and command sequence for each CAD model.
arXiv Detail & Related papers (2024-11-07T18:31:08Z)
- FlexCAD: Unified and Versatile Controllable CAD Generation with Fine-tuned Large Language Models [22.010338370150738]
We propose FlexCAD, a unified model obtained by fine-tuning large language models (LLMs).
We represent a CAD model as a structured text by abstracting each hierarchy as a sequence of text tokens.
During inference, the user intent is converted into a CAD text with a mask token replacing the part the user wants to modify.
arXiv Detail & Related papers (2024-11-05T05:45:26Z)
- Text2CAD: Generating Sequential CAD Models from Beginner-to-Expert Level Text Prompts [12.63158811936688]
We propose Text2CAD, the first AI framework for generating text-to-parametric CAD models.
Our proposed framework shows great potential in AI-aided design applications.
arXiv Detail & Related papers (2024-09-25T17:19:33Z)
- GenCAD: Image-Conditioned Computer-Aided Design Generation with Transformer-Based Contrastive Representation and Diffusion Priors [4.485378844492069]
GenCAD is a generative model that transforms image inputs into parametric CAD command sequences.
It significantly outperforms existing state-of-the-art methods in terms of the precision and modifiability of generated 3D shapes.
arXiv Detail & Related papers (2024-09-08T23:49:11Z)
- PS-CAD: Local Geometry Guidance via Prompting and Selection for CAD Reconstruction [86.726941702182]
We introduce geometric guidance into the reconstruction network PS-CAD.
We provide the geometry of surfaces where the current reconstruction differs from the complete model as a point cloud.
We then use geometric analysis to extract a set of planar prompts that correspond to candidate surfaces.
arXiv Detail & Related papers (2024-05-24T03:43:55Z)
- Geometric Deep Learning for Computer-Aided Design: A Survey [85.79012726689511]
This survey offers a comprehensive overview of learning-based methods in computer-aided design.
It includes similarity analysis and retrieval, 2D and 3D CAD model synthesis, and CAD generation from point clouds.
It provides a complete list of benchmark datasets and their characteristics, along with open-source codes that have propelled research in this domain.
arXiv Detail & Related papers (2024-02-27T17:11:35Z)
- AutoCAD: Automatically Generating Counterfactuals for Mitigating Shortcut Learning [70.70393006697383]
We present AutoCAD, a fully automatic and task-agnostic CAD generation framework.
arXiv Detail & Related papers (2022-11-29T13:39:53Z)
- Patch2CAD: Patchwise Embedding Learning for In-the-Wild Shape Retrieval from a Single Image [58.953160501596805]
We propose a novel approach towards constructing a joint embedding space between 2D images and 3D CAD models in a patch-wise fashion.
Our approach is more robust than the state of the art in real-world scenarios without any exact CAD matches.
arXiv Detail & Related papers (2021-08-20T20:58:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.