Discovering Design Concepts for CAD Sketches
- URL: http://arxiv.org/abs/2210.14451v1
- Date: Wed, 26 Oct 2022 03:53:33 GMT
- Title: Discovering Design Concepts for CAD Sketches
- Authors: Yuezhi Yang, Hao Pan
- Abstract summary: We propose a learning-based approach that discovers the modular concepts by induction over raw sketches.
We demonstrate the design concept learning on a large-scale CAD sketch dataset and show its applications for design intent interpretation and auto-completion.
- Score: 13.140310747416983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sketch design concepts are recurring patterns found in parametric CAD
sketches. Though rarely explicitly formalized by the CAD designers, these
concepts are implicitly used in design for modularity and regularity. In this
paper, we propose a learning-based approach that discovers the modular concepts
by induction over raw sketches. We propose the dual implicit-explicit
representation of concept structures that allows implicit detection and
explicit generation, and the separation of structure generation and parameter
instantiation for parameterized concept generation, to learn modular concepts
by end-to-end training. We demonstrate design concept learning on a
large-scale CAD sketch dataset and show its applications for design intent
interpretation and auto-completion.
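The abstract's separation of structure generation from parameter instantiation can be illustrated with a minimal sketch: a concept is first emitted as a discrete structure (primitive and constraint slots), and only afterwards are its continuous parameters filled in. All names, the concept library, and the two-stage split below are invented for illustration; the paper's actual components are learned networks trained end-to-end, not a hand-written table.

```python
import random

# Hypothetical concept library: concept name -> list of
# (element kind, number of continuous parameters). Invented for
# illustration; not from the paper.
STRUCTURE_LIBRARY = {
    "slot": [("line", 4), ("line", 4), ("parallel", 0)],
    "bolt_circle": [("circle", 3), ("circle", 3), ("concentric", 0)],
}

def generate_structure(concept):
    """Stage 1: emit the discrete structure of a concept (no numbers yet)."""
    return [kind for kind, _ in STRUCTURE_LIBRARY[concept]]

def instantiate_parameters(concept, rng):
    """Stage 2: fill each element's continuous parameter slots."""
    return [[rng.uniform(-1.0, 1.0) for _ in range(n)]
            for _, n in STRUCTURE_LIBRARY[concept]]

rng = random.Random(0)
structure = generate_structure("slot")
params = instantiate_parameters("slot", rng)
print(structure)                  # ['line', 'line', 'parallel']
print([len(p) for p in params])   # [4, 4, 0]
```

Keeping the discrete structure separate from the continuous parameters is what lets the same modular concept be reused across sketches with different dimensions.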
Related papers
- Assessing the Variety of a Concept Space Using an Unbiased Estimate of Rao's Quadratic Index [0.0]
'Variety' is one of the parameters by which one can quantify the breadth of a concept space explored by the designers.
This article elaborates on and critically examines the existing variety metrics from the engineering design literature.
A new distance-based variety metric is proposed, along with a prescriptive framework to support the assessment process.
arXiv Detail & Related papers (2024-08-01T16:25:54Z) - Bridging Design Gaps: A Parametric Data Completion Approach With Graph Guided Diffusion Models [9.900586490845694]
This study introduces a generative imputation model leveraging graph attention networks and tabular diffusion models for completing missing parametric data in engineering designs.
We demonstrate our model significantly outperforms existing classical methods, such as MissForest, hotDeck, PPCA, and TabCSDI in both the accuracy and diversity of imputation options.
The graph model helps accurately capture and impute complex parametric interdependencies from an assembly graph, which is key for design problems.
arXiv Detail & Related papers (2024-06-17T16:03:17Z) - Geometric Deep Learning for Computer-Aided Design: A Survey [85.79012726689511]
This survey offers a comprehensive overview of learning-based methods in computer-aided design.
It includes similarity analysis and retrieval, 2D and 3D CAD model synthesis, and CAD generation from point clouds.
It provides a complete list of benchmark datasets and their characteristics, along with open-source codes that have propelled research in this domain.
arXiv Detail & Related papers (2024-02-27T17:11:35Z) - Simple Mechanisms for Representing, Indexing and Manipulating Concepts [46.715152257557804]
We will argue that learning a concept could be done by looking at its moment statistics matrix to generate a concrete representation or signature of that concept.
When the concepts are 'intersected', signatures of the concepts can be used to find a common theme across a number of related 'intersected' concepts.
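The moment-statistics idea can be sketched concretely: represent a concept by the second-moment matrix of embeddings of its examples, then compare concepts by how much their signatures overlap. The synthetic embeddings and the cosine comparison below are stand-ins chosen for illustration, not the paper's construction.

```python
import numpy as np

def concept_signature(embeddings):
    """Signature = uncentered second-moment matrix E[x x^T] over examples."""
    X = np.asarray(embeddings, dtype=float)
    return X.T @ X / len(X)

def overlap(sig_a, sig_b):
    """Cosine similarity between flattened signatures (illustrative)."""
    a, b = sig_a.ravel(), sig_b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic example embeddings: two related concepts and one unrelated.
rng = np.random.default_rng(0)
cats = rng.normal(loc=[1.0, 0.0, 0.0], scale=0.1, size=(50, 3))
dogs = rng.normal(loc=[0.9, 0.1, 0.0], scale=0.1, size=(50, 3))
cars = rng.normal(loc=[0.0, 0.0, 1.0], scale=0.1, size=(50, 3))

sim_related = overlap(concept_signature(cats), concept_signature(dogs))
sim_unrelated = overlap(concept_signature(cats), concept_signature(cars))
assert sim_related > sim_unrelated  # related concepts share more signature mass
```

A shared component across several such signatures is one way to extract the "common theme" of intersected concepts.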
arXiv Detail & Related papers (2023-10-18T17:54:29Z) - Design Space Exploration and Explanation via Conditional Variational Autoencoders in Meta-model-based Conceptual Design of Pedestrian Bridges [52.77024349608834]
This paper provides a performance-driven design exploration framework to augment the human designer through a Conditional Variational Autoencoder (CVAE).
The CVAE is trained on 18,000 synthetically generated instances of a pedestrian bridge in Switzerland.
arXiv Detail & Related papers (2022-11-29T17:28:31Z) - Concept Gradient: Concept-based Interpretation Without Linear Assumption [77.96338722483226]
Concept Activation Vector (CAV) relies on learning a linear relation between some latent representation of a given model and concepts.
We propose Concept Gradient (CG), extending concept-based interpretation beyond linear concept functions.
We demonstrate that CG outperforms CAV on both toy examples and real-world datasets.
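The CAV idea the abstract builds on can be sketched in a few lines: fit a linear probe on a model's latent activations to separate concept-positive from concept-negative examples; the probe's weight vector is the Concept Activation Vector, and a prediction's concept sensitivity is the directional derivative of the output along it. The data and the stand-in output gradient below are synthetic; Concept Gradient generalizes this beyond linear probes.

```python
import numpy as np

# Synthetic latent activations: examples with and without the concept.
rng = np.random.default_rng(1)
pos = rng.normal(loc=[2.0, 0.0], scale=0.5, size=(100, 2))   # has concept
neg = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(100, 2))  # lacks concept

# Linear probe by least squares on +/-1 labels (logistic regression is
# more typical; least squares keeps the sketch dependency-free).
X = np.vstack([pos, neg])
y = np.concatenate([np.ones(100), -np.ones(100)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
cav = w / np.linalg.norm(w)  # unit-norm concept direction

# Concept sensitivity: directional derivative of the model output along
# the CAV. The output gradient here is a stand-in for illustration.
grad_output = np.array([1.0, 0.5])
sensitivity = float(grad_output @ cav)
assert cav[0] > abs(cav[1])  # the probe recovers the separating axis
```

CG replaces the fixed linear direction with the gradient of a learned, possibly nonlinear concept function, which is why it can interpret concepts that no single latent direction captures.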
arXiv Detail & Related papers (2022-08-31T17:06:46Z) - Concept Activation Vectors for Generating User-Defined 3D Shapes [11.325580593182414]
We explore the interpretability of 3D geometric deep learning models in the context of Computer-Aided Design (CAD).
We use a deep learning architecture to encode high-dimensional 3D shapes into a vectorized latent representation that can be used to describe arbitrary concepts.
arXiv Detail & Related papers (2022-04-29T13:09:18Z) - The Conceptual VAE [7.15767183672057]
We present a new model of concepts, based on the framework of variational autoencoders.
The model is inspired by, and closely related to, the Beta-VAE model of concepts.
We show how the model can be used as a concept classifier, and how it can be adapted to learn from fewer labels per instance.
arXiv Detail & Related papers (2022-03-21T17:27:28Z) - Vitruvion: A Generative Model of Parametric CAD Sketches [22.65229769427499]
We present an approach to generative modeling of parametric CAD sketches.
Our model, trained on real-world designs from the SketchGraphs dataset, autoregressively synthesizes sketches as sequences of primitives.
We condition the model on various contexts, including partial sketches (primers) and images of hand-drawn sketches.
arXiv Detail & Related papers (2021-09-29T01:02:30Z) - Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z) - Interpretable Visual Reasoning via Induced Symbolic Space [75.95241948390472]
We study the problem of concept induction in visual reasoning, i.e., identifying concepts and their hierarchical relationships from question-answer pairs associated with images.
We first design a new framework named object-centric compositional attention model (OCCAM) to perform the visual reasoning task with object-level visual features.
We then come up with a method to induce concepts of objects and relations using clues from the attention patterns between objects' visual features and question words.
arXiv Detail & Related papers (2020-11-23T18:21:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.