PICASSO: A Feed-Forward Framework for Parametric Inference of CAD Sketches via Rendering Self-Supervision
- URL: http://arxiv.org/abs/2407.13394v1
- Date: Thu, 18 Jul 2024 11:02:52 GMT
- Title: PICASSO: A Feed-Forward Framework for Parametric Inference of CAD Sketches via Rendering Self-Supervision
- Authors: Ahmet Serdar Karadeniz, Dimitrios Mallis, Nesryne Mejri, Kseniya Cherenkova, Anis Kacem, Djamila Aouada
- Abstract summary: Given a drawing of a CAD sketch, the proposed framework turns it into parametric primitives that can be imported into CAD software.
PICASSO enables the learning of parametric CAD sketches from either precise or hand-drawn sketch images.
- Score: 12.644368401427135
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose PICASSO, a novel framework for CAD sketch parameterization from hand-drawn or precise sketch images via rendering self-supervision. Given a drawing of a CAD sketch, the proposed framework turns it into parametric primitives that can be imported into CAD software. Compared to existing methods, PICASSO enables the learning of parametric CAD sketches from either precise or hand-drawn sketch images, even in cases where annotations at the parameter level are scarce or unavailable. This is achieved by leveraging the geometric characteristics of sketches as a learning cue to pre-train a CAD parameterization network. Specifically, PICASSO comprises two primary components: (1) a Sketch Parameterization Network (SPN) that predicts a series of parametric primitives from CAD sketch images, and (2) a Sketch Rendering Network (SRN) that renders parametric CAD sketches in a differentiable manner. SRN facilitates the computation of an image-to-image loss, which can be utilized to pre-train SPN, thereby enabling zero- and few-shot learning scenarios for the parameterization of hand-drawn sketches. Extensive evaluation on the widely used SketchGraphs dataset validates the effectiveness of the proposed framework.
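To make the training setup concrete, below is a minimal, self-contained PyTorch-style sketch of the rendering self-supervision loop described in the abstract: a parameterization network (standing in for SPN) predicts primitive parameters from a raster sketch, a differentiable rendering network (standing in for SRN) maps them back to an image, and an image-to-image loss supervises both without parameter-level labels. All module names, layer choices, and the primitive encoding are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of rendering self-supervision: the module interfaces below are
# hypothetical placeholders, not PICASSO's released code.
import torch
import torch.nn as nn


class SketchParameterizationNetwork(nn.Module):
    """Hypothetical SPN stand-in: maps a raster sketch image to a fixed-length
    set of primitive parameters (e.g., line/arc/circle parameter vectors)."""

    def __init__(self, num_primitives=16, params_per_primitive=6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_primitives * params_per_primitive)
        self.num_primitives = num_primitives
        self.params_per_primitive = params_per_primitive

    def forward(self, image):
        feats = self.backbone(image)
        params = self.head(feats)
        return params.view(-1, self.num_primitives, self.params_per_primitive)


class SketchRenderingNetwork(nn.Module):
    """Hypothetical SRN stand-in: a differentiable decoder from primitive
    parameters back to a raster sketch image."""

    def __init__(self, num_primitives=16, params_per_primitive=6, image_size=64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(num_primitives * params_per_primitive, 256), nn.ReLU(),
            nn.Linear(256, image_size * image_size), nn.Sigmoid(),
        )
        self.image_size = image_size

    def forward(self, params):
        flat = params.flatten(start_dim=1)
        return self.decoder(flat).view(-1, 1, self.image_size, self.image_size)


def pretrain_step(spn, srn, optimizer, sketch_images):
    """One self-supervised step: only the input raster sketches are needed,
    no parameter-level annotations."""
    optimizer.zero_grad()
    pred_params = spn(sketch_images)        # image -> parametric primitives
    rendered = srn(pred_params)             # primitives -> raster image (differentiable)
    loss = nn.functional.l1_loss(rendered, sketch_images)  # image-to-image loss
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    spn, srn = SketchParameterizationNetwork(), SketchRenderingNetwork()
    optimizer = torch.optim.Adam(list(spn.parameters()) + list(srn.parameters()), lr=1e-4)
    dummy_batch = torch.rand(8, 1, 64, 64)  # stand-in for precise or hand-drawn sketches
    print(pretrain_step(spn, srn, optimizer, dummy_batch))
```

The point of the sketch is only the gradient path from the input image through the predicted parameters and the differentiable rendering back to the image-to-image loss, which is what allows SPN to be pre-trained without parameter-level annotations before zero- or few-shot adaptation to hand-drawn sketches.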
Related papers
- DAVINCI: A Single-Stage Architecture for Constrained CAD Sketch Inference [12.644368401427135]
DAVINCI is a unified architecture for single-stage Computer-Aided Design (CAD) sketch parameterization and constraint inference.
By jointly learning both outputs, DAVINCI minimizes error accumulation and enhances the performance of constrained CAD sketch inference.
DAVINCI achieves state-of-the-art results on the large-scale SketchGraphs dataset.
arXiv Detail & Related papers (2024-10-30T09:42:47Z)
- PS-CAD: Local Geometry Guidance via Prompting and Selection for CAD Reconstruction [86.726941702182]
We introduce geometric guidance into the reconstruction network PS-CAD.
First, we provide the geometry of surfaces where the current reconstruction differs from the complete model as a point cloud.
Second, we use geometric analysis to extract a set of planar prompts that correspond to candidate surfaces.
arXiv Detail & Related papers (2024-05-24T03:43:55Z)
- Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation [55.73399465968594]
This paper proposes a novel generation paradigm Sketch3D to generate realistic 3D assets with shape aligned with the input sketch and color matching the textual description.
Three strategies are designed to optimize 3D Gaussians, i.e., structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss and sketch similarity optimization with a CLIP-based geometric similarity loss.
arXiv Detail & Related papers (2024-04-02T11:03:24Z)
- DiffFaceSketch: High-Fidelity Face Image Synthesis with Sketch-Guided Latent Diffusion Model [8.1818090854822]
We introduce a Sketch-Guided Latent Diffusion Model (SGLDM), an LDM-based network architecture trained on a paired sketch-face dataset.
SGLDM can synthesize high-quality face images with different expressions, facial accessories, and hairstyles from various sketches with different abstraction levels.
arXiv Detail & Related papers (2023-02-14T08:51:47Z)
- Reconstructing editable prismatic CAD from rounded voxel models [16.03976415868563]
We introduce a novel neural network architecture to solve this challenging task.
Our method reconstructs the input geometry in the voxel space by decomposing the shape.
During inference, we obtain the CAD data by first searching a database of 2D constrained sketches.
arXiv Detail & Related papers (2022-09-02T16:44:10Z)
- Vitruvion: A Generative Model of Parametric CAD Sketches [22.65229769427499]
We present an approach to generative modeling of parametric CAD sketches.
Our model, trained on real-world designs from the SketchGraphs dataset, autoregressively synthesizes sketches as sequences of primitives.
We condition the model on various contexts, including partial sketches (primers) and images of hand-drawn sketches.
arXiv Detail & Related papers (2021-09-29T01:02:30Z)
- Patch2CAD: Patchwise Embedding Learning for In-the-Wild Shape Retrieval from a Single Image [58.953160501596805]
We propose a novel approach towards constructing a joint embedding space between 2D images and 3D CAD models in a patch-wise fashion.
Our approach is more robust than the state of the art in real-world scenarios without any exact CAD matches.
arXiv Detail & Related papers (2021-08-20T20:58:52Z)
- Cloud2Curve: Generation and Vectorization of Parametric Sketches [109.02932608241227]
We present Cloud2Curve, a generative model for scalable high-resolution vector sketches.
We evaluate the generation and vectorization capabilities of our model on the Quick, Draw! and KMNIST datasets.
arXiv Detail & Related papers (2021-03-29T12:09:42Z)
- SketchGraphs: A Large-Scale Dataset for Modeling Relational Geometry in Computer-Aided Design [18.041056084458567]
Parametric computer-aided design (CAD) is the dominant paradigm in mechanical engineering for physical design.
SketchGraphs is a collection of 15 million sketches extracted from real-world CAD models coupled with an open-source data processing pipeline.
arXiv Detail & Related papers (2020-07-16T17:56:25Z)
- Sketch-BERT: Learning Sketch Bidirectional Encoder Representation from Transformers by Self-supervised Learning of Sketch Gestalt [125.17887147597567]
We present a model for learning Sketch Bidirectional Encoder Representation from Transformers (Sketch-BERT).
We generalize BERT to the sketch domain with novel components and pre-training algorithms.
We show that the learned representation of Sketch-BERT can help and improve the performance of the downstream tasks of sketch recognition, sketch retrieval, and sketch gestalt.
arXiv Detail & Related papers (2020-05-19T01:35:44Z)
- SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence [68.63311821718416]
We study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object.
This problem is challenging since the visual features of corresponding points at different views can be very different.
We take a deep learning approach and learn a novel local sketch descriptor from data.
arXiv Detail & Related papers (2020-01-16T11:31:21Z)