SECAD-Net: Self-Supervised CAD Reconstruction by Learning Sketch-Extrude Operations
- URL: http://arxiv.org/abs/2303.10613v1
- Date: Sun, 19 Mar 2023 09:26:03 GMT
- Title: SECAD-Net: Self-Supervised CAD Reconstruction by Learning Sketch-Extrude Operations
- Authors: Pu Li, Jianwei Guo, Xiaopeng Zhang, Dong-Ming Yan
- Abstract summary: SECAD-Net is an end-to-end neural network aimed at reconstructing compact and easy-to-edit CAD models.
We show superiority over state-of-the-art alternatives including the closely related method for supervised CAD reconstruction.
- Score: 21.000539206470897
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reverse engineering CAD models from raw geometry is a classic but strenuous
research problem. Previous learning-based methods rely heavily on labels due to
the supervised design patterns or reconstruct CAD shapes that are not easily
editable. In this work, we introduce SECAD-Net, an end-to-end neural network
aimed at reconstructing compact and easy-to-edit CAD models in a
self-supervised manner. Drawing inspiration from the modeling language that is
most commonly used in modern CAD software, we propose to learn 2D sketches and
3D extrusion parameters from raw shapes, from which a set of extrusion
cylinders can be generated by extruding each sketch from a 2D plane into a 3D
body. By incorporating the Boolean operation (i.e., union), these cylinders can
be combined to closely approximate the target geometry. We advocate the use of
implicit fields for sketch representation, which allows for creating CAD
variations by interpolating latent codes in the sketch latent space. Extensive
experiments on both ABC and Fusion 360 datasets demonstrate the effectiveness
of our method, and show superiority over state-of-the-art alternatives
including the closely related method for supervised CAD reconstruction. We
further apply our approach to CAD editing and single-view CAD reconstruction.
The code is released at https://github.com/BunnySoCrazy/SECAD-Net.
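The sketch-extrude-union pipeline described in the abstract can be illustrated with a minimal signed-distance sketch. Note this is a hand-written toy, not the paper's learned networks: the 2D profiles below are hypothetical stand-ins for SECAD-Net's learned implicit sketch fields, and the standard SDF extrusion formula plays the role of the extrusion operation.

```python
import numpy as np

def extrusion_sdf(sketch_sdf, p, z_min, z_max):
    """Signed distance of the 3D body obtained by extruding a 2D sketch
    along z over [z_min, z_max] (standard SDF extrusion formula)."""
    x, y, z = p
    d_2d = sketch_sdf(x, y)           # signed distance in the sketch plane
    d_z = max(z_min - z, z - z_max)   # signed distance along the extrusion axis
    # Combine in-plane and along-axis distances into a 3D signed distance.
    return min(max(d_2d, d_z), 0.0) + np.hypot(max(d_2d, 0.0), max(d_z, 0.0))

# Stand-in "sketches": analytic circle profiles instead of learned implicit fields.
circle = lambda x, y: np.hypot(x, y) - 1.0
small_circle = lambda x, y: np.hypot(x - 0.5, y) - 0.5

def shape_sdf(p):
    # Each sketch is extruded into a cylinder; the Boolean union of the
    # cylinders is the pointwise min of their signed distances.
    c1 = extrusion_sdf(circle, p, 0.0, 1.0)
    c2 = extrusion_sdf(small_circle, p, 0.0, 2.0)
    return min(c1, c2)
```

A query point is inside the reconstructed shape exactly when `shape_sdf(p) < 0`; replacing the analytic profiles with neural implicit sketches and predicted extrusion parameters gives the general flavor of the method.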
Related papers
- Img2CAD: Conditioned 3D CAD Model Generation from Single Image with Structured Visual Geometry [12.265852643914439]
We present Img2CAD, the first approach, to our knowledge, that uses 2D image inputs to generate editable CAD parameters.
Img2CAD enables seamless integration between AI 3D reconstruction and CAD representation.
arXiv Detail & Related papers (2024-10-04T13:27:52Z)
- 3D Neural Edge Reconstruction [61.10201396044153]
We introduce EMAP, a new method for learning 3D edge representations with a focus on both lines and curves.
Our method implicitly encodes 3D edge distance and direction in Unsigned Distance Functions (UDF) from multi-view edge maps.
On top of this neural representation, we propose an edge extraction algorithm that robustly abstracts 3D edges from the inferred edge points and their directions.
arXiv Detail & Related papers (2024-05-29T17:23:51Z)
- PS-CAD: Local Geometry Guidance via Prompting and Selection for CAD Reconstruction [86.726941702182]
We introduce geometric guidance into the reconstruction network PS-CAD.
We provide the geometry of surfaces where the current reconstruction differs from the complete model as a point cloud.
Second, we use geometric analysis to extract a set of planar prompts that correspond to candidate surfaces.
arXiv Detail & Related papers (2024-05-24T03:43:55Z)
- CAD-SIGNet: CAD Language Inference from Point Clouds using Layer-wise Sketch Instance Guided Attention [13.227571488321358]
We propose an end-to-end trainable and auto-regressive architecture to recover the design history of a CAD model.
Our model learns visual-language representations by layer-wise cross-attention between point cloud and CAD language embedding.
Thanks to its auto-regressive nature, CAD-SIGNet not only reconstructs a unique full design history of the corresponding CAD model given an input point cloud but also provides multiple plausible design choices.
arXiv Detail & Related papers (2024-02-27T16:53:16Z)
- Point2CAD: Reverse Engineering CAD Models from 3D Point Clouds [26.10631058349939]
We propose a hybrid analytic-neural reconstruction scheme that bridges the gap between segmented point clouds and structured CAD models.
We also propose a novel implicit neural representation of freeform surfaces, driving up the performance of our overall CAD reconstruction scheme.
arXiv Detail & Related papers (2023-12-07T08:23:44Z)
- Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion [115.82306502822412]
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing.
A corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing.
We study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures.
arXiv Detail & Related papers (2022-12-14T18:49:50Z)
- ExtrudeNet: Unsupervised Inverse Sketch-and-Extrude for Shape Parsing [46.778258706603005]
This paper studies the problem of learning the shape given in the form of point clouds by inverse sketch-and-extrude.
We present ExtrudeNet, an unsupervised end-to-end network for discovering sketch and extrude from point clouds.
arXiv Detail & Related papers (2022-09-30T17:58:11Z)
- Reconstructing editable prismatic CAD from rounded voxel models [16.03976415868563]
We introduce a novel neural network architecture to solve this challenging task.
Our method reconstructs the input geometry in the voxel space by decomposing the shape.
During inference, we obtain the CAD data by first searching a database of 2D constrained sketches.
arXiv Detail & Related papers (2022-09-02T16:44:10Z)
- Point2Cyl: Reverse Engineering 3D Objects from Point Clouds to Extrusion Cylinders [25.389088434370066]
We propose Point2Cyl, a supervised network transforming a raw 3D point cloud to a set of extrusion cylinders.
Our approach demonstrates the best performance on two recent CAD datasets.
arXiv Detail & Related papers (2021-12-17T05:22:28Z)
- Patch2CAD: Patchwise Embedding Learning for In-the-Wild Shape Retrieval from a Single Image [58.953160501596805]
We propose a novel approach towards constructing a joint embedding space between 2D images and 3D CAD models in a patch-wise fashion.
Our approach is more robust than the state of the art in real-world scenarios without any exact CAD matches.
arXiv Detail & Related papers (2021-08-20T20:58:52Z)
- Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches [65.96417928860039]
We use an encoder/decoder architecture for the sketch to mesh translation.
We will show that this approach is easy to deploy, robust to style changes, and effective.
arXiv Detail & Related papers (2021-04-01T14:10:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.