Point2CAD: Reverse Engineering CAD Models from 3D Point Clouds
- URL: http://arxiv.org/abs/2312.04962v1
- Date: Thu, 7 Dec 2023 08:23:44 GMT
- Title: Point2CAD: Reverse Engineering CAD Models from 3D Point Clouds
- Authors: Yujia Liu, Anton Obukhov, Jan Dirk Wegner, Konrad Schindler
- Abstract summary: We propose a hybrid analytic-neural reconstruction scheme that bridges the gap between segmented point clouds and structured CAD models.
We also propose a novel implicit neural representation of freeform surfaces, driving up the performance of our overall CAD reconstruction scheme.
- Score: 26.10631058349939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computer-Aided Design (CAD) model reconstruction from point clouds is an
important problem at the intersection of computer vision, graphics, and machine
learning; it saves the designer significant time when iterating on in-the-wild
objects. Recent advancements in this direction achieve relatively reliable
semantic segmentation but still struggle to produce an adequate topology of the
CAD model. In this work, we analyze the current state of the art for that
ill-posed task and identify shortcomings of existing methods. We propose a
hybrid analytic-neural reconstruction scheme that bridges the gap between
segmented point clouds and structured CAD models and can be readily combined
with different segmentation backbones. Moreover, to power the surface fitting
stage, we propose a novel implicit neural representation of freeform surfaces,
driving up the performance of our overall CAD reconstruction scheme. We
extensively evaluate our method on the popular ABC benchmark of CAD models and
set a new state-of-the-art for that dataset. Project page:
https://www.obukhov.ai/point2cad
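The abstract describes the method only at a high level, so the block below is a hedged illustration rather than the authors' implementation: it sketches how a neural freeform-surface stage could slot into a hybrid analytic-neural pipeline, assuming analytic primitives (planes, spheres, cylinders, cones) have already been fitted to the segmented clusters and a small parametric MLP is fitted to any remaining freeform cluster with a Chamfer-style objective. The names `NeuralSurface` and `fit_freeform_patch` are illustrative assumptions, not identifiers from the paper or its code.

```python
# Hedged sketch (not the authors' code): fit a small neural parametric
# surface f(u, v) -> (x, y, z) to one segmented point cluster. In a hybrid
# pipeline, analytic primitives would be tried first and this fallback
# would be used only for freeform patches.
import torch
import torch.nn as nn


class NeuralSurface(nn.Module):
    """Tiny MLP mapping a 2D parameter grid to 3D surface points."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, uv: torch.Tensor) -> torch.Tensor:
        return self.net(uv)


def chamfer(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3)."""
    d = torch.cdist(a, b)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()


def fit_freeform_patch(points: torch.Tensor, steps: int = 2000) -> NeuralSurface:
    """Fit the neural surface to one segmented cluster of 3D points."""
    surf = NeuralSurface()
    opt = torch.optim.Adam(surf.parameters(), lr=1e-3)
    for _ in range(steps):
        uv = torch.rand(1024, 2)  # sample the 2D parameter domain
        loss = chamfer(surf(uv), points)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return surf


if __name__ == "__main__":
    # Toy cluster: noisy samples from a saddle-like freeform patch.
    xy = torch.rand(2048, 2) * 2 - 1
    z = (xy[:, 0] ** 2 - xy[:, 1] ** 2).unsqueeze(1)
    cluster = torch.cat([xy, z], dim=1) + 0.01 * torch.randn(2048, 3)
    patch = fit_freeform_patch(cluster, steps=500)
    print("sample surface point:", patch(torch.tensor([[0.5, 0.5]])))
```

In a complete pipeline, such per-patch surfaces would then be intersected and trimmed against their neighbors to recover edges, corners, and the overall topology, which the abstract identifies as the step where existing methods still struggle.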
Related papers
- PS-CAD: Local Geometry Guidance via Prompting and Selection for CAD Reconstruction [86.726941702182]
We introduce geometric guidance into the reconstruction network PS-CAD.
First, we provide, as a point cloud, the geometry of the surfaces where the current reconstruction differs from the complete model.
Second, we use geometric analysis to extract a set of planar prompts that correspond to candidate surfaces.
arXiv Detail & Related papers (2024-05-24T03:43:55Z) - Geometric Deep Learning for Computer-Aided Design: A Survey [85.79012726689511]
This survey offers a comprehensive overview of learning-based methods in computer-aided design.
It includes similarity analysis and retrieval, 2D and 3D CAD model synthesis, and CAD generation from point clouds.
It provides a complete list of benchmark datasets and their characteristics, along with open-source codes that have propelled research in this domain.
arXiv Detail & Related papers (2024-02-27T17:11:35Z) - CAD-SIGNet: CAD Language Inference from Point Clouds using Layer-wise
Sketch Instance Guided Attention [13.227571488321358]
We propose an end-to-end trainable and auto-regressive architecture to recover the design history of a CAD model.
Our model learns visual-language representations through layer-wise cross-attention between point-cloud and CAD-language embeddings (a generic sketch of this attention pattern follows the related-papers list).
Thanks to its auto-regressive nature, CAD-SIGNet not only reconstructs a unique full design history of the corresponding CAD model given an input point cloud but also provides multiple plausible design choices.
arXiv Detail & Related papers (2024-02-27T16:53:16Z) - P2CADNet: An End-to-End Reconstruction Network for Parametric 3D CAD
Model from Point Clouds [10.041481396324517]
This paper proposes an end-to-end network, P2CADNet, to reconstruct a featured CAD model from a point cloud.
We evaluate P2CADNet on a public dataset, and the experimental results show that it achieves excellent reconstruction quality and accuracy.
arXiv Detail & Related papers (2023-10-04T08:00:05Z) - SECAD-Net: Self-Supervised CAD Reconstruction by Learning Sketch-Extrude
Operations [21.000539206470897]
SECAD-Net is an end-to-end neural network aimed at reconstructing compact and easy-to-edit CAD models.
We show superiority over state-of-the-art alternatives, including the closely related method for supervised CAD reconstruction.
arXiv Detail & Related papers (2023-03-19T09:26:03Z) - AutoCAD: Automatically Generating Counterfactuals for Mitigating
Shortcut Learning [70.70393006697383]
We present AutoCAD, a fully automatic and task-agnostic CAD generation framework; here, CAD refers to counterfactually augmented data rather than computer-aided design.
arXiv Detail & Related papers (2022-11-29T13:39:53Z) - Reconstructing editable prismatic CAD from rounded voxel models [16.03976415868563]
We introduce a novel neural network architecture to solve this challenging task.
Our method reconstructs the input geometry in the voxel space by decomposing the shape.
During inference, we obtain the CAD data by first searching a database of 2D constrained sketches.
arXiv Detail & Related papers (2022-09-02T16:44:10Z) - Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles Single-view 3D Mesh Reconstruction with a focus on model generalization to unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z) - Patch2CAD: Patchwise Embedding Learning for In-the-Wild Shape Retrieval
from a Single Image [58.953160501596805]
We propose a novel approach towards constructing a joint embedding space between 2D images and 3D CAD models in a patch-wise fashion.
Our approach is more robust than the state of the art in real-world scenarios without any exact CAD matches.
arXiv Detail & Related papers (2021-08-20T20:58:52Z) - Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
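As referenced in the CAD-SIGNet entry above, the following is a generic, hedged sketch of layer-wise cross-attention between CAD-language token embeddings and point-cloud features. It is not the CAD-SIGNet architecture; every class name, dimension, and hyperparameter here is an assumption chosen for illustration.

```python
# Hedged sketch, not the CAD-SIGNet implementation: a generic decoder layer
# in which CAD-language token embeddings cross-attend to point-cloud
# features, as described in the CAD-SIGNet summary above.
import torch
import torch.nn as nn


class PointCloudCrossAttentionLayer(nn.Module):
    """One auto-regressive decoder layer: CAD tokens attend to point features."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(),
                                 nn.Linear(4 * dim, dim))
        self.n1, self.n2, self.n3 = (nn.LayerNorm(dim) for _ in range(3))

    def forward(self, tokens: torch.Tensor, points: torch.Tensor,
                causal_mask: torch.Tensor) -> torch.Tensor:
        # Causal self-attention over the partially generated CAD sequence.
        t, _ = self.self_attn(tokens, tokens, tokens, attn_mask=causal_mask)
        tokens = self.n1(tokens + t)
        # Cross-attention: CAD-token queries, point-cloud keys and values.
        t, _ = self.cross_attn(tokens, points, points)
        tokens = self.n2(tokens + t)
        return self.n3(tokens + self.ffn(tokens))


if __name__ == "__main__":
    B, T, N, D = 2, 16, 1024, 256
    tokens = torch.randn(B, T, D)   # embedded CAD-language tokens
    points = torch.randn(B, N, D)   # per-point features from a point-cloud encoder
    mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
    layer = PointCloudCrossAttentionLayer(D)
    print(layer(tokens, points, mask).shape)  # torch.Size([2, 16, 256])
```

Stacking several such layers and decoding tokens auto-regressively would yield the kind of point-cloud-conditioned design-history generation that the summary describes.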