Img2CAD: Reverse Engineering 3D CAD Models from Images through VLM-Assisted Conditional Factorization
- URL: http://arxiv.org/abs/2408.01437v1
- Date: Fri, 19 Jul 2024 06:53:30 GMT
- Title: Img2CAD: Reverse Engineering 3D CAD Models from Images through VLM-Assisted Conditional Factorization
- Authors: Yang You, Mikaela Angelina Uy, Jiaqi Han, Rahul Thomas, Haotong Zhang, Suya You, Leonidas Guibas
- Abstract summary: Reverse engineering 3D computer-aided design (CAD) models from images is an important task for many downstream applications.
In this work, we introduce a novel approach that conditionally factorizes the task into two sub-problems.
We propose TrAssembler, which, conditioned on the discrete structure with semantics, predicts the continuous attribute values.
- Score: 12.12975824816803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reverse engineering 3D computer-aided design (CAD) models from images is an important task for many downstream applications including interactive editing, manufacturing, architecture, robotics, etc. The difficulty of the task lies in the vast representational disparity between the CAD output and the image input. CAD models are precise, programmatic constructs that involve sequential operations combining a discrete command structure with continuous attributes, making them challenging to learn and optimize in an end-to-end fashion. Concurrently, input images introduce inherent challenges such as photometric variability and sensor noise, complicating the reverse engineering process. In this work, we introduce a novel approach that conditionally factorizes the task into two sub-problems. First, we leverage large foundation models, particularly GPT-4V, to predict the global discrete base structure with semantic information. Second, we propose TrAssembler, which, conditioned on the discrete structure with semantics, predicts the continuous attribute values. To support the training of our TrAssembler, we further constructed an annotated CAD dataset of common objects from ShapeNet. Putting it all together, our approach and data demonstrate significant first steps towards CAD-ifying images in the wild. Our project page: https://anonymous123342.github.io/
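For a concrete picture of the conditional factorization, here is a minimal Python sketch of the two-stage pipeline: a VLM-driven stage that outputs the discrete base structure with semantic labels, followed by a stage that regresses continuous attributes conditioned on that structure. All names and the part schema (predict_structure, predict_attributes, CADPart) are hypothetical placeholders, not the paper's actual interfaces.

```python
# Minimal sketch of the conditional factorization; hypothetical interfaces.
# The real system uses GPT-4V for stage 1 and the authors' TrAssembler
# network for stage 2.
from dataclasses import dataclass, field

@dataclass
class CADPart:
    name: str                                   # semantic part label, e.g. "leg"
    command: str                                # discrete CAD command, e.g. "extrude"
    params: list = field(default_factory=list)  # continuous attribute values

def predict_structure(image_path: str) -> list[CADPart]:
    """Stage 1 (hypothetical): ask a VLM for the discrete base structure
    with semantic labels; continuous params are left empty."""
    return [CADPart("seat", "extrude"), CADPart("leg", "extrude")]

def predict_attributes(image_path: str, parts: list[CADPart]) -> list[CADPart]:
    """Stage 2 (hypothetical): a network conditioned on the discrete
    structure regresses the continuous attribute values."""
    for part in parts:
        part.params = [0.5, 0.5, 0.1]           # placeholder width/depth/height
    return parts

cad_model = predict_attributes("chair.png", predict_structure("chair.png"))
```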
Related papers
- Large Spatial Model: End-to-end Unposed Images to Semantic 3D [79.94479633598102]
Large Spatial Model (LSM) processes unposed RGB images directly into semantic radiance fields.
LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward operation.
It can generate versatile label maps by interacting with language at novel viewpoints.
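To make "semantic radiance field" concrete, the numpy sketch below volume-renders color and per-class semantic logits along a single ray from a stand-in field function. It illustrates only the standard rendering equation, not LSM's learned feed-forward model.

```python
# Volume rendering of a semantic radiance field: a field
# f(x) -> (density, rgb, semantic logits) is integrated along a ray.
# The field here is a random stand-in, not LSM's learned model.
import numpy as np

rng = np.random.default_rng(0)

def field(x):
    """Stand-in field: x (N,3) -> density (N,), rgb (N,3), logits (N,4)."""
    density = np.exp(-np.linalg.norm(x, axis=1))      # denser near the origin
    rgb = 0.5 + 0.5 * np.tanh(x)                      # arbitrary colors
    sem = rng.standard_normal((len(x), 4))            # 4 semantic classes
    return density, rgb, sem

def render_ray(origin, direction, n=64, t_far=4.0):
    t = np.linspace(0.0, t_far, n)
    pts = origin + t[:, None] * direction             # samples along the ray
    sigma, rgb, sem = field(pts)
    delta = t_far / n
    alpha = 1.0 - np.exp(-sigma * delta)              # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    w = (trans * alpha)[:, None]                      # rendering weights
    return (w * rgb).sum(0), (w * sem).sum(0)         # pixel color, label logits

color, sem_logits = render_ray(np.array([0., 0., -2.]), np.array([0., 0., 1.]))
print(color, sem_logits.argmax())                     # rendered pixel + label
```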
arXiv Detail & Related papers (2024-10-24T17:54:42Z)
- PS-CAD: Local Geometry Guidance via Prompting and Selection for CAD Reconstruction [86.726941702182]
We introduce geometric guidance into the reconstruction network PS-CAD.
We provide the geometry of surfaces where the current reconstruction differs from the complete model as a point cloud.
Second, we use geometric analysis to extract a set of planar prompts that correspond to candidate surfaces.
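As a rough illustration of extracting a planar prompt from a point cloud, the numpy sketch below fits one plane with a basic RANSAC loop; PS-CAD's actual geometric analysis and prompting mechanism are more involved.

```python
# Toy RANSAC plane fit: one way to extract a planar "prompt"
# (plane parameters + inlier points) from a point cloud.
import numpy as np

def fit_plane_ransac(points, n_iters=200, tol=0.01, rng=np.random.default_rng(0)):
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]               # plane: normal . x + d = 0
        inliers = np.abs(points @ normal + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Usage: noisy points on the z=0 plane plus random outliers.
pts = np.concatenate([
    np.c_[np.random.rand(500, 2), 0.005 * np.random.randn(500)],
    np.random.rand(50, 3),
])
plane, inliers = fit_plane_ransac(pts)
print(plane[0], inliers.sum())                # normal ~ (0, 0, +-1), ~500 inliers
```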
arXiv Detail & Related papers (2024-05-24T03:43:55Z)
- Point2CAD: Reverse Engineering CAD Models from 3D Point Clouds [26.10631058349939]
We propose a hybrid analytic-neural reconstruction scheme that bridges the gap between segmented point clouds and structured CAD models.
We also propose a novel implicit neural representation of freeform surfaces, driving up the performance of our overall CAD reconstruction scheme.
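The following numpy sketch shows the general idea of an implicit neural surface: a small MLP maps a 3D point to a signed distance, and the surface is its zero level set. The weights are random and the architecture generic, not Point2CAD's actual freeform-surface representation.

```python
# Generic implicit neural surface: MLP f(x) -> signed distance,
# surface = { x : f(x) = 0 }. Random weights, illustration only.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 64)), rng.standard_normal(64)
W2, b2 = rng.standard_normal((64, 1)), rng.standard_normal(1)

def sdf(x):                        # x: (N, 3) points
    h = np.tanh(x @ W1 + b1)       # one hidden layer
    return (h @ W2 + b2).ravel()   # scalar signed distance per point

# Locate the zero level set on a coarse grid via sign changes along z.
lin = np.linspace(-1, 1, 32)
X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
vals = sdf(np.stack([X, Y, Z], -1).reshape(-1, 3)).reshape(X.shape)
crossings = np.diff(np.sign(vals), axis=2) != 0   # surface lies between cells
print(crossings.sum(), "grid edges cross the surface")
```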
arXiv Detail & Related papers (2023-12-07T08:23:44Z)
- DiffCAD: Weakly-Supervised Probabilistic CAD Model Retrieval and Alignment from an RGB Image [34.47379913018661]
We propose DiffCAD, the first weakly-supervised probabilistic approach to CAD retrieval and alignment from an RGB image.
We formulate this as a conditional generative task, leveraging diffusion to learn implicit probabilistic models capturing the shape, pose, and scale of CAD objects in an image.
Our approach is trained only on synthetic data, leveraging monocular depth and mask estimates to enable robust zero-shot adaptation to various real target domains.
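To unpack the conditional generative formulation, the sketch below implements a toy DDPM-style forward noising and a single reverse mean step on a 9-D pose/scale vector. The denoiser is a stub standing in for a learned, image-conditioned network; none of DiffCAD's actual architecture or conditioning is reproduced.

```python
# Toy DDPM forward noising and one reverse step on a 9-D vector
# (e.g. 3 translation + 3 rotation + 3 scale). Illustration only.
import numpy as np

T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, rng):
    """Forward process: x_t = sqrt(a_bar) x0 + sqrt(1 - a_bar) eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps, eps

def denoiser(x_t, t, cond):
    """Stand-in for a learned eps-prediction network conditioned on
    image-derived features `cond` (hypothetical)."""
    return np.zeros_like(x_t)      # a trained model would predict eps here

rng = np.random.default_rng(0)
x0 = rng.standard_normal(9)        # ground-truth pose/scale vector
t = 50
x_t, eps = q_sample(x0, t, rng)
# One reverse (posterior mean) step using the predicted noise:
eps_hat = denoiser(x_t, t, cond=None)
x_prev = (x_t - betas[t] / np.sqrt(1 - alphas_bar[t]) * eps_hat) / np.sqrt(1 - betas[t])
```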
arXiv Detail & Related papers (2023-11-30T15:10:21Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
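A minimal sketch of the distance-weighting idea, assuming a simple additive distance penalty on dot-product attention logits so that nearby patches attend to each other more strongly; the real DWT block differs in detail.

```python
# Minimal distance-weighted self-attention over image patches:
# attention logits are penalized by spatial distance between patches.
import numpy as np

def distance_weighted_attention(feats, coords, tau=1.0):
    """feats: (N, d) patch features; coords: (N, 2) patch centers."""
    d = feats.shape[1]
    logits = feats @ feats.T / np.sqrt(d)            # dot-product scores
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    logits = logits - dist / tau                     # distance penalty
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                # row-wise softmax
    return w @ feats                                 # attended features

# Usage on a 4x4 grid of patches with random features.
ys, xs = np.mgrid[0:4, 0:4]
coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
feats = np.random.default_rng(0).standard_normal((16, 8))
print(distance_weighted_attention(feats, coords).shape)   # (16, 8)
```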
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Automatic Reverse Engineering: Creating computer-aided design (CAD) models from multi-view images [0.0]
We present a novel network for an automated reverse engineering task.
A proof-of-concept is demonstrated by successfully reconstructing a number of valid CAD models.
It is shown that some of the capabilities of our network can be transferred to this domain.
arXiv Detail & Related papers (2023-09-23T06:42:09Z)
- PPI-NET: End-to-End Parametric Primitive Inference [24.31083483088741]
In engineering applications, line, circle, arc, and point are collectively referred to as primitives.
We propose an efficient and accurate end-to-end method to infer parametric primitives from hand-drawn sketch images.
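As a concrete (hypothetical) encoding of those primitive types, the snippet below defines plain dataclasses for point, line, circle, and arc; the field choices are illustrative, not PPI-NET's actual output parameterization.

```python
# Sketch of the parametric primitives named above as plain dataclasses.
# Field choices are illustrative only.
from dataclasses import dataclass

@dataclass
class Point:
    x: float; y: float

@dataclass
class Line:
    x0: float; y0: float; x1: float; y1: float   # endpoints

@dataclass
class Circle:
    cx: float; cy: float; r: float               # center + radius

@dataclass
class Arc:
    cx: float; cy: float; r: float               # center + radius
    theta0: float; theta1: float                 # start/end angles (radians)

# An end-to-end model would map a hand-drawn sketch image to a set of
# such primitives, e.g. [Line(0, 0, 1, 0), Arc(1, 0.5, 0.5, -1.57, 1.57)]
```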
arXiv Detail & Related papers (2023-08-03T03:50:49Z)
- Multiview Compressive Coding for 3D Reconstruction [77.95706553743626]
We introduce a simple framework that operates on 3D points of single objects or whole scenes.
Our model, Multiview Compressive Coding, learns to compress the input appearance and geometry to predict the 3D structure.
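The sketch below illustrates one plausible reading of query-based 3D prediction: tokens encoding the seen points and appearance are cross-attended by 3D query points, which are decoded to occupancy probabilities. Weights are random stand-ins and the layout is simplified relative to the paper's model.

```python
# Toy query-based 3D decoding: 3D query points cross-attend to tokens
# encoding the observed geometry/appearance; output is occupancy.
import numpy as np

rng = np.random.default_rng(0)
d = 16
tokens = rng.standard_normal((32, d))      # encoded seen geometry + appearance
Wq = rng.standard_normal((3, d))           # lifts a 3D query point to dim d
Wo = rng.standard_normal(d)                # occupancy readout

def decode_occupancy(queries):
    q = queries @ Wq                                  # (M, d) query embeddings
    att = np.exp(q @ tokens.T / np.sqrt(d))
    att /= att.sum(1, keepdims=True)                  # softmax attention
    feat = att @ tokens                               # attended features
    return 1 / (1 + np.exp(-(feat @ Wo)))             # occupancy probability

grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 8)] * 3), -1).reshape(-1, 3)
occ = decode_occupancy(grid)               # (512,) occupancy probabilities
print(occ.shape, float(occ.mean()))
```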
arXiv Detail & Related papers (2023-01-19T18:59:52Z)
- Unifying Flow, Stereo and Depth Estimation [121.54066319299261]
We present a unified formulation and model for three motion and 3D perception tasks.
We formulate all three tasks as a unified dense correspondence matching problem.
Our model naturally enables cross-task transfer since the model architecture and parameters are shared across tasks.
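A numpy sketch of the shared core: build a correlation volume between two feature maps and take the softmax-weighted average of candidate positions as each pixel's correspondence. Restricting the candidate set (a single row for stereo, an epipolar line for depth) specializes the same computation to the other tasks.

```python
# Dense correspondence by feature correlation: each source pixel
# soft-matches against all target pixels; the expected position is
# its correspondence. Flow/stereo/depth differ only in the search space.
import numpy as np

def soft_correspondence(f1, f2, tau=0.1):
    """f1, f2: (H, W, d) feature maps; returns (H, W, 2) matched coords."""
    H, W, d = f1.shape
    corr = f1.reshape(-1, d) @ f2.reshape(-1, d).T / np.sqrt(d)   # (HW, HW)
    w = np.exp((corr - corr.max(axis=1, keepdims=True)) / tau)
    w /= w.sum(axis=1, keepdims=True)                 # softmax over targets
    ys, xs = np.mgrid[0:H, 0:W]
    grid = np.stack([ys.ravel(), xs.ravel()], 1).astype(float)    # (HW, 2)
    return (w @ grid).reshape(H, W, 2)                # expected match per pixel

f = np.random.default_rng(0).standard_normal((8, 8, 16))
match = soft_correspondence(f, f)
flow = match - np.stack(np.mgrid[0:8, 0:8], -1)       # ~0 for identical maps
print(np.abs(flow).max())
```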
arXiv Detail & Related papers (2022-11-10T18:59:54Z)
- Reconstructing editable prismatic CAD from rounded voxel models [16.03976415868563]
We introduce a novel neural network architecture to solve this challenging task.
Our method reconstructs the input geometry in the voxel space by decomposing the shape.
During inference, we obtain the CAD data by first searching a database of 2D constrained sketches.
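A toy version of the database-search step, assuming sketches are compared via embedding vectors; this is a simplification, since the paper's retrieval operates on 2D constrained sketches rather than random embeddings.

```python
# Toy nearest-neighbour retrieval over a database of sketch embeddings.
import numpy as np

rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 64))          # database of sketch embeddings
db /= np.linalg.norm(db, axis=1, keepdims=True)

query = rng.standard_normal(64)               # embedding of the query profile
query /= np.linalg.norm(query)

idx = int(np.argmax(db @ query))              # cosine-similarity nearest match
print("retrieved sketch id:", idx)
```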
arXiv Detail & Related papers (2022-09-02T16:44:10Z)
- Patch2CAD: Patchwise Embedding Learning for In-the-Wild Shape Retrieval from a Single Image [58.953160501596805]
We propose a novel approach towards constructing a joint embedding space between 2D images and 3D CAD models in a patch-wise fashion.
Our approach is more robust than the state of the art in real-world scenarios without any exact CAD matches.
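A toy rendition of patch-wise retrieval: image patches and CAD-render patches are embedded in a shared space, each CAD model is scored by its best-matching patches, and the top-scoring model is returned. All embeddings here are random stand-ins for learned ones.

```python
# Toy patch-wise CAD retrieval via a shared embedding space.
import numpy as np

rng = np.random.default_rng(0)
img_patches = rng.standard_normal((16, 32))           # 16 query patches, dim 32
cad_patches = rng.standard_normal((5, 40, 32))        # 5 CAD models x 40 patches

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

q = normalize(img_patches)
scores = []
for model_patches in normalize(cad_patches):
    sim = q @ model_patches.T                         # (16, 40) cosine similarities
    scores.append(sim.max(axis=1).mean())             # best match per image patch
print("retrieved CAD model:", int(np.argmax(scores)))
```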
arXiv Detail & Related papers (2021-08-20T20:58:52Z)