CAD-Deform: Deformable Fitting of CAD Models to 3D Scans
- URL: http://arxiv.org/abs/2007.11965v1
- Date: Thu, 23 Jul 2020 12:30:20 GMT
- Title: CAD-Deform: Deformable Fitting of CAD Models to 3D Scans
- Authors: Vladislav Ishimtsev, Alexey Bokhovkin, Alexey Artemov, Savva Ignatyev,
Matthias Niessner, Denis Zorin, Evgeny Burnaev
- Abstract summary: We introduce CAD-Deform, a method which obtains more accurate CAD-to-scan fits by non-rigidly deforming retrieved CAD models.
A series of experiments demonstrate that our method achieves significantly tighter scan-to-CAD fits, allowing a more accurate digital replica of the scanned real-world environment.
- Score: 30.451330075135076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shape retrieval and alignment are a promising avenue towards turning 3D scans
into lightweight CAD representations that can be used for content creation such
as mobile or AR/VR gaming scenarios. Unfortunately, CAD model retrieval is
limited by the availability of models in standard 3D shape collections (e.g.,
ShapeNet). In this work, we address this shortcoming by introducing CAD-Deform,
a method which obtains more accurate CAD-to-scan fits by non-rigidly deforming
retrieved CAD models. Our key contribution is a new non-rigid deformation model
incorporating smooth transformations and preservation of sharp features, which
simultaneously achieves very tight fits of CAD models to the 3D scan and
maintains the clean, high-quality surface properties of hand-modeled CAD
objects. A series of thorough experiments demonstrate that our method achieves
significantly tighter scan-to-CAD fits, allowing a more accurate digital
replica of the scanned real-world environment while preserving important
geometric features present in synthetic CAD environments.
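To make the deformation model described in the abstract more concrete, the following is a minimal sketch of one deformable CAD-to-scan fitting step. It is not the paper's actual formulation: the fixed nearest-neighbor correspondences, the quadratic smoothness term, the heavier penalty on user-tagged sharp edges as a stand-in for sharp-feature preservation, and all function names and weights are illustrative assumptions.

```python
# Minimal sketch (not CAD-Deform's exact energy): fit a CAD mesh to a scan by
# optimizing per-vertex displacements D that minimize a data term plus
# smoothness and sharp-feature terms. All weights are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree


def fit_cad_to_scan(cad_vertices, mesh_edges, sharp_edges, scan_points,
                    w_smooth=1.0, w_sharp=10.0):
    """cad_vertices: (N, 3) float array; mesh_edges / sharp_edges: (E, 2)
    integer arrays of vertex index pairs; scan_points: (M, 3) float array.
    Returns the deformed vertices, shape (N, 3)."""
    tree = cKDTree(scan_points)
    # Fixed nearest-neighbor correspondences; a real pipeline would
    # re-estimate them during optimization and prune outliers.
    _, nn_idx = tree.query(cad_vertices)
    targets = scan_points[nn_idx]

    def energy(d_flat):
        d = d_flat.reshape(-1, 3)
        deformed = cad_vertices + d
        # Data term: squared distance of deformed vertices to the scan.
        e_data = np.sum((deformed - targets) ** 2)
        # Smoothness: neighboring vertices should move similarly.
        diff = d[mesh_edges[:, 0]] - d[mesh_edges[:, 1]]
        e_smooth = np.sum(diff ** 2)
        # Crude sharp-feature proxy: edges tagged as sharp are kept nearly
        # rigid by penalizing relative displacement along them more heavily.
        sharp_diff = d[sharp_edges[:, 0]] - d[sharp_edges[:, 1]]
        e_sharp = np.sum(sharp_diff ** 2)
        return e_data + w_smooth * e_smooth + w_sharp * e_sharp

    d0 = np.zeros(cad_vertices.size)
    # Numeric gradients; adequate for a small illustrative mesh.
    res = minimize(energy, d0, method="L-BFGS-B")
    return cad_vertices + res.x.reshape(-1, 3)
```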
Related papers
- Text2CAD: Text to 3D CAD Generation via Technical Drawings [45.3611544056261]
Text2CAD is a novel framework that employs stable diffusion models tailored to automate the generation process.
We show that Text2CAD effectively generates technical drawings that are accurately translated into high-quality 3D CAD models.
arXiv Detail & Related papers (2024-11-09T15:12:06Z)
- Img2CAD: Conditioned 3D CAD Model Generation from Single Image with Structured Visual Geometry [12.265852643914439]
We present Img2CAD, to our knowledge the first approach that uses 2D image inputs to generate editable parameters.
Img2CAD enables seamless integration between AI 3D reconstruction and CAD representation.
arXiv Detail & Related papers (2024-10-04T13:27:52Z)
- OpenECAD: An Efficient Visual Language Model for Editable 3D-CAD Design [1.481550828146527]
We fine-tuned pre-trained models to create OpenECAD models (0.55B, 0.89B, 2.4B and 3.1B).
OpenECAD models can process images of 3D designs as input and generate highly structured 2D sketches and 3D construction commands.
These outputs can be directly used with existing CAD tools' APIs to generate project files.
arXiv Detail & Related papers (2024-06-14T10:47:52Z)
- PS-CAD: Local Geometry Guidance via Prompting and Selection for CAD Reconstruction [86.726941702182]
We introduce geometric guidance into the reconstruction network PS-CAD.
First, we provide, as a point cloud, the geometry of the surfaces where the current reconstruction differs from the complete model.
Second, we use geometric analysis to extract a set of planar prompts that correspond to candidate surfaces.
arXiv Detail & Related papers (2024-05-24T03:43:55Z)
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to the 3D domain and pursue stronger 3D shape generation by simultaneously improving the capacity and scalability of auto-regressive models.
arXiv Detail & Related papers (2024-02-19T15:33:09Z)
- Weakly-Supervised End-to-End CAD Retrieval to Scan Objects [25.41908065938424]
We propose a new weakly-supervised approach to retrieve semantically and structurally similar CAD models to a query 3D scanned scene.
Our approach leverages a fully-differentiable top-$k$ retrieval layer, enabling end-to-end training guided by geometric and perceptual similarity of the top retrieved CAD models to the scan queries; a generic soft top-$k$ sketch appears after this list.
arXiv Detail & Related papers (2022-03-24T06:30:47Z)
- ROCA: Robust CAD Model Retrieval and Alignment from a Single Image [22.03752392397363]
We present ROCA, a novel end-to-end approach that retrieves and aligns 3D CAD models from a shape database to a single input image.
Experiments on challenging, real-world imagery from ScanNet show that ROCA significantly improves on the state of the art, from 9.5% to 17.6% in retrieval-aware CAD alignment accuracy.
arXiv Detail & Related papers (2021-12-03T16:02:32Z)
- HybridSDF: Combining Free Form Shapes and Geometric Primitives for effective Shape Manipulation [58.411259332760935]
Deep-learning based 3D surface modeling has opened new shape design avenues.
These advances have not yet been accepted by the CAD community because they cannot be integrated into existing engineering workflows.
We propose a novel approach to effectively combining geometric primitives and free-form surfaces represented by implicit surfaces for accurate modeling.
arXiv Detail & Related papers (2021-09-22T14:45:19Z)
- Patch2CAD: Patchwise Embedding Learning for In-the-Wild Shape Retrieval from a Single Image [58.953160501596805]
We propose a novel approach towards constructing a joint embedding space between 2D images and 3D CAD models in a patch-wise fashion.
Our approach is more robust than the state of the art in real-world scenarios without any exact CAD matches.
arXiv Detail & Related papers (2021-08-20T20:58:52Z)
- 'CADSketchNet' -- An Annotated Sketch dataset for 3D CAD Model Retrieval with Deep Neural Networks [0.8155575318208631]
The research work presented in this paper aims at developing a dataset suitable for building a retrieval system for 3D CAD models based on deep learning.
The paper also aims at evaluating the performance of various retrieval systems or search engines for 3D CAD models that accept a sketch image as the input query.
arXiv Detail & Related papers (2021-07-13T16:10:16Z)
- Mask2CAD: 3D Shape Prediction by Learning to Segment and Retrieve [54.054575408582565]
We propose to leverage existing large-scale datasets of 3D models to understand the underlying 3D structure of objects seen in an image.
We present Mask2CAD, which jointly detects objects in real-world images and, for each detected object, optimizes for the most similar CAD model and its pose.
This produces a clean, lightweight representation of the objects in an image.
arXiv Detail & Related papers (2020-07-26T00:08:37Z)
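The weakly-supervised retrieval entry above mentions a fully-differentiable top-$k$ retrieval layer. The exact construction is not described in this listing, so the snippet below is only a generic soft-relaxation sketch: database CAD embeddings are weighted by a temperature-controlled softmax over similarity scores, which keeps the weighting (though not the hard index selection) differentiable. All names and parameters are hypothetical.

```python
# Generic soft top-k retrieval sketch (illustrative; not the layer from the
# weakly-supervised CAD retrieval paper listed above).
import numpy as np


def soft_topk_retrieval(query_emb, db_embs, k=5, temperature=0.1):
    """query_emb: (D,), db_embs: (C, D). Returns per-model weights over the
    database and a soft mixture of the top-k CAD embeddings."""
    # Cosine similarity between the scan query and every CAD model.
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    scores = db @ q
    # Softmax restricted to the k best-scoring models; the selection itself is
    # hard, so only the weights are differentiable w.r.t. the scores.
    topk = np.argsort(scores)[-k:]
    logits = scores[topk] / temperature
    logits -= logits.max()                      # numerical stability
    w = np.exp(logits) / np.exp(logits).sum()
    weights = np.zeros_like(scores)
    weights[topk] = w
    return weights, weights @ db_embs


# Example: 100 CAD models with 32-dim embeddings, one scan query.
rng = np.random.default_rng(0)
weights, blended = soft_topk_retrieval(rng.normal(size=32),
                                       rng.normal(size=(100, 32)))
```

Fully differentiable top-$k$ operators also relax the selection step itself (e.g., optimal-transport or perturbed-optimizer formulations); the sketch only illustrates the general idea of training retrieval end-to-end from similarity scores.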
This list is automatically generated from the titles and abstracts of the papers on this site.