HybridSDF: Combining Free Form Shapes and Geometric Primitives for
effective Shape Manipulation
- URL: http://arxiv.org/abs/2109.10767v2
- Date: Fri, 24 Sep 2021 17:56:13 GMT
- Title: HybridSDF: Combining Free Form Shapes and Geometric Primitives for
effective Shape Manipulation
- Authors: Subeesh Vasu, Nicolas Talabot, Artem Lukoianov, Pierre Baque, Jonathan
Donier, Pascal Fua
- Abstract summary: Deep-learning based 3D surface modeling has opened new shape design avenues.
These advances have not yet been accepted by the CAD community because they cannot be integrated into engineering workflows.
We propose a novel approach to effectively combining geometric primitives and free-form surfaces represented by implicit surfaces for accurate modeling.
- Score: 58.411259332760935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: CAD modeling typically involves the use of simple geometric primitives
whereas recent advances in deep-learning based 3D surface modeling have opened
new shape design avenues. Unfortunately, these advances have not yet been
accepted by the CAD community because they cannot be integrated into
engineering workflows. To remedy this, we propose a novel approach to
effectively combining geometric primitives and free-form surfaces represented
by implicit surfaces for accurate modeling that preserves interpretability,
enforces consistency, and enables easy manipulation.
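
The abstract does not spell out the composition mechanism, but implicit surfaces compose naturally through signed-distance booleans, which is the intuition the sentence above relies on. The sketch below is a minimal illustration under that assumption: `freeform_sdf` stands in for a learned implicit surface, the primitives are analytic SDFs, and they are combined with min (union) and max (difference); none of the function names come from the paper.

```python
import numpy as np

def sphere_sdf(p, center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(p - center, axis=-1) - radius

def box_sdf(p, center, half_extents):
    """Signed distance to an axis-aligned box."""
    q = np.abs(p - center) - half_extents
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=-1)
    inside = np.minimum(np.max(q, axis=-1), 0.0)
    return outside + inside

def freeform_sdf(p):
    """Placeholder for a learned implicit surface (e.g. a DeepSDF-style network).
    Here: a sphere perturbed by a smooth bump, standing in for network output."""
    return sphere_sdf(p, np.zeros(3), 0.8) + 0.05 * np.sin(4.0 * p[..., 0])

def hybrid_sdf(p):
    """Compose primitives and the free-form part with standard SDF booleans:
    union = min of distances, subtraction = max(a, -b)."""
    body = box_sdf(p, np.array([0.0, -0.5, 0.0]), np.array([1.0, 0.3, 0.6]))
    shape = np.minimum(body, freeform_sdf(p))          # union of primitive and free-form part
    hole = sphere_sdf(p, np.array([0.5, -0.5, 0.0]), 0.2)
    return np.maximum(shape, -hole)                    # subtract a primitive (e.g. a drilled hole)

# Query the composed field on a few points; the zero level set is the surface.
pts = np.random.uniform(-1.5, 1.5, size=(5, 3))
print(hybrid_sdf(pts))
```

Because the primitives keep their parameters (centers, radii, extents), editing one of them, for example moving the subtracted sphere, manipulates the composed shape directly; that is the kind of interpretability and easy manipulation the abstract refers to.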
Related papers
- CADCrafter: Generating Computer-Aided Design Models from Unconstrained Images [69.7768227804928]
CADCrafter is an image-to-parametric CAD model generation framework that trains solely on synthetic textureless CAD data.
We introduce a geometry encoder to accurately capture diverse geometric features.
Our approach can robustly handle real unconstrained CAD images, and even generalize to unseen general objects.
arXiv Detail & Related papers (2025-04-07T06:01:35Z)
- PRISM: Probabilistic Representation for Integrated Shape Modeling and Generation [79.46526296655776]
PRISM is a novel approach for 3D shape generation that integrates categorical diffusion models with Statistical Shape Models (SSM) and Gaussian Mixture Models (GMM).
Our method employs compositional SSMs to capture part-level geometric variations and uses GMM to represent part semantics in a continuous space.
Our approach significantly outperforms previous methods in both quality and controllability of part-level operations.
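
As a rough illustration of the GMM ingredient only (the compositional SSMs and how PRISM couples the two are not reproduced here), the sketch below fits a Gaussian mixture over hypothetical continuous part embeddings and uses it for soft semantic assignment and sampling; the data and dimensions are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical continuous embeddings of shape parts (e.g. chair legs, seats, backs);
# in PRISM these would come from the learned model, here they are random stand-ins.
rng = np.random.default_rng(0)
part_embeddings = np.concatenate([
    rng.normal(loc=[0, 0], scale=0.3, size=(100, 2)),   # one semantic cluster
    rng.normal(loc=[3, 1], scale=0.3, size=(100, 2)),   # another semantic cluster
])

# Fit a GMM so that each mixture component covers one part-semantic mode.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(part_embeddings)

# Soft assignment of a new part embedding to semantic components ...
probs = gmm.predict_proba([[2.8, 0.9]])
# ... and sampling new part semantics from the continuous space.
new_parts, components = gmm.sample(5)
print(probs, components)
```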
arXiv Detail & Related papers (2025-04-06T11:48:08Z)
- CADDreamer: CAD Object Generation from Single-view Images [43.59340035126575]
Existing 3D generative models often produce overly dense and unstructured meshes.
We introduce CADDreamer, a novel approach for generating boundary representations (B-rep) of CAD objects from a single image.
Results demonstrate that our method effectively recovers high-quality CAD objects from single-view images.
arXiv Detail & Related papers (2025-02-28T05:30:29Z)
- Geometry Distributions [51.4061133324376]
We propose a novel geometric data representation that models geometry as distributions.
Our approach uses diffusion models with a novel network architecture to learn surface point distributions.
We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity.
arXiv Detail & Related papers (2024-11-25T04:06:48Z)
- GenCAD: Image-Conditioned Computer-Aided Design Generation with Transformer-Based Contrastive Representation and Diffusion Priors [3.796768352477804]
The creation of manufacturable and editable 3D shapes through Computer-Aided Design (CAD) remains a highly manual and time-consuming task.
This paper introduces GenCAD, a generative model that employs autoregressive transformers with a contrastive learning framework and latent diffusion models to transform image inputs into parametric CAD command sequences.
arXiv Detail & Related papers (2024-09-08T23:49:11Z)
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
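
The summary does not say which planar layout is used; a common instantiation of 2D plane representations for 3D fields is a tri-plane, where a 3D query point is projected onto three axis-aligned feature planes and the gathered features are aggregated before decoding. The sketch below illustrates only that lookup step under this assumption, using nearest-cell indexing instead of the bilinear interpolation one would use in practice; all names and sizes are illustrative.

```python
import numpy as np

R, C = 64, 16                       # plane resolution and feature channels
planes = {                          # hypothetical learned feature planes
    "xy": np.random.randn(R, R, C),
    "xz": np.random.randn(R, R, C),
    "yz": np.random.randn(R, R, C),
}

def to_index(u):
    """Map a coordinate in [-1, 1] to a plane cell index (nearest cell)."""
    return np.clip(((u + 1.0) * 0.5 * (R - 1)).round().astype(int), 0, R - 1)

def query_triplane(p):
    """Project a 3D point onto the three axis-aligned planes and aggregate features."""
    x, y, z = p
    f_xy = planes["xy"][to_index(x), to_index(y)]
    f_xz = planes["xz"][to_index(x), to_index(z)]
    f_yz = planes["yz"][to_index(y), to_index(z)]
    return f_xy + f_xz + f_yz       # summed feature, fed to a small decoder in practice

feature = query_triplane(np.array([0.1, -0.4, 0.7]))
print(feature.shape)                # (16,)
```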
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- An End-to-End Deep Learning Generative Framework for Refinable Shape Matching and Generation [45.820901263103806]
Generative modelling for shapes is a prerequisite for In-Silico Clinical Trials (ISCTs).
We develop a novel unsupervised geometric deep-learning model to establish refinable shape correspondences in a latent space.
We extend our proposed base model to a joint shape generative-clustering multi-atlas framework to incorporate further variability.
arXiv Detail & Related papers (2024-03-10T21:33:53Z)
- DeFormer: Integrating Transformers with Deformable Models for 3D Shape Abstraction from a Single Image [31.154786931081087]
We propose a novel bi-channel Transformer architecture, integrated with parameterized deformable models, to simultaneously estimate the global and local deformations of primitives.
DeFormer achieves better reconstruction accuracy than the state of the art and produces visualizations with consistent semantic correspondences for improved interpretability.
arXiv Detail & Related papers (2023-09-22T02:46:43Z)
- Geometrically Consistent Partial Shape Matching [50.29468769172704]
Finding correspondences between 3D shapes is a crucial problem in computer vision and graphics.
An often neglected but essential property of matchings is geometric consistency.
We propose a novel integer linear programming partial shape matching formulation.
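
The paper's actual formulation encodes geometric consistency constraints that are not reproduced here; purely to illustrate the integer-linear-programming machinery, the toy sketch below matches a few source vertices to target vertices at minimum descriptor cost using SciPy's MILP solver, with one binary variable per candidate correspondence. All numbers are made up.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy descriptor-distance costs between 3 source vertices and 4 target vertices.
cost = np.array([[0.2, 0.9, 0.4, 0.8],
                 [0.7, 0.1, 0.6, 0.9],
                 [0.5, 0.8, 0.2, 0.3]])
n_src, n_tgt = cost.shape
c = cost.ravel()                     # one binary variable per candidate correspondence

# Each source vertex gets exactly one match (partial: targets may stay unmatched).
A_rows = np.zeros((n_src, n_src * n_tgt))
for i in range(n_src):
    A_rows[i, i * n_tgt:(i + 1) * n_tgt] = 1.0
row_once = LinearConstraint(A_rows, 1, 1)

# Each target vertex is used at most once.
A_cols = np.zeros((n_tgt, n_src * n_tgt))
for j in range(n_tgt):
    A_cols[j, j::n_tgt] = 1.0
col_at_most_once = LinearConstraint(A_cols, 0, 1)

res = milp(c=c,
           constraints=[row_once, col_at_most_once],
           integrality=np.ones_like(c),          # all variables integer (binary via bounds)
           bounds=Bounds(0, 1))
matches = res.x.reshape(n_src, n_tgt).round().astype(int)
print(matches)
```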
arXiv Detail & Related papers (2023-09-10T12:21:42Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
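
"Discrete representation learning based on a latent vector" is, in spirit, a vector-quantization step: continuous latents are snapped to a learned codebook and the resulting token indices are what the auto-regressive model predicts. The sketch below shows only that quantization step; the codebook, encoder, and sizes are placeholders, not ImAM's actual components.

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Replace each continuous latent with its nearest codebook entry and
    return the discrete token indices an autoregressive model can consume."""
    # Pairwise squared distances between latents (N, D) and codebook (K, D).
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    tokens = d.argmin(axis=1)          # discrete indices, shape (N,)
    quantized = codebook[tokens]       # quantized latents, shape (N, D)
    return tokens, quantized

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))  # hypothetical learned codebook (K entries, D dims)
latents = rng.normal(size=(10, 64))    # continuous latents from a shape encoder
tokens, quantized = vector_quantize(latents, codebook)
print(tokens)                          # the AR model then models p(token_t | token_<t)
```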
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
- CAPRI-Net: Learning Compact CAD Shapes with Adaptive Primitive Assembly [17.82598676258891]
We introduce CAPRI-Net, a neural network for learning compact and interpretable implicit representations of 3D computer-aided design (CAD) models.
Our network takes an input 3D shape that can be provided as a point cloud or voxel grids, and reconstructs it by a compact assembly of quadric surface primitives.
We evaluate our learning framework on both ShapeNet and ABC, the largest and most diverse CAD dataset to date, in terms of reconstruction quality, shape edges, compactness, and interpretability.
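
As a reference point for the building block mentioned above, the sketch below evaluates a generic quadric implicit q(x) = x^T A x + b^T x + c and assembles two such primitives CSG-style (intersection via max; a union would use min). The concrete matrices are illustrative, and CAPRI-Net's learned assembly is not reproduced.

```python
import numpy as np

def quadric(p, A, b, c):
    """Implicit quadric value q(p) = p^T A p + b^T p + c; the surface is q = 0."""
    return np.einsum('...i,ij,...j->...', p, A, p) + p @ b + c

# Two hypothetical quadric primitives: a unit sphere and a horizontal slab.
sphere = (np.eye(3), np.zeros(3), -1.0)                   # x^2 + y^2 + z^2 - 1
slab = (np.diag([0.0, 1.0, 0.0]), np.zeros(3), -0.25)     # y^2 - 0.25, i.e. |y| <= 0.5

def assembled(p):
    """CSG-style assembly: intersection of the two primitives via max."""
    return np.maximum(quadric(p, *sphere), quadric(p, *slab))

pts = np.random.uniform(-1.2, 1.2, size=(4, 3))
print(assembled(pts) <= 0)           # True where the point lies inside the assembly
```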
arXiv Detail & Related papers (2021-04-12T17:21:19Z)
- Deep Active Surface Models [60.027353171412216]
Active Surface Models have a long history of being useful to model complex 3D surfaces, but only Active Contours have been used in conjunction with deep networks.
We introduce layers that implement them and can be integrated seamlessly into Graph Convolutional Networks to enforce sophisticated smoothness priors.
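
The paper's layers implement full Active Surface Model energies with dedicated solvers, which is beyond a short snippet; the sketch below shows only the simplest smoothness prior such a layer might enforce, a uniform graph-Laplacian residual on mesh vertices whose squared norm can be added to a training loss. The toy mesh and function names are illustrative.

```python
import numpy as np

def uniform_laplacian_residual(vertices, edges):
    """Difference between each vertex and the mean of its neighbours;
    its squared norm is a simple smoothness energy on the mesh."""
    n = len(vertices)
    neighbour_sum = np.zeros_like(vertices)
    degree = np.zeros(n)
    for i, j in edges:
        neighbour_sum[i] += vertices[j]
        neighbour_sum[j] += vertices[i]
        degree[i] += 1
        degree[j] += 1
    return vertices - neighbour_sum / degree[:, None]

# Toy patch: 4 vertices and the edges connecting them.
V = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.2], [1.0, 1.0, -0.1], [0.0, 1.0, 0.3]])
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
residual = uniform_laplacian_residual(V, E)
smoothness_energy = (residual ** 2).sum()   # would be added to the training loss
print(smoothness_energy)
```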
arXiv Detail & Related papers (2020-11-17T18:48:28Z)
- CAD-Deform: Deformable Fitting of CAD Models to 3D Scans [30.451330075135076]
We introduce CAD-Deform, a method which obtains more accurate CAD-to-scan fits by non-rigidly deforming retrieved CAD models.
A series of experiments demonstrate that our method achieves significantly tighter scan-to-CAD fits, allowing a more accurate digital replica of the scanned real-world environment.
arXiv Detail & Related papers (2020-07-23T12:30:20Z)
- Learning Generative Models of Shape Handles [43.41382075567803]
We present a generative model to synthesize 3D shapes as sets of handles.
Our model can generate handle sets with varying cardinality and different types of handles.
We show that the resulting shape representations are intuitive and achieve higher quality than the previous state of the art.
arXiv Detail & Related papers (2020-04-06T22:35:55Z)