CAD 3D Model classification by Graph Neural Networks: A new approach
based on STEP format
- URL: http://arxiv.org/abs/2210.16815v1
- Date: Sun, 30 Oct 2022 11:27:58 GMT
- Title: CAD 3D Model classification by Graph Neural Networks: A new approach
based on STEP format
- Authors: L. Mandelli, S. Berretti
- Abstract summary: We introduce a new approach for retrieval and classification of 3D models that operates directly on the Computer-Aided Design (CAD) format.
Among the various CAD formats, we consider the widely used STEP extension, which represents a standard for product manufacturing information.
We exploit the linked structure of STEP files to create a graph in which the nodes are the primitive elements and the arcs are the connections between them.
- Score: 2.225882303328135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce a new approach for retrieval and classification
of 3D models that operates directly on the Computer-Aided Design (CAD) format
without any conversion to other representations like point clouds or meshes,
thus avoiding any loss of information. Among the various CAD formats, we
consider the widely used STEP extension, which represents a standard for
product manufacturing information. This particular format represents a 3D model
as a set of primitive elements such as surfaces and vertices linked together.
In our approach, we exploit the linked structure of STEP files to create a
graph in which the nodes are the primitive elements and the arcs are the
connections between them. We then use Graph Neural Networks (GNNs) to solve the
problem of model classification. Finally, we created two datasets of 3D models
in native CAD format, respectively, by collecting data from the Traceparts
model library and from the Configurators software modeling company. We used
these datasets to test and compare our approach with respect to
state-of-the-art methods that consider other 3D formats. Our code is available
at https://github.com/divanoLetto/3D_STEP_Classification
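The graph construction described above can be illustrated with a minimal, hypothetical sketch (not the authors' released code): STEP Part 21 records have the form `#id=ENTITY_TYPE(...)`, and every `#ref` appearing inside a record's parameter list links it to another primitive. The sample entities, the regex, and the single mean-aggregation layer standing in for a GNN step are all illustrative assumptions.

```python
import re

# A tiny fragment of STEP Part 21 data (illustrative, not from the paper's datasets).
STEP_SNIPPET = """
#10=CARTESIAN_POINT('',(0.,0.,0.));
#11=CARTESIAN_POINT('',(1.,0.,0.));
#20=VERTEX_POINT('',#10);
#21=VERTEX_POINT('',#11);
#30=EDGE_CURVE('',#20,#21,#40,.T.);
#40=LINE('',#10,#41);
#41=VECTOR('',#42,1.);
#42=DIRECTION('',(1.,0.,0.));
"""

def step_to_graph(text):
    """Nodes are primitive elements (entity type per #id); an arc is added
    for every '#ref' found inside another entity's parameter list."""
    nodes, edges = {}, []
    for m in re.finditer(r"#(\d+)\s*=\s*([A-Z_0-9]+)\s*\((.*)\);", text):
        eid, etype, params = int(m.group(1)), m.group(2), m.group(3)
        nodes[eid] = etype
        for ref in re.findall(r"#(\d+)", params):
            edges.append((eid, int(ref)))
    return nodes, edges

def one_hot(etype, vocab):
    v = [0.0] * len(vocab)
    v[vocab.index(etype)] = 1.0
    return v

def gnn_layer(nodes, edges, feats):
    """One round of mean neighbour aggregation: a simplified stand-in
    for a message-passing GNN layer over the STEP graph."""
    out = {}
    for nid in nodes:
        nbrs = [b for a, b in edges if a == nid] + [a for a, b in edges if b == nid]
        acc = feats[nid][:]
        for nb in nbrs:
            acc = [x + y for x, y in zip(acc, feats[nb])]
        out[nid] = [x / (len(nbrs) + 1) for x in acc]
    return out

nodes, edges = step_to_graph(STEP_SNIPPET)
vocab = sorted(set(nodes.values()))          # one-hot node features from entity types
feats = gnn_layer(nodes, edges, {nid: one_hot(t, vocab) for nid, t in nodes.items()})
```

In a full pipeline the aggregated node features would be pooled into a single graph embedding and fed to a classifier; here the sketch stops at one propagation step to keep the structure visible.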
Related papers
- Img2CAD: Conditioned 3D CAD Model Generation from Single Image with Structured Visual Geometry [12.265852643914439]
We present Img2CAD, the first approach that uses 2D image inputs to generate editable parameters.
Img2CAD enables seamless integration between AI 3D reconstruction and CAD representation.
arXiv Detail & Related papers (2024-10-04T13:27:52Z)
- DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data [50.164670363633704]
We present DIRECT-3D, a diffusion-based 3D generative model for creating high-quality 3D assets from text prompts.
Our model is directly trained on extensive noisy and unaligned 'in-the-wild' 3D assets.
We achieve state-of-the-art performance in both single-class generation and text-to-3D generation.
arXiv Detail & Related papers (2024-06-06T17:58:15Z)
- Model2Scene: Learning 3D Scene Representation via Contrastive Language-CAD Models Pre-training [105.3421541518582]
Current successful methods of 3D scene perception rely on the large-scale annotated point cloud.
We propose Model2Scene, a novel paradigm that learns free 3D scene representation from Computer-Aided Design (CAD) models and languages.
Model2Scene yields impressive label-free 3D object salient detection with an average mAP of 46.08% and 55.49% on the ScanNet and S3DIS datasets, respectively.
arXiv Detail & Related papers (2023-09-29T03:51:26Z)
- FullFormer: Generating Shapes Inside Shapes [9.195909458772187]
We present the first implicit generative model that facilitates the generation of complex 3D shapes with rich internal geometric details.
Our model uses unsigned distance fields to represent nested 3D surfaces allowing learning from non-watertight mesh data.
We demonstrate that our model achieves state-of-the-art point cloud generation results on popular classes of 'Cars', 'Planes', and 'Chairs' of the ShapeNet dataset.
arXiv Detail & Related papers (2023-03-20T16:19:23Z)
- 3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations [81.45521258652734]
We propose a method to create plausible geometric and texture style variations of 3D objects.
Our method can create many novel stylized shapes, resulting in effortless 3D content creation and style-aware data augmentation.
arXiv Detail & Related papers (2021-08-30T02:28:31Z)
- 'CADSketchNet' -- An Annotated Sketch dataset for 3D CAD Model Retrieval with Deep Neural Networks [0.8155575318208631]
The research work presented in this paper aims at developing a dataset suitable for building a retrieval system for 3D CAD models based on deep learning.
The paper also aims at evaluating the performance of various retrieval systems or search engines for 3D CAD models that accept a sketch image as the input query.
arXiv Detail & Related papers (2021-07-13T16:10:16Z)
- Joint Deep Multi-Graph Matching and 3D Geometry Learning from Inhomogeneous 2D Image Collections [57.60094385551773]
We propose a trainable framework for learning a deformable 3D geometry model from inhomogeneous image collections.
We in addition obtain the underlying 3D geometry of the objects depicted in the 2D images.
arXiv Detail & Related papers (2021-03-31T17:25:36Z)
- A Convolutional Architecture for 3D Model Embedding [1.3858051019755282]
We propose a deep learning architecture to handle 3D models as an input.
We show that the embedding representation conveys semantic information that helps to deal with the similarity assessment of 3D objects.
arXiv Detail & Related papers (2021-03-05T15:46:47Z)
- PvDeConv: Point-Voxel Deconvolution for Autoencoding CAD Construction in 3D [23.87757211847093]
We learn to synthesize high-resolution point clouds of 10k points that densely describe the underlying geometry of Computer Aided Design (CAD) models.
We introduce a new dedicated dataset, the CC3D, containing 50k+ pairs of CAD models and their corresponding 3D meshes.
This dataset is used to learn a convolutional autoencoder for point clouds sampled from the pairs of 3D scans - CAD models.
arXiv Detail & Related papers (2021-01-12T14:14:13Z)
- Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction [79.98689027127855]
We propose a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects.
Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings.
It achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
arXiv Detail & Related papers (2020-08-28T15:44:05Z)
- Mix Dimension in Poincaré Geometry for 3D Skeleton-based Action Recognition [57.98278794950759]
Graph Convolutional Networks (GCNs) have already demonstrated their powerful ability to model the irregular data.
We present a novel spatial-temporal GCN architecture which is defined via the Poincaré geometry.
We evaluate our method on two of the current largest-scale 3D datasets.
arXiv Detail & Related papers (2020-07-30T18:23:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.