PvDeConv: Point-Voxel Deconvolution for Autoencoding CAD Construction in 3D
- URL: http://arxiv.org/abs/2101.04493v1
- Date: Tue, 12 Jan 2021 14:14:13 GMT
- Title: PvDeConv: Point-Voxel Deconvolution for Autoencoding CAD Construction in 3D
- Authors: Kseniya Cherenkova, Djamila Aouada, Gleb Gusev
- Abstract summary: We learn to synthesize high-resolution point clouds of 10k points that densely describe the underlying geometry of Computer Aided Design (CAD) models.
We introduce a new dedicated dataset, the CC3D, containing 50k+ pairs of CAD models and their corresponding 3D meshes.
This dataset is used to learn a convolutional autoencoder for point clouds sampled from pairs of 3D scans and CAD models.
- Score: 23.87757211847093
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a Point-Voxel DeConvolution (PVDeConv) module for 3D data
autoencoder. To demonstrate its efficiency we learn to synthesize
high-resolution point clouds of 10k points that densely describe the underlying
geometry of Computer Aided Design (CAD) models. Scanning artifacts, such as
protrusions, missing parts, smoothed edges and holes, inevitably appear in real
3D scans of fabricated CAD objects. Learning the original CAD model
construction from a 3D scan requires the ground-truth model to be available
together with the corresponding 3D scan of the object. To bridge this gap, we
introduce a new dedicated dataset, CC3D, containing 50k+ pairs of CAD models
and their corresponding 3D meshes. This dataset is used to learn a
convolutional autoencoder for point clouds sampled from pairs of 3D scans and
CAD models.
The challenges of this new dataset are demonstrated in comparison with other
generative point cloud sampling models trained on ShapeNet. The CC3D
autoencoder is efficient with respect to memory consumption and training time
compared to state-of-the-art models for 3D data generation.
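For intuition, here is a minimal sketch of a point-voxel (de)convolution block in PyTorch, loosely following the two-branch point-voxel design this module builds on: a voxel branch voxelizes point features, applies a transposed 3D convolution, and trilinearly interpolates the result back onto the points, where it is fused with a per-point MLP branch. The layer sizes, voxel resolution, and fusion by addition are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PVDeConvBlock(nn.Module):
    """Sketch of a point-voxel deconvolution block (illustrative only)."""

    def __init__(self, c_in, c_out, resolution=16):
        super().__init__()
        self.r = resolution
        # Point branch: shared per-point MLP (1x1 convolution over points).
        self.point_mlp = nn.Sequential(
            nn.Conv1d(c_in, c_out, 1), nn.BatchNorm1d(c_out), nn.ReLU(),
        )
        # Voxel branch: transposed 3D convolution that doubles the grid size.
        self.voxel_deconv = nn.Sequential(
            nn.ConvTranspose3d(c_in, c_out, 4, stride=2, padding=1),
            nn.BatchNorm3d(c_out), nn.ReLU(),
        )

    def voxelize(self, feats, coords):
        # feats: (B, C, N); coords: (B, N, 3), normalized to [0, 1].
        B, C, N = feats.shape
        idx = (coords.clamp(0, 1 - 1e-6) * self.r).long()
        flat = (idx[..., 0] * self.r + idx[..., 1]) * self.r + idx[..., 2]
        grid = feats.new_zeros(B, C, self.r ** 3)
        cnt = feats.new_zeros(B, 1, self.r ** 3)
        grid.scatter_add_(2, flat.unsqueeze(1).expand(-1, C, -1), feats)
        cnt.scatter_add_(2, flat.unsqueeze(1), torch.ones_like(feats[:, :1]))
        grid = grid / cnt.clamp(min=1)  # average the features in each voxel
        return grid.view(B, C, self.r, self.r, self.r)

    def devoxelize(self, grid, coords):
        # Trilinear interpolation of voxel features back onto the points;
        # grid_sample expects (x, y, z) ordered as (W, H, D), hence the flip.
        g = (coords * 2 - 1).flip(-1).view(coords.shape[0], 1, 1, -1, 3)
        out = F.grid_sample(grid, g, align_corners=True)  # (B, C, 1, 1, N)
        return out.view(grid.shape[0], grid.shape[1], -1)

    def forward(self, feats, coords):
        voxel = self.devoxelize(
            self.voxel_deconv(self.voxelize(feats, coords)), coords)
        point = self.point_mlp(feats)
        return voxel + point  # fuse coarse voxel context with point detail
```

The coarse voxel branch supplies neighborhood context at low memory cost while the point branch preserves fine per-point detail, which is consistent with the memory and training-time efficiency the abstract claims over dense volumetric alternatives.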
Related papers
- Img2CAD: Conditioned 3D CAD Model Generation from Single Image with Structured Visual Geometry [12.265852643914439]
We present Img2CAD, the first approach that uses 2D image inputs to generate editable parameters.
Img2CAD enables seamless integration between AI 3D reconstruction and CAD representation.
arXiv Detail & Related papers (2024-10-04T13:27:52Z)
- DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data [50.164670363633704]
We present DIRECT-3D, a diffusion-based 3D generative model for creating high-quality 3D assets from text prompts.
Our model is directly trained on extensive noisy and unaligned 'in-the-wild' 3D assets.
We achieve state-of-the-art performance in both single-class generation and text-to-3D generation.
arXiv Detail & Related papers (2024-06-06T17:58:15Z)
- DatasetNeRF: Efficient 3D-aware Data Factory with Generative Radiance Fields [68.94868475824575]
This paper introduces a novel approach capable of generating infinite, high-quality 3D-consistent 2D annotations alongside 3D point cloud segmentations.
We leverage the strong semantic prior within a 3D generative model to train a semantic decoder.
Once trained, the decoder efficiently generalizes across the latent space, enabling the generation of infinite data.
arXiv Detail & Related papers (2023-11-18T21:58:28Z)
- Model2Scene: Learning 3D Scene Representation via Contrastive Language-CAD Models Pre-training [105.3421541518582]
Current successful methods of 3D scene perception rely on large-scale annotated point clouds.
We propose Model2Scene, a novel paradigm that learns free 3D scene representation from Computer-Aided Design (CAD) models and languages.
Model2Scene yields impressive label-free 3D salient object detection with an average mAP of 46.08% and 55.49% on the ScanNet and S3DIS datasets, respectively.
arXiv Detail & Related papers (2023-09-29T03:51:26Z)
- Joint-MAE: 2D-3D Joint Masked Autoencoders for 3D Point Cloud Pre-training [65.75399500494343]
Masked Autoencoders (MAE) have shown promising performance in self-supervised learning for 2D and 3D computer vision.
We propose Joint-MAE, a 2D-3D joint MAE framework for self-supervised 3D point cloud pre-training.
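As a rough illustration of the masked-autoencoding recipe (not the paper's actual joint 2D-3D pipeline), the sketch below groups a point cloud into patches and hides a random subset; an encoder would see only the visible patches and a decoder would be trained to reconstruct the masked ones. The patching scheme and mask ratio are simplified assumptions.

```python
import torch

def mask_point_patches(points, num_patches=64, mask_ratio=0.6):
    """points: (N, 3). Returns (visible_patches, masked_patches, masked_idx)."""
    n = points.shape[0] - points.shape[0] % num_patches
    # Crude patching by random permutation; real pipelines typically group
    # points with farthest-point sampling plus k-nearest neighbours.
    patches = points[torch.randperm(points.shape[0])[:n]].view(num_patches, -1, 3)
    order = torch.randperm(num_patches)
    num_masked = int(num_patches * mask_ratio)
    masked_idx, visible_idx = order[:num_masked], order[num_masked:]
    # Pre-training objective: reconstruct the masked patches (e.g. with a
    # Chamfer loss) from the encoding of the visible ones.
    return patches[visible_idx], patches[masked_idx], masked_idx
```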
arXiv Detail & Related papers (2023-02-27T17:56:18Z)
- CAD 3D Model classification by Graph Neural Networks: A new approach based on STEP format [2.225882303328135]
We introduce a new approach for retrieval and classification of 3D models that operates directly on the Computer-Aided Design (CAD) format.
Among the various CAD formats, we consider the widely used STEP extension, which represents a standard for product manufacturing information.
We exploit the linked structure of STEP files to create a graph in which the nodes are the primitive elements and the arcs are the connections between them.
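A hedged sketch of that graph construction might look as follows; the regex-based parsing and the networkx representation are simplifications of ISO 10303-21 (real STEP records can span multiple lines and contain '#' inside string literals), not the paper's code.

```python
import re
import networkx as nx

# One STEP data record per line: "#12 = ADVANCED_FACE('', (#13, #20), #30, .T.);"
ENTITY = re.compile(r"^#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\);", re.M)

def step_to_graph(path):
    text = open(path, encoding="utf-8", errors="ignore").read()
    g = nx.DiGraph()
    for match in ENTITY.finditer(text):
        eid, etype, args = int(match.group(1)), match.group(2), match.group(3)
        g.add_node(eid, type=etype)          # node = primitive element
        for ref in re.findall(r"#(\d+)", args):
            g.add_edge(eid, int(ref))        # arc = reference to another entity
    return g
```

The entity types stored on the nodes can then serve as input features for a graph neural network classifier.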
arXiv Detail & Related papers (2022-10-30T11:27:58Z)
- RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection [138.2892824662943]
A promising solution is to make better use of synthetic datasets, which consist of CAD object models, to boost learning on real datasets.
Recent work on 3D pre-training fails when transferring features learned on synthetic objects to real-world applications.
In this work, we put forward a new method called RandomRooms to accomplish this objective.
arXiv Detail & Related papers (2021-08-17T17:56:12Z)
- 'CADSketchNet' -- An Annotated Sketch dataset for 3D CAD Model Retrieval with Deep Neural Networks [0.8155575318208631]
The research work presented in this paper aims at developing a dataset suitable for building a retrieval system for 3D CAD models based on deep learning.
The paper also aims at evaluating the performance of various retrieval systems (search engines) for 3D CAD models that accept a sketch image as the input query.
arXiv Detail & Related papers (2021-07-13T16:10:16Z)
- A Convolutional Architecture for 3D Model Embedding [1.3858051019755282]
We propose a deep learning architecture to handle 3D models as input.
We show that the embedding representation conveys semantic information that helps to deal with the similarity assessment of 3D objects.
arXiv Detail & Related papers (2021-03-05T15:46:47Z)
- KAPLAN: A 3D Point Descriptor for Shape Completion [80.15764700137383]
KAPLAN is a 3D point descriptor that aggregates local shape information via a series of 2D convolutions.
In each of those planes, point properties like normals or point-to-plane distances are aggregated into a 2D grid and abstracted into a feature representation with an efficient 2D convolutional encoder.
Experiments on public datasets show that KAPLAN achieves state-of-the-art performance for 3D shape completion.
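The per-plane aggregation step can be sketched roughly as follows: project a local point neighborhood onto a plane, bin the points into a 2D grid, and store one property per cell (here the mean signed point-to-plane distance). Grid size, extent, and the choice of property are illustrative assumptions; the resulting grids would feed the 2D convolutional encoder.

```python
import numpy as np

def plane_grid(points, origin, normal, u, v, res=16, extent=1.0):
    """points: (N, 3); origin and the unit vectors normal, u, v define the plane."""
    rel = points - origin
    dist = rel @ normal                      # signed point-to-plane distance
    px, py = rel @ u, rel @ v                # in-plane coordinates
    ix = np.clip(((px / extent + 1) / 2 * res).astype(int), 0, res - 1)
    iy = np.clip(((py / extent + 1) / 2 * res).astype(int), 0, res - 1)
    grid = np.zeros((res, res), dtype=np.float32)
    cnt = np.zeros((res, res), dtype=np.float32)
    np.add.at(grid, (ix, iy), dist)          # accumulate distances per cell
    np.add.at(cnt, (ix, iy), 1.0)
    return grid / np.maximum(cnt, 1.0)       # mean distance per occupied cell
```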
arXiv Detail & Related papers (2020-07-31T21:56:08Z)