CAPRI-Net: Learning Compact CAD Shapes with Adaptive Primitive Assembly
- URL: http://arxiv.org/abs/2104.05652v1
- Date: Mon, 12 Apr 2021 17:21:19 GMT
- Title: CAPRI-Net: Learning Compact CAD Shapes with Adaptive Primitive Assembly
- Authors: Fenggen Yu, Zhiqin Chen, Manyi Li, Aditya Sanghi, Hooman Shayani, Ali
Mahdavi-Amiri and Hao Zhang
- Abstract summary: We introduce CAPRI-Net, a neural network for learning compact and interpretable implicit representations of 3D computer-aided design (CAD) models.
Our network takes an input 3D shape that can be provided as a point cloud or a voxel grid, and reconstructs it by a compact assembly of quadric surface primitives.
We evaluate our learning framework on both ShapeNet and ABC, the largest and most diverse CAD dataset to date, in terms of reconstruction quality, shape edges, compactness, and interpretability.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce CAPRI-Net, a neural network for learning compact and
interpretable implicit representations of 3D computer-aided design (CAD)
models, in the form of adaptive primitive assemblies. Our network takes an
input 3D shape that can be provided as a point cloud or a voxel grid, and
reconstructs it by a compact assembly of quadric surface primitives via
constructive solid geometry (CSG) operations. The network is self-supervised
with a reconstruction loss, leading to faithful 3D reconstructions with sharp
edges and plausible CSG trees, without any ground-truth shape assemblies. While
the parametric nature of CAD models does make them more predictable locally, at
the shape level there is a great deal of structural and topological
variation, which presents a significant generalizability challenge to
state-of-the-art neural models for 3D shapes. Our network addresses this
challenge by adaptive training with respect to each test shape, with which we
fine-tune the network that was pre-trained on a model collection. We evaluate
our learning framework on both ShapeNet and ABC, the largest and most diverse
CAD dataset to date, in terms of reconstruction quality, shape edges,
compactness, and interpretability, to demonstrate superiority over current
alternatives suitable for neural CAD reconstruction.
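The core representation the abstract describes, quadric surface primitives combined by CSG operations, can be illustrated with signed implicit functions, where union is a pointwise min and intersection a pointwise max. This is a generic sketch of the CSG-on-implicits idea, not CAPRI-Net's learned formulation; all function names here are illustrative.

```python
import numpy as np

def quadric(A, b, c):
    """Implicit quadric: f(p) = p^T A p + b . p + c.
    Convention: f(p) < 0 inside, f(p) > 0 outside."""
    def f(p):
        p = np.asarray(p, dtype=float)
        return p @ A @ p + b @ p + c
    return f

# CSG operations as min/max composition of implicit functions.
def union(f, g):        return lambda p: min(f(p), g(p))
def intersection(f, g): return lambda p: max(f(p), g(p))
def difference(f, g):   return lambda p: max(f(p), -g(p))

# Unit sphere: |p|^2 - 1  (A = I, b = 0, c = -1).
sphere = quadric(np.eye(3), np.zeros(3), -1.0)

# Slab |z| <= 0.5: z^2 - 0.25  (only A[2,2] is nonzero).
A_slab = np.zeros((3, 3)); A_slab[2, 2] = 1.0
slab = quadric(A_slab, np.zeros(3), -0.25)

# A lens-like solid: sphere intersected with the slab.
lens = intersection(sphere, slab)
print(lens([0.0, 0.0, 0.0]))  # inside both primitives -> negative
print(lens([0.0, 0.0, 0.9]))  # inside sphere, outside slab -> positive
```

Quadrics cover planes, spheres, cylinders, and cones in one parametric family, which is why a small assembly of them can describe many CAD parts compactly.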
Related papers
- DAE-Net: Deforming Auto-Encoder for fine-grained shape co-segmentation [22.538892330541582]
We present an unsupervised 3D shape co-segmentation method which learns a set of deformable part templates from a shape collection.
To accommodate structural variations in the collection, our network composes each shape by a selected subset of template parts which are affine-transformed.
Our network, coined DAE-Net for Deforming Auto-Encoder, can achieve unsupervised 3D shape co-segmentation that yields fine-grained, compact, and meaningful parts.
arXiv Detail & Related papers (2023-11-22T03:26:07Z)
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles Single-view 3D Mesh Reconstruction to study model generalization to unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- Neural Template: Topology-aware Reconstruction and Disentangled Generation of 3D Meshes [52.038346313823524]
This paper introduces a novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology.
Our method is able to produce high-quality meshes, particularly with diverse topologies, as compared with the state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T08:32:57Z)
- Learning Mesh Representations via Binary Space Partitioning Tree Networks [28.962866472806812]
We present BSP-Net, a network that learns to represent a 3D shape via convex decomposition without supervision.
The network is trained to reconstruct a shape using a set of convexes obtained from a BSP-tree built over a set of planes, where the planes and convexes are both defined by learned network weights.
The generated meshes are watertight, compact (i.e., low-poly), and well suited to represent sharp geometry.
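The convex decomposition the BSP-Net summary describes rests on a simple construction: a convex region is an intersection of half-spaces, so its implicit value is the max over plane equations. A minimal sketch of that building block (a generic illustration with hypothetical names, not BSP-Net's learned planes):

```python
import numpy as np

def convex_from_planes(normals, offsets):
    """Convex region as the intersection of half-spaces n.p + d <= 0.
    Implicit value: max over all planes; negative inside, positive outside."""
    N = np.asarray(normals, dtype=float)
    d = np.asarray(offsets, dtype=float)
    return lambda p: float(np.max(N @ np.asarray(p, dtype=float) + d))

# Axis-aligned unit cube [0, 1]^3 from its six face planes.
normals = [(-1, 0, 0), (1, 0, 0), (0, -1, 0), (0, 1, 0), (0, 0, -1), (0, 0, 1)]
offsets = [0, -1, 0, -1, 0, -1]
cube = convex_from_planes(normals, offsets)

print(cube([0.5, 0.5, 0.5]))  # -0.5, the point is inside
print(cube([1.5, 0.5, 0.5]))  #  0.5, the point is outside
```

A union of such convexes yields a watertight, low-poly mesh directly, since each convex already has an exact polygonal boundary, which is the property the summary highlights.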
arXiv Detail & Related papers (2021-06-27T16:37:54Z)
- DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation [98.96086261213578]
We introduce DSG-Net, a deep neural network that learns a disentangled structured and geometric mesh representation for 3D shapes.
This supports a range of novel shape generation applications with disentangled control, such as varying the structure while keeping the geometry unchanged, or vice versa.
Our method not only supports controllable generation applications but also produces high-quality synthesized shapes, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2020-08-12T17:06:51Z)
- Shape Prior Deformation for Categorical 6D Object Pose and Size Estimation [62.618227434286]
We present a novel learning approach to recover the 6D poses and sizes of unseen object instances from an RGB-D image.
We propose a deep network to reconstruct the 3D object model by explicitly modeling the deformation from a pre-learned categorical shape prior.
arXiv Detail & Related papers (2020-07-16T16:45:05Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
- Shape-Oriented Convolution Neural Network for Point Cloud Analysis [59.405388577930616]
The point cloud is a principal data structure for encoding 3D geometric information.
A shape-oriented message passing scheme dubbed ShapeConv is proposed to focus on learning a representation of the underlying shape formed by each local neighborhood of points.
arXiv Detail & Related papers (2020-04-20T16:11:51Z)
- Few-Shot Single-View 3-D Object Reconstruction with Compositional Priors [30.262308825799167]
We show that complex encoder-decoder architectures perform similarly to nearest-neighbor baselines in standard benchmarks.
We propose three approaches that efficiently integrate a class prior into a 3D reconstruction model.
arXiv Detail & Related papers (2020-04-14T04:53:34Z)
- STD-Net: Structure-preserving and Topology-adaptive Deformation Network for 3D Reconstruction from a Single Image [27.885717341244014]
3D reconstruction from a single view image is a long-standing problem in computer vision.
In this paper, we propose a novel method called STD-Net to reconstruct 3D models utilizing the mesh representation.
Experimental results on images from ShapeNet show that our proposed STD-Net has better performance than other state-of-the-art methods on reconstructing 3D objects.
arXiv Detail & Related papers (2020-03-07T11:02:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.