Learning Mesh Representations via Binary Space Partitioning Tree Networks
- URL: http://arxiv.org/abs/2106.14274v1
- Date: Sun, 27 Jun 2021 16:37:54 GMT
- Title: Learning Mesh Representations via Binary Space Partitioning Tree Networks
- Authors: Zhiqin Chen, Andrea Tagliasacchi, Hao Zhang
- Abstract summary: We present BSP-Net, a network that learns to represent a 3D shape via convex decomposition without supervision.
The network is trained to reconstruct a shape using a set of convexes obtained from a BSP-tree built over a set of planes, where the planes and convexes are both defined by learned network weights.
The generated meshes are watertight, compact (i.e., low-poly), and well suited to represent sharp geometry.
- Score: 28.962866472806812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Polygonal meshes are ubiquitous, but have only played a relatively minor role
in the deep learning revolution. State-of-the-art neural generative models for
3D shapes learn implicit functions and generate meshes via expensive
iso-surfacing. We overcome these challenges by employing a classical spatial
data structure from computer graphics, Binary Space Partitioning (BSP), to
facilitate 3D learning. The core operation of BSP involves recursive
subdivision of 3D space to obtain convex sets. By exploiting this property, we
devise BSP-Net, a network that learns to represent a 3D shape via convex
decomposition without supervision. The network is trained to reconstruct a
shape using a set of convexes obtained from a BSP-tree built over a set of
planes, where the planes and convexes are both defined by learned network
weights. BSP-Net directly outputs polygonal meshes from the inferred convexes.
The generated meshes are watertight, compact (i.e., low-poly), and well suited
to represent sharp geometry. We show that the reconstruction quality of BSP-Net
is competitive with that of state-of-the-art methods while using far fewer
primitives. We also explore variations of BSP-Net, including a more generic
decoder for reconstruction, more general primitives than planes, and training a
generative model with variational auto-encoders. Code is available at
https://github.com/czq142857/BSP-NET-original.
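To make the layered construction described in the abstract concrete, here is a minimal NumPy sketch of a BSP-Net-style implicit evaluation: signed distances to a set of planes, a binary selection grouping half-spaces into convexes, and a min-union over convexes. The array names, sizes, and random stand-ins for learned weights are assumptions for illustration, not the repository's actual API.

```python
import numpy as np

# Illustrative stand-ins for learned quantities (shapes and values are assumed,
# not taken from the BSP-NET-original repository).
rng = np.random.default_rng(0)
P, C, N = 64, 8, 1000                              # planes, convexes, query points

planes = rng.normal(size=(P, 4))                   # rows (a, b, c, d): plane ax + by + cz + d = 0
T = (rng.random(size=(P, C)) < 0.2).astype(float)  # binary selection: which planes bound each convex
points = rng.uniform(-1.0, 1.0, size=(N, 3))       # query points in the unit cube

# Layer 1: signed distance of every query point to every plane.
homog = np.concatenate([points, np.ones((N, 1))], axis=1)   # (N, 4) homogeneous coordinates
D = homog @ planes.T                                        # (N, P)

# Layer 2: a point lies inside a convex when it is on the non-positive side of
# every selected plane, so the sum of clipped distances is exactly zero inside.
convex_violation = np.maximum(D, 0.0) @ T                   # (N, C); 0 means inside that convex

# Layer 3: the shape is the union of the convexes, taken here as a min over
# per-convex violations; 0 means the point lies inside the reconstructed shape.
shape_violation = convex_violation.min(axis=1)              # (N,)
inside = shape_violation <= 1e-6
print(f"{int(inside.sum())} of {N} sampled points lie inside the union of convexes")
```

Note that this sketch only evaluates occupancy at sample points; per the abstract, the actual network outputs polygonal meshes directly from the inferred planes and convexes rather than by iso-surfacing such an implicit field.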
Related papers
- Split-and-Fit: Learning B-Reps via Structure-Aware Voronoi Partitioning [50.684254969269546]
We introduce a novel method for acquiring boundary representations (B-Reps) of 3D CAD models.
We apply a spatial partitioning to derive a single primitive within each partition.
We show that our network, coined NVD-Net for neural Voronoi diagrams, can effectively learn Voronoi partitions for CAD models from training data.
arXiv Detail & Related papers (2024-06-07T21:07:49Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
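As a hedged illustration of the Split-and-Fit summary above (a spatial partition with a single primitive derived per cell), the following sketch assigns sample points to their nearest Voronoi site and fits one plane per cell via SVD. The site count, the plane-only primitive, and all names are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.uniform(-1.0, 1.0, size=(2000, 3))   # stand-in surface samples of a CAD model
sites = rng.uniform(-1.0, 1.0, size=(8, 3))       # assumed Voronoi sites (learned in the paper)

# Voronoi partition: each point belongs to the cell of its nearest site.
d2 = ((points[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)   # (2000, 8) squared distances
cell = d2.argmin(axis=1)

# Fit a single planar primitive per cell: the plane through the cell centroid
# whose normal is the least-variance direction of the cell's points.
for k in range(len(sites)):
    cell_pts = points[cell == k]
    if len(cell_pts) < 3:
        continue
    centroid = cell_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(cell_pts - centroid)
    normal = vt[-1]                        # least-variance direction = plane normal
    offset = -normal @ centroid            # plane: normal . x + offset = 0
    print(f"cell {k}: {len(cell_pts)} points, normal {np.round(normal, 2)}, offset {offset:.2f}")
```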
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Dual Octree Graph Networks for Learning Adaptive Volumetric Shape Representations [21.59311861556396]
Our method encodes the volumetric field of a 3D shape with an adaptive feature volume organized by an octree.
An encoder-decoder network is designed to learn the adaptive feature volume based on the graph convolutions over the dual graph of octree nodes.
Our method effectively encodes shape details, enables fast 3D shape reconstruction, and exhibits good generality for modeling 3D shapes out of training categories.
arXiv Detail & Related papers (2022-05-05T17:56:34Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- CAPRI-Net: Learning Compact CAD Shapes with Adaptive Primitive Assembly [17.82598676258891]
We introduce CAPRI-Net, a neural network for learning compact and interpretable implicit representations of 3D computer-aided design (CAD) models.
Our network takes an input 3D shape that can be provided as a point cloud or a voxel grid, and reconstructs it by a compact assembly of quadric surface primitives.
We evaluate our learning framework on both ShapeNet and ABC, the largest and most diverse CAD dataset to date, in terms of reconstruction quality, shape edges, compactness, and interpretability.
arXiv Detail & Related papers (2021-04-12T17:21:19Z)
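To illustrate the quadric surface primitives mentioned in the CAPRI-Net summary above, here is a hedged sketch that evaluates general quadric implicits and combines two of them with a max-based intersection. The specific quadrics, the CSG operation, and all names are assumptions for illustration rather than CAPRI-Net's actual assembly.

```python
import numpy as np

def quadric(points, A, b, c):
    """Evaluate a general quadric implicit q(x) = x^T A x + b . x + c (<= 0 means inside)."""
    return np.einsum('ni,ij,nj->n', points, A, points) + points @ b + c

rng = np.random.default_rng(2)
points = rng.uniform(-1.5, 1.5, size=(5000, 3))

# Two assumed primitives: a unit sphere and a slab |z| <= 0.3 written as a quadric.
sphere = quadric(points, np.eye(3), np.zeros(3), -1.0)
slab = quadric(points, np.diag([0.0, 0.0, 1.0]), np.zeros(3), -0.09)

# A tiny CSG-style assembly: intersection of two primitives via a pointwise max.
assembly = np.maximum(sphere, slab)
inside = assembly <= 0.0
print(f"{int(inside.sum())} of {len(points)} samples lie inside the sphere-slab intersection")
```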
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
- Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image [102.44347847154867]
We propose a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives.
Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives.
Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.
arXiv Detail & Related papers (2020-04-02T17:58:05Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction in ShapeNet, and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
- PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling [103.09504572409449]
We propose a novel deep neural network based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)