UV-Net: Learning from Boundary Representations
- URL: http://arxiv.org/abs/2006.10211v2
- Date: Mon, 26 Apr 2021 03:27:44 GMT
- Title: UV-Net: Learning from Boundary Representations
- Authors: Pradeep Kumar Jayaraman, Aditya Sanghi, Joseph G. Lambourne, Karl D.D.
Willis, Thomas Davies, Hooman Shayani, Nigel Morris
- Abstract summary: We introduce UV-Net, a novel neural network architecture and representation designed to operate directly on Boundary representation (B-rep) data from 3D CAD models.
B-rep data presents some unique challenges when used with modern machine learning due to the complexity of the data structure and its support for both continuous non-Euclidean geometric entities and discrete topological entities.
- Score: 17.47054752280569
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce UV-Net, a novel neural network architecture and representation
designed to operate directly on Boundary representation (B-rep) data from 3D
CAD models. The B-rep format is widely used in the design, simulation and
manufacturing industries to enable sophisticated and precise CAD modeling
operations. However, B-rep data presents some unique challenges when used with
modern machine learning due to the complexity of the data structure and its
support for both continuous non-Euclidean geometric entities and discrete
topological entities. In this paper, we propose a unified representation for
B-rep data that exploits the U and V parameter domain of curves and surfaces to
model geometry, and an adjacency graph to explicitly model topology. This leads
to a unique and efficient network architecture, UV-Net, that couples image and
graph convolutional neural networks in a compute and memory-efficient manner.
To aid in future research we present a synthetic labelled B-rep dataset,
SolidLetters, derived from human designed fonts with variations in both
geometry and topology. Finally we demonstrate that UV-Net can generalize to
supervised and unsupervised tasks on five datasets, while outperforming
alternate 3D shape representations such as point clouds, voxels, and meshes.
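Since the abstract describes the coupling of image and graph convolutions only at a high level, below is a minimal PyTorch sketch of a UV-Net-style encoder under stated assumptions, not the authors' released implementation: each B-rep face is assumed to be pre-sampled on a regular grid in its UV parameter domain (7 channels for 3D points, normals, and a trimming mask is an illustrative choice), encoded with a small image CNN, and the resulting face embeddings are refined by one round of message passing over the face-adjacency graph before being pooled into a solid-level embedding. All class names, channel counts, and layer sizes are assumptions.

```python
# Minimal sketch of a UV-Net-style encoder (illustrative, not the paper's code).
import torch
import torch.nn as nn


class FaceUVEncoder(nn.Module):
    """2D CNN over per-face UV-grid samples -> fixed-size face embeddings."""

    def __init__(self, in_channels: int = 7, emb_dim: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool over the UV grid
        )
        self.proj = nn.Linear(64, emb_dim)

    def forward(self, uv_grids: torch.Tensor) -> torch.Tensor:
        # uv_grids: (num_faces, in_channels, H, W), points/normals/mask sampled
        # on a regular grid in each surface's UV parameter domain.
        return self.proj(self.cnn(uv_grids).flatten(1))


class FaceGraphConv(nn.Module):
    """One round of mean-aggregation message passing over the face-adjacency graph."""

    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.lin = nn.Linear(2 * emb_dim, emb_dim)

    def forward(self, x: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # x: (num_faces, emb_dim); edges: (2, num_edges) indices of adjacent faces
        # (in practice the edge list would include both directions).
        src, dst = edges
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])
        deg = torch.zeros(x.size(0), device=x.device)
        deg = deg.index_add_(0, dst, torch.ones_like(dst, dtype=x.dtype)).clamp(min=1)
        return torch.relu(self.lin(torch.cat([x, agg / deg.unsqueeze(1)], dim=1)))


if __name__ == "__main__":
    faces = torch.randn(5, 7, 10, 10)                   # 5 faces, 10x10 UV samples each
    edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])  # toy face-adjacency edge list
    h = FaceGraphConv()(FaceUVEncoder()(faces), edges)
    solid_embedding = h.mean(dim=0)                      # pool faces -> solid-level embedding
    print(solid_embedding.shape)                         # torch.Size([64])
```

In a fuller version, the edge list would come from the solid's face-adjacency topology, and curve geometry sampled along the U parameter of each edge could be encoded with an analogous 1D CNN, as the abstract's unified representation suggests.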
Related papers
- Nuvo: Neural UV Mapping for Unruly 3D Representations [61.87715912587394]
Existing UV mapping algorithms are designed for well-behaved meshes rather than the geometry produced by state-of-the-art 3D reconstruction and generation techniques.
We present a UV mapping method designed to operate on such geometry.
arXiv Detail & Related papers (2023-12-11T18:58:38Z)
- Modeling Graphs Beyond Hyperbolic: Graph Neural Networks in Symmetric Positive Definite Matrices [8.805129821507046]
Real-world graph data is characterized by multiple types of geometric and topological features.
We construct graph neural networks that can robustly handle complex graphs.
arXiv Detail & Related papers (2023-06-24T21:50:53Z)
- Latent Graph Inference using Product Manifolds [0.0]
We generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning.
Our novel approach is tested on a wide range of datasets, and outperforms the original dDGM model.
arXiv Detail & Related papers (2022-11-26T22:13:06Z)
- Anisotropic Multi-Scale Graph Convolutional Network for Dense Shape Correspondence [3.45989531033125]
This paper studies 3D dense shape correspondence, a key shape analysis application in computer vision and graphics.
We introduce a novel hybrid geometric deep learning-based model that learns geometrically meaningful and discretization-independent features.
The resulting correspondence maps show state-of-the-art performance on the benchmark datasets.
arXiv Detail & Related papers (2022-10-17T22:40:50Z)
- SolidGen: An Autoregressive Model for Direct B-rep Synthesis [15.599363091502365]
The boundary representation (B-rep) format is the de-facto shape representation in computer-aided design (CAD).
Recent approaches to generating CAD models have focused on learning sketch-and-extrude modeling sequences.
We present a new approach that enables learning from and synthesizing B-reps without the need for supervision.
arXiv Detail & Related papers (2022-03-26T00:00:45Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high-fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
- BRepNet: A topological message passing system for solid models [6.214548392474976]
Boundary representation (B-rep) models are the standard way 3D shapes are described in Computer-Aided Design (CAD) applications.
We introduce BRepNet, a neural network architecture designed to operate directly on B-rep data structures.
arXiv Detail & Related papers (2021-04-01T18:16:03Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Mix Dimension in Poincaré Geometry for 3D Skeleton-based Action Recognition [57.98278794950759]
Graph Convolutional Networks (GCNs) have already demonstrated their powerful ability to model irregular data.
We present a novel spatial-temporal GCN architecture defined via Poincaré geometry.
We evaluate our method on the two currently largest-scale 3D datasets.
arXiv Detail & Related papers (2020-07-30T18:23:18Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.