HoLa: B-Rep Generation using a Holistic Latent Representation
- URL: http://arxiv.org/abs/2504.14257v2
- Date: Tue, 22 Apr 2025 10:12:42 GMT
- Title: HoLa: B-Rep Generation using a Holistic Latent Representation
- Authors: Yilin Liu, Duoteng Xu, Xingyao Yu, Xiang Xu, Daniel Cohen-Or, Hao Zhang, Hui Huang
- Abstract summary: We introduce a novel representation for learning and generating Computer-Aided Design (CAD) models in the form of $\textit{boundary representations}$ (B-Reps). Our representation unifies the continuous geometric properties of B-Rep primitives in different orders. Our method significantly reduces ambiguities, redundancies, and incoherences among the generated B-Rep primitives.
- Score: 51.07878285790399
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel representation for learning and generating Computer-Aided Design (CAD) models in the form of $\textit{boundary representations}$ (B-Reps). Our representation unifies the continuous geometric properties of B-Rep primitives in different orders (e.g., surfaces and curves) and their discrete topological relations in a $\textit{holistic latent}$ (HoLa) space. This is based on the simple observation that the topological connection between two surfaces is intrinsically tied to the geometry of their intersecting curve. Such a prior allows us to reformulate topology learning in B-Reps as a geometric reconstruction problem in Euclidean space. Specifically, we eliminate the presence of curves, vertices, and all the topological connections in the latent space by learning to distinguish and derive curve geometries from a pair of surface primitives via a neural intersection network. To this end, our holistic latent space is only defined on surfaces but encodes a full B-Rep model, including the geometry of surfaces, curves, vertices, and their topological relations. Our compact and holistic latent space facilitates the design of a first diffusion-based generator to take on a large variety of inputs including point clouds, single/multi-view images, 2D sketches, and text prompts. Our method significantly reduces ambiguities, redundancies, and incoherences among the generated B-Rep primitives, as well as training complexities inherent in prior multi-step B-Rep learning pipelines, while achieving greatly improved validity rate over current state of the art: 82% vs. $\approx$50%.
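The core mechanism in the abstract, a neural intersection network that takes a pair of surface latents and recovers the geometry of their shared curve (or decides that none exists), can be illustrated with a minimal sketch. The module layout, latent width, and control-point curve output below are assumptions for illustration, not the paper's released architecture.

```python
# Conceptual sketch of a neural intersection module: two surface latent codes in,
# an intersection probability and a fixed-size curve description out.
# Dimensions and output parameterization are illustrative assumptions.
import torch
import torch.nn as nn

class NeuralIntersection(nn.Module):
    def __init__(self, surf_dim: int = 256, n_ctrl_pts: int = 16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(2 * surf_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
        )
        # Does this pair of surfaces actually meet along a curve?
        self.intersects = nn.Linear(512, 1)
        # If so, where: a control-point description of the intersection curve.
        self.curve = nn.Linear(512, n_ctrl_pts * 3)

    def forward(self, z_a: torch.Tensor, z_b: torch.Tensor):
        # Symmetrize so f(A, B) == f(B, A): surface adjacency is unordered.
        h = self.backbone(torch.cat([z_a, z_b], dim=-1)) \
          + self.backbone(torch.cat([z_b, z_a], dim=-1))
        prob = torch.sigmoid(self.intersects(h))
        pts = self.curve(h).view(*h.shape[:-1], -1, 3)
        return prob, pts

if __name__ == "__main__":
    net = NeuralIntersection()
    z_a, z_b = torch.randn(4, 256), torch.randn(4, 256)
    prob, pts = net(z_a, z_b)
    print(prob.shape, pts.shape)   # (4, 1), (4, 16, 3)
```

Because topology is read off from such pairwise predictions, the latent space only needs to store surface codes; curves, vertices, and adjacency records are derived rather than stored.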
Related papers
- BRepFormer: Transformer-Based B-rep Geometric Feature Recognition [14.01667117252404]
Recognizing geometric features on B-rep models is a cornerstone technique for multimedia content-based retrieval. We propose BRepFormer, a novel transformer-based model that recognizes both machining features and the features of complex CAD models. BRepFormer achieves state-of-the-art accuracy on the MFInstSeg, MFTRCAD, and our CBF datasets.
arXiv Detail & Related papers (2025-04-10T01:36:06Z)
- DTGBrepGen: A Novel B-rep Generative Model through Decoupling Topology and Geometry [3.859930277034918]
Boundary representation (B-rep) of geometric models is a fundamental format in Computer-Aided Design (CAD). We propose DTGBrepGen, a novel topology-geometry decoupled framework for B-rep generation.
arXiv Detail & Related papers (2025-03-17T12:34:14Z)
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network. Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh. In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- Split-and-Fit: Learning B-Reps via Structure-Aware Voronoi Partitioning [50.684254969269546]
We introduce a novel method for acquiring boundary representations (B-Reps) of 3D CAD models.
We apply spatial partitioning to derive a single primitive within each partition; a generic partition-and-fit sketch follows after this list.
We show that our network, coined NVD-Net for neural Voronoi diagrams, can effectively learn Voronoi partitions for CAD models from training data.
arXiv Detail & Related papers (2024-06-07T21:07:49Z)
- BrepGen: A B-rep Generative Diffusion Model with Structured Latent Geometry [24.779824909395245]
BrepGen is a diffusion-based generative approach that directly outputs a Boundary representation (B-rep) Computer-Aided Design (CAD) model.
BrepGen represents a B-rep model as a novel structured latent geometry in a hierarchical tree.
arXiv Detail & Related papers (2024-01-28T04:07:59Z)
- Surf-D: Generating High-Quality Surfaces of Arbitrary Topologies Using Diffusion Models [83.35835521670955]
Surf-D is a novel method for generating high-quality 3D shapes as Surfaces with arbitrary topologies.
We use the Unsigned Distance Field (UDF) as our surface representation to accommodate arbitrary topologies; a brute-force UDF sketch follows after this list.
We also propose a new pipeline that employs a point-based AutoEncoder to learn a compact and continuous latent space for accurately encoding UDF.
arXiv Detail & Related papers (2023-11-28T18:56:01Z)
- Towards General-Purpose Representation Learning of Polygonal Geometries [62.34832826705641]
We develop a general-purpose polygon encoding model, which can encode a polygonal geometry into an embedding space.
We conduct experiments on two tasks: 1) shape classification based on MNIST; 2) spatial relation prediction based on two new datasets - DBSR-46K and DBSR-cplx46K.
Our results show that NUFTspec and ResNet1D outperform multiple existing baselines with significant margins.
arXiv Detail & Related papers (2022-09-29T15:59:23Z)
- Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction [41.43563122590449]
We propose geometry-consistent neural implicit surfaces learning for multi-view reconstruction.
Our proposed method achieves high-quality surface reconstruction in both complex thin structures and large smooth regions.
arXiv Detail & Related papers (2022-05-31T14:52:07Z)
- Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes; a small worked example of such features follows after this list.
arXiv Detail & Related papers (2021-10-28T14:59:41Z)
- UV-Net: Learning from Boundary Representations [17.47054752280569]
We introduce UV-Net, a novel neural network architecture and representation designed to operate directly on Boundary representation (B-rep) data from 3D CAD models.
B-rep data presents some unique challenges when used with modern machine learning due to the complexity of the data structure and its support for both continuous non-Euclidean geometric entities and discrete topological entities.
arXiv Detail & Related papers (2020-06-18T00:12:52Z)
- PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling [103.09504572409449]
We propose a novel deep neural network based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)
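As referenced in the Split-and-Fit entry, the partition-and-fit idea can be sketched with a plain nearest-site (Voronoi) assignment followed by a least-squares primitive fit per cell. The random sites and the plane-only primitive family are illustrative assumptions; NVD-Net learns the Voronoi partition instead of fixing it.

```python
# Minimal partition-and-fit sketch: assign points to Voronoi cells of fixed sites,
# then fit one plane primitive per non-empty cell. Illustrative only.
import numpy as np

def partition_and_fit_planes(points: np.ndarray, sites: np.ndarray):
    """Nearest-site (Voronoi) assignment plus a least-squares plane per cell."""
    d = np.linalg.norm(points[:, None, :] - sites[None, :, :], axis=-1)
    cell = d.argmin(axis=1)            # index of the Voronoi cell each point falls in

    planes = {}
    for c in range(len(sites)):
        pts = points[cell == c]
        if len(pts) < 3:
            continue
        centroid = pts.mean(axis=0)
        # Smallest right singular vector of the centered points = plane normal.
        _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
        planes[c] = (centroid, vt[-1])  # plane through centroid with this normal
    return cell, planes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1, 1, size=(2000, 3))
    sites = rng.uniform(-1, 1, size=(8, 3))
    cell, planes = partition_and_fit_planes(pts, sites)
    print(f"{len(planes)} cells received a fitted plane primitive")
```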
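For the Surf-D entry, a brute-force unsigned distance field against a point-sampled surface shows why a UDF accommodates arbitrary (including open) topologies: unlike a signed distance field, it needs no inside/outside decision. The dense sampling and exhaustive nearest-neighbour query are assumptions for illustration; Surf-D encodes the UDF with a learned latent rather than a raw point set.

```python
# Brute-force unsigned distance field (UDF) against a point-sampled surface.
import numpy as np

def udf(queries: np.ndarray, surface_points: np.ndarray) -> np.ndarray:
    """Unsigned distance from each query point to the sampled surface."""
    d = np.linalg.norm(queries[:, None, :] - surface_points[None, :, :], axis=-1)
    return d.min(axis=1)

if __name__ == "__main__":
    # Sample a unit sphere; an open patch would work just as well, since a UDF
    # (unlike an SDF) requires no inside/outside distinction.
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(5000, 3))
    sphere = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    q = rng.uniform(-1.5, 1.5, size=(4, 3))
    print(udf(q, sphere))   # roughly | ||q|| - 1 | for each query point
```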
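The $k$-homological features mentioned in the Dist2Cycle entry are derived from boundary matrices and Hodge Laplacians of a simplicial complex. A hollow triangle gives a small worked example; the complex and its edge orientations are an illustrative choice, not a construction from the paper.

```python
# Worked example: boundary matrix, Hodge Laplacian, and first Betti number
# of a hollow triangle (3 vertices, 3 oriented edges, no 2-simplex).
import numpy as np

# B1[v, e] = +1 if vertex v is the head of edge e, -1 if it is the tail.
B1 = np.array([
    [-1, -1,  0],   # vertex 0: tail of edges (0,1) and (0,2)
    [ 1,  0, -1],   # vertex 1: head of (0,1), tail of (1,2)
    [ 0,  1,  1],   # vertex 2: head of (0,2) and (1,2)
], dtype=float)

# With no 2-simplices, B2 is empty and the 1-Hodge Laplacian reduces to B1^T B1.
L1 = B1.T @ B1

# dim ker(L1) equals the first Betti number: the number of independent 1-cycles.
betti_1 = L1.shape[0] - np.linalg.matrix_rank(L1)
print(betti_1)   # 1 -- the hollow triangle encloses exactly one loop
```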
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.