A Set-based Approach for Feature Extraction of 3D CAD Models
- URL: http://arxiv.org/abs/2406.18543v1
- Date: Wed, 22 May 2024 05:43:46 GMT
- Title: A Set-based Approach for Feature Extraction of 3D CAD Models
- Authors: Peng Xu, Qi Gao, Ying-Jie Wu
- Abstract summary: This report presents a set-based feature extraction approach to address the uncertainty issue.
Unlike existing methods that seek accurate feature results, our approach aims to transform the uncertainty of geometric information into a set of feature subgraphs.
A feature extraction system is programmed using C++ and UG/Open to demonstrate the feasibility of our proposed approach.
- Score: 8.707056631060729
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature extraction is a critical technology to realize the automatic transmission of feature information throughout product life cycles. As CAD models primarily capture the 3D geometry of products, feature extraction heavily relies on geometric information. However, existing feature extraction methods often yield inaccurate outcomes due to the diverse interpretations of geometric information. This report presents a set-based feature extraction approach to address this uncertainty issue. Unlike existing methods that seek accurate feature results, our approach aims to transform the uncertainty of geometric information into a set of feature subgraphs. First, we define the convexity of basic geometric entities and introduce the concept of two-level attributed adjacency graphs. Second, a feature extraction workflow is designed to determine feature boundaries and identify feature subgraphs from CAD models. This set of feature subgraphs can be used for further feature recognition. A feature extraction system is programmed using C++ and UG/Open to demonstrate the feasibility of our proposed approach.
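The abstract's two key ingredients are a convexity label for basic geometric entities and a (two-level) attributed adjacency graph over the model's faces. The C++ sketch below is a minimal illustration, not the paper's UG/Open implementation: all type and function names are hypothetical, the triple-product rule is one common way to classify edge convexity (the paper's exact definition may differ), and concave-edge traversal is shown only as a typical way to seed candidate feature subgraphs.

```cpp
#include <cmath>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Convexity label attached to the edge shared by two adjacent faces.
enum class Convexity { Convex, Concave, Smooth };

struct Vec3 { double x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// One common convexity test (an assumption here, not necessarily the paper's
// rule): nA and nB are outward unit normals of the two faces sampled on the
// shared edge; t is the edge tangent oriented as in face A's outer loop.
// The sign of (nA x nB) . t separates convex from concave edges; nearly
// parallel normals are treated as smooth (tangent) edges.
Convexity classifyEdge(const Vec3& nA, const Vec3& nB, const Vec3& t,
                       double tol = 1e-6) {
    const double s = dot(cross(nA, nB), t);
    if (std::fabs(s) < tol) return Convexity::Smooth;
    return s > 0.0 ? Convexity::Convex : Convexity::Concave;
}

// A face of the B-rep model, identified by the tag assigned by the CAD kernel.
struct FaceNode {
    int tag = -1;
    std::string surfaceType;  // e.g. "plane", "cylinder"
};

// An attributed adjacency edge: two face tags plus the convexity label.
struct AdjacencyEdge {
    int faceA;
    int faceB;
    Convexity convexity;
};

// Minimal face-adjacency graph with convexity-attributed edges. A two-level
// variant could keep one graph for the whole part and one per candidate
// feature subgraph.
class AttributedAdjacencyGraph {
public:
    void addFace(const FaceNode& f) { faces_[f.tag] = f; }

    void addEdge(int a, int b, Convexity c) {
        edges_.push_back({a, b, c});
        neighbors_[a].push_back(edges_.size() - 1);
        neighbors_[b].push_back(edges_.size() - 1);
    }

    // Faces reachable from `seed` through concave edges only; concave-edge
    // connectivity is a common heuristic for grouping the faces of a
    // depression-type feature such as a pocket or slot.
    std::vector<int> concaveRegion(int seed) const {
        std::vector<int> region;
        std::vector<int> stack{seed};
        std::map<int, bool> seen;
        seen[seed] = true;
        while (!stack.empty()) {
            const int f = stack.back();
            stack.pop_back();
            region.push_back(f);
            const auto it = neighbors_.find(f);
            if (it == neighbors_.end()) continue;
            for (const std::size_t idx : it->second) {
                const AdjacencyEdge& e = edges_[idx];
                if (e.convexity != Convexity::Concave) continue;
                const int other = (e.faceA == f) ? e.faceB : e.faceA;
                if (!seen[other]) {
                    seen[other] = true;
                    stack.push_back(other);
                }
            }
        }
        return region;
    }

private:
    std::map<int, FaceNode> faces_;
    std::vector<AdjacencyEdge> edges_;
    std::map<int, std::vector<std::size_t>> neighbors_;
};
```

In a complete pipeline, the graph would be populated by iterating over the faces and edges exposed by the CAD kernel (UG/Open in the paper), after which the feature-boundary determination and subgraph identification described in the abstract would operate on it.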
Related papers
- 3D Neural Edge Reconstruction [61.10201396044153]
We introduce EMAP, a new method for learning 3D edge representations with a focus on both lines and curves.
Our method implicitly encodes 3D edge distance and direction in Unsigned Distance Functions (UDF) from multi-view edge maps.
On top of this neural representation, we propose an edge extraction algorithm that robustly abstracts 3D edges from the inferred edge points and their directions.
arXiv Detail & Related papers (2024-05-29T17:23:51Z)
- ParaPoint: Learning Global Free-Boundary Surface Parameterization of 3D Point Clouds [52.03819676074455]
ParaPoint is an unsupervised neural learning pipeline for achieving global free-boundary surface parameterization.
This work makes the first attempt to investigate neural point cloud parameterization that pursues both global mappings and free boundaries.
arXiv Detail & Related papers (2024-03-15T14:35:05Z)
- Back to 3D: Few-Shot 3D Keypoint Detection with Back-Projected 2D Features [64.39691149255717]
Keypoint detection on 3D shapes requires semantic and geometric awareness while demanding high localization accuracy.
We employ a keypoint candidate optimization module which aims to match the average observed distribution of keypoints on the shape.
The resulting approach achieves a new state of the art for few-shot keypoint detection on the KeyPointNet dataset.
arXiv Detail & Related papers (2023-11-29T21:58:41Z)
- Human as Points: Explicit Point-based 3D Human Reconstruction from Single-view RGB Images [78.56114271538061]
We introduce an explicit point-based human reconstruction framework called HaP.
Our approach is featured by fully-explicit point cloud estimation, manipulation, generation, and refinement in the 3D geometric space.
Our results may indicate a paradigm rollback to the fully-explicit and geometry-centric algorithm design.
arXiv Detail & Related papers (2023-11-06T05:52:29Z)
- Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird's Eye View [44.78243406441798]
This paper focuses on leveraging geometry information, such as depth, to model such feature transformation.
We first lift the 2D image features to the 3D space defined for the ego vehicle via a predicted parametric depth distribution for each pixel in each view.
We then aggregate the 3D feature volume based on the 3D space occupancy derived from depth to the BEV frame.
arXiv Detail & Related papers (2023-07-09T06:07:22Z)
- OriCon3D: Effective 3D Object Detection using Orientation and Confidence [0.0]
We propose an advanced methodology for the detection of 3D objects from a single image.
We use a deep convolutional neural network-based 3D object weighted orientation regression paradigm.
Our approach significantly improves the accuracy of 3D object pose determination, surpassing baseline methodologies.
arXiv Detail & Related papers (2023-04-27T19:52:47Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream GraphConvolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- DEF: Deep Estimation of Sharp Geometric Features in 3D Shapes [43.853000396885626]
We propose a learning-based framework for predicting sharp geometric features in sampled 3D shapes.
By fusing the result of individual patches, we can process large 3D models, which are impossible to process for existing data-driven methods.
arXiv Detail & Related papers (2020-11-30T18:21:00Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We propose a primal-dual framework, drawn from the graph-neural-network literature, for triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- PAM:Point-wise Attention Module for 6D Object Pose Estimation [2.4815579733050153]
6D pose estimation refers to object recognition and estimation of 3D rotation and 3D translation.
Previous methods utilized depth information in the refinement process or were designed as a heterogeneous architecture for each data space to extract features.
This paper proposes a Point Attention Module that can efficiently extract powerful features from RGB-D data.
arXiv Detail & Related papers (2020-08-12T11:29:48Z)
- Geometric Attention for Prediction of Differential Properties in 3D Point Clouds [32.68259334785767]
In this study, we present a geometric attention mechanism that can provide such properties in a learnable fashion.
We establish the usefulness of the proposed technique with several experiments on the prediction of normal vectors and the extraction of feature lines.
arXiv Detail & Related papers (2020-07-06T07:40:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.