DEF: Deep Estimation of Sharp Geometric Features in 3D Shapes
- URL: http://arxiv.org/abs/2011.15081v1
- Date: Mon, 30 Nov 2020 18:21:00 GMT
- Title: DEF: Deep Estimation of Sharp Geometric Features in 3D Shapes
- Authors: Albert Matveev, Alexey Artemov, Ruslan Rakhimov, Gleb Bobrovskikh,
Daniele Panozzo, Denis Zorin, Evgeny Burnaev
- Abstract summary: We propose a learning-based framework for predicting sharp geometric features in sampled 3D shapes.
By fusing the results of individual patches, we can process large 3D models that existing data-driven methods cannot handle.
- Score: 43.853000396885626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sharp feature lines carry essential information about human-made objects,
enabling compact 3D shape representations and high-quality surface reconstruction,
and serving as a signal source for mesh processing. While extracting high-quality
lines from noisy and undersampled data is challenging for traditional methods,
deep learning-powered algorithms can leverage global and semantic information
from the training data to aid in the process. We propose Deep Estimators of
Features (DEFs), a learning-based framework for predicting sharp geometric
features in sampled 3D shapes. Unlike existing data-driven methods,
which reduce this problem to feature classification, we propose to regress a
scalar field representing the distance from point samples to the closest
feature line on local patches. By fusing the results of individual patches, we
can process large 3D models whose size and complexity make them impossible for
existing data-driven methods to handle. Extensive experimental evaluation of
DEFs, conducted on synthetic and real-world 3D shape datasets, suggests
advantages of our image- and point-based estimators over competing methods, as
well as improved noise robustness and scalability of our approach.
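A minimal sketch may help make the patch-based idea concrete. The snippet below (PyTorch; names such as PatchDEF and fuse_patches are illustrative assumptions, not the authors' released code or exact architecture) regresses a per-point distance-to-feature scalar on a local patch with a small PointNet-style network and fuses overlapping patch predictions by simple averaging; the paper's actual estimators and fusion procedure may differ.

# Sketch only: toy patch-wise distance-field regression and fusion, not the DEF reference implementation.
import torch
import torch.nn as nn

class PatchDEF(nn.Module):
    """Toy PointNet-style regressor: per-point distance to the nearest sharp feature line."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.pointwise = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Per-point features are concatenated with a max-pooled patch descriptor,
        # then mapped to one non-negative scalar (the distance field) per point.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),
        )

    def forward(self, patch_xyz: torch.Tensor) -> torch.Tensor:
        # patch_xyz: (B, N, 3) points of a local patch, centered and scaled to unit size
        feat = self.pointwise(patch_xyz)                    # (B, N, H)
        global_feat = feat.max(dim=1, keepdim=True).values  # (B, 1, H)
        joint = torch.cat([feat, global_feat.expand_as(feat)], dim=-1)
        return self.head(joint).squeeze(-1)                 # (B, N) distance-to-feature field

def fuse_patches(num_points, patch_indices, patch_preds):
    """Average per-point predictions from overlapping patches into one global field."""
    total = torch.zeros(num_points)
    count = torch.zeros(num_points)
    for idx, pred in zip(patch_indices, patch_preds):
        total.index_add_(0, idx, pred)
        count.index_add_(0, idx, torch.ones_like(pred))
    return total / count.clamp(min=1)  # points never covered by a patch stay at 0

if __name__ == "__main__":
    model = PatchDEF()
    points = torch.rand(10_000, 3)  # toy full point cloud
    patch_indices = [torch.randint(0, 10_000, (4096,)) for _ in range(2)]  # overlapping patches
    with torch.no_grad():
        patch_preds = [model(points[idx].unsqueeze(0)).squeeze(0) for idx in patch_indices]
    fused = fuse_patches(points.shape[0], patch_indices, patch_preds)
    print(fused.shape)  # torch.Size([10000])

Averaging overlapping predictions is the simplest possible fusion choice; it illustrates how patch-wise inference keeps memory bounded so that shapes far larger than a single network input can be processed.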
Related papers
- Learning Unsigned Distance Fields from Local Shape Functions for 3D Surface Reconstruction [42.840655419509346]
This paper presents a novel neural framework, LoSF-UDF, for reconstructing surfaces from 3D point clouds by leveraging local shape functions to learn UDFs.
We observe that 3D shapes manifest simple patterns within localized areas, prompting us to create a training dataset of point cloud patches.
Our approach learns features within a specific radius around each query point and utilizes an attention mechanism to focus on the crucial features for UDF estimation.
arXiv Detail & Related papers (2024-07-01T14:39:03Z)
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments robustly demonstrates our method's consistent superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- LISR: Learning Linear 3D Implicit Surface Representation Using Compactly Supported Radial Basis Functions [5.056545768004376]
Implicit 3D surface reconstruction of an object from its partial and noisy 3D point cloud scan is a classical geometry processing and 3D computer vision problem.
We propose a neural network architecture for learning the linear implicit shape representation of the 3D surface of an object.
The proposed approach achieves a better Chamfer distance than, and an F-score comparable to, the state-of-the-art approach on the benchmark dataset.
arXiv Detail & Related papers (2024-02-11T20:42:49Z)
- LASA: Instance Reconstruction from Real Scans using A Large-scale Aligned Shape Annotation Dataset [17.530432165466507]
We present a novel Cross-Modal Shape Reconstruction (DisCo) method and an Occupancy-Guided 3D Object Detection (OccGOD) method.
Our methods achieve state-of-the-art performance in both instance-level scene reconstruction and 3D object detection tasks.
arXiv Detail & Related papers (2023-12-19T18:50:10Z)
- Robust 3D Tracking with Quality-Aware Shape Completion [67.9748164949519]
We propose a synthetic target representation composed of dense, complete point clouds that precisely depict the target shape, obtained via shape completion, for robust 3D tracking.
Specifically, we design a voxelized 3D tracking framework with shape completion, in which we propose a quality-aware shape completion mechanism to alleviate the adverse effect of noisy historical predictions.
arXiv Detail & Related papers (2023-12-17T04:50:24Z)
- 3D Adversarial Augmentations for Robust Out-of-Domain Predictions [115.74319739738571]
We focus on improving the generalization to out-of-domain data.
We learn a set of vectors that deform the objects in an adversarial fashion.
We perform adversarial augmentation by applying the learned sample-independent vectors to the available objects when training a model.
arXiv Detail & Related papers (2023-08-29T17:58:55Z)
- Normal Transformer: Extracting Surface Geometry from LiDAR Points Enhanced by Visual Semantics [6.516912796655748]
This paper presents a technique for estimating surface normals from 3D point clouds and 2D colour images.
We have developed a transformer neural network that learns to exploit hybrid visual-semantic and 3D geometric information.
arXiv Detail & Related papers (2022-11-19T03:55:09Z)
- Secrets of 3D Implicit Object Shape Reconstruction in the Wild [92.5554695397653]
Reconstructing high-fidelity 3D objects from sparse, partial observation is crucial for various applications in computer vision, robotics, and graphics.
Recent neural implicit modeling methods show promising results on synthetic or dense datasets.
However, they perform poorly on real-world data that is sparse and noisy.
This paper analyzes the root cause of such deficient performance of a popular neural implicit model.
arXiv Detail & Related papers (2021-01-18T03:24:48Z)
- Exploring Deep 3D Spatial Encodings for Large-Scale 3D Scene Understanding [19.134536179555102]
We propose an alternative approach that overcomes the limitations of CNN-based approaches by encoding the spatial features of raw 3D point clouds into undirected graph models.
The proposed method achieves accuracy on par with the state of the art, with improved training time and model stability, indicating strong potential for further research.
arXiv Detail & Related papers (2020-11-29T12:56:19Z)
- Dense Non-Rigid Structure from Motion: A Manifold Viewpoint [162.88686222340962]
The Non-Rigid Structure-from-Motion (NRSfM) problem aims to recover the 3D geometry of a deforming object from its 2D feature correspondences across multiple frames.
We show that our approach significantly improves accuracy, scalability, and robustness against noise.
arXiv Detail & Related papers (2020-06-15T09:15:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.