SepicNet: Sharp Edges Recovery by Parametric Inference of Curves in 3D Shapes
- URL: http://arxiv.org/abs/2304.06531v1
- Date: Thu, 13 Apr 2023 13:37:21 GMT
- Title: SepicNet: Sharp Edges Recovery by Parametric Inference of Curves in 3D Shapes
- Authors: Kseniya Cherenkova, Elona Dupont, Anis Kacem, Ilya Arzhannikov, Gleb Gusev and Djamila Aouada
- Abstract summary: We introduce SepicNet, a novel deep network for the detection and parametrization of sharp edges in 3D shapes as primitive curves.
We develop an adaptive point cloud sampling technique that captures the sharp features better than uniform sampling.
- Score: 16.355677959323426
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: 3D scanning, as a technique for digitizing real-world objects into 3D
models, is used in many fields. While the quality of a 3D scan depends on the
technical characteristics of the scanner, a common drawback is the smoothing of
fine details, that is, the sharp edges of an object. We introduce SepicNet, a
novel deep network for the detection and parametrization of sharp edges in 3D
shapes as primitive curves. To make the network end-to-end trainable, we
formulate the curve fitting in a differentiable manner. We also develop an
adaptive point cloud sampling technique that captures sharp features better than
uniform sampling. Experiments were conducted on a newly introduced large-scale
dataset of 50k 3D scans, whose sharp edge annotations were extracted from their
parametric CAD models, and demonstrate a significant improvement over
state-of-the-art methods.
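The two technical ideas stated in the abstract, adaptive (curvature-weighted) sampling and differentiable curve fitting, can be pictured with a minimal sketch. The code below is not the authors' implementation: it assumes a PyTorch setting, hypothetical tensor shapes, and a straight-line primitive as the simplest curve type, with the fitting residual usable as a training loss.

```python
# Minimal sketch (not SepicNet's code): curvature-weighted sampling and a
# differentiable line fit, assuming PyTorch tensors of hypothetical shapes.
import torch

def adaptive_sample(points: torch.Tensor, curvature: torch.Tensor, n: int) -> torch.Tensor:
    """Sample n of N points with probability proportional to local curvature,
    so sharp regions are covered more densely than under uniform sampling.
    points: (N, 3); curvature: (N,) non-negative per-point curvature estimates."""
    probs = curvature + 1e-6                      # keep flat regions reachable
    idx = torch.multinomial(probs / probs.sum(), n, replacement=False)
    return points[idx]                            # (n, 3)

def fit_line_differentiable(points: torch.Tensor):
    """Least-squares 3D line fit (centroid + principal direction) via SVD,
    which is differentiable in PyTorch, so the residual can back-propagate
    into whatever produced `points` (e.g. per-edge point groups).
    points: (M, 3) points assigned to one edge."""
    centroid = points.mean(dim=0)
    centered = points - centroid
    # Principal direction = right singular vector with the largest singular value.
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    direction = vh[0]                             # (3,)
    # Residual: mean distance of each point to the fitted line.
    proj = (centered @ direction).unsqueeze(1) * direction
    residual = (centered - proj).norm(dim=1).mean()
    return centroid, direction, residual

# Usage with random stand-in data; the residual can be added to a training loss.
pts = torch.randn(1024, 3, requires_grad=True)
curv = torch.rand(1024)
sampled = adaptive_sample(pts.detach(), curv, 256)
_, _, loss = fit_line_differentiable(pts[:64])
loss.backward()
```

Because the SVD-based fit is closed-form and differentiable, the fitting residual can flow back into the network that groups points into edges, which is the sense in which such a pipeline is end-to-end trainable.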
Related papers
- MV2Cyl: Reconstructing 3D Extrusion Cylinders from Multi-View Images [13.255044855902408]
We present MV2Cyl, a novel method for reconstructing 3D extrusion cylinders from 2D multi-view images.
We achieve optimal reconstruction results with the best accuracy in 2D sketch and extrusion parameter estimation.
arXiv Detail & Related papers (2024-06-16T08:54:38Z)
- 3D Neural Edge Reconstruction [61.10201396044153]
We introduce EMAP, a new method for learning 3D edge representations with a focus on both lines and curves.
Our method implicitly encodes 3D edge distance and direction in Unsigned Distance Functions (UDF) from multi-view edge maps.
On top of this neural representation, we propose an edge extraction algorithm that robustly abstracts 3D edges from the inferred edge points and their directions.
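As a rough illustration of this representation (not EMAP's actual architecture; the layer sizes and head names below are assumptions), an edge field can be modeled as a small MLP that maps a 3D query point to an unsigned distance to the nearest edge and a unit edge direction:

```python
# Hedged sketch of an edge UDF field: maps a 3D point to (unsigned distance,
# unit direction). Sizes and names are assumptions, not EMAP's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeUDF(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.dist_head = nn.Linear(hidden, 1)   # unsigned distance to nearest edge
        self.dir_head = nn.Linear(hidden, 3)    # direction of that edge

    def forward(self, xyz: torch.Tensor):
        feat = self.backbone(xyz)                              # (B, hidden)
        dist = F.softplus(self.dist_head(feat)).squeeze(-1)    # non-negative distance
        direction = F.normalize(self.dir_head(feat), dim=-1)   # unit direction vectors
        return dist, direction

# Edge points can then be extracted where the predicted distance is near zero.
model = EdgeUDF()
queries = torch.rand(4096, 3)
dist, direction = model(queries)
edge_points = queries[dist < 0.01]
```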
arXiv Detail & Related papers (2024-05-29T17:23:51Z)
- Oriented-grid Encoder for 3D Implicit Representations [10.02138130221506]
This paper is the first to explicitly exploit 3D characteristics in 3D geometric encoders.
Our method achieves state-of-the-art results compared to prior techniques.
arXiv Detail & Related papers (2024-02-09T19:28:13Z)
- 3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features [70.50665869806188]
3DiffTection is a state-of-the-art method for 3D object detection from single images.
We fine-tune a diffusion model to perform novel view synthesis conditioned on a single image.
We further train the model on target data with detection supervision.
arXiv Detail & Related papers (2023-11-07T23:46:41Z)
- MvDeCor: Multi-view Dense Correspondence Learning for Fine-grained 3D Segmentation [91.6658845016214]
We propose to utilize self-supervised techniques in the 2D domain for fine-grained 3D shape segmentation tasks.
We render a 3D shape from multiple views, and set up a dense correspondence learning task within the contrastive learning framework.
As a result, the learned 2D representations are view-invariant and geometrically consistent.
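A minimal sketch of such a dense-correspondence objective, assuming per-pixel features from two rendered views with known pixel correspondences and an InfoNCE-style loss (the temperature and shapes below are assumptions, not the authors' settings):

```python
# Sketch of an InfoNCE-style loss over corresponding pixels of two rendered
# views. Feature extractors, shapes and the temperature are assumptions.
import torch
import torch.nn.functional as F

def dense_correspondence_loss(feat_a: torch.Tensor,
                              feat_b: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    """feat_a, feat_b: (P, D) features of P pixels that correspond across the
    two views (the same surface point rendered in both). Matching rows are
    positives; all other rows act as negatives."""
    a = F.normalize(feat_a, dim=-1)
    b = F.normalize(feat_b, dim=-1)
    logits = a @ b.t() / temperature              # (P, P) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric cross-entropy: each pixel should match its counterpart.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with random stand-in features for 512 corresponding pixels.
fa, fb = torch.randn(512, 64), torch.randn(512, 64)
loss = dense_correspondence_loss(fa, fb)
```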
arXiv Detail & Related papers (2022-08-18T00:48:15Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- 3D-to-2D Distillation for Indoor Scene Parsing [78.36781565047656]
We present a new approach that leverages 3D features extracted from a large-scale 3D data repository to enhance 2D features extracted from RGB images.
First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during training.
Second, we design a two-stage dimension normalization scheme to calibrate the 2D and 3D features for better integration.
Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data.
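A simplified sketch of the distillation and calibration idea, assuming aligned per-pixel student/teacher features and a plain per-dimension normalization (not the paper's exact two-stage scheme):

```python
# Sketch of 3D-to-2D feature distillation: a frozen 3D teacher supervises a 2D
# student to produce "simulated 3D features". The normalization choice and the
# tensor shapes are assumptions, not the paper's exact scheme.
import torch
import torch.nn.functional as F

def distillation_loss(feat_2d: torch.Tensor, feat_3d: torch.Tensor) -> torch.Tensor:
    """feat_2d: (N, D) student features at pixels; feat_3d: (N, D) teacher
    features from the pretrained 3D network, projected to the same pixels.
    A per-dimension normalization roughly calibrates the two feature spaces
    before the L2 match."""
    f2d = (feat_2d - feat_2d.mean(0)) / (feat_2d.std(0) + 1e-6)
    f3d = (feat_3d - feat_3d.mean(0)) / (feat_3d.std(0) + 1e-6)
    return F.mse_loss(f2d, f3d.detach())          # teacher is not updated

# Usage with stand-in features for 2048 pixel/point pairs.
loss = distillation_loss(torch.randn(2048, 96), torch.randn(2048, 96))
```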
arXiv Detail & Related papers (2021-04-06T02:22:24Z)
- DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes [54.239416488865565]
We propose a fast single-stage 3D object detection method for LIDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
We find that our proposed method outperforms the state of the art by 5% on object detection in ScanNet scenes and achieves top results by a 3.4% margin on the Waymo Open Dataset.
arXiv Detail & Related papers (2020-04-02T17:48:50Z)
- 3D Shape Segmentation with Geometric Deep Learning [2.512827436728378]
We propose a neural-network-based approach that produces 3D augmented views of the 3D shape, casting the overall segmentation as a set of sub-segmentation problems.
We validate our approach on 3D shapes from publicly available datasets and on real objects reconstructed using photogrammetry techniques.
arXiv Detail & Related papers (2020-02-02T14:11:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.