NEF: Neural Edge Fields for 3D Parametric Curve Reconstruction from
Multi-view Images
- URL: http://arxiv.org/abs/2303.07653v2
- Date: Thu, 16 Mar 2023 12:22:50 GMT
- Title: NEF: Neural Edge Fields for 3D Parametric Curve Reconstruction from
Multi-view Images
- Authors: Yunfan Ye, Renjiao Yi, Zhirui Gao, Chenyang Zhu, Zhiping Cai, Kai Xu
- Abstract summary: We study the problem of reconstructing 3D feature curves of an object from a set of calibrated multi-view images.
We learn a neural implicit field representing the density distribution of 3D edges, which we refer to as a Neural Edge Field (NEF).
NEF is optimized with a view-based rendering loss where a 2D edge map is rendered at a given view and is compared to the ground-truth edge map extracted from the image of that view.
- Score: 18.303674194874457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of reconstructing 3D feature curves of an object from a
set of calibrated multi-view images. To do so, we learn a neural implicit field
representing the density distribution of 3D edges which we refer to as Neural
Edge Field (NEF). Inspired by NeRF, NEF is optimized with a view-based
rendering loss where a 2D edge map is rendered at a given view and is compared
to the ground-truth edge map extracted from the image of that view. The
rendering-based differentiable optimization of NEF fully exploits 2D edge
detection, without needing supervision of 3D edges, a 3D geometric operator,
or cross-view edge correspondence. Several technical designs are devised to
ensure learning a range-limited and view-independent NEF for robust edge
extraction. The final parametric 3D curves are extracted from NEF with an
iterative optimization method. On our benchmark with synthetic data, we
demonstrate that NEF outperforms existing state-of-the-art methods on all
metrics. Project page: https://yunfan1202.github.io/NEF/.
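For intuition, the view-based rendering loss is NeRF-like: a scalar edge density is predicted at 3D points, integrated along camera rays into a 2D edge map, and compared against the edge map detected in the image at that view. Below is a minimal PyTorch sketch of this idea; the names (EdgeFieldMLP, render_edge_map), the sampling bounds, and the placeholder rays and targets are illustrative assumptions, not the authors' implementation, which additionally constrains the field to be range-limited and view-independent.
```python
# Minimal sketch of a NeRF-style rendering loss over an edge-density field.
# All names and shapes here are illustrative, not the authors' code.
import torch
import torch.nn as nn

class EdgeFieldMLP(nn.Module):
    """Maps a 3D point to a scalar edge density (view-independent by design)."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # Softplus keeps the predicted edge density non-negative.
        return nn.functional.softplus(self.net(xyz))

def render_edge_map(model, rays_o, rays_d, near=0.5, far=2.5, n_samples=64):
    """Volume-render an edge value per ray from the density field,
    analogous to NeRF's alpha compositing but without a color head."""
    t = torch.linspace(near, far, n_samples, device=rays_o.device)        # (S,)
    pts = rays_o[:, None, :] + rays_d[:, None, :] * t[None, :, None]      # (R, S, 3)
    sigma = model(pts.reshape(-1, 3)).reshape(pts.shape[:2])              # (R, S)
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma * delta)                               # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]                                                   # accumulated transmittance
    return (trans * alpha).sum(dim=-1)                                    # rendered edge value per ray

# One training step: compare the rendered edge map against the 2D edge map
# extracted from the image at the same view.
model = EdgeFieldMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

rays_o = torch.zeros(1024, 3)                                   # placeholder ray origins for one view
rays_d = nn.functional.normalize(torch.randn(1024, 3), dim=-1)  # placeholder ray directions
gt_edges = torch.rand(1024)                                     # placeholder 2D edge values in [0, 1]

optimizer.zero_grad()
pred_edges = render_edge_map(model, rays_o, rays_d)
loss = nn.functional.mse_loss(pred_edges, gt_edges)
loss.backward()
optimizer.step()
```
In the actual method, the rays would come from the calibrated cameras and the targets from a 2D edge detector run on the corresponding image; the parametric 3D curves are then fitted to the optimized field in a separate, iterative step.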
Related papers
- GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views [67.34073368933814]
We propose a generalizable Gaussian Splatting approach for high-resolution image rendering under a sparse-view camera setting.
We train our Gaussian parameter regression module on human-only data or human-scene data, jointly with a depth estimation module to lift 2D parameter maps to 3D space.
Experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving a superior rendering speed.
arXiv Detail & Related papers (2024-11-18T08:18:44Z) - EdgeGaussians -- 3D Edge Mapping via Gaussian Splatting [33.43750488033706]
State-of-the-art image-based methods learn a 3D edge point cloud and then fit 3D edges to it.
Our method explicitly learns the 3D edge points and their edge directions, hence bypassing the need for point sampling.
Results show that the proposed method produces edges as accurate and complete as the state-of-the-art while being an order of magnitude faster.
arXiv Detail & Related papers (2024-09-19T16:28:45Z) - MV2Cyl: Reconstructing 3D Extrusion Cylinders from Multi-View Images [13.255044855902408]
We present MV2Cyl, a novel method for reconstructing 3D extrusion cylinders from 2D multi-view images.
We achieve the best reconstruction accuracy in 2D sketch and extrude parameter estimation.
arXiv Detail & Related papers (2024-06-16T08:54:38Z) - 3D Neural Edge Reconstruction [61.10201396044153]
We introduce EMAP, a new method for learning 3D edge representations with a focus on both lines and curves.
Our method implicitly encodes 3D edge distance and direction in Unsigned Distance Functions (UDF) from multi-view edge maps.
On top of this neural representation, we propose an edge extraction algorithm that robustly abstracts 3D edges from the inferred edge points and their directions.
arXiv Detail & Related papers (2024-05-29T17:23:51Z) - ParaPoint: Learning Global Free-Boundary Surface Parameterization of 3D Point Clouds [52.03819676074455]
ParaPoint is an unsupervised neural learning pipeline for achieving global free-boundary surface parameterization.
This work makes the first attempt to investigate neural point cloud parameterization that pursues both global mappings and free boundaries.
arXiv Detail & Related papers (2024-03-15T14:35:05Z) - Depth-Regularized Optimization for 3D Gaussian Splatting in Few-Shot
Images [47.14713579719103]
We introduce a dense depth map as a geometry guide to mitigate overfitting.
The adjusted depth aids in the color-based optimization of 3D Gaussian splatting.
We verify the proposed method on the NeRF-LLFF dataset with varying numbers of few-shot input images.
arXiv Detail & Related papers (2023-11-22T13:53:04Z) - NEAT: Distilling 3D Wireframes from Neural Attraction Fields [52.90572335390092]
This paper studies the problem of structured 3D reconstruction with wireframes consisting of line segments and junctions.
NEAT jointly optimizes the neural fields and junctions from scratch, without relying on precomputed cross-view matching.
arXiv Detail & Related papers (2023-07-14T07:25:47Z) - High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z) - MvDeCor: Multi-view Dense Correspondence Learning for Fine-grained 3D
Segmentation [91.6658845016214]
We propose to utilize self-supervised techniques in the 2D domain for fine-grained 3D shape segmentation tasks.
We render a 3D shape from multiple views, and set up a dense correspondence learning task within the contrastive learning framework.
As a result, the learned 2D representations are view-invariant and geometrically consistent.
arXiv Detail & Related papers (2022-08-18T00:48:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.