Neural Vector Fields: Implicit Representation by Explicit Learning
- URL: http://arxiv.org/abs/2303.04341v2
- Date: Sat, 3 Jun 2023 07:54:13 GMT
- Title: Neural Vector Fields: Implicit Representation by Explicit Learning
- Authors: Xianghui Yang, Guosheng Lin, Zhenghao Chen, Luping Zhou
- Abstract summary: We propose a novel 3D representation method, Neural Vector Fields (NVF).
It not only adopts an explicit learning process to manipulate meshes directly, but also leverages the implicit representation of unsigned distance functions (UDFs).
Our method first predicts displacements from query points towards the surface and models shapes as Vector Fields.
- Score: 63.337294707047036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) are widely applied to today's 3D surface reconstruction tasks, and such methods can be divided into two categories: those that warp templates explicitly by moving vertices, and those that represent 3D surfaces implicitly as signed or unsigned distance functions. Taking advantage of both the advanced explicit learning process and the powerful representation ability of implicit functions, we propose a novel 3D representation method, Neural Vector Fields (NVF). It not only adopts the explicit learning process to manipulate meshes directly, but also leverages the implicit representation of unsigned distance functions (UDFs) to break the barriers in resolution and topology. Specifically, our method first predicts the displacements from query points towards the surface and models the shapes as Vector Fields. Rather than relying on network differentiation to obtain direction fields, as most existing UDF-based methods do, the produced vector fields encode both the distance and direction fields, mitigating the ambiguity at "ridge" points, so the calculation of direction fields is straightforward and differentiation-free. This differentiation-free characteristic further enables us to learn a shape codebook via Vector Quantization, which encodes cross-object priors, accelerates the training procedure, and boosts model generalization on cross-category reconstruction. Extensive experiments on surface reconstruction benchmarks show that our method outperforms state-of-the-art methods in different evaluation scenarios, including watertight vs. non-watertight shapes, category-specific vs. category-agnostic reconstruction, category-unseen reconstruction, and cross-domain reconstruction. Our code is released at
https://github.com/Wi-sc/NVF.
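
To make the abstract's core mechanism concrete, here is a minimal sketch, in PyTorch, of how a vector-field network of this kind can recover distance and direction without differentiation: the network predicts a displacement vector per query point, the unsigned distance is that vector's norm, and the direction is its normalization. The architecture, layer sizes, and feature conditioning below are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of the NVF idea: predict, for each 3D query point, the
# displacement vector to the nearest surface point. Distance and direction
# fields then fall out of that vector with no network differentiation.
import torch
import torch.nn as nn

class NVFSketch(nn.Module):
    def __init__(self, feat_dim=256, hidden=512):
        super().__init__()
        # MLP conditioned on a per-shape feature: (query, feature) -> displacement
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # 3D displacement toward the surface
        )

    def forward(self, queries, shape_feat):
        # queries: (N, 3) query points; shape_feat: (feat_dim,) shape code
        feat = shape_feat.expand(queries.shape[0], -1)
        disp = self.mlp(torch.cat([queries, feat], dim=-1))  # vector field
        dist = disp.norm(dim=-1, keepdim=True)               # unsigned distance
        direction = disp / dist.clamp_min(1e-8)              # unit direction, no autograd needed
        return disp, dist, direction

# Usage: moving queries by the predicted displacement projects them onto the surface.
model = NVFSketch()
queries = torch.rand(1024, 3) * 2 - 1
disp, dist, direction = model(queries, torch.randn(256))
surface_points = queries + disp
```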
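
The shape codebook learned via Vector Quantization can likewise be sketched. The snippet below shows a standard VQ-VAE-style lookup (nearest codebook entry plus a straight-through estimator); the codebook size, loss weights, and straight-through trick are common VQ choices assumed here, not details taken from the paper.

```python
# Hedged sketch of a vector-quantized shape codebook: continuous shape
# features are snapped to their nearest codebook entries, which can encode
# cross-object priors shared across the training set.
import torch
import torch.nn as nn

class ShapeCodebook(nn.Module):
    def __init__(self, num_codes=512, code_dim=256):
        super().__init__()
        self.codes = nn.Embedding(num_codes, code_dim)

    def forward(self, z):
        # z: (B, code_dim) continuous shape features from an encoder
        d = torch.cdist(z, self.codes.weight)  # (B, num_codes) distances to codes
        idx = d.argmin(dim=-1)                 # index of nearest code per sample
        z_q = self.codes(idx)                  # quantized features, (B, code_dim)
        # VQ-VAE-style losses: pull codes toward encoder outputs and vice versa
        vq_loss = ((z_q - z.detach()) ** 2).mean() + 0.25 * ((z_q.detach() - z) ** 2).mean()
        # Straight-through estimator: gradients bypass the discrete argmin
        z_q = z + (z_q - z).detach()
        return z_q, vq_loss
```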
Related papers
- Split-and-Fit: Learning B-Reps via Structure-Aware Voronoi Partitioning [50.684254969269546]
We introduce a novel method for acquiring boundary representations (B-Reps) of 3D CAD models.
We apply a spatial partitioning to derive a single primitive within each partition.
We show that our network, coined NVD-Net for neural Voronoi diagrams, can effectively learn Voronoi partitions for CAD models from training data.
arXiv Detail & Related papers (2024-06-07T21:07:49Z)
- Neural Vector Fields: Generalizing Distance Vector Fields by Codebooks and Zero-Curl Regularization [73.3605319281966]
We propose a novel 3D representation, Neural Vector Fields (NVF), which adopts an explicit learning process to manipulate meshes and an implicit unsigned distance function (UDF) representation to break the barriers in resolution and topology.
We evaluate both NVFs on four surface reconstruction scenarios: watertight vs. non-watertight shapes, category-specific vs. category-agnostic reconstruction, category-unseen reconstruction, and cross-domain reconstruction.
arXiv Detail & Related papers (2023-09-04T10:42:56Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
- Neural Vector Fields for Implicit Surface Representation and Inference [73.25812045209001]
Implicit fields have recently shown increasing success in representing and learning 3D shapes accurately.
We develop a novel yet fundamental representation based on unit vectors in 3D space and call it Vector Field (VF).
We show the advantages of the VF representation in learning open, closed, or multi-layered surfaces, as well as piecewise planar surfaces.
arXiv Detail & Related papers (2022-04-13T17:53:34Z)
- DiGS: Divergence guided shape implicit neural representation for unoriented point clouds [36.60407995156801]
Shape implicit neural representations (INRs) have recently been shown to be effective in shape analysis and reconstruction tasks.
We propose a divergence-guided shape representation learning approach that does not require normal vectors as input.
arXiv Detail & Related papers (2021-06-21T02:10:03Z)
- Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks [118.20778308823779]
We present a novel 3D primitive representation that defines primitives using an Invertible Neural Network (INN).
Our model learns to parse 3D objects into semantically consistent part arrangements without any part-level supervision.
arXiv Detail & Related papers (2021-03-18T17:59:31Z)
- MeshSDF: Differentiable Iso-Surface Extraction [45.769838982991736]
We introduce a differentiable way to produce explicit surface mesh representations from Deep Signed Distance Functions.
Our key insight is that by reasoning about how implicit field perturbations impact local surface geometry, one can ultimately differentiate the 3D location of surface samples.
We exploit this to define MeshSDF, an end-to-end differentiable mesh representation which can vary its topology.
arXiv Detail & Related papers (2020-06-06T23:44:05Z)
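
The journal extension listed above adds a zero-curl regularization. Since a displacement field pointing to the nearest surface is, away from ridge points, the gradient of a scalar field and therefore curl-free, one plausible regularizer penalizes the antisymmetric part of the predicted field's Jacobian. The sketch below is a hedged guess at such a loss using autograd; it is not the paper's exact formulation.

```python
# Hedged sketch of a zero-curl penalty on a predicted 3D vector field:
# compute the Jacobian of the field via autograd and penalize its
# antisymmetric part (the curl), which vanishes for gradient fields.
import torch

def curl_penalty(field_fn, queries):
    # field_fn: maps (N, 3) points to (N, 3) vectors; queries: (N, 3)
    queries = queries.requires_grad_(True)
    v = field_fn(queries)
    rows = [torch.autograd.grad(v[:, i].sum(), queries, create_graph=True)[0]
            for i in range(3)]              # rows of the Jacobian, each (N, 3)
    J = torch.stack(rows, dim=1)            # J[n, i, j] = d v_i / d x_j
    curl = torch.stack([J[:, 2, 1] - J[:, 1, 2],
                        J[:, 0, 2] - J[:, 2, 0],
                        J[:, 1, 0] - J[:, 0, 1]], dim=-1)
    return (curl ** 2).sum(dim=-1).mean()

# Usage with the hypothetical NVF-style model sketched earlier:
#   loss = curl_penalty(lambda q: model(q, shape_feat)[0], torch.rand(256, 3))
```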
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.