NeuroGF: A Neural Representation for Fast Geodesic Distance and Path
Queries
- URL: http://arxiv.org/abs/2306.00658v3
- Date: Fri, 29 Sep 2023 02:38:59 GMT
- Title: NeuroGF: A Neural Representation for Fast Geodesic Distance and Path
Queries
- Authors: Qijian Zhang, Junhui Hou, Yohanes Yudhi Adikusuma, Wenping Wang, Ying
He
- Abstract summary: This paper presents the first attempt to represent geodesics on 3D mesh models using neural implicit functions.
Specifically, we introduce neural geodesic fields (NeuroGFs), which are learned to represent the all-pairs geodesics of a given mesh.
NeuroGFs exhibit exceptional performance in solving the single-source all-destination (SSAD) and point-to-point geodesics.
- Score: 77.04220651098723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Geodesics are essential in many geometry processing applications. However,
traditional algorithms for computing geodesic distances and paths on 3D mesh
models are often inefficient and slow. This makes them impractical for
scenarios that require extensive querying of arbitrary point-to-point
geodesics. Although neural implicit representations have emerged as a popular
way of representing 3D shape geometries, there is still no research on
representing geodesics with deep implicit functions. To bridge this gap, this
paper presents the first attempt to represent geodesics on 3D mesh models using
neural implicit functions. Specifically, we introduce neural geodesic fields
(NeuroGFs), which are learned to represent the all-pairs geodesics of a given
mesh. By using NeuroGFs, we can efficiently and accurately answer queries of
arbitrary point-to-point geodesic distances and paths, overcoming the
limitations of traditional algorithms. Evaluations on common 3D models show
that NeuroGFs exhibit exceptional performance in solving the single-source
all-destination (SSAD) and point-to-point geodesics, and achieve high accuracy
consistently. Besides, NeuroGFs also offer the unique advantage of encoding
both 3D geometry and geodesics in a unified representation. Moreover, we
further extend generalizable learning frameworks of NeuroGFs by adding shape
feature encoders, which also show satisfactory performances for unseen shapes
and categories. Code is made available at
https://github.com/keeganhk/NeuroGF/tree/master.
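The core idea of a neural geodesic field — a network that takes a pair of 3D surface points and returns their geodesic distance — can be illustrated with a minimal sketch. This is an untrained, numpy-only toy with hypothetical names (`TinyGeodesicField`, `distance`); it is not the authors' architecture, only an illustration of the query interface and of how symmetry (d(p, q) = d(q, p)) and non-negativity can be enforced by construction.

```python
import numpy as np

class TinyGeodesicField:
    """Minimal MLP sketch of a neural geodesic field: maps a pair of
    3D points (p, q) to an approximate geodesic distance.
    Untrained illustration; NeuroGF fits such a field per mesh."""

    def __init__(self, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        # Input is the concatenated pair (p, q), i.e. 6 coordinates.
        self.W1 = rng.standard_normal((6, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((hidden, 1)) * 0.1
        self.b2 = np.zeros(1)

    def _forward(self, x):
        h = np.maximum(self.W1.T @ x + self.b1, 0.0)        # ReLU hidden layer
        # Softplus output keeps the predicted distance non-negative.
        return float(np.log1p(np.exp(self.W2.T @ h + self.b2))[0])

    def distance(self, p, q):
        # Average the two query orders so d(p, q) == d(q, p) exactly.
        return 0.5 * (self._forward(np.concatenate([p, q]))
                      + self._forward(np.concatenate([q, p])))

field = TinyGeodesicField()
p = np.array([0.0, 0.0, 0.0])
q = np.array([1.0, 0.0, 0.0])
d = field.distance(p, q)    # one point-to-point query, O(1) after training
```

Once trained, every point-to-point query is a single forward pass, which is what makes extensive arbitrary-pair querying cheap compared with rerunning a mesh-based geodesic solver per source.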
Related papers
- Learning Geodesics of Geometric Shape Deformations From Images [4.802048897896533]
This paper presents a novel method, named geodesic deformable networks (GDN), that for the first time enables the learning of geodesic flows of deformation fields derived from images.
In particular, the capability of our proposed GDN to predict geodesics is important for quantifying and comparing deformable shapes presented in images.
arXiv Detail & Related papers (2024-10-24T14:49:59Z)
- GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation [65.33726478659304]
We introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach that predicts high-quality assets with 512k Gaussians from 21 input images using only 11 GB of GPU memory.
Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between 3D and 2D images.
GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms.
arXiv Detail & Related papers (2024-06-21T17:49:31Z)
- ParaPoint: Learning Global Free-Boundary Surface Parameterization of 3D Point Clouds [52.03819676074455]
ParaPoint is an unsupervised neural learning pipeline for achieving global free-boundary surface parameterization.
This work makes the first attempt to investigate neural point cloud parameterization that pursues both global mappings and free boundaries.
arXiv Detail & Related papers (2024-03-15T14:35:05Z)
- A new perspective on building efficient and expressive 3D equivariant graph neural networks [39.0445472718248]
We propose a hierarchy of 3D isomorphism to evaluate the expressive power of equivariant GNNs.
Our work leads to two crucial modules for designing expressive and efficient geometric GNNs.
To demonstrate the applicability of our theory, we propose LEFTNet which effectively implements these modules.
arXiv Detail & Related papers (2023-04-07T18:08:27Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves the state-of-the-art performance, especially when compared in the case of using only a few propagation steps.
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
- GeoNet++: Iterative Geometric Neural Network with Edge-Aware Refinement for Joint Depth and Surface Normal Estimation [204.13451624763735]
We propose a geometric neural network with edge-aware refinement (GeoNet++) to jointly predict both depth and surface normal maps from a single image.
GeoNet++ effectively predicts depth and surface normals with strong 3D consistency and sharp boundaries.
In contrast to current metrics that focus on evaluating pixel-wise error/accuracy, the proposed 3DGM metric measures whether the predicted depth can reconstruct high-quality 3D surface normals.
arXiv Detail & Related papers (2020-12-13T06:48:01Z)
- PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling [103.09504572409449]
We propose a novel deep neural network based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.