Learning the Geodesic Embedding with Graph Neural Networks
- URL: http://arxiv.org/abs/2309.05613v2
- Date: Wed, 4 Oct 2023 09:42:43 GMT
- Title: Learning the Geodesic Embedding with Graph Neural Networks
- Authors: Bo Pang, Zhongtian Zheng, Guoping Wang, Peng-Shuai Wang
- Abstract summary: We present GeGnn, a learning-based method for computing the approximate geodesic distance between two arbitrary points on discrete polyhedra surfaces.
Our key idea is to train a graph neural network to embed an input mesh into a high-dimensional embedding space.
We verify the efficiency and effectiveness of our method on ShapeNet and demonstrate that our method is faster than existing methods by orders of magnitude.
- Score: 22.49236293942187
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present GeGnn, a learning-based method for computing the approximate
geodesic distance between two arbitrary points on discrete polyhedra surfaces
with constant time complexity after fast precomputation. Previous related
methods either focus on computing the geodesic distance between a single
source and all destinations, which has at least linear complexity, or
require a long precomputation time. Our key idea is to train a graph
neural network to embed
an input mesh into a high-dimensional embedding space and compute the geodesic
distance between a pair of points using the corresponding embedding vectors and
a lightweight decoding function. To facilitate the learning of the embedding,
we propose novel graph convolution and graph pooling modules that incorporate
local geodesic information and are verified to be much more effective than
previous designs. After training, our method requires only one forward pass of
the network per mesh as precomputation. Then, we can compute the geodesic
distance between a pair of points using our decoding function, which requires
only several matrix multiplications and can be massively parallelized on GPUs.
We verify the efficiency and effectiveness of our method on ShapeNet and
demonstrate that our method is faster than existing methods by orders of
magnitude while achieving comparable or better accuracy. Additionally, our
method exhibits robustness on noisy and incomplete meshes and strong
generalization ability on out-of-distribution meshes. The code and pretrained
model can be found on https://github.com/IntelligentGeometry/GeGnn.
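The query step described in the abstract (a pair of per-vertex embedding vectors fed through a lightweight decoding function) can be sketched as follows. The embedding dimension, the symmetric combination, and the one-hidden-layer decoder are illustrative assumptions, not the authors' actual architecture or weights:

```python
import numpy as np

def query_geodesic(embeddings, i, j, W1, b1, w2, b2):
    """Approximate geodesic distance between vertices i and j.

    embeddings: (N, d) per-vertex matrix produced by one forward pass
    of the GNN (the precomputation step). W1, b1, w2, b2 are weights
    of a small decoder MLP with hypothetical shapes.
    """
    # Combine the two embedding vectors symmetrically so that
    # query(i, j) == query(j, i).
    pair = np.abs(embeddings[i] - embeddings[j])
    h = np.maximum(pair @ W1 + b1, 0.0)   # one hidden layer with ReLU
    return float(h @ w2 + b2)             # scalar distance estimate

# Toy usage with random weights (illustration only).
rng = np.random.default_rng(0)
E = rng.normal(size=(100, 8))             # 100 vertices, 8-dim embeddings
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=16), 0.0
d = query_geodesic(E, 3, 42, W1, b1, w2, b2)
```

Because each query is just a few matrix multiplications, batches of point pairs can be evaluated in parallel on a GPU, which is what gives the method its constant per-query cost after precomputation.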
Related papers
- GridPull: Towards Scalability in Learning Implicit Representations from
3D Point Clouds [60.27217859189727]
We propose GridPull to improve the efficiency of learning implicit representations from large scale point clouds.
Our novelty lies in the fast inference of a discrete distance field defined on grids without using any neural components.
We use uniform grids for a fast grid search to localize sampled queries, and organize surface points in a tree structure to speed up the calculation of distances to the surface.
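The uniform-grid localization idea above can be sketched with a simple spatial hash: each point lands in a cell keyed by its integer coordinates, and a query only inspects its own cell and the 26 neighbors. This is a generic illustration of the technique, not GridPull's implementation:

```python
import math

def build_grid(points, cell):
    """Hash 3D points into uniform grid cells for O(1) query localization."""
    grid = {}
    for idx, p in enumerate(points):
        key = tuple(math.floor(c / cell) for c in p)
        grid.setdefault(key, []).append(idx)
    return grid

def nearby(grid, q, cell):
    """Indices of points in the query's cell and its 26 neighbors."""
    cx, cy, cz = (math.floor(c / cell) for c in q)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                out.extend(grid.get((cx + dx, cy + dy, cz + dz), []))
    return out

# Points far apart fall in different cells, so distant points are skipped.
pts = [(0.1, 0.1, 0.1), (0.9, 0.9, 0.9), (5.0, 5.0, 5.0)]
g = build_grid(pts, 1.0)
hits = nearby(g, (0.2, 0.2, 0.2), 1.0)
```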
arXiv Detail & Related papers (2023-08-25T04:52:52Z)
- Efficient Graph Field Integrators Meet Point Clouds [59.27295475120132]
We present two new classes of algorithms for efficient field integration on graphs encoding point clouds.
The first class, SeparatorFactorization(SF), leverages the bounded genus of point cloud mesh graphs, while the second class, RFDiffusion(RFD), uses popular epsilon-nearest-neighbor graph representations for point clouds.
arXiv Detail & Related papers (2023-02-02T08:33:36Z)
- Learning Graph Search Heuristics [48.83557172525969]
We present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and training algorithm for discovering graph search and navigation heuristics from data.
Our heuristic function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph size, and can easily be incorporated into algorithms such as A* at test time.
Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by 58.5% on average.
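A learned distance estimator such as PHIL's plugs into A* simply as the heuristic callable. The sketch below is a generic A* implementation with a pluggable heuristic; the graph, weights, and the zero heuristic (which reduces A* to Dijkstra) are illustrative assumptions, not PHIL's architecture:

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """Generic A* search; `heuristic(node, goal)` may be any callable,
    e.g. a learned constant-time distance estimator."""
    frontier = [(heuristic(start, goal), 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph[node]:
            ng = g + w
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(frontier, (ng + heuristic(nbr, goal), ng, nbr))
    return None  # goal unreachable

# Toy weighted digraph; a zero heuristic degrades A* to Dijkstra.
graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 1), ("d", 5)],
    "c": [("d", 1)],
    "d": [],
}
dist = a_star(graph, "a", "d", lambda n, g: 0)
```

A tighter (but still admissible) heuristic prunes more of the frontier, which is the source of the reduction in explored nodes reported above.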
arXiv Detail & Related papers (2022-12-07T22:28:00Z)
- GraphWalks: Efficient Shape Agnostic Geodesic Shortest Path Estimation [93.60478281489243]
We propose a learnable network to approximate geodesic paths on 3D surfaces.
The proposed method provides efficient approximations of the shortest paths and geodesic distances estimations.
arXiv Detail & Related papers (2022-05-30T16:22:53Z)
- Multi-Robot Active Mapping via Neural Bipartite Graph Matching [49.72892929603187]
We study the problem of multi-robot active mapping, which aims for complete scene map construction in minimum time steps.
The key to this problem lies in the goal position estimation to enable more efficient robot movements.
We propose a novel algorithm, namely NeuralCoMapping, which takes advantage of both approaches.
arXiv Detail & Related papers (2022-03-30T14:03:17Z)
- Degree-Based Random Walk Approach for Graph Embedding [0.0]
A computationally less intensive, node-connectivity-aware uniform sampling method is proposed.
The advantages of the proposed algorithm become more pronounced when it is applied to large graphs.
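One plausible reading of connectivity-aware sampling is a random walk whose next step is weighted by neighbor degree. The sketch below illustrates that generic idea only; the exact sampling rule of the paper is not specified in the summary, so the degree weighting here is a labeled assumption:

```python
import random

def degree_weighted_walk(adj, start, length, rng=random):
    """One random walk where the next node is chosen with probability
    proportional to its degree (an assumed connectivity-aware rule,
    not necessarily the paper's exact sampler)."""
    walk = [start]
    node = start
    for _ in range(length):
        nbrs = adj[node]
        if not nbrs:
            break  # dead end
        weights = [len(adj[n]) for n in nbrs]  # neighbor degrees
        node = rng.choices(nbrs, weights=weights, k=1)[0]
        walk.append(node)
    return walk

# Toy undirected graph as an adjacency dict.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
w = degree_weighted_walk(adj, 0, 5, random.Random(1))
```

Such walks feed standard skip-gram-style embedding pipelines in place of uniform neighbor sampling.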
arXiv Detail & Related papers (2021-10-21T19:16:16Z)
- Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z)
- Symplectic Adjoint Method for Exact Gradient of Neural ODE with Minimal Memory [7.1975923901054575]
The backpropagation algorithm requires a memory footprint proportional to the number of uses times the network size.
In contrast, the adjoint method obtains a gradient by numerical integration backward in time with a minimal memory footprint.
This study proposes the symplectic adjoint method, which obtains the exact gradient with a footprint proportional to the number of uses plus the network size.
arXiv Detail & Related papers (2021-02-19T05:47:14Z)
- A Study of Performance of Optimal Transport [16.847501106437534]
We show that network simplex and augmenting path based algorithms can consistently outperform numerical matrix-scaling based methods.
We present a new algorithm that improves upon the classical Kuhn-Munkres algorithm.
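The "numerical matrix-scaling based methods" the study benchmarks against are typically Sinkhorn iterations for entropic optimal transport. The sketch below is a textbook Sinkhorn implementation for illustration, not code from the paper; the regularization strength and iteration count are arbitrary choices:

```python
import numpy as np

def sinkhorn(a, b, cost, eps=0.1, iters=200):
    """Entropic OT via Sinkhorn matrix scaling.

    a, b: source/target probability vectors; cost: pairwise cost matrix.
    Returns a transport plan whose marginals approximate a and b.
    """
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):           # alternate row/column scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Two points each; matching identical indices is free, swapping costs 1.
a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
cost = np.array([[0.0, 1.0], [1.0, 0.0]])
P = sinkhorn(a, b, cost)
```

Combinatorial solvers (network simplex, augmenting paths, Kuhn-Munkres) return exact plans, whereas this scaling scheme trades exactness for simple, parallelizable linear algebra, which is the trade-off the study measures.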
arXiv Detail & Related papers (2020-05-03T20:37:05Z)
- Shortest path distance approximation using deep learning techniques [0.43461794560295636]
We show that a feedforward neural network fed with embeddings can approximate distances with relatively low distortion error.
The suggested method is evaluated on the Facebook, BlogCatalog, Youtube and Flickr social networks.
arXiv Detail & Related papers (2020-02-12T21:59:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.