A*Net: A Scalable Path-based Reasoning Approach for Knowledge Graphs
- URL: http://arxiv.org/abs/2206.04798v5
- Date: Wed, 8 Nov 2023 21:56:56 GMT
- Title: A*Net: A Scalable Path-based Reasoning Approach for Knowledge Graphs
- Authors: Zhaocheng Zhu, Xinyu Yuan, Mikhail Galkin, Sophie Xhonneux, Ming
Zhang, Maxime Gazeau, Jian Tang
- Abstract summary: A*Net is a scalable path-based method for knowledge graph reasoning.
Inspired by the A* algorithm for shortest path problems, our A*Net learns a priority function to select important nodes and edges at each iteration.
- Score: 19.873384058276713
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reasoning on large-scale knowledge graphs has been long dominated by
embedding methods. While path-based methods possess the inductive capacity that
embeddings lack, their scalability is limited by the exponential number of
paths. Here we present A*Net, a scalable path-based method for knowledge graph
reasoning. Inspired by the A* algorithm for shortest path problems, our A*Net
learns a priority function to select important nodes and edges at each
iteration, to reduce time and memory footprint for both training and inference.
The ratio of selected nodes and edges can be specified to trade off between
performance and efficiency. Experiments on both transductive and inductive
knowledge graph reasoning benchmarks show that A*Net achieves competitive
performance with existing state-of-the-art path-based methods, while visiting
merely 10% of the nodes and 10% of the edges at each iteration. On the million-scale dataset
ogbl-wikikg2, A*Net not only achieves a new state-of-the-art result, but also
converges faster than embedding methods. A*Net is the first path-based method
for knowledge graph reasoning at such scale.
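To make the selection step concrete, the following is a minimal, hypothetical sketch (in PyTorch) of a learned priority function that keeps only a fraction of the nodes for propagation at each iteration. It is not the authors' implementation: the PriorityScorer module, the MLP scorer, the hard top-k selection, and the 10% ratio below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PriorityScorer(nn.Module):
    """Hypothetical priority function: scores how promising each node is
    for answering the query, given its current representation."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, node_repr, query_repr):
        # node_repr: [num_nodes, hidden_dim], query_repr: [hidden_dim]
        query = query_repr.expand_as(node_repr)
        return self.mlp(torch.cat([node_repr, query], dim=-1)).squeeze(-1)

def select_frontier(scores, ratio=0.1):
    """Keep only the top `ratio` fraction of nodes; the rest are skipped
    in this iteration, reducing the message-passing cost."""
    k = max(1, int(ratio * scores.numel()))
    mask = torch.zeros_like(scores, dtype=torch.bool)
    mask[torch.topk(scores, k).indices] = True
    return mask

# Toy usage: 1000 nodes, 64-dim representations, keep roughly 10% per iteration.
scorer = PriorityScorer(hidden_dim=64)
node_repr = torch.randn(1000, 64)
query_repr = torch.randn(64)
mask = select_frontier(scorer(node_repr, query_repr), ratio=0.1)
print(mask.sum().item(), "nodes selected for propagation")
```

A hard top-k like this is non-differentiable, so training such a selector end-to-end needs extra care (the paper describes how its priority function is learned); the sketch only illustrates the inference-time pruning effect of the node/edge ratio.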
Related papers
- Probabilistic Routing for Graph-Based Approximate Nearest Neighbor Search [3.934351369702082]
Approximate nearest neighbor search (ANNS) in high-dimensional spaces is a pivotal challenge in the field of machine learning.
This paper introduces a method that offers a probabilistic guarantee when exploring a node's neighbors in the graph.
We then introduce PEOs, a novel approach that efficiently identifies which neighbors in the graph should be considered for exact distance calculation.
arXiv Detail & Related papers (2024-02-17T18:08:37Z) - Distance-Based Propagation for Efficient Knowledge Graph Reasoning [43.138409280069204]
Knowledge graph completion (KGC) aims to predict unseen edges in knowledge graphs (KGs)
A new class of methods has been proposed to tackle this problem by aggregating path information.
A new method, TAGNet, is able to propagate information efficiently.
arXiv Detail & Related papers (2023-11-02T06:37:46Z) - Learning Graph Search Heuristics [48.83557172525969]
We present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and a training algorithm for discovering graph search and navigation heuristics from data.
Our function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph size, and can be easily incorporated into an algorithm such as A* at test time (see the sketch after this list).
Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by 58.5% on average.
arXiv Detail & Related papers (2022-12-07T22:28:00Z) - GraphWalks: Efficient Shape Agnostic Geodesic Shortest Path Estimation [93.60478281489243]
We propose a learnable network to approximate geodesic paths on 3D surfaces.
The proposed method provides efficient approximations of shortest paths and geodesic distances.
arXiv Detail & Related papers (2022-05-30T16:22:53Z) - DOTIN: Dropping Task-Irrelevant Nodes for GNNs [119.17997089267124]
Recent graph learning approaches have introduced pooling strategies to reduce the size of graphs for learning.
We design a new approach called DOTIN (Dropping Task-Irrelevant Nodes) to reduce the size of graphs.
Our method speeds up GAT by about 50% on graph-level tasks including graph classification and graph edit distance.
arXiv Detail & Related papers (2022-04-28T12:00:39Z) - Road Extraction from Overhead Images with Graph Neural Networks [18.649284163019516]
We propose a method that directly infers the final road graph in a single pass.
The key idea consists in combining a Fully Convolutional Network in charge of locating points of interest and a Graph Neural Network which predicts links between these points.
We evaluate our method against existing works on the popular RoadTracer dataset and achieve competitive results.
arXiv Detail & Related papers (2021-12-09T21:10:27Z) - Learnable Graph Matching: Incorporating Graph Partitioning with Deep
Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of the Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z) - Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural
Architecture Search [60.965024145243596]
One-shot weight sharing methods have recently drawn great attention in neural architecture search due to high efficiency and competitive performance.
To alleviate this problem, we present a simple yet effective architecture distillation method.
We introduce the concept of prioritized path, which refers to the architecture candidates exhibiting superior performance during training.
Since the prioritized paths are changed on the fly depending on their performance and complexity, the final obtained paths are the cream of the crop.
arXiv Detail & Related papers (2020-10-29T17:55:05Z) - Scalable Graph Neural Networks via Bidirectional Propagation [89.70835710988395]
Graph Neural Networks (GNNs) are an emerging field for learning on non-Euclidean data.
This paper presents GBP, a scalable GNN that utilizes a localized bidirectional propagation process from both the feature vectors and the training/testing nodes.
An empirical study demonstrates that GBP achieves state-of-the-art performance with significantly less training/testing time.
arXiv Detail & Related papers (2020-10-29T08:55:33Z) - Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding is a way to transform and encode graph-structured data in a high-dimensional, non-Euclidean feature space.
CensNet is a general graph embedding framework, which embeds both nodes and edges to a latent feature space.
Our approach achieves or matches the state-of-the-art performance in four graph learning tasks.
arXiv Detail & Related papers (2020-10-25T22:39:31Z) - Shortest path distance approximation using deep learning techniques [0.43461794560295636]
We show that a feedforward neural network fed with node embeddings can approximate shortest-path distances with relatively low distortion error.
The suggested method is evaluated on the Facebook, BlogCatalog, Youtube and Flickr social networks.
arXiv Detail & Related papers (2020-02-12T21:59:25Z)
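Two of the entries above, PHIL and the shortest-path distance approximation work, share the idea of plugging a learned distance estimate into classical search. Below is a minimal, hypothetical sketch of that pattern: plain A*, where the `heuristic` argument could be any learned estimator (e.g. an MLP over node embeddings). The toy graph and the zero heuristic are placeholders, not code from either paper.

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """A* search where `heuristic(node, goal)` may be a learned distance
    estimator instead of a hand-crafted one. `graph[u]` is an iterable of
    (neighbor, edge_cost) pairs."""
    frontier = [(heuristic(start, goal), 0.0, start)]
    best_cost = {start: 0.0}
    parent = {start: None}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1], cost
        if cost > best_cost.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, edge_cost in graph.get(node, ()):
            new_cost = cost + edge_cost
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                parent[nxt] = node
                heapq.heappush(
                    frontier, (new_cost + heuristic(nxt, goal), new_cost, nxt)
                )
    return None, float("inf")

# Toy usage; a learned model would replace the zero heuristic.
toy_graph = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 1.0)], "c": []}
print(a_star(toy_graph, "a", "c", heuristic=lambda n, g: 0.0))
```

With a learned (and possibly inadmissible) heuristic, A* trades optimality guarantees for fewer expanded nodes, which is the effect PHIL reports on its benchmarks.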
This list is automatically generated from the titles and abstracts of the papers in this site.