Learning from A Single Graph is All You Need for Near-Shortest Path
Routing in Wireless Networks
- URL: http://arxiv.org/abs/2308.09829v1
- Date: Fri, 18 Aug 2023 21:35:45 GMT
- Title: Learning from A Single Graph is All You Need for Near-Shortest Path
Routing in Wireless Networks
- Authors: Yung-Fu Chen, Sen Lin, Anish Arora
- Abstract summary: We propose a learning algorithm for local routing policies that needs only a few data samples obtained from a single graph while generalizing to all random graphs in a standard model of wireless networks.
We thus solve the all-pairs near-shortest path problem by training deep neural networks (DNNs) that efficiently and scalably learn routing policies that are local, i.e., they only consider node states and the states of neighboring nodes.
- Score: 8.22181379329857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a learning algorithm for local routing policies that needs only a
few data samples obtained from a single graph while generalizing to all random
graphs in a standard model of wireless networks. We thus solve the all-pairs
near-shortest path problem by training deep neural networks (DNNs) that
efficiently and scalably learn routing policies that are local, i.e., they only
consider node states and the states of neighboring nodes. Remarkably, one of
these DNNs we train learns a policy that exactly matches the performance of
greedy forwarding; another generally outperforms greedy forwarding. Our
algorithm design exploits network domain knowledge in several ways: First, in
the selection of input features and, second, in the selection of a ``seed
graph'' and subsamples from its shortest paths. Leveraging domain
knowledge also provides a theoretical explanation of why the seed graph and
node subsampling suffice for learning that is efficient, scalable, and
generalizable. Simulation-based results on uniform random graphs with diverse
sizes and densities empirically corroborate that training on samples generated
from a few routing paths in a modest-sized seed graph quickly yields a model
that generalizes across (almost) all random graphs in the wireless network model.
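The greedy-forwarding baseline that the learned policies are compared against can be sketched as follows. This is a minimal illustration of standard geographic greedy forwarding, not the paper's DNN policy; the node positions, adjacency structure, and function name are assumptions for the example.

```python
import math

def greedy_forward(pos, adj, src, dst):
    """Greedy geographic forwarding: at each hop, forward to the neighbor
    closest (in Euclidean distance) to the destination; stop if no neighbor
    is closer than the current node (a local minimum / routing void)."""
    def dist(a, b):
        return math.hypot(pos[a][0] - pos[b][0], pos[a][1] - pos[b][1])

    path = [src]
    cur = src
    while cur != dst:
        # pick the neighbor geographically closest to the destination
        nxt = min(adj[cur], key=lambda v: dist(v, dst), default=None)
        if nxt is None or dist(nxt, dst) >= dist(cur, dst):
            return path, False  # stuck at a local minimum: delivery failed
        cur = nxt
        path.append(cur)
    return path, True
```

A policy that is purely local in this sense only ever inspects the current node's state and its neighbors' states, which is exactly the locality property the paper's learned routing policies preserve.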
Related papers
- Knowledge-Guided Machine Learning for Stabilizing Near-Shortest Path Routing [3.595536209220219]
We propose a simple algorithm that needs only a few data samples from a single graph for learning local routing policies.
We learn a new policy, which we call Greedy Tensile routing, that almost always outperforms greedy forwarding.
We demonstrate the explainability and ultra-low latency run-time operation of Greedy Tensile routing.
arXiv Detail & Related papers (2025-09-08T12:56:42Z) - Graph neural networks extrapolate out-of-distribution for shortest paths [13.300757448796361]
Graph Neural Networks (GNNs) are trained to minimize a sparsity-regularized loss over a small set of shortest path instances.
We show that GNNs trained by gradient descent are able to minimize this loss and extrapolate in practice.
arXiv Detail & Related papers (2025-03-24T21:52:05Z) - DiRW: Path-Aware Digraph Learning for Heterophily [34.32328516545247]
Graph neural networks (GNNs) have emerged as powerful representation learning tools for graph-structured data.
DiRW is a plug-and-play strategy for most spatial-based DiGNNs and also an innovative model offering a new digraph learning paradigm.
Experiments on 9 datasets demonstrate that DiRW (1) enhances most spatial-based methods as a plug-and-play strategy and (2) achieves SOTA performance as a new digraph learning paradigm.
arXiv Detail & Related papers (2024-10-14T09:26:56Z) - A Local Graph Limits Perspective on Sampling-Based GNNs [7.601210044390688]
We propose a theoretical framework for training Graph Neural Networks (GNNs) on large input graphs via training on small, fixed-size sampled subgraphs.
We prove that parameters learned from training sampling-based GNNs on small samples of a large input graph are within an $\epsilon$-neighborhood of the outcome of training the same architecture on the whole graph.
arXiv Detail & Related papers (2023-10-17T02:58:49Z) - GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks [2.4175455407547015]
Graph neural networks learn to represent nodes by aggregating information from their neighbors, which becomes costly on large graphs.
Several existing methods address this by sampling a small subset of nodes, scaling GNNs to much larger graphs.
We introduce GRAPES, an adaptive sampling method that learns to identify the set of nodes crucial for training a GNN.
arXiv Detail & Related papers (2023-10-05T09:08:47Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
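The Gumbel-Softmax trick underlying NodeFormer's operator can be illustrated in a few lines. This sketch shows only the standard relaxation (Gumbel noise plus temperature-scaled softmax), not the paper's kernelized variant; the function name and interface are assumptions for the example.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Gumbel-Softmax relaxation: add Gumbel(0, 1) noise to the logits and
    apply a temperature-scaled softmax, giving a differentiable approximation
    to sampling a one-hot category (tau -> 0 recovers hard sampling)."""
    rng = rng if rng is not None else np.random.default_rng()
    # Gumbel(0, 1) samples via the inverse-CDF: -log(-log(U)), U ~ Uniform(0, 1)
    g = -np.log(-np.log(rng.uniform(1e-12, 1.0, size=logits.shape)))
    y = (logits + g) / tau
    y = y - y.max()  # subtract max for numerical stability
    e = np.exp(y)
    return e / e.sum()
```

The relaxation is what lets a model sample discrete structure (here, which nodes exchange messages) while still propagating gradients through the sampling step.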
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - A Robust Stacking Framework for Training Deep Graph Models with
Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
The best models for IID (non-graph) data in standard supervised learning settings are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
arXiv Detail & Related papers (2022-06-16T22:46:33Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - RaWaNet: Enriching Graph Neural Network Input via Random Walks on Graphs [0.0]
Graph neural networks (GNNs) have gained increasing popularity and have shown very promising results for data that are represented by graphs.
We propose random walk data processing of the graphs based on three selected lengths: (regular) walks of length 1 and 2, and a fractional walk of length $\gamma \in (0,1)$, in order to capture the different local and global dynamics on the graphs.
We test our method on various molecular datasets by passing the processed node features to the network in order to perform several classification and regression tasks.
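One way to realize multi-length walk aggregation is sketched below. This is an assumed interpretation of the idea, not the authors' exact operator: length-1 and length-2 walks use powers of the adjacency matrix, and the fractional length $\gamma$ is computed via an eigendecomposition of the symmetric adjacency matrix, clipping eigenvalues at zero so the fractional power stays real.

```python
import numpy as np

def random_walk_features(A, X, gamma=0.5):
    """Aggregate node features X with walks of length 1 (A), length 2 (A @ A),
    and a fractional length gamma in (0, 1) via A^gamma = V diag(w^gamma) V^T,
    with negative eigenvalues clipped to zero to keep the result real."""
    A = np.asarray(A, dtype=float)
    w, V = np.linalg.eigh(A)  # A symmetric: A = V diag(w) V^T
    A_gamma = V @ np.diag(np.clip(w, 0.0, None) ** gamma) @ V.T
    A2 = A @ A                # counts all walks of length 2
    # concatenate the three aggregated views as enriched node features
    return np.hstack([A @ X, A2 @ X, A_gamma @ X])
```

The enriched features can then be passed to any downstream network, mirroring the evaluation setup described above.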
arXiv Detail & Related papers (2021-09-15T20:04:01Z) - Scalable Graph Neural Network Training: The Case for Sampling [4.9201378771958675]
Graph Neural Networks (GNNs) are a new and increasingly popular family of deep neural network architectures to perform learning on graphs.
Training them efficiently is challenging due to the irregular nature of graph data.
Two different approaches have emerged in the literature: whole-graph and sample-based training.
arXiv Detail & Related papers (2021-05-05T20:44:10Z) - Learning Graph Neural Networks with Positive and Unlabeled Nodes [34.903471348798725]
Graph neural networks (GNNs) are important tools for transductive learning tasks, such as node classification in graphs.
Most GNN models aggregate information from short distances in each round and fail to capture long-distance relationships in graphs.
In this paper, we propose a novel graph neural network framework, long-short distance aggregation networks (LSDAN) to overcome these limitations.
arXiv Detail & Related papers (2021-03-08T11:43:37Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Analyzing Neural Networks Based on Random Graphs [77.34726150561087]
We perform a massive evaluation of neural networks with architectures corresponding to random graphs of various types.
We find that no classical numerical graph invariant by itself suffices to single out the best networks.
We also find that networks with primarily short-range connections perform better than networks which allow for many long-range connections.
arXiv Detail & Related papers (2020-02-19T11:04:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.