Random Projection Forest Initialization for Graph Convolutional Networks
- URL: http://arxiv.org/abs/2302.12001v2
- Date: Sun, 22 Oct 2023 05:43:36 GMT
- Title: Random Projection Forest Initialization for Graph Convolutional Networks
- Authors: Mashaan Alshammari, John Stavrakakis, Adel F. Ahmed, Masahiro
Takatsuka
- Abstract summary: Graph convolutional networks (GCNs) were a great step towards extending deep learning to unstructured data such as graphs.
We present a new way to construct the graph and initialize the GCN using a random projection forest (rpForest).
rpForest enables us to assign varying weights to edges, indicating varying importance, which enhances learning.
- Score: 0.6554326244334866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph convolutional networks (GCNs) were a great step towards extending deep
learning to unstructured data such as graphs. But GCNs still need a constructed
graph to work with. To solve this problem, classical graph constructions such as
the $k$-nearest neighbor ($k$-nn) graph are usually used to initialize the GCN.
Although it is computationally efficient to construct $k$-nn graphs, the
constructed graph might not be very useful for learning. In a $k$-nn graph,
points are restricted to a fixed number of edges, and all edges in the graph
have equal weights. We present a new way to construct the graph and initialize
the GCN. It is based on the random projection forest (rpForest). rpForest
enables us to assign varying weights to edges, indicating varying importance,
which enhances learning. The number of trees is a hyperparameter in rpForest.
We performed spectral analysis to help us set this parameter in the right
range. In our experiments, initializing the GCN using rpForest provides better
results compared to $k$-nn initialization.
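To make the contrast concrete, below is a minimal sketch (an illustration, not the authors' implementation) of the two initializations: a $k$-nn graph, where every point is restricted to $k$ unit-weight edges, versus an rpForest-style graph, where each of several random-projection trees partitions the points and an edge's weight counts how often two points share a leaf. The leaf-counting weight rule and all parameter values are assumptions made for this sketch.

```python
import numpy as np

def knn_graph(X, k=5):
    """Binary k-nn adjacency: every point is restricted to k unit-weight edges."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    nbrs = np.argsort(D, axis=1)[:, :k]
    A = np.zeros((len(X), len(X)))
    A[np.repeat(np.arange(len(X)), k), nbrs.ravel()] = 1.0
    return np.maximum(A, A.T)                     # symmetrize

def rp_tree_leaves(X, idx, leaf_size, rng):
    """Recursively split points by a random hyperplane; return leaf index sets."""
    if len(idx) <= leaf_size:
        return [idx]
    direction = rng.normal(size=X.shape[1])       # random projection direction
    proj = X[idx] @ direction
    left = idx[proj <= np.median(proj)]
    right = idx[proj > np.median(proj)]
    if len(left) == 0 or len(right) == 0:         # degenerate split: stop here
        return [idx]
    return (rp_tree_leaves(X, left, leaf_size, rng)
            + rp_tree_leaves(X, right, leaf_size, rng))

def rpforest_graph(X, n_trees=20, leaf_size=10, seed=0):
    """Weighted adjacency: w_ij counts the trees whose leaf contains both i and
    j, so edges carry varying weights indicating varying importance."""
    rng = np.random.default_rng(seed)
    W = np.zeros((len(X), len(X)))
    for _ in range(n_trees):                      # n_trees is the hyperparameter
        for leaf in rp_tree_leaves(X, np.arange(len(X)), leaf_size, rng):
            for a in leaf:
                W[a, leaf] += 1.0
    np.fill_diagonal(W, 0.0)
    return W / n_trees                            # weights in (0, 1]

X = np.random.default_rng(1).normal(size=(100, 5))
A_knn, W_rpf = knn_graph(X), rpforest_graph(X)
```

Either matrix can then serve as the adjacency fed to a GCN; the rpForest version simply replaces the binary entries with graded weights.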
Related papers
- Graph Construction using Principal Axis Trees for Simple Graph
Convolution [0.6554326244334866]
We introduce a graph construction scheme that constructs the adjacency matrix $A$ using unsupervised and supervised information.
We tested this graph construction scheme on two well-known Graph Neural Networks (GNNs).
arXiv Detail & Related papers (2023-02-22T12:02:23Z)
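A minimal sketch of the idea in the title: split the data recursively along the leading principal axis, then connect points that end up in the same leaf. Connecting same-leaf pairs, the leaf size, and the omission of the paper's supervised information are simplifying assumptions of this illustration.

```python
import numpy as np

def principal_axis_leaves(X, idx, leaf_size=20):
    """Split points at the median of their projection onto the leading
    principal axis (first right-singular vector); recurse to leaves."""
    if len(idx) <= leaf_size:
        return [idx]
    Xc = X[idx] - X[idx].mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[0]                              # project onto principal axis
    left = idx[proj <= np.median(proj)]
    right = idx[proj > np.median(proj)]
    if len(left) == 0 or len(right) == 0:          # degenerate split: stop here
        return [idx]
    return (principal_axis_leaves(X, left, leaf_size)
            + principal_axis_leaves(X, right, leaf_size))

def pa_tree_adjacency(X, leaf_size=20):
    """Adjacency matrix A connecting every pair of points that share a leaf."""
    A = np.zeros((len(X), len(X)))
    for leaf in principal_axis_leaves(X, np.arange(len(X)), leaf_size):
        for a in leaf:
            A[a, leaf] = 1.0
    np.fill_diagonal(A, 0.0)
    return A
```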
- Bring Your Own View: Graph Neural Networks for Link Prediction with
Personalized Subgraph Selection [57.34881616131377]
We introduce a Personalized Subgraph Selector (PS2) as a plug-and-play framework to automatically, personally, and inductively identify optimal subgraphs for different edges.
PS2 is instantiated as a bi-level optimization problem that can be efficiently solved in a differentiable manner.
We suggest a brand-new angle on GNNLP training: first identify the optimal subgraphs for edges, and then focus on training the inference model using the sampled subgraphs.
arXiv Detail & Related papers (2022-12-23T17:30:19Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
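The graphon construction can be sketched with the standard $W$-random graph sampler: draw latent positions uniformly and connect each pair independently with probability given by the graphon. The example graphon and the size schedule below are illustrative assumptions; the training loop itself is omitted.

```python
import numpy as np

def sample_from_graphon(W, n, rng):
    """Sample an n-node graph: draw latents u_i ~ Uniform(0,1), then connect
    i and j independently with probability W(u_i, u_j)."""
    u = rng.uniform(size=n)
    P = W(u[:, None], u[None, :])                 # pairwise edge probabilities
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1)                             # one draw per pair, no loops
    return A + A.T                                # undirected adjacency

W = lambda x, y: 0.8 * np.exp(-3.0 * np.abs(x - y))   # an example graphon
rng = np.random.default_rng(0)
growing_graphs = [sample_from_graphon(W, n, rng) for n in (50, 100, 200, 400)]
# Training the same GNN along this growing sequence approaches, in the limit,
# training against the graphon itself.
```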
- FoSR: First-order spectral rewiring for addressing oversquashing in GNNs [0.0]
Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph.
We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph.
We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
arXiv Detail & Related papers (2022-10-21T07:58:03Z)
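The flavor of spectral rewiring can be sketched as greedy edge addition that widens the spectral gap of the normalized Laplacian. The brute-force scoring below is for illustration only and is not FoSR's method: FoSR scores candidate edges with a first-order approximation precisely to avoid this cost.

```python
import numpy as np

def spectral_gap(A):
    """Second-smallest eigenvalue of the symmetrically normalized Laplacian."""
    d = A.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(L))[1]

def greedy_spectral_rewire(A, n_new_edges=3):
    """Repeatedly add the non-edge that most increases the spectral gap."""
    A = A.copy()
    n = len(A)
    for _ in range(n_new_edges):
        base, best_gain, best_edge = spectral_gap(A), 0.0, None
        for i in range(n):
            for j in range(i + 1, n):
                if A[i, j] == 0:
                    A[i, j] = A[j, i] = 1          # tentatively add edge (i, j)
                    gain = spectral_gap(A) - base
                    A[i, j] = A[j, i] = 0          # undo
                    if gain > best_gain:
                        best_gain, best_edge = gain, (i, j)
        if best_edge is None:                      # no edge improves the gap
            break
        i, j = best_edge
        A[i, j] = A[j, i] = 1
    return A
```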
- DiP-GNN: Discriminative Pre-Training of Graph Neural Networks [49.19824331568713]
Graph neural network (GNN) pre-training methods have been proposed to enhance the power of GNNs.
One popular pre-training method is to mask out a proportion of the edges, and a GNN is trained to recover them.
In our framework, the graph seen by the discriminator better matches the original graph because the generator can recover a proportion of the masked edges.
arXiv Detail & Related papers (2022-09-15T17:41:50Z)
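The edge-masking step that both the vanilla and discriminative pre-training build on can be sketched directly; the GNN generator and discriminator themselves are omitted here, and the mask ratio is an assumed value.

```python
import numpy as np

def mask_edges(A, mask_ratio=0.15, seed=0):
    """Hide a proportion of the edges: return the corrupted adjacency that the
    generator sees, plus the held-out edges it must recover."""
    rng = np.random.default_rng(seed)
    i, j = np.triu_indices_from(A, k=1)
    edges = np.column_stack([i, j])[A[i, j] > 0]   # undirected edge list
    n_mask = max(1, int(mask_ratio * len(edges)))
    masked = edges[rng.choice(len(edges), size=n_mask, replace=False)]
    A_corrupt = A.copy()
    A_corrupt[masked[:, 0], masked[:, 1]] = 0
    A_corrupt[masked[:, 1], masked[:, 0]] = 0
    return A_corrupt, masked

# A generator GNN is trained to predict the held-out edges from A_corrupt; the
# discriminator then sees A_corrupt plus the generator's recovered edges, a
# graph that matches the original better than the corrupted one alone.
```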
- Graph Neural Network Bandits [89.31889875864599]
We consider the bandit optimization problem with the reward function defined over graph-structured data.
Key challenges in this setting are scaling to large domains and to graphs with many nodes.
We show that graph neural networks (GNNs) can be used to estimate the reward function.
arXiv Detail & Related papers (2022-07-13T18:12:36Z)
- AnchorGAE: General Data Clustering via $O(n)$ Bipartite Graph
Convolution [79.44066256794187]
We show how to convert a non-graph dataset into a graph by introducing the generative graph model, which is used to build graph convolution networks (GCNs).
A bipartite graph constructed by anchors is updated dynamically to exploit the high-level information behind data.
We theoretically prove that the simple update will lead to degeneration, and a specific strategy is accordingly designed.
arXiv Detail & Related papers (2021-11-12T07:08:13Z)
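A minimal sketch of an anchor-based bipartite graph (the anchor selection, kernel, and sparsity level below are assumptions, and the dynamic updating of the paper is omitted): connect each of the $n$ samples to its $s$ nearest anchors, so one convolution pass through the $n \times m$ bipartite matrix costs $O(nm)$ rather than $O(n^2)$.

```python
import numpy as np

def anchor_bipartite(X, n_anchors=32, s=5, seed=0):
    """Bipartite graph between n samples and m anchors: each sample keeps its
    s nearest anchors, weighted by a row-normalized Gaussian kernel."""
    rng = np.random.default_rng(seed)
    anchors = X[rng.choice(len(X), size=n_anchors, replace=False)]
    D = np.linalg.norm(X[:, None] - anchors[None, :], axis=-1)  # n x m distances
    B = np.exp(-D ** 2 / (2 * D.mean() ** 2))
    keep = np.argsort(D, axis=1)[:, :s]                         # s nearest anchors
    mask = np.zeros_like(B)
    np.put_along_axis(mask, keep, 1.0, axis=1)
    B = B * mask
    return B / B.sum(axis=1, keepdims=True)                     # row-stochastic

X = np.random.default_rng(1).normal(size=(500, 10))
B = anchor_bipartite(X)
H = B @ (B.T @ X)   # one pass through the anchors: O(n * m), not O(n^2)
```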
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood
Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- RaWaNet: Enriching Graph Neural Network Input via Random Walks on Graphs [0.0]
Graph neural networks (GNNs) have gained increasing popularity and have shown very promising results for data that are represented by graphs.
We propose a random walk data processing of the graphs based on three selected lengths: namely, (regular) walks of length 1 and 2, and a fractional walk of length $\gamma \in (0, 1)$, in order to capture the different local and global dynamics on the graphs.
We test our method on various molecular datasets by passing the processed node features to the network in order to perform several classification and regression tasks.
arXiv Detail & Related papers (2021-09-15T20:04:01Z)
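The three walk lengths can be sketched directly. This minimal illustration assumes a symmetrically normalized adjacency and a real-valued surrogate for the fractional matrix power (negative eigenvalues are handled with a signed power); the paper's exact operators may differ.

```python
import numpy as np

def walk_features(A, X, gamma=0.5):
    """Enrich node features X with walks of length 1, 2, and fractional gamma.
    The fractional power is taken through the eigendecomposition of the
    symmetrically normalized adjacency S."""
    d = A.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    S = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    w, V = np.linalg.eigh(S)
    # real surrogate for S^gamma: signed fractional power of each eigenvalue
    S_gamma = (V * (np.sign(w) * np.abs(w) ** gamma)) @ V.T
    return np.hstack([S @ X, S @ S @ X, S_gamma @ X])   # lengths 1, 2, gamma
```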
- Learning Bayesian Networks Under Sparsity Constraints: A Parameterized
Complexity Analysis [7.99536002595393]
We study the problem of learning the structure of an optimal Bayesian network when additional constraints are posed on the network or on its moralized graph.
We show that learning an optimal network with at most $k$ edges in the moralized graph presumably has no $f(k)\cdot |I|^{O(1)}$-time algorithm.
arXiv Detail & Related papers (2020-04-30T12:31:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.