Graph Construction using Principal Axis Trees for Simple Graph Convolution
- URL: http://arxiv.org/abs/2302.12000v3
- Date: Tue, 7 Nov 2023 09:00:26 GMT
- Title: Graph Construction using Principal Axis Trees for Simple Graph Convolution
- Authors: Mashaan Alshammari, John Stavrakakis, Adel F. Ahmed, Masahiro Takatsuka
- Abstract summary: We introduce a graph construction scheme that constructs the adjacency matrix $A$ using unsupervised and supervised information.
We tested this graph construction scheme on two well-known Graph Neural Networks (GNNs).
- Score: 0.6554326244334866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are increasingly becoming the method of
choice for graph learning. They exploit the semi-supervised nature of deep learning,
and they bypass computational bottlenecks associated with traditional graph
learning methods. In addition to the feature matrix $X$, GNNs need an adjacency
matrix $A$ to perform feature propagation. In many cases, the adjacency matrix
$A$ is missing. We introduce a graph construction scheme that constructs the
adjacency matrix $A$ using unsupervised and supervised information.
Unsupervised information characterizes the neighborhood around points. We used
Principal Axis trees (PA-trees) as a source for unsupervised information, where
we create edges between points falling onto the same leaf node. For supervised
information, we used the concept of penalty and intrinsic graphs. A penalty
graph connects points with different class labels, whereas an intrinsic graph
connects points with the same class labels. We used the penalty and intrinsic
graphs to remove or add edges to the graph constructed via PA-tree. We tested
this graph construction scheme on two well-known GNNs: 1) Graph Convolutional
Network (GCN) and 2) Simple Graph Convolution (SGC). The experiments show that
SGC is the better choice: it is faster and delivers results that match or
exceed those of GCN. We also tested the effect of oversmoothing on both GCN and
SGC, and found that the level of smoothing has to be selected carefully for SGC
to avoid oversmoothing.
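To make the scheme concrete, below is a minimal NumPy sketch of the two procedures the abstract describes: building the adjacency matrix $A$ from PA-tree leaves and refining it with penalty/intrinsic edges, followed by SGC's fixed K-step smoothing. This is an illustration, not the authors' implementation; the function names, the leaf_size parameter, the use of -1 to mark unlabeled points, and the all-pairs refinement over labeled points are assumptions made for this sketch.

    import numpy as np

    def pa_tree_leaves(X, idx=None, leaf_size=10):
        # Recursively split points along their principal axis and return
        # the index sets of the leaf nodes (a sketch of a PA-tree).
        if idx is None:
            idx = np.arange(X.shape[0])
        if len(idx) <= leaf_size:
            return [idx]
        Xc = X[idx] - X[idx].mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal axis = Vt[0]
        proj = Xc @ Vt[0]
        med = np.median(proj)
        left, right = idx[proj <= med], idx[proj > med]
        if len(left) == 0 or len(right) == 0:              # degenerate split: stop
            return [idx]
        return (pa_tree_leaves(X, left, leaf_size)
                + pa_tree_leaves(X, right, leaf_size))

    def build_adjacency(X, y=None, leaf_size=10):
        # Unsupervised step: connect every pair of points that falls into
        # the same PA-tree leaf. Supervised step: among labeled points,
        # remove penalty edges (different labels) and add intrinsic edges
        # (same label).
        n = X.shape[0]
        A = np.zeros((n, n))
        for leaf in pa_tree_leaves(X, leaf_size=leaf_size):
            for i in leaf:
                for j in leaf:
                    if i != j:
                        A[i, j] = 1.0
        if y is not None:
            labeled = np.flatnonzero(y >= 0)  # assumption: -1 marks unlabeled
            for i in labeled:
                for j in labeled:
                    if i != j:
                        A[i, j] = 1.0 if y[i] == y[j] else 0.0
        return A

    def sgc_propagate(A, X, K=2):
        # K-step SGC smoothing: S^K X, with S the symmetrically normalized
        # adjacency (self-loops added). Large K oversmooths the features.
        A_hat = A + np.eye(A.shape[0])
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
        S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
        out = X
        for _ in range(K):
            out = S @ out
        return out  # feed the smoothed features to a linear classifier

With A from build_adjacency, sgc_propagate(A, X, K) produces the smoothed features that SGC passes to a linear classifier. The abstract's caveat corresponds to the choice of K here: every extra smoothing step pulls the features of connected nodes closer together, so too large a K oversmooths.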
Related papers
- Learning on Large Graphs using Intersecting Communities [13.053266613831447]
MPNNs iteratively update each node's representation in an input graph by aggregating messages from the node's neighbors.
MPNNs can quickly become prohibitively expensive on large graphs unless those graphs are very sparse.
We propose approximating the input graph as an intersecting community graph (ICG) -- a combination of intersecting cliques.
arXiv Detail & Related papers (2024-05-31T09:26:26Z)
- Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness [80.87683145376305]
Graph Neural Networks (GNNs) excel in various graph learning tasks but face computational challenges when applied to large-scale graphs.
We propose Graph Sparse Training (GST), which dynamically manipulates sparsity at the data level.
GST produces a sparse graph with maximum topological integrity and no performance degradation.
arXiv Detail & Related papers (2024-02-02T09:10:35Z)
- From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited [51.24526202984846]
Graph-based semi-supervised learning (GSSL) has long been a hot research topic.
Graph convolutional networks (GCNs) have become the predominant technique owing to their promising performance.
arXiv Detail & Related papers (2023-09-24T10:10:21Z)
- Random Projection Forest Initialization for Graph Convolutional Networks [0.6554326244334866]
Graph convolutional networks (GCNs) were a great step towards extending deep learning to unstructured data such as graphs.
We present a new way to construct the graph and initialize the GCN using a random projection forest (rpForest).
rpForest lets us assign varying weights to edges, indicating varying importance, which enhances learning.
arXiv Detail & Related papers (2023-02-22T11:49:19Z)
- FoSR: First-order spectral rewiring for addressing oversquashing in GNNs [0.0]
Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph.
We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph.
We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
arXiv Detail & Related papers (2022-10-21T07:58:03Z)
- AnchorGAE: General Data Clustering via $O(n)$ Bipartite Graph Convolution [79.44066256794187]
We show how to convert a non-graph dataset into a graph by introducing a generative graph model, which is used to build graph convolution networks (GCNs).
A bipartite graph constructed by anchors is updated dynamically to exploit the high-level information behind data.
We theoretically prove that this simple update will lead to degeneration, and a specific strategy is designed accordingly.
arXiv Detail & Related papers (2021-11-12T07:08:13Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- GLAM: Graph Learning by Modeling Affinity to Labeled Nodes for Graph Neural Networks [0.0]
We propose a semi-supervised graph learning method for cases where no graph is available.
This method learns a graph as a convex combination of the unsupervised kNN graph and a supervised label-affinity graph (a sketch of this combination appears after this list).
Our experiments suggest that this approach performs close to or better than state-of-the-art graph learning methods (by up to 1.5%), while being simpler and faster (up to 70x) to train.
arXiv Detail & Related papers (2021-02-20T17:56:52Z)
- Inverse Graph Identification: Can We Identify Node Labels Given Graph Labels? [89.13567439679709]
Graph Identification (GI) has long been researched in graph learning and is essential in certain applications.
This paper defines a novel problem dubbed Inverse Graph Identification (IGI).
We propose a simple yet effective method that performs node-level message passing using a Graph Attention Network (GAT) under the protocol of GI.
arXiv Detail & Related papers (2020-07-12T12:06:17Z)
- Data Augmentation View on Graph Convolutional Network and the Proposal of Monte Carlo Graph Learning [51.03995934179918]
We introduce a data augmentation view, which is more transparent than previous understandings.
Inspired by it, we propose a new graph learning paradigm: Monte Carlo Graph Learning (MCGL).
We show that MCGL's tolerance to graph structure noise is weaker than GCN's on noisy graphs.
arXiv Detail & Related papers (2020-06-23T15:25:05Z)
- TreeRNN: Topology-Preserving Deep Graph Embedding and Learning [24.04035265351755]
We study methods to transfer graphs into trees so that explicit orders can be learned to direct feature integration from local to global.
To best learn the patterns from the graph-tree-images, we propose TreeRNN, a 2D RNN architecture that recurrently integrates the image pixels by rows and columns to help classify the graph categories.
arXiv Detail & Related papers (2020-06-21T15:22:24Z)
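As a companion to the GLAM entry above, the following sketch builds a graph as a convex combination of an unsupervised kNN graph and a supervised label-affinity graph. The fixed mixing weight lam and the use of -1 for unlabeled points are assumptions of this sketch; GLAM itself learns the combination rather than fixing it.

    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    def combined_graph(X, y, lam=0.5, k=10):
        # lam * (kNN graph) + (1 - lam) * (label-affinity graph); lam is a
        # hypothetical fixed weight -- GLAM learns the combination instead.
        A_knn = kneighbors_graph(X, k, mode='connectivity').toarray()
        A_knn = np.maximum(A_knn, A_knn.T)    # symmetrize the kNN graph
        A_aff = np.zeros_like(A_knn)
        labeled = np.flatnonzero(y >= 0)      # assumption: -1 marks unlabeled
        for i in labeled:
            for j in labeled:
                if i != j and y[i] == y[j]:
                    A_aff[i, j] = 1.0         # same-label affinity edge
        return lam * A_knn + (1.0 - lam) * A_aff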
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.