Learning to Approximate Adaptive Kernel Convolution on Graphs
- URL: http://arxiv.org/abs/2401.11840v1
- Date: Mon, 22 Jan 2024 10:57:11 GMT
- Title: Learning to Approximate Adaptive Kernel Convolution on Graphs
- Authors: Jaeyoon Sim, Sooyeon Jeon, InJun Choi, Guorong Wu, Won Hwa Kim
- Abstract summary: We propose a diffusion learning framework, where the range of feature aggregation is controlled by the scale of a diffusion kernel.
Our model is tested on various standard datasets for node-wise classification, achieving state-of-the-art performance.
It is also validated on real-world brain network data for graph classification to demonstrate its practicality for Alzheimer's classification.
- Score: 4.434835769977399
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Various Graph Neural Networks (GNNs) have been successful in analyzing data
in non-Euclidean spaces; however, they have limitations such as oversmoothing,
i.e., information becomes excessively averaged as the number of hidden layers
increases. The issue stems from the intrinsic formulation of conventional graph
convolution, where the nodal features are aggregated from a direct neighborhood
per layer across all nodes in the graph. As setting a different number of
hidden layers per node is infeasible, recent works leverage a diffusion kernel
to redefine the graph structure and incorporate information from farther nodes.
Unfortunately, such approaches suffer from the heavy cost of diagonalizing a
graph Laplacian or of learning a large transform matrix. In this regard, we propose a
diffusion learning framework, where the range of feature aggregation is
controlled by the scale of a diffusion kernel. For efficient computation, we
derive closed-form derivatives of approximations of the graph convolution with
respect to the scale, so that node-wise range can be adaptively learned. With a
downstream classifier, the entire framework is made trainable in an end-to-end
manner. Our model is tested on various standard datasets for node-wise
classification, achieving state-of-the-art performance, and it is also
validated on real-world brain network data for graph classification to
demonstrate its practicality for Alzheimer's classification.
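To make the mechanism above concrete, here is a minimal sketch (not the authors' code) of learning a diffusion scale s through a truncated Taylor approximation of the heat-kernel convolution exp(-s L) x. The scalar scale, series depth K, and toy classifier are illustrative assumptions; the paper learns node-wise scales via closed-form derivatives of the approximation, whereas this sketch simply lets autograd differentiate the truncated series.

```python
import torch

def taylor_heat_diffusion(L, x, s, K=10):
    """Approximate exp(-s L) @ x with K Taylor terms, keeping s differentiable.

    L: (n, n) normalized graph Laplacian, x: (n, d) features,
    s: scalar tensor (the learnable diffusion scale).
    Terms are built recursively: T_k = (-s / k) * L @ T_{k-1}.
    """
    out, term = x.clone(), x
    for k in range(1, K + 1):
        term = (-s / k) * (L @ term)
        out = out + term
    return out

# Toy end-to-end usage: learn the scale s jointly with a linear classifier.
n, d, c = 50, 16, 3
A = (torch.rand(n, n) < 0.2).float()
A = ((A + A.T) > 0).float(); A.fill_diagonal_(0)
d_inv_sqrt = torch.diag(A.sum(1).clamp(min=1) ** -0.5)
L = torch.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt   # eigenvalues in [0, 2]
x, y = torch.randn(n, d), torch.randint(0, c, (n,))

s = torch.tensor(0.5, requires_grad=True)        # diffusion scale (learned)
W = torch.randn(d, c, requires_grad=True)
opt = torch.optim.Adam([s, W], lr=1e-2)
for _ in range(100):
    loss = torch.nn.functional.cross_entropy(taylor_heat_diffusion(L, x, s) @ W, y)
    opt.zero_grad(); loss.backward(); opt.step()
```

Using the normalized Laplacian keeps the eigenvalues bounded, so a modest truncation depth K already gives a usable approximation of the heat kernel.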
Related papers
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Learning Adaptive Neighborhoods for Graph Neural Networks [45.94778766867247]
Graph convolutional networks (GCNs) enable end-to-end learning on graph structured data.
We propose a novel end-to-end differentiable graph generator which builds graph topologies.
Our module can be readily integrated into existing pipelines involving graph convolution operations.
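As a rough illustration of such a differentiable graph generator (the module name, similarity score, and temperature are assumptions, not the authors' API), node embeddings can produce soft, learnable neighborhoods that a downstream graph convolution consumes:

```python
import torch

class DifferentiableGraphGenerator(torch.nn.Module):
    def __init__(self, in_dim, emb_dim, tau=0.5):
        super().__init__()
        self.proj = torch.nn.Linear(in_dim, emb_dim)
        self.tau = tau

    def forward(self, x):
        z = self.proj(x)                                   # (n, emb_dim)
        logits = z @ z.T / self.tau                        # pairwise scores
        mask = torch.eye(x.shape[0], dtype=torch.bool)
        logits = logits.masked_fill(mask, float("-inf"))   # no self-edges
        return torch.softmax(logits, dim=-1)               # soft neighborhoods

x = torch.randn(30, 8)
gen = DifferentiableGraphGenerator(8, 16)
A_soft = gen(x)            # differentiable (30, 30) adjacency
h = A_soft @ x             # drop-in for a graph convolution step
```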
arXiv Detail & Related papers (2023-07-18T08:37:25Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
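A didactic sketch of differentiable all-pair message passing with a Gumbel-Softmax relaxation follows; the paper's kernelized operator achieves linear complexity, which this O(n^2) toy version omits (the projections and temperature are illustrative):

```python
import torch
import torch.nn.functional as F

def gumbel_message_passing(q, k, v, tau=0.5):
    """q, k: (n, d) projections; v: (n, d) values."""
    scores = q @ k.T / q.shape[-1] ** 0.5            # (n, n) pairwise logits
    # Row-wise Gumbel-Softmax: a soft, reparameterized choice of neighbors.
    weights = F.gumbel_softmax(scores, tau=tau, hard=False, dim=-1)
    return weights @ v

n, d = 40, 32
x = torch.randn(n, d)
Wq, Wk, Wv = (torch.randn(d, d) * d ** -0.5 for _ in range(3))
out = gumbel_message_passing(x @ Wq, x @ Wk, x @ Wv)
```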
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Addressing Heterophily in Node Classification with Graph Echo State Networks [11.52174067809364]
We address the challenges of heterophilic graphs with the Graph Echo State Network (GESN) for node classification.
GESN is a reservoir computing model for graphs, where node embeddings are computed by an untrained message-passing function.
Our experiments show that reservoir models achieve accuracy better than or comparable to most fully trained deep models.
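A minimal sketch of the reservoir idea, assuming a tanh recursion with fixed random weights rescaled to spectral radius rho; only a readout on top of these embeddings would be trained:

```python
import torch

def gesn_embeddings(A, x, hidden=64, iters=30, rho=0.9, seed=0):
    torch.manual_seed(seed)
    n, d = x.shape
    W_in = 0.1 * torch.randn(d, hidden)                   # fixed random input map
    W = torch.randn(hidden, hidden)
    W = W * (rho / torch.linalg.eigvals(W).abs().max())   # set spectral radius
    # rho < 1 encourages contractivity; strictly, stability also depends on
    # the spectral norm of A.
    h = torch.zeros(n, hidden)
    for _ in range(iters):                                # untrained recursion
        h = torch.tanh(x @ W_in + A @ h @ W)
    return h                                              # train only a readout on top
```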
arXiv Detail & Related papers (2023-05-14T19:42:31Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We posit a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
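A hedged sketch of the unrolled proximal gradient idea, assuming the observation mixes the latent graph as A ≈ h1*S + h2*S@S; the layer count, step sizes, and thresholds are illustrative learnable parameters, not the authors' specification:

```python
import torch

class GDNSketch(torch.nn.Module):
    def __init__(self, layers=8):
        super().__init__()
        self.h = torch.nn.Parameter(torch.tensor([1.0, 0.3]))        # mixture coeffs
        self.alpha = torch.nn.Parameter(torch.full((layers,), 0.1))  # step sizes
        self.theta = torch.nn.Parameter(torch.full((layers,), 0.02)) # prox thresholds

    def forward(self, A_obs):
        h1, h2 = self.h
        S = A_obs.clone()                              # init latent estimate
        for a, t in zip(self.alpha, self.theta):
            R = A_obs - h1 * S - h2 * (S @ S)          # fit residual
            grad = -(h1 * R + h2 * (R @ S + S @ R))    # gradient (S symmetric)
            S = torch.relu(S - a * grad - t)           # prox: sparse, nonnegative S
        return S
```

Each unrolled layer is one proximal gradient step, so truncating at a fixed depth yields a trainable feed-forward network rather than an iterative solver.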
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
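The low-rank idea can be illustrated in a few lines (the rank r and the factorization are assumptions): replacing a dense n-by-n propagation matrix with U @ V.T drops the cost of propagating d-dimensional features from O(n^2 d) to O(n r d).

```python
import torch

n, d, r = 1000, 64, 16
U = torch.nn.Parameter(torch.randn(n, r) / r ** 0.5)   # learnable factors
V = torch.nn.Parameter(torch.randn(n, r) / r ** 0.5)
X = torch.randn(n, d)

# Never materializes the n x n propagation matrix U @ V.T:
H = U @ (V.T @ X)                                      # (n, d) in O(n*r*d)
```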
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Geometric Graph Representation Learning via Maximizing Rate Reduction [73.6044873825311]
Learning node representations benefits various downstream tasks in graph analysis such as community detection and node classification.
We propose Geometric Graph Representation Learning (G2R) to learn node representations in an unsupervised manner.
G2R maps nodes in distinct groups into different subspaces, while each subspace is compact and different subspaces are dispersed.
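A sketch of the coding-rate-reduction objective such methods build on, following the general MCR^2 recipe rather than the paper's exact loss (eps and the normalization are assumptions): the total coding rate is expanded while each group's rate is compressed, pushing groups into compact, mutually dispersed subspaces.

```python
import torch

def coding_rate(Z, eps=0.5):
    """Z: (d, m) column-wise embeddings. 0.5 * logdet(I + d/(m eps^2) Z Z^T)."""
    d, m = Z.shape
    return 0.5 * torch.logdet(torch.eye(d) + (d / (m * eps ** 2)) * Z @ Z.T)

def rate_reduction(Z, labels, eps=0.5):
    total = coding_rate(Z, eps)                      # expand all embeddings
    compress = 0.0
    for c in labels.unique():                        # compress each group
        Zc = Z[:, labels == c]
        compress = compress + (Zc.shape[1] / Z.shape[1]) * coding_rate(Zc, eps)
    return total - compress                          # maximize this quantity

Z = torch.nn.functional.normalize(torch.randn(32, 200), dim=0)
labels = torch.randint(0, 4, (200,))
loss = -rate_reduction(Z, labels)                    # minimize the negative
```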
arXiv Detail & Related papers (2022-02-13T07:46:24Z)
- Graph Neural Diffusion Networks for Semi-supervised Learning [6.376489604292251]
The Graph Convolutional Network (GCN) is a pioneering model for graph-based semi-supervised learning.
We propose a new graph neural network called GND-Nets (for Graph Neural Diffusion Networks) that exploits both local and global neighborhood information.
The adoption of neural networks makes neural diffusions adaptable to different datasets.
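A minimal sketch of a learned diffusion mixing local and global neighborhoods, assuming a softmax-weighted combination of 0..K hop propagations (the depth and mixing scheme are illustrative, not the paper's architecture):

```python
import torch

class NeuralDiffusionSketch(torch.nn.Module):
    def __init__(self, K=6):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(K + 1))  # one weight per hop

    def forward(self, A_norm, x):
        hops, h = [x], x
        for _ in range(len(self.logits) - 1):
            h = A_norm @ h                        # one further hop of propagation
            hops.append(h)
        w = torch.softmax(self.logits, dim=0)     # learned diffusion profile
        return sum(wk * hk for wk, hk in zip(w, hops))

A = torch.rand(20, 20); A = ((A + A.T) > 0.8).float()
A_norm = A / A.sum(1, keepdim=True).clamp(min=1)
out = NeuralDiffusionSketch()(A_norm, torch.randn(20, 5))
```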
arXiv Detail & Related papers (2022-01-24T14:07:56Z)
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows any type of graph kernel to be plugged in and has the added benefit of providing some interpretability.
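A toy sketch of such a kernel-based layer, assuming learnable prototype graphs scored with a p-step random-walk kernel (the kernel choice and prototype parameterization are assumptions, not the paper's design):

```python
import torch

def random_walk_kernel(A1, A2, p=3):
    """Score length-p common walks via the Kronecker product adjacency."""
    K = torch.kron(A1, A2)
    v = torch.ones(K.shape[0])
    for _ in range(p):
        v = K @ v
    return v.sum() / K.shape[0]

def gknn_layer(A, prototypes):
    # Soft prototype adjacencies keep the layer differentiable.
    return torch.stack([random_walk_kernel(A, torch.sigmoid(P))
                        for P in prototypes])

A = (torch.rand(10, 10) < 0.3).float(); A = ((A + A.T) > 0).float()
prototypes = [torch.nn.Parameter(torch.randn(4, 4)) for _ in range(8)]
features = gknn_layer(A, prototypes)     # (8,) structural feature vector
```

Because the features are kernel values against whole prototype graphs, no node embedding of the input graph is ever computed, matching the structural flavor described above.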
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
- Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as-is or simply make them undirected greatly affects the performance of GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
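A small sketch of learned edge directionality, assuming a softmax mixture of A, A^T, and I as the propagation operator (a simplification of the paper's learned random walk, for illustration only):

```python
import torch

class DirectionalPropagation(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(3))

    def forward(self, A, x):
        w = torch.softmax(self.logits, dim=0)
        I = torch.eye(A.shape[0])
        P = w[0] * A + w[1] * A.T + w[2] * I       # learned direction mixture
        P = P / P.sum(dim=1, keepdim=True)         # row-normalize the walk
        return P @ x

A = (torch.rand(15, 15) < 0.2).float()
out = DirectionalPropagation()(A, torch.randn(15, 4))
```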
arXiv Detail & Related papers (2021-11-19T08:54:21Z)