Efficient and Stable Graph Scattering Transforms via Pruning
- URL: http://arxiv.org/abs/2001.09882v1
- Date: Mon, 27 Jan 2020 16:05:56 GMT
- Title: Efficient and Stable Graph Scattering Transforms via Pruning
- Authors: Vassilis N. Ioannidis and Siheng Chen and Georgios B. Giannakis
- Abstract summary: Graph scattering transforms (GSTs) offer training-free deep GCN models that extract features from graph data.
The price paid by GSTs is exponential complexity in space and time that increases with the number of layers.
The present work addresses the complexity limitation of GSTs by introducing an efficient so-termed pruned (p) GST approach.
- Score: 86.76336979318681
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph convolutional networks (GCNs) have well-documented performance in
various graph learning tasks, but their analysis is still in its infancy. Graph
scattering transforms (GSTs) offer training-free deep GCN models that extract
features from graph data, and are amenable to generalization and stability
analyses. The price paid by GSTs is exponential complexity in space and time
that increases with the number of layers. This discourages deployment of GSTs
when a deep architecture is needed. The present work addresses the complexity
limitation of GSTs by introducing an efficient so-termed pruned (p)GST
approach. The resultant pruning algorithm is guided by a
graph-spectrum-inspired criterion, and retains informative scattering features
on-the-fly while bypassing the exponential complexity associated with GSTs.
Stability of the novel pGSTs is also established when the input graph data or
the network structure are perturbed. Furthermore, the sensitivity of pGST to
random and localized signal perturbations is investigated analytically and
experimentally. Numerical tests showcase that pGST performs comparably to the
baseline GST at considerable computational savings. Furthermore, pGST achieves
comparable performance to state-of-the-art GCNs in graph and 3D point cloud
classification tasks. Upon analyzing the pGST pruning patterns, it is shown
that graph data in different domains call for different network architectures,
and that the pruning algorithm may be employed to guide the design choices for
contemporary GCNs.
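To make the pruning mechanism concrete, here is a minimal Python sketch of a scattering tree that discards low-energy branches on the fly. The filter bank, the threshold `tau`, and the energy-ratio rule are illustrative assumptions, not the paper's exact graph-spectrum-inspired criterion.

```python
import numpy as np

def pruned_gst(x, filters, depth, tau=0.01):
    """Illustrative pruned graph scattering transform (pGST sketch).

    x       : (N,) input graph signal
    filters : list of (N, N) graph filter matrices, e.g. graph wavelets h_j(S)
    depth   : number of scattering layers
    tau     : energy threshold for pruning (hypothetical value)
    """
    features = [float(np.mean(x))]   # layer-0 feature: averaged input
    frontier = [x]                   # active branches of the scattering tree
    for _ in range(depth):
        next_frontier = []
        for z in frontier:
            for H in filters:
                u = np.abs(H @ z)    # graph filtering + modulus nonlinearity
                # Keep a branch only if it retains a tau-fraction of its
                # parent's energy; otherwise prune it and its whole subtree.
                # This energy rule is an assumed stand-in for the paper's
                # graph-spectrum-inspired criterion.
                if np.sum(u**2) >= tau * np.sum(z**2):
                    next_frontier.append(u)
        frontier = next_frontier
        features.extend(float(np.mean(b)) for b in frontier)
    return np.array(features)
```

Without the threshold test, the tree grows to |filters|^depth branches; pruning on the fly keeps only the informative ones, which is where the computational savings described in the abstract come from.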
Related papers
- HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained
Heterogeneous Graph Neural Networks [24.435068514392487]
HetGPT is a post-training prompting framework for graph neural networks.
It improves the performance of state-of-the-art HGNNs on semi-supervised node classification.
arXiv Detail & Related papers (2023-10-23T19:35:57Z)
- Graph Neural Processes for Spatio-Temporal Extrapolation [36.01312116818714]
We study the task of spatio-temporal extrapolation, which generates data at target locations from surrounding contexts in a graph.
Existing methods either use learning-based models like Neural Networks or statistical approaches like Gaussian Processes for this task.
We propose Spatio-Temporal Graph Neural Processes (STGNP), a neural latent variable model which commands these capabilities simultaneously.
arXiv Detail & Related papers (2023-05-30T03:55:37Z)
- Space-Time Graph Neural Networks with Stochastic Graph Perturbations [100.31591011966603]
Space-time graph neural networks (ST-GNNs) learn efficient graph representations of time-varying data.
In this paper we revisit the properties of ST-GNNs and prove that they are stable to graph perturbations.
Our analysis suggests that ST-GNNs are suitable for transfer learning on time-varying graphs.
arXiv Detail & Related papers (2022-10-28T16:59:51Z)
- LSP: Acceleration and Regularization of Graph Neural Networks via Locality Sensitive Pruning of Graphs [2.4250821950628234]
Graph Neural Networks (GNNs) have emerged as highly successful tools for graph-related tasks.
Large graphs often involve many redundant components that can be removed without compromising the performance.
We propose a systematic method called Locality-Sensitive Pruning (LSP) for graph pruning based on Locality-Sensitive Hashing; a sketch of the hashing idea follows below.
arXiv Detail & Related papers (2021-11-10T14:12:28Z)
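The following Python sketch illustrates the locality-sensitive-hashing idea under simplifying assumptions: nodes are hashed with random hyperplanes, and edges whose endpoints collide (i.e., have near-duplicate features) are treated as redundant and dropped. The actual LSP criterion in the paper is more refined.

```python
import numpy as np

def lsh_prune_edges(X, edges, n_planes=8, seed=0):
    """Prune graph edges via random-hyperplane LSH (illustrative sketch).

    X        : (N, d) node feature matrix
    edges    : list of (i, j) index pairs
    n_planes : number of random hyperplanes (hash bits)
    """
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, X.shape[1]))
    # Hash each node to a bit signature: the sign pattern of its projections.
    codes = (X @ planes.T) > 0  # (N, n_planes) boolean signatures
    # Drop edges whose endpoints collide in the hash: near-duplicate
    # neighbors contribute little new information during aggregation.
    # (Assumed redundancy rule for illustration only.)
    return [(i, j) for (i, j) in edges
            if not np.array_equal(codes[i], codes[j])]
```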
- Space-Time Graph Neural Networks [104.55175325870195]
We introduce space-time graph neural network (ST-GNN) to jointly process the underlying space-time topology of time-varying network data.
Our analysis shows that small variations in the network topology and time evolution of a system do not significantly affect the performance of ST-GNNs.
arXiv Detail & Related papers (2021-10-06T16:08:44Z)
- Spatio-Temporal Graph Scattering Transform [54.52797775999124]
Graph neural networks may be impractical in some real-world scenarios due to a lack of sufficient high-quality training data.
We put forth a novel mathematically designed framework to analyze spatio-temporal data.
arXiv Detail & Related papers (2020-12-06T19:49:55Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks; a sketch of this masking objective follows below.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
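A minimal PyTorch sketch of the idea, assuming a simple MLP edge scorer; the module name, layer sizes, and the sketched loss are hypothetical, not PTDNet's exact parameterization.

```python
import torch
import torch.nn as nn

class EdgeDenoiser(nn.Module):
    """PTDNet-style learned edge mask (names and sizes are assumptions)."""
    def __init__(self, feat_dim, hidden=32):
        super().__init__()
        # Parameterized network scoring each edge from its endpoint features.
        self.scorer = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, X, edge_index):
        src, dst = edge_index                       # (E,) endpoint indices
        pair = torch.cat([X[src], X[dst]], dim=-1)  # (E, 2*feat_dim)
        return torch.sigmoid(self.scorer(pair)).squeeze(-1)  # keep-probs

# Sketched objective: the task loss plus a penalty on the number of retained
# edges, encouraging a sparse, denoised graph (alpha is a hypothetical weight):
#   loss = task_loss(gnn(X, edge_index, mask)) + alpha * mask.sum()
```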
- Gated Graph Recurrent Neural Networks [176.3960927323358]
We introduce Graph Recurrent Neural Networks (GRNNs) as a general learning framework for graph processes.
To address the problem of vanishing gradients, we put forward GRNNs with three different gating mechanisms: time, node and edge gates; a time-gate sketch follows below.
The numerical results also show that GRNNs outperform GNNs and RNNs, highlighting the importance of taking both the temporal and graph structures of a graph process into account.
arXiv Detail & Related papers (2020-02-03T22:35:14Z)
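As a rough illustration of the gating idea, here is a single recurrent step with a time gate only (the paper also proposes node and edge gates); the weight names and the exact update form are assumptions for this sketch.

```python
import numpy as np

def grnn_step(h, x, S, Wx, Wh, Wg):
    """One step of a gated graph recurrent cell (illustrative sketch).

    h  : (N, d) previous hidden state
    x  : (N, d) current input graph signal
    S  : (N, N) graph shift operator (e.g., adjacency or Laplacian)
    Wx, Wh, Wg : (d, d) learnable weights (assumed already trained here)
    """
    def _sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    # Time gate: decides how much of the past state to carry forward,
    # mitigating vanishing gradients over long graph processes.
    gate = _sigmoid(S @ x @ Wg)
    candidate = np.tanh(S @ x @ Wx + S @ h @ Wh)  # graph-convolutional update
    return gate * h + (1.0 - gate) * candidate
```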