Decimated Framelet System on Graphs and Fast G-Framelet Transforms
- URL: http://arxiv.org/abs/2012.06922v1
- Date: Sat, 12 Dec 2020 23:57:17 GMT
- Title: Decimated Framelet System on Graphs and Fast G-Framelet Transforms
- Authors: Xuebin Zheng, Bingxin Zhou, Yu Guang Wang, Xiaosheng Zhuang
- Abstract summary: An adequate representation of graph data is vital to the learning performance of a statistical or machine learning model for graph-structured data.
We propose a novel multiscale representation system for graph data, called decimated framelets, which form a localized tight frame on the graph.
- The effectiveness is demonstrated by real-world applications, including multiresolution analysis of traffic networks and graph neural networks for graph classification tasks.
- Score: 3.7277730514654555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph representation learning has many real-world applications, ranging
from super-resolution imaging and 3D computer vision to drug repurposing, protein
classification, and social network analysis. An adequate representation of graph
data is vital to the learning performance of a statistical or machine learning
model for graph-structured data. In this paper, we propose a novel multiscale
representation system for graph data, called decimated framelets, which form a
localized tight frame on the graph. The decimated framelet system allows
storage of the graph data representation on a coarse-grained chain and
processes the graph data at multiple scales, where at each scale the data is
stored on a subgraph. Based on this, we then establish decimated G-framelet
transforms for the decomposition and reconstruction of the graph data at multiple
resolutions via a constructive, data-driven filter bank. The graph framelets are
built on a chain-based orthonormal basis that supports fast graph Fourier
transforms. From this, we give a fast algorithm for the decimated G-framelet
transforms, or FGT, that has linear computational complexity O(N) for a graph
of size N. The theory of decimated framelets and FGT is verified with numerical
examples for random graphs. The effectiveness is demonstrated by real-world
applications, including multiresolution analysis of traffic networks and graph
neural networks for graph classification tasks.
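The tight-frame property described in the abstract can be illustrated with a minimal sketch: a one-level, *undecimated* spectral framelet decomposition built from a quadrature filter pair. This is an assumption-laden toy, not the paper's decimated system or its constructive filter bank; the graph, signal, and filter functions below are illustrative choices. Note also that the dense eigendecomposition used here costs O(N^3), whereas the paper's FGT achieves O(N) via a chain-based orthonormal basis.

```python
# Hedged sketch: one-level undecimated spectral framelet transform on a
# graph, showing the Parseval identity and perfect reconstruction that a
# tight frame guarantees. The filter pair (a, b) with a(t)^2 + b(t)^2 = 1
# is an illustrative choice, not the paper's filter bank.
import numpy as np

rng = np.random.default_rng(0)

# Random undirected graph on N nodes (Erdos-Renyi-style adjacency).
N = 20
A = (rng.random((N, N)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                        # symmetric, zero diagonal
L = np.diag(A.sum(1)) - A          # combinatorial graph Laplacian

# Graph Fourier basis: eigendecomposition of L.
lam, U = np.linalg.eigh(L)
lam_max = lam[-1]

# Quadrature pair: a(t)^2 + b(t)^2 = 1 on [0, lam_max].
a = np.cos(0.5 * np.pi * lam / lam_max)   # low-pass spectral response
b = np.sin(0.5 * np.pi * lam / lam_max)   # high-pass spectral response

x = rng.standard_normal(N)                # a graph signal

# Analysis: framelet coefficients in the two bands.
xhat = U.T @ x                            # graph Fourier transform
w_low = U @ (a * xhat)
w_high = U @ (b * xhat)

# Tight frame => Parseval identity and perfect reconstruction.
energy = np.dot(w_low, w_low) + np.dot(w_high, w_high)
x_rec = U @ (a * (U.T @ w_low)) + U @ (b * (U.T @ w_high))

print(np.allclose(energy, np.dot(x, x)))  # True
print(np.allclose(x_rec, x))              # True
```

Because a(t)^2 + b(t)^2 = 1 at every eigenvalue, the two-band analysis operator is an isometry: coefficient energy equals signal energy, and applying the same filters again (synthesis) recovers x exactly.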
Related papers
- Greener GRASS: Enhancing GNNs with Encoding, Rewiring, and Attention [12.409982249220812]
We introduce Graph Attention with Structures (GRASS), a novel GNN architecture, to enhance graph relative attention.
GRASS rewires the input graph by superimposing a random regular graph to achieve long-range information propagation.
It also employs a novel additive attention mechanism tailored for graph-structured data.
arXiv Detail & Related papers (2024-07-08T06:21:56Z)
- GLISP: A Scalable GNN Learning System by Exploiting Inherent Structural Properties of Graphs [5.410321469222541]
We propose GLISP, a sampling based GNN learning system for industrial scale graphs.
GLISP consists of three core components: graph partitioner, graph sampling service and graph inference engine.
Experiments show that GLISP achieves up to $6.53\times$ and $70.77\times$ speedups over existing GNN systems for training and inference tasks.
arXiv Detail & Related papers (2024-01-06T02:59:24Z)
- Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data [91.27527985415007]
Existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph.
We advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set.
arXiv Detail & Related papers (2023-06-05T07:53:52Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Scaling R-GCN Training with Graph Summarization [71.06855946732296]
Training of Relational Graph Convolutional Networks (R-GCN) does not scale well with the size of the graph.
In this work, we experiment with the use of graph summarization techniques to compress the graph.
We obtain reasonable results on the AIFB, MUTAG and AM datasets.
arXiv Detail & Related papers (2022-03-05T00:28:43Z)
- Spectral Graph Convolutional Networks With Lifting-based Adaptive Graph Wavelets [81.63035727821145]
Spectral graph convolutional networks (SGCNs) have been attracting increasing attention in graph representation learning.
We propose a novel class of spectral graph convolutional networks that implement graph convolutions with adaptive graph wavelets.
arXiv Detail & Related papers (2021-08-03T17:57:53Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes as enhanced negative samples from an implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
- How Framelets Enhance Graph Neural Networks [27.540282741523253]
This paper presents a new approach for assembling graph neural networks based on framelet transforms.
We propose shrinkage as a new activation for the framelet convolution, which thresholds the high-frequency information at different scales.
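The shrinkage activation described above can be sketched as soft-thresholding applied to high-frequency framelet coefficients. This is a minimal illustration of the general idea, not the paper's exact formulation; the coefficient values and threshold below are assumed for the example.

```python
# Hedged sketch: shrinkage as an activation. Soft-thresholding zeroes
# small-magnitude high-frequency framelet coefficients (likely noise)
# and shrinks larger ones toward zero by the threshold amount.
import numpy as np

def soft_threshold(coeffs, thr):
    """Shrink coefficients toward zero: sign(c) * max(|c| - thr, 0)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

# Toy high-frequency coefficients at one scale (illustrative values).
w_high = np.array([0.05, -0.4, 1.2, -0.02, 0.7])
print(soft_threshold(w_high, 0.1))
# entries with |c| <= 0.1 are zeroed; larger ones shrink by 0.1
```

Applied band by band with a scale-dependent threshold, this acts as a denoising nonlinearity inside the framelet convolution rather than a pointwise ReLU.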
arXiv Detail & Related papers (2021-02-13T19:19:19Z)
- Promoting Graph Awareness in Linearized Graph-to-Text Generation [72.83863719868364]
We study the ability of linearized models to encode local graph structures.
Our findings motivate solutions to enrich the quality of models' implicit graph encodings.
We find that these denoising scaffolds lead to substantial improvements in downstream generation in low-resource settings.
arXiv Detail & Related papers (2020-12-31T18:17:57Z)
- MathNet: Haar-Like Wavelet Multiresolution-Analysis for Graph Representation and Learning [31.42901131602713]
We propose a framework for graph neural networks with multiresolution Haar-like wavelets, or MathNet, with interrelated convolution and pooling strategies.
The proposed MathNet outperforms various existing GNN models, especially on big data sets.
arXiv Detail & Related papers (2020-07-22T05:00:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.