Multi-hop Graph Convolutional Network with High-order Chebyshev
Approximation for Text Reasoning
- URL: http://arxiv.org/abs/2106.05221v1
- Date: Tue, 8 Jun 2021 07:49:43 GMT
- Title: Multi-hop Graph Convolutional Network with High-order Chebyshev
Approximation for Text Reasoning
- Authors: Shuoran Jiang, Qingcai Chen, Xin Liu, Baotian Hu, Lisai Zhang
- Abstract summary: We define a spectral graph convolutional network with a high-order dynamic Chebyshev approximation (HDGCN).
To alleviate over-smoothing in the high-order Chebyshev approximation, a multi-vote-based cross-attention (MVCAttn) with linear computational complexity is also proposed.
- Score: 15.65069702939315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph convolutional networks (GCNs) have become popular in various
natural language processing (NLP) tasks owing to their strength in capturing
long-term and non-consecutive word interactions. However, the single-hop graph
reasoning in existing GCNs may miss important non-consecutive dependencies. In
this study, we define a spectral graph convolutional network with a high-order
dynamic Chebyshev approximation (HDGCN), which augments multi-hop graph
reasoning by fusing messages aggregated from direct and long-term dependencies
into one convolutional layer. To alleviate over-smoothing in the high-order
Chebyshev approximation, we also propose a multi-vote-based cross-attention
(MVCAttn) mechanism with linear computational complexity. Empirical results on
four transductive and inductive NLP tasks, together with an ablation study,
verify the efficacy of the proposed model. Our source code is available at
https://github.com/MathIsAll/HDGCN-pytorch.
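As background, the high-order Chebyshev approximation that HDGCN builds on is the standard ChebNet construction: a $K$-order spectral filter $\sum_{k=0}^{K-1} T_k(\tilde{L})\,X\,W_k$, where the Chebyshev terms follow the recurrence $T_0(\tilde{L}) = I$, $T_1(\tilde{L}) = \tilde{L}$, and $T_k(\tilde{L}) = 2\tilde{L}\,T_{k-1}(\tilde{L}) - T_{k-2}(\tilde{L})$. Below is a minimal PyTorch sketch of this generic operator; it is not the HDGCN layer itself (the dynamic coefficients and multi-hop message fusion are omitted), and helper names such as ChebConv and normalized_laplacian are illustrative.

```python
import torch
import torch.nn as nn


def normalized_laplacian(adj: torch.Tensor) -> torch.Tensor:
    """Scaled Laplacian L~ = 2L/lambda_max - I with L = I - D^-1/2 A D^-1/2.

    Under the common approximation lambda_max ~= 2, this reduces to
    L~ = L - I = -D^-1/2 A D^-1/2.
    """
    deg = adj.sum(dim=-1).clamp(min=1e-12)
    d_inv_sqrt = deg.pow(-0.5)
    a_norm = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
    eye = torch.eye(adj.size(0), device=adj.device)
    lap = eye - a_norm          # L = I - A_norm
    return lap - eye            # L~ = L - I (lambda_max ~ 2)


class ChebConv(nn.Module):
    """K-order Chebyshev graph convolution: sum_k T_k(L~) X W_k."""

    def __init__(self, in_dim: int, out_dim: int, K: int):
        super().__init__()
        self.K = K
        self.weights = nn.Parameter(torch.empty(K, in_dim, out_dim))
        nn.init.xavier_uniform_(self.weights)

    def forward(self, x: torch.Tensor, lap: torch.Tensor) -> torch.Tensor:
        tx_prev, tx = x, lap @ x                 # T_0 X and T_1 X
        out = tx_prev @ self.weights[0]
        if self.K > 1:
            out = out + tx @ self.weights[1]
        for k in range(2, self.K):
            # Chebyshev recurrence: T_k X = 2 L~ (T_{k-1} X) - T_{k-2} X
            tx_prev, tx = tx, 2 * (lap @ tx) - tx_prev
            out = out + tx @ self.weights[k]
        return out
```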
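The abstract names MVCAttn but does not detail its internals. Purely to illustrate how cross-attention can achieve complexity linear in the number of nodes, the sketch below routes attention through a small set of $m$ learned "vote" vectors (in the spirit of inducing-point attention), so both attention maps are $n \times m$ rather than $n \times n$. Every name and design choice here is our assumption, not the paper's mechanism.

```python
import torch
import torch.nn as nn


class MultiVoteCrossAttention(nn.Module):
    """Hypothetical sketch: m learned 'vote' vectors first summarize the n
    node states, then nodes cross-attend back to the votes. Both attention
    maps are n x m, so the cost is O(n*m*d), linear in the node count n."""

    def __init__(self, dim: int, num_votes: int = 8):
        super().__init__()
        self.votes = nn.Parameter(torch.randn(num_votes, dim))
        self.pool = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.broadcast = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, dim) node representations
        v = self.votes.unsqueeze(0).expand(x.size(0), -1, -1)  # (batch, m, dim)
        pooled, _ = self.pool(v, x, x)              # votes gather global evidence
        out, _ = self.broadcast(x, pooled, pooled)  # nodes read the votes back
        return out + x                              # residual keeps node identity
```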
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Heuristic Learning with Graph Neural Networks: A Unified Framework for Link Prediction [25.87108956561691]
Link prediction is a fundamental task in graph learning, inherently shaped by the topology of the graph.
We propose a unified matrix formulation to accommodate and generalize various weights.
We also propose the Heuristic Learning Graph Neural Network (HL-GNN) to efficiently implement the formulation.
arXiv Detail & Related papers (2024-06-12T08:05:45Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
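The kernelized Gumbel-Softmax operator mentioned in the NodeFormer entry above builds on the standard Gumbel-Softmax relaxation, which can be sketched as follows. (PyTorch ships an equivalent as torch.nn.functional.gumbel_softmax; the kernelization used by NodeFormer is not reproduced here.)

```python
import torch
import torch.nn.functional as F


def gumbel_softmax_sample(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Relaxed, differentiable sampling from a categorical distribution.

    Perturbing logits with Gumbel noise and applying a temperature-scaled
    softmax yields an approximately one-hot sample; as tau -> 0 the output
    approaches a hard sample while gradients still flow through the softmax.
    """
    uniform = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(uniform + 1e-20) + 1e-20)  # Gumbel(0, 1) noise
    return F.softmax((logits + gumbel) / tau, dim=-1)
```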
- AGNN: Alternating Graph-Regularized Neural Networks to Alleviate Over-Smoothing [29.618952407794776]
We propose an Alternating Graph-regularized Neural Network (AGNN) composed of a Graph Convolutional Layer (GCL) and a Graph Embedding Layer (GEL).
GEL is derived from a graph-regularized optimization containing a Laplacian embedding term, which can alleviate the over-smoothing problem.
AGNN is evaluated in extensive experiments, including performance comparisons with multi-layer and multi-order graph neural networks.
arXiv Detail & Related papers (2023-04-14T09:20:03Z)
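The "Laplacian embedding term" in the AGNN entry above is, in its classic form, the trace regularizer of graph-regularized optimization; a standard formulation (our assumption, not necessarily AGNN's exact objective) is

$$\min_{F}\ \|F - X\|_F^2 + \lambda\,\mathrm{tr}\!\left(F^{\top} L F\right),$$

where $X$ holds input features and $L$ is the graph Laplacian. The trace term encourages smoothness across edges, while the fidelity term $\|F - X\|_F^2$ anchors the embedding $F$ to the input features, which is what helps resist indiscriminate over-smoothing.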
- Automatic Relation-aware Graph Network Proliferation [182.30735195376792]
We propose Automatic Relation-aware Graph Network Proliferation (ARGNP) for efficiently searching GNNs.
These operations can extract hierarchical node/relational information and provide anisotropic guidance for message passing on a graph.
Experiments on six datasets for four graph learning tasks demonstrate that GNNs produced by our method are superior to the current state-of-the-art hand-crafted and search-based GNNs.
arXiv Detail & Related papers (2022-05-31T10:38:04Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- MrGCN: Mirror Graph Convolution Network for Relation Extraction with Long-Term Dependencies [32.27755470353054]
In relation extraction, dependency trees that contain rich syntactic clues have been widely used to help capture long-term dependencies in text.
We propose the Mirror Graph Convolution Network (MrGCN), a GNN model with pooling-unpooling structures tailored to relation extraction.
Experiments on two datasets demonstrate the effectiveness of our method, showing significant improvements over previous results.
arXiv Detail & Related papers (2021-01-01T00:52:53Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric models for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
- Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator.
arXiv Detail & Related papers (2020-04-06T13:25:46Z)
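The affine skip connection described in the entry above sums the output of a graph convolution with a fully connected (affine) transform of the layer input. A minimal sketch follows, with our own wrapper naming and the assumption that the wrapped convolution takes (x, adj); it is an illustration, not the paper's code.

```python
import torch
import torch.nn as nn


class AffineSkipBlock(nn.Module):
    """Affine skip connection: graph_conv(x, adj) + Linear(x), so each node
    keeps an unsmoothed affine view of its own features alongside the
    neighborhood-aggregated signal."""

    def __init__(self, graph_conv: nn.Module, in_dim: int, out_dim: int):
        super().__init__()
        self.graph_conv = graph_conv          # any layer called as (x, adj)
        self.affine = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return self.graph_conv(x, adj) + self.affine(x)
```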
- Cross-GCN: Enhancing Graph Convolutional Network with $k$-Order Feature Interactions [153.6357310444093]
Graph Convolutional Network (GCN) is an emerging technique that performs learning and reasoning on graph data.
We argue that existing designs of GCN forgo modeling cross features, making GCN less effective for tasks or data where cross features are important.
We design a new operator named Cross-feature Graph Convolution, which explicitly models arbitrary-order cross features with complexity linear in the feature dimension and order size.
arXiv Detail & Related papers (2020-03-05T13:05:27Z)
- The Power of Graph Convolutional Networks to Distinguish Random Graph Models: Short Version [27.544219236164764]
Graph convolutional networks (GCNs) are a widely used method for graph representation learning.
We investigate the power of GCNs to distinguish between different random graph models on the basis of the embeddings of their sample graphs.
arXiv Detail & Related papers (2020-02-13T17:58:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.