OrthoReg: Improving Graph-regularized MLPs via Orthogonality
Regularization
- URL: http://arxiv.org/abs/2302.00109v1
- Date: Tue, 31 Jan 2023 21:20:48 GMT
- Title: OrthoReg: Improving Graph-regularized MLPs via Orthogonality
Regularization
- Authors: Hengrui Zhang, Shen Wang, Vassilis N. Ioannidis, Soji Adeshina, Jiani
Zhang, Xiao Qin, Christos Faloutsos, Da Zheng, George Karypis, Philip S. Yu
- Abstract summary: Graph Neural Networks (GNNs) are currently dominant in modeling graph-structured data.
Graph-regularized MLPs (GR-MLPs) implicitly inject the graph structure information into model weights, yet their performance can hardly match that of GNNs in most tasks.
We show that GR-MLPs suffer from dimensional collapse, a phenomenon in which a few of the largest eigenvalues dominate the embedding space.
We propose OrthoReg, a novel GR-MLP model to mitigate the dimensional collapse issue.
- Score: 66.30021126251725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are currently dominant in modeling
graph-structured data, while their heavy reliance on graph structure for
inference significantly impedes their widespread application. By contrast,
Graph-regularized MLPs (GR-MLPs) implicitly inject the graph structure
information into model weights, while their performance can hardly match that
of GNNs in most tasks. This motivates us to study the causes of the limited
performance of GR-MLPs. In this paper, we first demonstrate, through empirical
observations and theoretical analysis, that node embeddings learned from
conventional GR-MLPs suffer from dimensional collapse, a phenomenon in which
a few of the largest eigenvalues dominate the embedding space. As a result,
the expressive power of the learned node representations is constrained. We
further propose OrthoReg, a novel GR-MLP model that mitigates the dimensional
collapse issue. Through a soft regularization loss on the correlation matrix of
node embeddings, OrthoReg explicitly encourages orthogonal node representations
and thus naturally avoids dimensionally collapsed representations.
Experiments on traditional transductive semi-supervised classification tasks
and inductive node classification in cold-start scenarios demonstrate its
effectiveness and superiority.
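The abstract specifies the mechanism but no code; below is a minimal PyTorch sketch of a soft orthogonality regularizer on the embedding correlation matrix, plus a spectrum check for dimensional collapse. Function names and the exact normalization are illustrative assumptions, not the authors' implementation.

```python
import torch

def ortho_reg_loss(z: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Soft orthogonality regularizer: push the correlation matrix of the
    node embeddings toward the identity, discouraging dimensional collapse.
    z: (num_nodes, dim) node embeddings."""
    z = z - z.mean(dim=0, keepdim=True)           # center each embedding dimension
    z = z / (z.norm(dim=0, keepdim=True) + eps)   # unit-norm columns
    corr = z.T @ z                                # (dim, dim) correlation matrix
    eye = torch.eye(z.shape[1], device=z.device)
    return ((corr - eye) ** 2).sum()              # squared Frobenius distance to I

def collapse_spectrum(z: torch.Tensor) -> torch.Tensor:
    """Eigenvalues of the embedding covariance; a spectrum dominated by a
    few large values indicates dimensional collapse."""
    return torch.linalg.eigvalsh(torch.cov(z.T))
```

In a GR-MLP training loop this term would be added to the classification and graph-regularization losses with a tunable weight.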
Related papers
- Breaking the Entanglement of Homophily and Heterophily in
Semi-supervised Node Classification [25.831508778029097]
We introduce AMUD, which quantifies the relationship between node profiles and topology from a statistical perspective.
We also propose ADPA as a new directed graph learning paradigm for AMUD.
arXiv Detail & Related papers (2023-12-07T07:54:11Z)
- Latent Graph Inference with Limited Supervision [58.54674649232757]
Latent graph inference (LGI) aims to jointly learn the underlying graph structure and node representations from data features.
Existing LGI methods commonly suffer from supervision starvation: a large fraction of edge weights are learned without semantic supervision and do not contribute to the training loss.
In this paper, we observe that this issue is actually caused by the graph sparsification operation, which severely destroys the important connections established between pivotal nodes and labeled ones.
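A small illustration of supervision starvation, assuming a 0/1 adjacency matrix and a boolean training mask (the function and its name are hypothetical, not from the paper): edges outside the k-hop coverage of labeled nodes never receive gradient from the supervised loss, and sparsification shrinks that coverage.

```python
import numpy as np

def supervised_coverage(adj: np.ndarray, labeled: np.ndarray, hops: int) -> float:
    """Fraction of nodes whose `hops`-hop neighborhood contains a labeled node.
    adj: (n, n) 0/1 adjacency matrix; labeled: (n,) boolean training mask."""
    covered = labeled.astype(bool).copy()
    for _ in range(hops):
        covered |= (adj @ covered.astype(int)) > 0  # propagate coverage one hop
    return float(covered.mean())
```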
arXiv Detail & Related papers (2023-10-06T15:22:40Z)
- Implicit Graph Neural Diffusion Networks: Convergence, Generalization, and Over-Smoothing [7.984586585987328]
Implicit graph neural networks (implicit GNNs) have achieved significant success in addressing graph learning problems.
We introduce a geometric framework for designing implicit graph diffusion layers based on a parameterized graph Laplacian operator.
We show how an implicit GNN layer can be viewed as solving the fixed-point equation of a Dirichlet energy minimization problem, as sketched below.
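A minimal sketch of that view, assuming the common energy E(Z) = (lam/2)·tr(Zᵀ L Z) + (1/2)·‖Z − X‖²_F rather than the paper's parameterized Laplacian:

```python
import torch

def implicit_diffusion_layer(x: torch.Tensor, lap: torch.Tensor,
                             lam: float = 1.0, step: float = 0.1,
                             iters: int = 100) -> torch.Tensor:
    """Compute the layer output as the fixed point of gradient descent on a
    Dirichlet energy E(Z) = (lam/2) * tr(Z^T L Z) + (1/2) * ||Z - X||_F^2.
    The minimizer is Z* = (I + lam * L)^{-1} X, so the iteration trades a
    matrix inverse for repeated (sparse) products, as implicit GNNs do.
    x: (n, d) input features; lap: (n, n) graph Laplacian."""
    z = x.clone()
    for _ in range(iters):
        grad = lam * (lap @ z) + (z - x)  # gradient of E at Z
        z = z - step * grad
    return z
```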
arXiv Detail & Related papers (2023-08-07T05:22:33Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
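For the linear part of a GCN layer the decomposition idea is exact; here is a minimal sketch (DEGREE itself also handles nonlinear activations, which this toy version does not):

```python
import torch

def neighbor_contributions(a_hat: torch.Tensor, x: torch.Tensor,
                           w: torch.Tensor, target: int) -> torch.Tensor:
    """Decompose one linear GCN layer H = A_hat @ X @ W into per-source
    contributions for `target`: row j is A_hat[target, j] * (x[j] @ W),
    and the rows sum exactly to H[target]."""
    contribs = a_hat[target].unsqueeze(1) * (x @ w)  # (n, out_dim)
    full = (a_hat @ x @ w)[target]
    assert torch.allclose(contribs.sum(dim=0), full, atol=1e-5)
    return contribs
```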
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural Networks via Normalization [80.90206641975375]
This paper focuses on improving the performance of GNNs via normalization.
By studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs.
The scale operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes.
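A rough, assumption-laden sketch of such a scale step: rescaling each embedding by a power of its NStd flattens the NStd distribution so that tail (low-NStd) nodes are boosted relative to head nodes; the paper's actual reshaping function may differ.

```python
import torch

def resnorm_scale(z: torch.Tensor, p: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Illustrative 'scale' step: with 0 < p < 1, multiplying node i's
    embedding by NStd_i ** (p - 1) gives it a new NStd of roughly
    NStd_i ** p, compressing the NStd distribution.
    z: (n, d) node embeddings."""
    nstd = z.std(dim=1, keepdim=True) + eps  # (n, 1) node-wise std
    return z * nstd.pow(p - 1.0)
```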
arXiv Detail & Related papers (2022-06-16T13:49:09Z)
- RawlsGCN: Towards Rawlsian Difference Principle on Graph Convolutional Network [102.27090022283208]
Graph Convolutional Network (GCN) plays a pivotal role in many real-world applications.
GCN often exhibits performance disparity with respect to node degrees, resulting in worse predictive accuracy for low-degree nodes.
We formulate the problem of mitigating the degree-related performance disparity in GCN from the perspective of the Rawlsian difference principle.
arXiv Detail & Related papers (2022-02-28T05:07:57Z)
- Orthogonal Graph Neural Networks [53.466187667936026]
Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations.
However, stacking more convolutional layers significantly decreases the performance of GNNs.
We propose a novel Ortho-GConv, which can generally augment existing GNN backbones to stabilize model training and improve the model's generalization performance.
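Ortho-GConv combines several orthogonalization techniques; as a hedged illustration, a generic soft penalty that keeps a layer's weight near-orthogonal looks like this:

```python
import torch

def weight_ortho_penalty(w: torch.Tensor) -> torch.Tensor:
    """Soft orthogonality penalty ||W^T W - I||_F^2 on a layer weight.
    Keeping the feature transform near-orthogonal preserves feature norms
    across layers, which helps deep GNN stacks train stably."""
    gram = w.T @ w
    eye = torch.eye(gram.shape[0], device=w.device)
    return ((gram - eye) ** 2).sum()
```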
arXiv Detail & Related papers (2021-09-23T12:39:01Z)
- Graph-MLP: Node Classification without Message Passing in Graph [28.604893350871777]
Graph Neural Networks (GNNs) have demonstrated their effectiveness in dealing with non-Euclidean structural data.
Recent works have mainly focused on powerful message passing modules; however, in this paper, we show that none of the message passing modules is necessary.
We propose a pure multilayer-perceptron-based framework, Graph-MLP, whose supervision signal leverages the graph structure.
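Graph-MLP's supervision signal is a neighbor-contrastive loss; below is a sketch in that spirit (shapes, masking, and names are assumptions, not the paper's code):

```python
import torch
import torch.nn.functional as F

def ncontrast_loss(z: torch.Tensor, adj_mask: torch.Tensor,
                   tau: float = 1.0, eps: float = 1e-8) -> torch.Tensor:
    """Neighbor-contrastive loss: embeddings of graph neighbors are pulled
    together, all other pairs pushed apart, so the graph is only needed at
    training time and inference is message-passing-free.
    z: (n, d) MLP embeddings; adj_mask: (n, n) boolean neighbor mask
    (no self-loops)."""
    sim = F.cosine_similarity(z.unsqueeze(1), z.unsqueeze(0), dim=-1)  # (n, n)
    sim = torch.exp(sim / tau)
    self_mask = ~torch.eye(z.shape[0], dtype=torch.bool, device=z.device)
    sim = sim * self_mask.float()                 # drop self-similarity
    pos = (sim * adj_mask.float()).sum(dim=1)     # neighbor similarities
    return -torch.log(pos / (sim.sum(dim=1) + eps) + eps).mean()
```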
arXiv Detail & Related papers (2021-06-08T02:07:21Z)
- Implicit Graph Neural Networks [46.0589136729616]
We propose a graph learning framework called Implicit Graph Neural Networks (IGNN).
IGNNs consistently capture long-range dependencies and outperform state-of-the-art GNN models.
arXiv Detail & Related papers (2020-09-14T06:04:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.