ReFactor GNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective
- URL: http://arxiv.org/abs/2207.09980v4
- Date: Thu, 16 Jan 2025 15:56:56 GMT
- Title: ReFactor GNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective
- Authors: Yihong Chen, Pushkar Mishra, Luca Franceschi, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
- Abstract summary: We bridge the gap between Factorisation-based Models (FMs) and Graph Neural Networks (GNNs) by proposing ReFactor GNNs.
We show how FMs can be cast as GNNs by reformulating the gradient descent procedure as message-passing operations.
Our ReFactor GNNs achieve transductive performance comparable to FMs, and state-of-the-art inductive performance, while using an order of magnitude fewer parameters.
- Abstract: Factorisation-based Models (FMs), such as DistMult, have enjoyed enduring success for Knowledge Graph Completion (KGC) tasks, often outperforming Graph Neural Networks (GNNs). However, unlike GNNs, FMs struggle to incorporate node features and generalise to unseen nodes in inductive settings. Our work bridges the gap between FMs and GNNs by proposing ReFactor GNNs. This new architecture draws upon both modelling paradigms, which previously were largely thought of as disjoint. Concretely, using a message-passing formalism, we show how FMs can be cast as GNNs by reformulating the gradient descent procedure as message-passing operations, which forms the basis of our ReFactor GNNs. Across a multitude of well-established KGC benchmarks, our ReFactor GNNs achieve comparable transductive performance to FMs, and state-of-the-art inductive performance while using an order of magnitude fewer parameters.
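The gradient-as-message-passing reformulation is easy to verify on a toy example. Below is a minimal numpy sketch, not the authors' implementation: it assumes a simplified objective (raw DistMult scores of observed triples, with no negative sampling or regularisation), and all names (E, W, triples, lr) are illustrative. It checks that one gradient step on the node embeddings coincides with summing relation-modulated messages from neighbours.

```python
# A minimal sketch (not the authors' code) of the paper's core observation:
# one gradient step of DistMult on a node's embedding aggregates
# relation-modulated messages from its neighbours, i.e. message passing.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, num_rels, embed_dim, lr = 4, 2, 8, 0.1

E = rng.normal(size=(num_nodes, embed_dim))   # node embeddings
W = rng.normal(size=(num_rels, embed_dim))    # diagonal relation embeddings
triples = [(0, 0, 1), (2, 1, 0), (0, 1, 3)]   # (subject, relation, object)

# DistMult scores a triple as <e_s, w_r, e_o> = sum_d e_s[d] * w_r[d] * e_o[d],
# so d(score)/d(e_s) = w_r * e_o and d(score)/d(e_o) = w_r * e_s (elementwise).

# --- View 1: plain gradient ascent on the scores of the observed triples.
grad = np.zeros_like(E)
for s, r, o in triples:
    grad[s] += W[r] * E[o]
    grad[o] += W[r] * E[s]
E_gradient = E + lr * grad

# --- View 2: the same update written as message passing: each node sums
# incoming messages w_r * e_neighbour over its incident edges.
messages = {v: [] for v in range(num_nodes)}
for s, r, o in triples:
    messages[s].append(W[r] * E[o])
    messages[o].append(W[r] * E[s])
E_mp = np.array([E[v] + lr * np.sum(messages[v], axis=0)
                 if messages[v] else E[v] for v in range(num_nodes)])

assert np.allclose(E_gradient, E_mp)  # the two views coincide
```

Under this view, each optimisation step of the FM plays the role of one message-passing layer, which is the basis on which the paper recasts FMs as GNNs.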
Related papers
- Graph Neural Networks at a Fraction [1.8175282137722093]
This paper introduces Quaternion Message Passing Neural Networks (QMPNNs), a framework that leverages quaternion space to compute node representations.
We present a novel perspective on Graph Lottery Tickets, redefining their applicability within the context of GNNs and QMPNNs. (A toy illustration of the quaternion parameter saving follows this entry.)
arXiv Detail & Related papers (2025-02-10T03:55:09Z)
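As a rough illustration of where the "fraction" in the title above comes from, here is a minimal sketch, not the paper's code: a single quaternion weight (4 real parameters) mixes all four components of a quaternion-valued feature, whereas an unconstrained real linear map over the same four dimensions would need 16 parameters. All sizes and names are illustrative assumptions.

```python
# A minimal sketch of the quaternion parameter saving behind QMPNN-style
# layers (illustrative reconstruction, not the paper's code).
import numpy as np

def hamilton(q, p):
    """Hamilton product of quaternions q = (a, b, c, d), p = (w, x, y, z)."""
    a, b, c, d = q
    w, x, y, z = p
    return np.array([
        a * w - b * x - c * y - d * z,
        a * x + b * w + c * z - d * y,
        a * y - b * z + c * w + d * x,
        a * z + b * y - c * x + d * w,
    ])

rng = np.random.default_rng(0)
h = rng.normal(size=4)         # a node feature viewed as one quaternion
q_weight = rng.normal(size=4)  # ONE quaternion weight: 4 parameters

# The quaternion "linear layer" mixes all 4 feature components with 4
# parameters; an unconstrained real dense layer would need a 4x4 matrix
# (16 parameters) to mix the same dimensions.
out = hamilton(q_weight, h)
print(out.shape, "- from 4 params instead of 16")
```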
- Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification [7.14327815822376]
Graph Transformers (GTs) have emerged as popular alternatives to traditional Graph Neural Networks (GNNs).
In this paper, we reevaluate the performance of three classic GNN models (GCN, GAT, and GraphSAGE) against GTs.
arXiv Detail & Related papers (2024-06-13T10:53:33Z)
- LOGIN: A Large Language Model Consulted Graph Neural Network Training Framework [30.54068909225463]
We aim to streamline the GNN design process and leverage the advantages of Large Language Models (LLMs) to improve the performance of GNNs on downstream tasks.
We formulate a new paradigm, coined "LLMs-as-Consultants," which integrates LLMs with GNNs in an interactive manner.
We empirically evaluate the effectiveness of LOGIN on node classification tasks across both homophilic and heterophilic graphs.
arXiv Detail & Related papers (2024-05-22T18:17:20Z)
- Unleashing the potential of GNNs via Bi-directional Knowledge Transfer [58.64807174714959]
Bi-directional Knowledge Transfer (BiKT) is a plug-and-play approach to unleash the potential of the feature transformation operations without modifying the original architecture.
BiKT brings a 0.5%-4% performance gain over the original GNN.
arXiv Detail & Related papers (2023-10-26T04:11:49Z)
- FRGNN: Mitigating the Impact of Distribution Shift on Graph Neural Networks via Test-Time Feature Reconstruction [13.21683198528012]
A distribution shift can adversely affect the test performance of Graph Neural Networks (GNNs).
We propose FR-GNN, a general framework for GNNs to conduct feature reconstruction.
Notably, the reconstructed node features can be directly utilized for testing the well-trained model.
arXiv Detail & Related papers (2023-08-18T02:34:37Z)
- Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs [71.93227401463199]
This paper traces the major source of GNNs' performance gains to their intrinsic capability by introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training. (A minimal sketch of the PMLP recipe follows this entry.)
arXiv Detail & Related papers (2022-12-18T08:17:32Z)
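A minimal sketch of the PMLP recipe referenced in the entry above, as read from the abstract (not the authors' code; the toy graph, layer sizes, and row-normalised propagation are assumptions): weights are trained as a plain MLP that never sees the graph, and message passing is only inserted at test time.

```python
# PMLP idea, minimal sketch (illustrative reconstruction): same weights,
# MLP forward during training, propagation added only at inference.
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_hid, d_out = 5, 3, 4, 2
X = rng.normal(size=(n, d_in))
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
A_tilde = A + np.eye(n)
A_hat = A_tilde / A_tilde.sum(1, keepdims=True)  # row-normalised, self-loops

W1 = rng.normal(size=(d_in, d_hid))  # stand-ins for weights learned by
W2 = rng.normal(size=(d_hid, d_out)) # ordinary (graph-free) MLP training
relu = lambda z: np.maximum(z, 0)

def mlp_forward(X):   # training-time forward: the graph is never used
    return relu(X @ W1) @ W2

def pmlp_forward(X):  # test-time forward: SAME weights, plus propagation
    H = relu(A_hat @ X @ W1)
    return A_hat @ H @ W2

print(mlp_forward(X).shape, pmlp_forward(X).shape)
```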
- MGDCF: Distance Learning via Markov Graph Diffusion for Neural Collaborative Filtering [96.65234340724237]
We show the equivalence between some state-of-the-art GNN-based CF models and a traditional one-layer network representation learning (NRL) model based on context encoding.
We present Markov Graph Diffusion Collaborative Filtering (MGDCF) to generalize some state-of-the-art GNN-based CF models.
arXiv Detail & Related papers (2022-04-05T17:24:32Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes. (A numeric check of the underlying one-gradient-step equivalence follows this entry.)
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
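The unified denoising view in the entry above admits a compact numeric check. The sketch below is a reconstruction under common assumptions (symmetrically normalised adjacency A_hat with self-loops, Laplacian L = I - A_hat), not the paper's code: one gradient step on the objective min_F ||F - X||_F^2 + c * tr(F^T L F), started from F = X with step size 1/(2c), reproduces GCN-style aggregation A_hat X.

```python
# One gradient step of graph signal denoising = one GCN aggregation
# (numeric check of the unified view; illustrative reconstruction).
import numpy as np

rng = np.random.default_rng(0)
n, d, c = 5, 3, 1.0
X = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                         # symmetric, no self-loops
deg = A.sum(1) + 1.0                                   # degrees with self-loops
A_hat = (A + np.eye(n)) / np.sqrt(np.outer(deg, deg))  # sym-normalised adjacency
L = np.eye(n) - A_hat                                  # normalised Laplacian

# Objective: ||F - X||_F^2 + c * tr(F^T L F); its gradient in F is
# 2(F - X) + 2c L F, so at F = X the gradient is simply 2c L X.
grad_at_X = 2 * (X - X) + 2 * c * L @ X
F_one_step = X - (1.0 / (2 * c)) * grad_at_X

assert np.allclose(F_one_step, A_hat @ X)  # equals one GCN aggregation
```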
- The Surprising Power of Graph Neural Networks with Random Node Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism heuristic.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties. (A toy demonstration of RNI follows this entry.)
arXiv Detail & Related papers (2020-10-02T19:53:05Z)
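A toy demonstration of the RNI idea from the entry above (an illustrative reconstruction, not the paper's code): on a 6-cycle, where all nodes are 1-WL-equivalent, constant input features make a sum-aggregation layer collapse every node to the same representation, while appended random channels keep the nodes distinct.

```python
# Random node initialisation (RNI) breaks 1-WL symmetry: toy sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = np.zeros((n, n))          # 6-cycle: all nodes are 1-WL-equivalent
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1.0

X_const = np.ones((n, 2))                              # identical features
X_rni = np.hstack([X_const, rng.normal(size=(n, 2))])  # + random channels

aggregate = lambda X: A @ X   # one sum-aggregation message-passing layer

H_const, H_rni = aggregate(X_const), aggregate(X_rni)
print(np.unique(H_const, axis=0).shape[0])  # 1 distinct row: nodes collapse
print(np.unique(H_rni, axis=0).shape[0])    # 6 distinct rows (w.h.p.)
```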
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module that boosts GNNs' ability to preserve graph structures. (A minimal sketch follows this entry.)
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
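A minimal sketch of the Eigen-GNN plug-in as described above (a reconstruction from the abstract, not the authors' code; the sizes and the choice of adjacency eigenvectors ranked by eigenvalue magnitude are assumptions): structural coordinates from an eigendecomposition are concatenated to the node features before any GNN runs, so structure is preserved even when the GNN itself is shallow and feature-centric.

```python
# Eigen-GNN plug-in, minimal sketch: features || adjacency eigenvectors.
import numpy as np

rng = np.random.default_rng(0)
n, d_feat, d_eig = 6, 4, 2
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T        # symmetric adjacency
X = rng.normal(size=(n, d_feat))      # original node features

# Top-d_eig eigenvectors of A (largest |eigenvalue| first) serve as
# low-dimensional structural coordinates for each node.
vals, vecs = np.linalg.eigh(A)
order = np.argsort(-np.abs(vals))
U = vecs[:, order[:d_eig]]

X_plus = np.hstack([X, U])            # plug-in: ready for any downstream GNN
print(X_plus.shape)                   # (6, 6)
```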