MGDCF: Distance Learning via Markov Graph Diffusion for Neural
Collaborative Filtering
- URL: http://arxiv.org/abs/2204.02338v2
- Date: Sat, 6 Jan 2024 16:28:42 GMT
- Title: MGDCF: Distance Learning via Markov Graph Diffusion for Neural
Collaborative Filtering
- Authors: Jun Hu, Bryan Hooi, Shengsheng Qian, Quan Fang, Changsheng Xu
- Abstract summary: We show the equivalence between some state-of-the-art GNN-based CF models and a traditional 1-layer NRL model based on context encoding.
We present Markov Graph Diffusion Collaborative Filtering (MGDCF) to generalize some state-of-the-art GNN-based CF models.
- Score: 96.65234340724237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have recently been utilized to build
Collaborative Filtering (CF) models to predict user preferences based on
historical user-item interactions. However, there is relatively little
understanding of how GNN-based CF models relate to some traditional Network
Representation Learning (NRL) approaches. In this paper, we show the
equivalence between some state-of-the-art GNN-based CF models and a traditional
1-layer NRL model based on context encoding. Based on a Markov process that
trades off two types of distances, we present Markov Graph Diffusion
Collaborative Filtering (MGDCF) to generalize some state-of-the-art GNN-based
CF models. Instead of considering the GNN as a trainable black box that
propagates learnable user/item vertex embeddings, we treat GNNs as an
untrainable Markov process that can construct constant context features of
vertices for a traditional NRL model that encodes context features with a
fully-connected layer. Such simplification can help us to better understand how
GNNs benefit CF models. In particular, it reveals that ranking losses
play a crucial role in GNN-based CF tasks. With our proposed simple yet powerful
ranking loss InfoBPR, the NRL model can still perform well without the context
features constructed by GNNs. We conduct experiments to perform a detailed
analysis of MGDCF.
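To make the abstract's core idea concrete, here is a minimal PyTorch-style sketch. It is a hedged reconstruction under stated assumptions, not the authors' code: the names markov_graph_diffusion, OneLayerNRL, and info_bpr_loss, the hyperparameters alpha/beta/num_steps, and the exact form of the loss are all illustrative. The diffusion is an untrainable Markov process over the normalized user-item graph that trades off staying close to the initial vertex features against smoothing over neighbors; the resulting constant context features are encoded by a single fully-connected layer, and training is driven by a BPR-style ranking loss generalized to multiple negative samples (the spirit of InfoBPR; the paper's exact formula may differ).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def markov_graph_diffusion(adj_norm, x, num_steps=4, alpha=0.1, beta=0.9):
    """Untrainable Markov diffusion (hypothetical parameterization).

    adj_norm: (N, N) normalized adjacency of the user-item graph.
    x:        (N, D) initial vertex features.
    Each step trades off smoothing over neighbors (beta term) against
    staying close to the initial features (alpha term).
    """
    h = x
    for _ in range(num_steps):
        h = beta * (adj_norm @ h) + alpha * x  # constant: no learnable weights
    return h

class OneLayerNRL(nn.Module):
    """Traditional 1-layer NRL model: a single fully-connected layer that
    encodes the constant context features produced by the diffusion."""
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, emb_dim)

    def forward(self, context_features):
        return self.fc(context_features)

def info_bpr_loss(pos_scores, neg_scores):
    """BPR generalized to K negatives per positive, in the spirit of InfoBPR.

    pos_scores: (B,)   scores of observed user-item pairs.
    neg_scores: (B, K) scores of sampled negative items.
    """
    diff = pos_scores.unsqueeze(1) - neg_scores  # (B, K) pairwise margins
    return -F.logsigmoid(diff).mean()
```

In this reading, the GNN contributes only constant context features; setting K=1 in info_bpr_loss recovers the classic single-negative BPR setup, which is consistent with the abstract's claim that the ranking loss, rather than trainable propagation, does much of the work.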
Related papers
- LOGIN: A Large Language Model Consulted Graph Neural Network Training Framework [30.54068909225463]
We aim to streamline the GNN design process and leverage the advantages of Large Language Models (LLMs) to improve the performance of GNNs on downstream tasks.
We formulate a new paradigm, coined "LLMs-as-Consultants," which integrates LLMs with GNNs in an interactive manner.
We empirically evaluate the effectiveness of LOGIN on node classification tasks across both homophilic and heterophilic graphs.
arXiv Detail & Related papers (2024-05-22T18:17:20Z)
- Graph Coordinates and Conventional Neural Networks -- An Alternative for Graph Neural Networks [0.10923877073891444]
We propose Topology Coordinate Neural Network (TCNN) and Directional Virtual Coordinate Neural Network (DVCNN) as novel alternatives to message passing GNNs.
TCNN and DVCNN achieve competitive or superior performance to message passing GNNs.
Our work expands the toolbox of techniques for graph-based machine learning.
arXiv Detail & Related papers (2023-12-03T10:14:10Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push ensemble learning of GNNs one step forward, with improved accuracy and robustness against adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs [71.93227401463199]
This paper pinpoints the major source of GNNs' performance gain as their intrinsic generalization capability, by introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training.
arXiv Detail & Related papers (2022-12-18T08:17:32Z)
- ReFactorGNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective [42.845783579293]
We bridge the gap between Factorisation-based Models (FMs) and Graph Neural Networks (GNNs) by proposing ReFactorGNNs.
We show how FMs can be cast as GNNs by reformulating the gradient descent procedure as message-passing operations.
Our ReFactorGNNs achieve comparable transductive performance to FMs, and state-of-the-art inductive performance while using an order of magnitude fewer parameters.
arXiv Detail & Related papers (2022-07-20T15:39:30Z)
- KerGNNs: Interpretable Graph Neural Networks with Graph Kernels [14.421535610157093]
Graph neural networks (GNNs) have become the state-of-the-art method in downstream graph-related tasks.
We propose a novel GNN framework, termed Kernel Graph Neural Networks (KerGNNs).
KerGNNs integrate graph kernels into the message passing process of GNNs.
We show that our method achieves competitive performance compared with existing state-of-the-art methods.
arXiv Detail & Related papers (2022-01-03T06:16:30Z)
- Hybrid Graph Neural Networks for Few-Shot Learning [85.93495480949079]
Graph neural networks (GNNs) have been used to tackle the few-shot learning problem.
Under the inductive setting, existing GNN based methods are less competitive.
We propose a novel hybrid GNN model consisting of two GNNs, an instance GNN and a prototype GNN.
arXiv Detail & Related papers (2021-12-13T10:20:15Z)
- The Surprising Power of Graph Neural Networks with Random Node Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
arXiv Detail & Related papers (2020-10-02T19:53:05Z)
- Efficient Probabilistic Logic Reasoning with Graph Neural Networks [63.099999467118245]
Markov Logic Networks (MLNs) can be used to address many knowledge graph problems.
Inference in MLNs is computationally intensive, making industrial-scale application of MLNs very difficult.
We propose a graph neural network (GNN) variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.
arXiv Detail & Related papers (2020-01-29T23:34:36Z)