LightGCN: Simplifying and Powering Graph Convolution Network for
Recommendation
- URL: http://arxiv.org/abs/2002.02126v4
- Date: Tue, 7 Jul 2020 04:20:53 GMT
- Title: LightGCN: Simplifying and Powering Graph Convolution Network for
Recommendation
- Authors: Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang and Meng
Wang
- Abstract summary: Graph Convolution Network (GCN) has become the new state-of-the-art for collaborative filtering.
In this work, we aim to simplify the design of GCN to make it more concise and appropriate for recommendation.
We propose a new model named LightGCN, including only the most essential component in GCN -- neighborhood aggregation.
- Score: 100.76229017056181
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Convolution Network (GCN) has become the new state-of-the-art
for collaborative filtering. Nevertheless, the reasons for its effectiveness in
recommendation are not well understood. Existing work that adapts GCN to
recommendation lacks thorough ablation analyses on GCN, which is originally
designed for graph classification tasks and equipped with many neural network
operations. However, we empirically find that the two most common designs in
GCNs -- feature transformation and nonlinear activation -- contribute little to
the performance of collaborative filtering. Even worse, including them adds to
the difficulty of training and degrades recommendation performance.
In this work, we aim to simplify the design of GCN to make it more concise
and appropriate for recommendation. We propose a new model named LightGCN,
including only the most essential component in GCN -- neighborhood aggregation
-- for collaborative filtering. Specifically, LightGCN learns user and item
embeddings by linearly propagating them on the user-item interaction graph, and
uses the weighted sum of the embeddings learned at all layers as the final
embedding. Such a simple, linear, and neat model is much easier to implement
and train, exhibiting substantial improvements (about 16.0% relative
improvement on average) over Neural Graph Collaborative Filtering (NGCF) -- a
state-of-the-art GCN-based recommender model -- under exactly the same
experimental setting. Further analyses of the rationality of the simple
LightGCN are provided from both analytical and empirical perspectives.
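
The abstract fully specifies LightGCN's forward computation, so a short sketch may help make it concrete. The following is a minimal NumPy sketch, not the authors' implementation: the symmetric normalization of the bipartite adjacency matrix and the uniform layer-combination weights are common LightGCN choices assumed here rather than stated in the abstract, and training of the 0-th-layer embeddings (e.g., with a BPR loss) is omitted.

    # Minimal sketch of LightGCN propagation and layer combination (not the
    # authors' code). Assumptions: symmetric D^{-1/2} A D^{-1/2} normalization
    # and uniform layer weights alpha_k = 1 / (K + 1); embedding training
    # (e.g., BPR loss) is omitted.
    import numpy as np

    def lightgcn_embeddings(R, dim=64, num_layers=3, seed=0):
        """R: binary user-item interaction matrix of shape (n_users, n_items)."""
        n_users, n_items = R.shape
        rng = np.random.default_rng(seed)

        # 0-th layer embeddings -- the only trainable parameters in LightGCN.
        E = rng.normal(scale=0.1, size=(n_users + n_items, dim))

        # Bipartite adjacency A = [[0, R], [R^T, 0]] with symmetric normalization.
        A = np.zeros((n_users + n_items, n_users + n_items))
        A[:n_users, n_users:] = R
        A[n_users:, :n_users] = R.T
        deg = A.sum(axis=1)
        d_inv_sqrt = np.zeros_like(deg)
        d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
        A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

        # Linear propagation E^{(k+1)} = A_hat @ E^{(k)}: no feature transform,
        # no nonlinear activation. The final embedding is the (uniformly)
        # weighted sum of the embeddings from all layers.
        layers = [E]
        for _ in range(num_layers):
            layers.append(A_hat @ layers[-1])
        E_final = sum(layers) / (num_layers + 1)
        return E_final[:n_users], E_final[n_users:]

    # Ranking scores are inner products of the final user and item embeddings:
    # user_emb, item_emb = lightgcn_embeddings(R); scores = user_emb @ item_emb.T
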
Related papers
- Beyond Graph Convolutional Network: An Interpretable
Regularizer-centered Optimization Framework [12.116373546916078]
Graph convolutional networks (GCNs) have been attracting widespread attention due to their encouraging performance and powerful generalization ability.
In this paper, we induce an interpretable regularizer-centered optimization framework, in which most GCNs can be interpreted by building appropriate regularizers.
Under the proposed framework, we devise a dual-regularizer graph convolutional network (dubbed tsGCN) to capture topological and semantic structures from graph data.
arXiv Detail & Related papers (2023-01-11T05:51:33Z) - Old can be Gold: Better Gradient Flow can Make Vanilla-GCNs Great Again [96.4999517230259]
We provide a new perspective of gradient flow to understand the substandard performance of deep GCNs.
We propose to use gradient-guided dynamic rewiring of vanilla-GCNs with skip connections.
Our methods significantly boost their performance, allowing them to comfortably compete with and outperform many fancy state-of-the-art methods.
arXiv Detail & Related papers (2022-10-14T21:30:25Z) - Rethinking Graph Convolutional Networks in Knowledge Graph Completion [83.25075514036183]
Graph convolutional networks (GCNs) have become increasingly popular in knowledge graph completion (KGC).
In this paper, we build upon representative GCN-based KGC models and introduce variants to find which factor of GCNs is critical in KGC.
We propose a simple yet effective framework named LTE-KGE, which equips existing KGE models with linearly transformed entity embeddings (an illustrative sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-02-08T11:36:18Z) - An Adaptive Graph Pre-training Framework for Localized Collaborative
Filtering [79.17319280791237]
We propose an adaptive graph pre-training framework for localized collaborative filtering (ADAPT).
ADAPT does not require transferring user/item embeddings, and captures both the common knowledge across different graphs and the uniqueness of each graph.
arXiv Detail & Related papers (2021-12-14T06:53:13Z) - SStaGCN: Simplified stacking based graph convolutional networks [2.556756699768804]
Graph convolutional network (GCN) is a powerful model studied broadly in various graph-structured data learning tasks.
We propose a novel GCN called SStaGCN (Simplified stacking based GCN) by utilizing the ideas of stacking and aggregation.
We show that SStaGCN can efficiently mitigate the over-smoothing problem of GCN.
arXiv Detail & Related papers (2021-11-16T05:00:08Z) - Graph Partner Neural Networks for Semi-Supervised Learning on Graphs [16.489177915147785]
Graph Convolutional Networks (GCNs) are powerful for processing graph-structured data and have achieved state-of-the-art performance in several tasks such as node classification, link prediction, and graph classification.
Deep GCNs inevitably suffer from an over-smoothing issue, in which node representations tend to become indistinguishable after repeated graph convolution operations.
We propose the Graph Partner Neural Network (GPNN), which incorporates a de-parameterized GCN and a parameter-sharing scheme.
arXiv Detail & Related papers (2021-10-18T10:56:56Z) - User Embedding based Neighborhood Aggregation Method for Inductive
Recommendation [0.48598200320383667]
We consider the problem of learning latent features (a.k.a. embeddings) for users and items in a recommendation setting.
Recent methods using graph convolutional networks (e.g., LightGCN) achieve state-of-the-art performance.
We propose a graph convolutional network modeling approach for collaborative filtering, CF-GCN.
arXiv Detail & Related papers (2021-02-15T14:30:01Z) - On the Equivalence of Decoupled Graph Convolution Network and Label
Propagation [60.34028546202372]
Some work shows that coupling is inferior to decoupling, which better supports deep graph propagation.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
arXiv Detail & Related papers (2020-10-23T13:57:39Z) - RGCF: Refined Graph Convolution Collaborative Filtering with concise and
expressive embedding [42.46797662323393]
We develop a new GCN-based Collaborative Filtering model, named Refined Graph Convolution Collaborative Filtering (RGCF).
RGCF is more capable of capturing the implicit high-order connectivities inside the graph, and the resultant vector representations are more expressive.
We conduct extensive experiments on three public million-scale datasets, demonstrating that our RGCF significantly outperforms state-of-the-art models.
arXiv Detail & Related papers (2020-07-07T12:26:10Z) - Revisiting Graph based Collaborative Filtering: A Linear Residual Graph
Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) recommender systems (RS).
We show that removing non-linearities would enhance recommendation performance, consistent with the theories in simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
arXiv Detail & Related papers (2020-01-28T04:41:25Z)
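
Following up on the LTE-KGE entry above: its summary states only that existing KGE models are equipped with linearly transformed entity embeddings. The sketch below illustrates that idea under stated assumptions; the DistMult scoring function, the single bias-free linear transform, and all names are illustrative choices, not necessarily the paper's actual design.

    # Illustrative sketch of "linearly transformed entity embeddings" applied to
    # a KGE scorer. The DistMult scorer and the single bias-free linear transform
    # are assumptions for illustration; LTE-KGE's actual design may differ.
    import torch
    import torch.nn as nn

    class LinearlyTransformedDistMult(nn.Module):
        def __init__(self, n_entities, n_relations, dim=200):
            super().__init__()
            self.ent = nn.Embedding(n_entities, dim)
            self.rel = nn.Embedding(n_relations, dim)
            # Linear transformation applied to entity embeddings before scoring.
            self.transform = nn.Linear(dim, dim, bias=False)

        def score(self, head, relation, tail):
            h = self.transform(self.ent(head))  # transformed head embedding
            t = self.transform(self.ent(tail))  # transformed tail embedding
            r = self.rel(relation)
            return (h * r * t).sum(dim=-1)      # DistMult triple score

    # Usage on a single (head, relation, tail) index triple:
    # model = LinearlyTransformedDistMult(n_entities=100, n_relations=10)
    # s = model.score(torch.tensor([0]), torch.tensor([1]), torch.tensor([2]))
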
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.