Are Message Passing Neural Networks Really Helpful for Knowledge Graph Completion?
- URL: http://arxiv.org/abs/2205.10652v3
- Date: Tue, 4 Jul 2023 02:41:01 GMT
- Title: Are Message Passing Neural Networks Really Helpful for Knowledge Graph Completion?
- Authors: Juanhui Li, Harry Shomer, Jiayuan Ding, Yiqi Wang, Yao Ma, Neil Shah, Jiliang Tang, Dawei Yin
- Abstract summary: We show that simple models are able to achieve comparable performance to MPNNs.
We show that careful scoring function and loss function design has a much stronger influence on KGC model performance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge graphs (KGs) facilitate a wide variety of applications. Despite
great efforts in creation and maintenance, even the largest KGs are far from
complete. Hence, KG completion (KGC) has become one of the most crucial tasks
for KG research. Recently, considerable literature in this space has centered
around the use of Message Passing (Graph) Neural Networks (MPNNs) to learn
powerful embeddings. The success of these methods is naturally attributed to
the use of MPNNs over simpler multi-layer perceptron (MLP) models, given their
additional message passing (MP) component. In this work, we find that,
surprisingly, simple MLP models are able to achieve comparable performance to
MPNNs, suggesting that MP may not be as crucial as previously believed. With
further exploration, we show that careful scoring function and loss function
design has a much stronger influence on KGC model performance. This suggests
that prior work conflates the effects of scoring function design, loss
function design, and MP, with promising implications for the scalability of
state-of-the-art KGC methods today, and for designing MP schemes better
suited to KGC tasks tomorrow. Our code is publicly available at:
https://github.com/Juanhui28/Are_MPNNs_helpful.
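To make this concrete, here is a minimal, hypothetical sketch of the kind of pipeline the paper compares (all names and dimensions are illustrative assumptions, not the authors' released implementation): entity/relation embedding tables, a plain MLP encoder with no message passing, a DistMult-style scoring function, and a binary cross-entropy loss over true and corrupted triples.

```python
# Minimal sketch (not the authors' released code): an MLP-based KGC model
# that keeps the usual embedding -> encoder -> scoring-function pipeline
# but contains no message passing. All names/dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPKGC(nn.Module):
    def __init__(self, num_entities, num_relations, dim=200):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        # Plain MLP encoder; an MPNN baseline would instead aggregate
        # neighbor embeddings at this step.
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def score(self, h, r, t):
        # DistMult scoring function: sum_i e_h[i] * w_r[i] * e_t[i]
        e_h, e_t = self.mlp(self.ent(h)), self.mlp(self.ent(t))
        return (e_h * self.rel(r) * e_t).sum(dim=-1)

model = MLPKGC(num_entities=10_000, num_relations=200)
h, r, t = torch.tensor([0]), torch.tensor([3]), torch.tensor([42])
logits = model.score(h, r, t)
# Binary cross-entropy with label 1 for an observed triple (corrupted
# triples would get label 0).
loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
loss.backward()
```

On this reading, an MPNN and an MLP differ only at the encoder step, so the paper's finding that the two perform comparably shifts the credit to the scoring function and loss design choices.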
Related papers
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs [77.42221150848535]
We propose a novel message passing function called Multiset-to-Multiset GNN (M2M-GNN).
Our theoretical analyses and extensive experiments demonstrate that M2M-GNN effectively alleviates the aforementioned limitations of signed message passing (SMP), yielding superior performance in comparison.
arXiv Detail & Related papers (2024-05-31T07:39:22Z)
- Schema First! Learn Versatile Knowledge Graph Embeddings by Capturing Semantics with MASCHInE [3.174882428337821]
Knowledge graph embedding models (KGEMs) have gained considerable traction in recent years.
In this work, we design protographs -- small, modified versions of a KG that leverage RDF/S information.
The learnt protograph-based embeddings are meant to encapsulate the semantics of a KG, and can be leveraged in learning KGEs that, in turn, also better capture semantics.
arXiv Detail & Related papers (2023-06-06T13:22:54Z)
- How does over-squashing affect the power of GNNs? [39.52168593457813]
Graph Neural Networks (GNNs) are the state-of-the-art model for machine learning on graph-structured data.
We provide a rigorous analysis to determine which function classes of node features can be learned by an MPNN of a given capacity.
We prove that, to guarantee sufficient communication between pairs of nodes, the capacity of the MPNN must be large enough.
arXiv Detail & Related papers (2023-06-06T11:15:53Z)
- MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization [51.76758674012744]
Training graph neural networks (GNNs) on large graphs is complex and extremely time-consuming.
We propose an embarrassingly simple, yet hugely effective method for GNN training acceleration, called MLPInit (see the sketch after this list).
arXiv Detail & Related papers (2022-09-30T21:33:51Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It incorporates high-order reasoning into a graph convolutional network, named HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- MEKER: Memory Efficient Knowledge Embedding Representation for Link Prediction and Question Answering [65.62309538202771]
Knowledge Graphs (KGs) are symbolically structured stores of facts.
KG embeddings provide concise representations for NLP tasks that require implicit knowledge about the real world.
We propose a memory-efficient KG embedding model, which yields SOTA-comparable performance on link prediction tasks and KG-based Question Answering.
arXiv Detail & Related papers (2022-04-22T10:47:03Z)
- Boosting Graph Neural Networks by Injecting Pooling in Message Passing [4.952681349410351]
We propose a new, adaptable, and powerful MP framework to prevent over-smoothing.
Our bilateral-MP estimates a pairwise modular gradient by utilizing the class information of nodes.
Experiments on five medium-size benchmark datasets indicate that the bilateral-MP improves performance by alleviating over-smoothing.
arXiv Detail & Related papers (2022-02-08T08:21:20Z)
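For the MLPInit entry above, the core idea can be sketched in a few lines. This is a hedged illustration based only on the title and summary (the layer shapes, the peer-MLP training, and all names are assumptions, not the authors' code): a GCN layer computes a_hat @ x @ W and a linear layer computes x @ W, so the weight shapes match, and an MLP trained cheaply on node features alone can supply the GNN's initial weights.

```python
# Hedged sketch of MLP-based GNN initialization (assumed details, not the
# authors' released code). Because GCN and linear layers share weight
# shapes, weights trained in a plain MLP transfer directly into the GCN.
import torch
import torch.nn as nn

dim_in, dim_hidden, dim_out = 32, 64, 7
n = 100  # number of nodes (illustrative)

# Peer MLP with the same weight shapes as the GCN below; training it
# needs only node features and labels, no graph, so it is much cheaper.
mlp = nn.Sequential(
    nn.Linear(dim_in, dim_hidden), nn.ReLU(),
    nn.Linear(dim_hidden, dim_out),
)
# ... train `mlp` here on (features, labels) ...

class GCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = nn.Linear(dim_in, dim_hidden)
        self.lin2 = nn.Linear(dim_hidden, dim_out)

    def forward(self, x, a_hat):
        # a_hat: (n, n) normalized adjacency matrix
        x = torch.relu(a_hat @ self.lin1(x))
        return a_hat @ self.lin2(x)

gnn = GCN()
# MLP initialization: copy the trained MLP weights into the GCN, then
# fine-tune the GCN as usual, reportedly for far fewer epochs.
gnn.lin1.load_state_dict(mlp[0].state_dict())
gnn.lin2.load_state_dict(mlp[2].state_dict())

x = torch.randn(n, dim_in)
a_hat = torch.eye(n)  # placeholder adjacency for the sketch
out = gnn(x, a_hat)   # (n, dim_out) logits
```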