MoCoSA: Momentum Contrast for Knowledge Graph Completion with
Structure-Augmented Pre-trained Language Models
- URL: http://arxiv.org/abs/2308.08204v1
- Date: Wed, 16 Aug 2023 08:09:10 GMT
- Title: MoCoSA: Momentum Contrast for Knowledge Graph Completion with
Structure-Augmented Pre-trained Language Models
- Authors: Jiabang He, Liu Jia, Lei Wang, Xiyao Li, Xing Xu
- Abstract summary: We propose Momentum Contrast for knowledge graph completion with Structure-Augmented pre-trained language models (MoCoSA).
Our approach achieves state-of-the-art performance in terms of mean reciprocal rank (MRR), with improvements of 2.5% on WN18RR and 21% on OpenBG500.
- Score: 11.57782182864771
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graph Completion (KGC) aims to conduct reasoning on the facts
within knowledge graphs and automatically infer missing links. Existing methods
can mainly be categorized into structure-based or description-based. On the one
hand, structure-based methods effectively represent relational facts in
knowledge graphs using entity embeddings. However, they struggle with
semantically rich real-world entities due to limited structural information and
fail to generalize to unseen entities. On the other hand, description-based
methods leverage pre-trained language models (PLMs) to understand textual
information. They exhibit strong robustness towards unseen entities. However,
they have difficulty with large-scale negative sampling and often lag behind
structure-based methods. To address these issues, in this paper, we propose
Momentum Contrast for knowledge graph completion with Structure-Augmented
pre-trained language models (MoCoSA), which allows the PLM to perceive the
structural information by the adaptable structure encoder. To improve learning
efficiency, we propose momentum hard negative sampling and intra-relation negative
sampling. Experimental results demonstrate that our approach achieves
state-of-the-art performance in terms of mean reciprocal rank (MRR), with
improvements of 2.5% on WN18RR and 21% on OpenBG500.
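The abstract describes a MoCo-style training setup: a PLM query encoder for the (head, relation) description, a slowly updated momentum key encoder for candidate tail entities, a queue of momentum-encoded hard negatives, and an adaptable structure encoder whose embeddings are fused with the textual representation. The sketch below illustrates that mechanism under stated assumptions; it is not the authors' released implementation, and every name in it (MoCoKGC, q_encoder, k_encoder, struct_emb) as well as the additive textual-structural fusion is hypothetical.

```python
# A minimal sketch of MoCo-style contrastive training for description-based KGC,
# assuming PyTorch and encoders that return pooled [batch, dim] embeddings.
# Names (MoCoKGC, q_encoder, k_encoder, struct_emb) and the additive fusion of
# structural and textual features are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F


class MoCoKGC(torch.nn.Module):
    def __init__(self, q_encoder, k_encoder, struct_emb, dim=768,
                 queue_size=4096, momentum=0.999, temperature=0.05):
        super().__init__()
        self.q_encoder = q_encoder    # PLM encoding the (head, relation) description
        self.k_encoder = k_encoder    # momentum copy encoding tail-entity descriptions
        self.struct_emb = struct_emb  # adaptable structure encoder, e.g. nn.Embedding
        self.m, self.t = momentum, temperature
        self.register_buffer("queue", F.normalize(torch.randn(queue_size, dim), dim=1))
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _momentum_update(self):
        # Key encoder trails the query encoder: theta_k <- m * theta_k + (1 - m) * theta_q
        for pq, pk in zip(self.q_encoder.parameters(), self.k_encoder.parameters()):
            pk.data.mul_(self.m).add_(pq.data, alpha=1.0 - self.m)

    @torch.no_grad()
    def _enqueue(self, keys):
        # Store momentum-encoded tails so later batches reuse them as extra negatives
        # (assumes queue_size is a multiple of the batch size).
        n, ptr = keys.size(0), int(self.ptr)
        self.queue[ptr:ptr + n] = keys
        self.ptr[0] = (ptr + n) % self.queue.size(0)

    def forward(self, hr_tokens, head_ids, tail_tokens):
        # Query: textual (head, relation) embedding fused with the head's structural embedding.
        q = F.normalize(self.q_encoder(**hr_tokens) + self.struct_emb(head_ids), dim=1)

        with torch.no_grad():
            self._momentum_update()
            k = F.normalize(self.k_encoder(**tail_tokens), dim=1)  # positive tail keys

        # One positive per query; the remaining negatives come from the momentum queue.
        pos = (q * k).sum(dim=1, keepdim=True)
        neg = q @ self.queue.t()
        logits = torch.cat([pos, neg], dim=1) / self.t
        labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
        loss = F.cross_entropy(logits, labels)

        self._enqueue(k)
        return loss
```

The intra-relation negative sampling mentioned in the abstract could plausibly be layered on top of this by adding in-batch negatives drawn from tails that co-occur with the same relation; the exact scheme is defined in the paper itself.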
Related papers
- Learning to Model Graph Structural Information on MLPs via Graph Structure Self-Contrasting [50.181824673039436]
We propose a Graph Structure Self-Contrasting (GSSC) framework that learns graph structural information without message passing.
The proposed framework is based purely on Multi-Layer Perceptrons (MLPs), where the structural information is only implicitly incorporated as prior knowledge.
It first applies structural sparsification to remove potentially uninformative or noisy edges in the neighborhood, and then performs structural self-contrasting in the sparsified neighborhood to learn robust node representations.
arXiv Detail & Related papers (2024-09-09T12:56:02Z)
- G-SAP: Graph-based Structure-Aware Prompt Learning over Heterogeneous Knowledge for Commonsense Reasoning [8.02547453169677]
We propose a novel Graph-based Structure-Aware Prompt Learning Model for commonsense reasoning, named G-SAP.
In particular, an evidence graph is constructed by integrating multiple knowledge sources, i.e., ConceptNet, Wikipedia, and Cambridge Dictionary.
The results reveal a significant advancement over existing models, especially a 6.12% improvement over the SoTA LM+GNNs model on the OpenbookQA dataset.
arXiv Detail & Related papers (2024-05-09T08:28:12Z)
- Multi-perspective Improvement of Knowledge Graph Completion with Large Language Models [95.31941227776711]
We propose MPIKGC to compensate for the deficiency of contextualized knowledge and improve KGC by querying large language models (LLMs).
We conducted an extensive evaluation of our framework based on four description-based KGC models and four datasets, covering both link prediction and triplet classification tasks.
arXiv Detail & Related papers (2024-03-04T12:16:15Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- Unifying Structure and Language Semantic for Efficient Contrastive Knowledge Graph Completion with Structured Entity Anchors [0.3913403111891026]
The goal of knowledge graph completion (KGC) is to predict missing links in a KG using trained facts that are already known.
We propose a novel method to effectively unify structure information and language semantics without losing the power of inductive reasoning.
arXiv Detail & Related papers (2023-11-07T11:17:55Z)
- Investigating Graph Structure Information for Entity Alignment with Dangling Cases [31.779386064600956]
Entity alignment aims to discover the equivalent entities in different knowledge graphs (KGs).
We propose a novel entity alignment framework called Weakly-optimal Graph Contrastive Learning (WOGCL).
We show that WOGCL outperforms the current state-of-the-art methods with pure structural information in both traditional (relaxed) and dangling settings.
arXiv Detail & Related papers (2023-04-10T17:24:43Z)
- Joint Language Semantic and Structure Embedding for Knowledge Graph Completion [66.15933600765835]
We propose to jointly embed the semantics in the natural language description of the knowledge triplets with their structure information.
Our method embeds knowledge graphs for the completion task via fine-tuning pre-trained language models.
Our experiments on a variety of knowledge graph benchmarks have demonstrated the state-of-the-art performance of our method.
arXiv Detail & Related papers (2022-09-19T02:41:02Z)
- Language Models as Knowledge Embeddings [26.384327693266837]
We propose LMKE, which adopts Language Models to derive Knowledge Embeddings.
We formulate description-based KE learning with a contrastive learning framework to improve efficiency in training and evaluation.
arXiv Detail & Related papers (2022-06-25T10:39:12Z)
- Graph Information Bottleneck [77.21967740646784]
Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features.
Inheriting from the general Information Bottleneck (IB), GIB aims to learn the minimal sufficient representation for a given task.
We show that our proposed models are more robust than state-of-the-art graph defense models.
arXiv Detail & Related papers (2020-10-24T07:13:00Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)