SMiLE: Schema-augmented Multi-level Contrastive Learning for Knowledge
Graph Link Prediction
- URL: http://arxiv.org/abs/2210.04870v3
- Date: Mon, 4 Mar 2024 03:38:14 GMT
- Title: SMiLE: Schema-augmented Multi-level Contrastive Learning for Knowledge
Graph Link Prediction
- Authors: Miao Peng, Ben Liu, Qianqian Xie, Wenjie Xu, Hua Wang, Min Peng
- Abstract summary: Link prediction is the task of inferring missing links between entities in knowledge graphs.
We propose a novel Schema-augmented Multi-level contrastive LEarning framework (SMiLE) to conduct knowledge graph link prediction.
- Score: 28.87290783250351
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Link prediction is the task of inferring missing links between entities in
knowledge graphs. Embedding-based methods have shown effectiveness in
addressing this problem by modeling relational patterns in triples. However,
the link prediction task often requires contextual information in entity
neighborhoods, while most existing embedding-based methods fail to capture it.
Additionally, little attention is paid to the diversity of entity
representations in different contexts, which often leads to false prediction
results. In this situation, we consider that the schema of a knowledge graph
contains specific contextual information, which is beneficial for preserving
the consistency of entities across contexts. In this paper, we
propose a novel Schema-augmented Multi-level contrastive LEarning framework
(SMiLE) to conduct knowledge graph link prediction. Specifically, we first
exploit network schema as the prior constraint to sample negatives and
pre-train our model with a multi-level contrastive learning method so that it
captures both prior schema and contextual information. Then we fine-tune our model
under the supervision of individual triples to learn subtler representations
for link prediction. Extensive experimental results on four knowledge graph
datasets with thorough analysis of each component demonstrate the effectiveness
of our proposed framework against state-of-the-art baselines. The
implementation of SMiLE is available at https://github.com/GKNL/SMiLE.
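As a rough illustration of the pre-train-then-fine-tune recipe described above, the sketch below pairs schema-constrained negative sampling with a single-level InfoNCE objective. It is not the released SMiLE implementation: the helper names (`entity_type`, `schema_constrained_negatives`, `info_nce`) and the simplified single-level loss are assumptions made for illustration; the repository linked above contains the authoritative multi-level version and the triple-level fine-tuning stage.

```python
import torch
import torch.nn.functional as F

def schema_constrained_negatives(anchor_type, candidate_ids, entity_type, num_neg):
    """Sample negatives whose schema type differs from the anchor's type.

    A simplified reading of "exploit network schema as the prior constraint
    to sample negatives"; `entity_type` is an assumed dict from entity id to
    its schema type.
    """
    pool = torch.tensor([c for c in candidate_ids if entity_type[c] != anchor_type])
    idx = torch.randperm(pool.numel())[:num_neg]
    return pool[idx]

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Single-level InfoNCE: pull the anchor toward its positive context
    embedding and push it away from embeddings of schema-violating negatives."""
    anchor = F.normalize(anchor, dim=-1)        # [d]
    positive = F.normalize(positive, dim=-1)    # [d]
    negatives = F.normalize(negatives, dim=-1)  # [K, d]
    pos_logit = (anchor * positive).sum(-1, keepdim=True) / temperature  # [1]
    neg_logits = negatives @ anchor / temperature                        # [K]
    logits = torch.cat([pos_logit, neg_logits]).unsqueeze(0)             # [1, K+1]
    target = torch.zeros(1, dtype=torch.long)   # the positive sits at index 0
    return F.cross_entropy(logits, target)
```

In the paper's setting this objective would be applied at more than one level (e.g., entity-context and context-level views), and the pre-trained model would then be fine-tuned on individual triples; those details follow the repository rather than this sketch.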
Related papers
- G-SAP: Graph-based Structure-Aware Prompt Learning over Heterogeneous Knowledge for Commonsense Reasoning [8.02547453169677]
We propose a novel Graph-based Structure-Aware Prompt Learning Model for commonsense reasoning, named G-SAP.
In particular, an evidence graph is constructed by integrating multiple knowledge sources, i.e., ConceptNet, Wikipedia, and Cambridge Dictionary.
The results reveal a significant advancement over existing models, notably a 6.12% improvement over the SoTA LM+GNNs model on the OpenbookQA dataset.
arXiv Detail & Related papers (2024-05-09T08:28:12Z) - Learning Representations without Compositional Assumptions [79.12273403390311]
We propose a data-driven approach that learns feature set dependencies by representing feature sets as graph nodes and their relationships as learnable edges.
We also introduce LEGATO, a novel hierarchical graph autoencoder that learns a smaller, latent graph to aggregate information from multiple views dynamically.
arXiv Detail & Related papers (2023-05-31T10:36:10Z) - ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings [20.25180279903009]
We propose Contrastive Graph-Text pretraining (ConGraT) for jointly learning separate representations of texts and nodes in a text-attributed graph (TAG).
Our method trains a language model (LM) and a graph neural network (GNN) to align their representations in a common latent space using a batch-wise contrastive learning objective inspired by CLIP (a minimal sketch of such an objective appears after this list).
Experiments demonstrate that ConGraT outperforms baselines on various downstream tasks, including node and text category classification, link prediction, and language modeling.
arXiv Detail & Related papers (2023-05-23T17:53:30Z) - Cross-view Graph Contrastive Representation Learning on Partially
Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z) - Fine-Grained Visual Entailment [51.66881737644983]
We propose an extension of this task, where the goal is to predict the logical relationship of fine-grained knowledge elements within a piece of text to an image.
Unlike prior work, our method is inherently explainable and makes logical predictions at different levels of granularity.
We evaluate our method on a new dataset of manually annotated knowledge elements and show that our method achieves 68.18% accuracy at this challenging task.
arXiv Detail & Related papers (2022-03-29T16:09:38Z) - LP-BERT: Multi-task Pre-training Knowledge Graph BERT for Link
Prediction [3.5382535469099436]
LP-BERT contains two training stages: multi-task pre-training and knowledge graph fine-tuning.
We achieve state-of-the-art results on the WN18RR and UMLS datasets, with the Hits@10 metric in particular improving by 5%.
arXiv Detail & Related papers (2022-01-13T09:18:30Z) - Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z) - PPKE: Knowledge Representation Learning by Path-based Pre-training [43.41597219004598]
We propose a Path-based Pre-training model to learn Knowledge Embeddings, called PPKE.
Our model achieves state-of-the-art results on several benchmark datasets for link prediction and relation prediction tasks.
arXiv Detail & Related papers (2020-12-07T10:29:30Z) - Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph
Link Prediction [69.1473775184952]
We introduce a realistic problem of few-shot out-of-graph link prediction.
We tackle this problem with a novel transductive meta-learning framework.
We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction.
arXiv Detail & Related papers (2020-06-11T17:42:46Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation
Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
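Complementing the ConGraT entry above, the sketch below shows a generic CLIP-style batch-wise contrastive alignment between text and node embeddings. The function name and tensor shapes are illustrative assumptions rather than the authors' code: the i-th text and the i-th node in a batch form a positive pair, and every other in-batch pairing acts as a negative.

```python
import torch
import torch.nn.functional as F

def clip_style_alignment_loss(text_emb, node_emb, temperature=0.07):
    """Symmetric batch-wise contrastive loss in the spirit of CLIP.

    text_emb: [B, d] embeddings from a language model.
    node_emb: [B, d] embeddings from a graph neural network,
              row-aligned with `text_emb` (i-th text describes i-th node).
    """
    text_emb = F.normalize(text_emb, dim=-1)
    node_emb = F.normalize(node_emb, dim=-1)
    logits = text_emb @ node_emb.t() / temperature  # [B, B] similarity matrix
    targets = torch.arange(logits.size(0))          # diagonal entries are positives
    loss_t2n = F.cross_entropy(logits, targets)     # text -> node direction
    loss_n2t = F.cross_entropy(logits.t(), targets) # node -> text direction
    return (loss_t2n + loss_n2t) / 2
```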