Improving Knowledge Graph Representation Learning by Structure
Contextual Pre-training
- URL: http://arxiv.org/abs/2112.04087v1
- Date: Wed, 8 Dec 2021 02:50:54 GMT
- Title: Improving Knowledge Graph Representation Learning by Structure
Contextual Pre-training
- Authors: Ganqiang Ye, Wen Zhang, Zhen Bi, Chi Man Wong, Chen Hui and Huajun
Chen
- Abstract summary: We propose a novel pre-training-then-fine-tuning framework for knowledge graph representation learning.
A KG model is first pre-trained with a triple classification task and then discriminatively fine-tuned on specific downstream tasks.
Experimental results demonstrate that fine-tuning SCoP not only outperforms baselines on a portfolio of downstream tasks but also avoids tedious task-specific model design and parameter training.
- Score: 9.70121995251553
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Representation learning models for Knowledge Graphs (KG) have proven to be
effective in encoding structural information and performing reasoning over KGs.
In this paper, we propose a novel pre-training-then-fine-tuning framework for
knowledge graph representation learning, in which a KG model is first
pre-trained with a triple classification task and then discriminatively
fine-tuned on specific downstream tasks such as entity type prediction and
entity alignment. Drawing on the general idea of learning deep contextualized
word representations in pre-trained language models, we propose SCoP, which
learns pre-trained KG representations by encoding the structural and contextual
triples of the target triple. Experimental results demonstrate that fine-tuning
SCoP not only outperforms baselines on a portfolio of downstream tasks but also
avoids tedious task-specific model design and parameter training.
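To make the pre-training-then-fine-tuning idea concrete, below is a minimal sketch in PyTorch. It is not the authors' SCoP implementation: the Transformer encoder over triple tokens, the embedding dimensions, the class and variable names (e.g. TripleContextEncoder), and the toy data are illustrative assumptions.

```python
# Minimal sketch of triple-classification pre-training over a target triple
# plus its contextual triples, followed by a swap of the task head for
# fine-tuning. Illustrative only; not the published SCoP architecture.
import torch
import torch.nn as nn

class TripleContextEncoder(nn.Module):
    """Encodes a target triple together with contextual triples (triples that
    share an entity with the target) and scores it for triple classification."""

    def __init__(self, num_entities, num_relations, dim=64, num_layers=2):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.cls_head = nn.Linear(dim, 1)  # pre-training head: true vs. corrupted triple

    def encode(self, triples):
        # triples: (batch, n_triples, 3) holding (head, relation, tail) ids;
        # index 0 is the target triple, the rest are its contextual triples.
        h = self.ent(triples[..., 0])
        r = self.rel(triples[..., 1])
        t = self.ent(triples[..., 2])
        tokens = h + r + t                  # one token per triple
        out = self.encoder(tokens)          # contextualize over the neighborhood
        return out[:, 0]                    # representation of the target triple

    def forward(self, triples):
        return self.cls_head(self.encode(triples)).squeeze(-1)

# Pre-training step: binary cross-entropy over true vs. corrupted target triples.
model = TripleContextEncoder(num_entities=1000, num_relations=50)
batch = torch.randint(0, 50, (8, 5, 3))          # toy ids; column 1 = relation ids
batch[..., 0] = torch.randint(0, 1000, (8, 5))   # head entity ids
batch[..., 2] = torch.randint(0, 1000, (8, 5))   # tail entity ids
labels = torch.randint(0, 2, (8,)).float()       # 1 = true triple, 0 = corrupted
loss = nn.functional.binary_cross_entropy_with_logits(model(batch), labels)
loss.backward()

# Fine-tuning (e.g., entity type prediction) would reuse the pre-trained
# encode() representations and replace only the task head, for example:
type_head = nn.Linear(64, 10)                    # 10 hypothetical entity types
```

Reusing the same pre-trained encoder and swapping only the task head is what allows one model to serve a portfolio of downstream tasks without task-specific model design.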
Related papers
- Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal
Structured Representations [70.41385310930846]
We present an end-to-end framework Structure-CLIP to enhance multi-modal structured representations.
We use scene graphs to guide the construction of semantic negative examples, which results in an increased emphasis on learning structured representations.
A Knowledge-Enhanced Encoder (KEE) is proposed to leverage scene graph knowledge (SGK) as input to further enhance structured representations.
arXiv Detail & Related papers (2023-05-06T03:57:05Z)
- Rethinking Visual Prompt Learning as Masked Visual Token Modeling [106.71983630652323]
We propose Visual Prompt learning as masked visual Token Modeling (VPTM) to transform the downstream visual classification into the pre-trained masked visual token prediction.
VPTM is the first visual prompt method on the generative pre-trained visual model, which achieves consistency between pre-training and downstream visual classification by task reformulation.
arXiv Detail & Related papers (2023-03-09T02:43:10Z)
- Robust Graph Representation Learning via Predictive Coding [46.22695915912123]
Predictive coding is a message-passing framework initially developed to model information processing in the brain.
In this work, we build models that rely on the message-passing rule of predictive coding.
We show that the proposed models are comparable to standard ones in terms of performance in both inductive and transductive tasks.
arXiv Detail & Related papers (2022-12-09T03:58:22Z)
- Autoregressive Structured Prediction with Language Models [73.11519625765301]
We describe an approach to model structures as sequences of actions in an autoregressive manner with PLMs.
Our approach achieves the new state-of-the-art on all the structured prediction tasks we looked at.
arXiv Detail & Related papers (2022-10-26T13:27:26Z)
- GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text
Generation [3.593955557310285]
Recent improvements in KG-to-text generation are due to auxiliary pre-training tasks designed to give the fine-tuning task a boost in performance.
Here, we demonstrate that by fusing graph-aware elements into existing pre-trained language models, we are able to outperform state-of-the-art models and close the gap imposed by additional pre-training tasks.
arXiv Detail & Related papers (2022-04-13T23:53:37Z)
- DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting [91.56988987393483]
We present a new framework for dense prediction by implicitly and explicitly leveraging the pre-trained knowledge from CLIP.
Specifically, we convert the original image-text matching problem in CLIP to a pixel-text matching problem and use the pixel-text score maps to guide the learning of dense prediction models.
Our method is model-agnostic and can be applied to arbitrary dense prediction systems and various pre-trained visual backbones.
arXiv Detail & Related papers (2021-12-02T18:59:32Z)
- SDCUP: Schema Dependency-Enhanced Curriculum Pre-Training for Table
Semantic Parsing [19.779493883522072]
This paper designs two novel pre-training objectives to impose the desired inductive bias into the learned representations for table pre-training.
We propose a schema-aware curriculum learning approach to mitigate the impact of noise and learn effectively from the pre-training data in an easy-to-hard manner.
arXiv Detail & Related papers (2021-11-18T02:51:04Z)
- PPKE: Knowledge Representation Learning by Path-based Pre-training [43.41597219004598]
We propose PPKE, a Path-based Pre-training model that learns Knowledge Embeddings.
Our model achieves state-of-the-art results on several benchmark datasets for link prediction and relation prediction tasks.
arXiv Detail & Related papers (2020-12-07T10:29:30Z)
- Inductive Entity Representations from Text via Link Prediction [4.980304226944612]
We propose a holistic evaluation protocol for entity representations learned via a link prediction objective.
We consider the inductive link prediction and entity classification tasks.
We also consider an information retrieval task for entity-oriented search.
arXiv Detail & Related papers (2020-10-07T16:04:06Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation
Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.