Triple Classification for Scholarly Knowledge Graph Completion
- URL: http://arxiv.org/abs/2111.11845v1
- Date: Tue, 23 Nov 2021 13:16:31 GMT
- Title: Triple Classification for Scholarly Knowledge Graph Completion
- Authors: Mohamad Yaser Jaradeh, Kuldeep Singh, Markus Stocker, Sören Auer
- Abstract summary: We present exBERT, a method for leveraging pre-trained transformer language models to perform scholarly knowledge graph completion.
The evaluation shows that exBERT outperforms other baselines in the tasks of triple classification, link prediction, and relation prediction.
We present two scholarly datasets as resources for the research community, collected from public KGs and online resources.
- Score: 1.9322973059079729
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scholarly Knowledge Graphs (KGs) provide a rich source of structured
information representing knowledge encoded in scientific publications. With the
sheer volume of published scientific literature comprising a plethora of
inhomogeneous entities and relations to describe scientific concepts, these KGs
are inherently incomplete. We present exBERT, a method for leveraging
pre-trained transformer language models to perform scholarly knowledge graph
completion. We model triples of a knowledge graph as text and perform triple
classification (i.e., belongs to KG or not). The evaluation shows that exBERT
outperforms other baselines on three scholarly KG completion datasets in the
tasks of triple classification, link prediction, and relation prediction.
Furthermore, we present two scholarly datasets as resources for the research
community, collected from public KGs and online resources.
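To make the triple-as-text idea concrete, the following is a minimal sketch of verbalizing a (head, relation, tail) triple and scoring it with a pre-trained transformer sequence classifier. It is not the authors' exBERT release; the `bert-base-uncased` checkpoint, the verbalization template, and the label semantics are illustrative assumptions, and a real setup would first fine-tune the classifier on positive triples and corrupted negatives.

```python
# Sketch: score a verbalized KG triple with a pre-trained transformer classifier.
# Not the authors' exBERT code; checkpoint and labels are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # label 1: triple belongs to the KG, 0: it does not
)

def verbalize(head: str, relation: str, tail: str) -> str:
    # Triples are modeled as text so the language model can read them.
    return f"{head} [SEP] {relation} [SEP] {tail}"

def triple_plausibility(head: str, relation: str, tail: str) -> float:
    inputs = tokenizer(verbalize(head, relation, tail),
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Probability of the "belongs to KG" class (meaningful only after fine-tuning).
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(triple_plausibility("exBERT", "has evaluation metric", "F1 score"))
```

Link prediction and relation prediction can reuse the same scorer by ranking candidate tails or relations by this plausibility score.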
Related papers
- Knowledge Graph Completion using Structural and Textual Embeddings [0.0]
We propose a relations prediction model that harnesses both textual and structural information within Knowledge Graphs.
Our approach integrates walks-based embeddings with language model embeddings to effectively represent nodes.
We demonstrate that our model achieves competitive results in the relation prediction task when evaluated on a widely used dataset.
arXiv Detail & Related papers (2024-04-24T21:04:14Z)
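A minimal sketch of the idea in the entry above: represent each node by concatenating a structural (random-walk) embedding with a textual (language-model) embedding, then train a simple classifier over head-tail pairs to predict the relation. The embedding sources, dimensions, toy data, and the logistic-regression classifier are illustrative assumptions, not the paper's exact model.

```python
# Sketch: combine walk-based (structural) and language-model (textual) node
# embeddings for relation prediction. Dimensions and classifier are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assume these lookups come from elsewhere, e.g. node2vec walks over the KG and a
# sentence encoder applied to entity labels or descriptions.
walk_emb = {"paris": np.random.rand(64), "france": np.random.rand(64)}
text_emb = {"paris": np.random.rand(384), "france": np.random.rand(384)}

def node_vector(entity: str) -> np.ndarray:
    # Concatenate the structural and textual views of the same node.
    return np.concatenate([walk_emb[entity], text_emb[entity]])

def pair_features(head: str, tail: str) -> np.ndarray:
    # A candidate (head, tail) pair is the concatenation of its two node vectors.
    return np.concatenate([node_vector(head), node_vector(tail)])

# Toy training data: (head, tail) pairs labeled with their relation.
X = np.stack([pair_features("paris", "france"), pair_features("france", "paris")])
y = ["capital_of", "has_capital"]
clf = LogisticRegression(max_iter=1000).fit(X, y)

print(clf.predict([pair_features("paris", "france")]))  # predicts a relation label
```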
- Exploring Large Language Models for Knowledge Graph Completion [17.139056629060626]
We consider triples in knowledge graphs as text sequences and introduce an innovative framework called Knowledge Graph LLM.
Our technique employs entity and relation descriptions of a triple as prompts and utilizes the response for predictions.
Experiments on various benchmark knowledge graphs demonstrate that our method attains state-of-the-art performance in tasks such as triple classification and relation prediction.
arXiv Detail & Related papers (2023-08-26T16:51:17Z)
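A minimal sketch of the prompting idea in the entry above: fold entity and relation descriptions into a prompt and map the LLM's free-text answer to a triple-classification decision. The prompt template and the `call_llm` stub are illustrative assumptions; swap in whichever chat/completions client is actually used.

```python
# Sketch: prompt-based triple classification with an LLM.
# The template and the call_llm stub are illustrative assumptions.

def build_prompt(head: str, head_desc: str, relation: str,
                 tail: str, tail_desc: str) -> str:
    # Entity and relation descriptions become part of the prompt.
    return (
        f"Head entity: {head} ({head_desc})\n"
        f"Relation: {relation}\n"
        f"Tail entity: {tail} ({tail_desc})\n"
        "Is this triple true? Answer yes or no."
    )

def call_llm(prompt: str) -> str:
    # Stand-in for a real request to an LLM backend; replace with your client.
    return "yes"

def classify_triple(head, head_desc, relation, tail, tail_desc) -> bool:
    answer = call_llm(build_prompt(head, head_desc, relation, tail, tail_desc))
    # The model's response is mapped to a binary prediction.
    return answer.strip().lower().startswith("yes")

print(classify_triple("Berlin", "capital and largest city of Germany",
                      "capital of", "Germany", "country in Central Europe"))
```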
- Text-Augmented Open Knowledge Graph Completion via Pre-Trained Language Models [53.09723678623779]
We propose TAGREAL to automatically generate quality query prompts and retrieve support information from large text corpora.
The results show that TAGREAL achieves state-of-the-art performance on two benchmark datasets.
We find that TAGREAL has superb performance even with limited training data, outperforming existing embedding-based, graph-based, and PLM-based methods.
arXiv Detail & Related papers (2023-05-24T22:09:35Z)
- A Survey of Knowledge Graph Reasoning on Graph Types: Static, Dynamic, and Multimodal [57.8455911689554]
Knowledge graph reasoning (KGR) aims to deduce new facts from existing facts based on logic rules mined from knowledge graphs (KGs).
It has been shown to significantly benefit the use of KGs in many AI applications, such as question answering and recommendation systems.
arXiv Detail & Related papers (2022-12-12T08:40:04Z)
- Knowledge Graph Refinement based on Triplet BERT-Networks [0.0]
This paper adopts a transformer-based triplet network, GilBERT, that creates an embedding space clustering the information about an entity or relation in the Knowledge Graph.
It creates textual sequences from facts and fine-tunes a triplet network of pre-trained transformer-based language models.
We show that GilBERT achieves results better than or comparable to the state of the art on these two refinement tasks.
arXiv Detail & Related papers (2022-11-18T19:01:21Z)
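A minimal sketch of the triplet-network idea in the entry above: verbalize facts as text, encode them with a pre-trained transformer, and pull related facts together with a triplet margin loss so that information about the same entity or relation clusters in the embedding space. The checkpoint, mean pooling, margin, and toy facts are illustrative assumptions rather than the paper's configuration.

```python
# Sketch: a transformer-based triplet network over verbalized facts.
# Checkpoint, pooling, margin, and toy data are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    # Mean-pool the last hidden states into one vector per textual sequence.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    hidden = encoder(**inputs).last_hidden_state
    return hidden.mean(dim=1)  # shape (1, hidden_size)

triplet_loss = torch.nn.TripletMarginLoss(margin=1.0)

# Anchor and positive describe the same entity; negative is a corrupted fact.
anchor = embed("Berlin capital of Germany")
positive = embed("Berlin located in Germany")
negative = embed("Berlin capital of France")

# Fine-tuning minimizes this loss so that related facts cluster together.
loss = triplet_loss(anchor, positive, negative)
loss.backward()
print(float(loss))
```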
- A Review of Knowledge Graph Completion [0.0]
Information extraction methods have proved effective at triple extraction from structured or unstructured data.
Most of the current knowledge graphs are incomplete.
In order to use KGs in downstream tasks, it is desirable to predict missing links in KGs.
arXiv Detail & Related papers (2022-08-24T16:42:59Z)
- Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study [62.376800537374024]
We study scientific LMs for KG completion, exploring whether we can tap into their latent knowledge to enhance biomedical link prediction.
We integrate the LM-based models with KG embedding models using a router that learns to assign each input example to one of the two model types, which provides a substantial boost in performance.
arXiv Detail & Related papers (2021-06-17T17:55:33Z)
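A minimal sketch of the router idea in the entry above: a lightweight classifier decides, per input triple, whether to use the KG-embedding model's score or the language model's score. Both scorers, the router's features, and its training labels are stand-ins for illustration, not the paper's implementation.

```python
# Sketch: route each example to either a KG-embedding scorer or an LM scorer.
# Scorers, features, and training labels are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

def kge_score(head: str, relation: str, tail: str) -> float:
    return 0.7  # stand-in for a KG embedding model's score (e.g. ComplEx)

def lm_score(head: str, relation: str, tail: str) -> float:
    return 0.4  # stand-in for a fine-tuned language model's score

def features(head: str, relation: str, tail: str) -> np.ndarray:
    # Cheap per-example signals the router can condition on (e.g. entity frequency).
    return np.array([len(head), len(tail), len(relation)], dtype=float)

# Router training data: label 1 means the LM scorer was correct on this example.
X = np.stack([features("aspirin", "treats", "headache"),
              features("tp53", "interacts_with", "mdm2")])
y = np.array([1, 0])
router = LogisticRegression(max_iter=1000).fit(X, y)

def route_and_score(head: str, relation: str, tail: str) -> float:
    use_lm = router.predict(features(head, relation, tail).reshape(1, -1))[0] == 1
    return lm_score(head, relation, tail) if use_lm else kge_score(head, relation, tail)

print(route_and_score("aspirin", "treats", "fever"))
```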
- Enhancing Scientific Papers Summarization with Citation Graph [78.65955304229863]
We redefine the task of scientific papers summarization by utilizing their citation graph.
We construct a novel scientific papers summarization dataset Semantic Scholar Network (SSN) which contains 141K research papers in different domains.
Our model can achieve competitive performance when compared with the pretrained models.
arXiv Detail & Related papers (2021-04-07T11:13:35Z)
- FedE: Embedding Knowledge Graphs in Federated Setting [21.022513922373207]
Multi-source KGs are a common situation in real Knowledge Graph applications.
Because of data privacy and sensitivity, a set of relevant knowledge graphs cannot complement each other's KGC simply by pooling their data in one place.
We propose FedE, a Federated Knowledge Graph Embedding framework that learns knowledge graph embeddings by aggregating locally computed updates.
arXiv Detail & Related papers (2020-10-24T11:52:05Z)
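A minimal sketch of the federated idea in the entry above: each client trains entity embeddings on its private triples, and a server aggregates only the locally computed vectors, so no raw triples ever leave a client. The unweighted per-entity averaging, the dimensions, and the toy clients are simplifying assumptions rather than FedE's exact protocol.

```python
# Sketch: server-side aggregation of locally trained entity embeddings.
# Unweighted per-entity averaging is a simplification of the federated scheme.
import numpy as np

DIM = 8

def aggregate(client_embeddings):
    """Average each entity's embedding over the clients that hold that entity."""
    sums, counts = {}, {}
    for client in client_embeddings:
        for entity, vec in client.items():
            sums[entity] = sums.get(entity, np.zeros(DIM)) + vec
            counts[entity] = counts.get(entity, 0) + 1
    return {entity: sums[entity] / counts[entity] for entity in sums}

# Two clients with overlapping but private KGs: only embeddings reach the server.
client_a = {"aspirin": np.random.rand(DIM), "headache": np.random.rand(DIM)}
client_b = {"aspirin": np.random.rand(DIM), "ibuprofen": np.random.rand(DIM)}

global_emb = aggregate([client_a, client_b])
print(sorted(global_emb))  # the shared entity "aspirin" is averaged across clients
```

In the full setting this aggregation step alternates with local training rounds: the server sends the aggregated embeddings back, and each client continues optimizing on its own triples.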
- ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge can be more than is needed, since the output description may cover only the most significant facts.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
- Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, calling for automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, rely on the text of graph triples and triple-level contextualized representations.
arXiv Detail & Related papers (2020-04-30T13:50:34Z)
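To make the contrast in the entry above concrete, here is a minimal sketch of a TransE-style structural score next to a placeholder for a KG-BERT-style textual score; the vectors are random toys and the textual scorer is only a stub (see the triple-classification sketch near the top of this page).

```python
# Sketch: structural vs. textual triple scoring. The textual scorer is a stub.
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    # TransE treats a true triple as a translation h + r ≈ t; higher is better.
    return -float(np.linalg.norm(h + r - t, ord=1))

def textual_score(head: str, relation: str, tail: str) -> float:
    # Placeholder for a KG-BERT-style classifier over the verbalized triple text.
    return 0.0

rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 16))
print(transe_score(h, r, t), textual_score("Berlin", "capital of", "Germany"))
```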