KERMIT: Knowledge Graph Completion of Enhanced Relation Modeling with Inverse Transformation
- URL: http://arxiv.org/abs/2309.14770v1
- Date: Tue, 26 Sep 2023 09:03:25 GMT
- Title: KERMIT: Knowledge Graph Completion of Enhanced Relation Modeling with Inverse Transformation
- Authors: Haotian Li, Lingzhi Wang, Yuliang Wei, Richard Yi Da Xu, Bailing Wang
- Abstract summary: We use ChatGPT as an external knowledge base to generate coherent descriptions to bridge the semantic gap between the queries and answers.
We leverage inverse relations to create a symmetric graph, thereby creating extra labeling and providing supplementary information for link prediction.
- Score: 15.787778445130323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph completion is a task that revolves around filling in missing
triples based on the information available in a knowledge graph. Among the
current studies, text-based methods complete the task by utilizing textual
descriptions of triples. However, this modeling approach may encounter
limitations, particularly when the description fails to accurately and
adequately express the intended meaning. To overcome these challenges, we
propose the augmentation of data through two additional mechanisms. Firstly, we
employ ChatGPT as an external knowledge base to generate coherent descriptions
to bridge the semantic gap between the queries and answers. Secondly, we
leverage inverse relations to create a symmetric graph, thereby creating extra
labeling and providing supplementary information for link prediction. This
approach offers additional insights into the relationships between entities.
Through these efforts, we have observed significant improvements in knowledge
graph completion, as these mechanisms enhance the richness and diversity of the
available data, leading to more accurate results.
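The inverse-relation mechanism described above can be sketched as follows: for every triple (h, r, t) in the graph, a reversed triple (t, r⁻¹, h) is added, doubling the labeled examples available for link prediction. This is a minimal illustrative sketch; the function name and the relation-naming convention are assumptions, not taken from the KERMIT implementation.

```python
# Illustrative sketch of inverse-relation data augmentation.
# For each triple (head, relation, tail), an inverse triple
# (tail, inverse_relation, head) is appended, yielding a symmetric graph.

def augment_with_inverse(triples, inverse_prefix="inverse_of_"):
    """Return the original triples plus their inverse counterparts.

    The `inverse_prefix` naming scheme is a hypothetical convention;
    any scheme that distinguishes r from its inverse would work.
    """
    augmented = list(triples)
    for head, relation, tail in triples:
        augmented.append((tail, inverse_prefix + relation, head))
    return augmented

triples = [
    ("Paris", "capital_of", "France"),
    ("Einstein", "born_in", "Ulm"),
]
print(augment_with_inverse(triples))
```

A model trained on the augmented set sees each link from both directions, which is the "extra labeling" the abstract refers to.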
Related papers
- Few-shot Knowledge Graph Relational Reasoning via Subgraph Adaptation [51.47994645529258]
Few-shot Knowledge Graph (KG) Reasoning aims to predict unseen triplets (i.e., query triplets) for rare relations in KGs.
We propose SAFER (Subgraph Adaptation for Few-shot Reasoning), a novel approach that effectively adapts the information in contextualized graphs to various subgraphs.
arXiv Detail & Related papers (2024-06-19T21:40:35Z)
- Exploring Large Language Models for Knowledge Graph Completion [17.139056629060626]
We consider triples in knowledge graphs as text sequences and introduce an innovative framework called Knowledge Graph LLM.
Our technique employs entity and relation descriptions of a triple as prompts and utilizes the response for predictions.
Experiments on various benchmark knowledge graphs demonstrate that our method attains state-of-the-art performance in tasks such as triple classification and relation prediction.
arXiv Detail & Related papers (2023-08-26T16:51:17Z)
- Graph Relation Aware Continual Learning [3.908470250825618]
Continual graph learning (CGL) studies the problem of learning from an infinite stream of graph data.
We design a relation-aware adaptive model, dubbed RAM-CG, which consists of a relation-discovery module to explore latent relations behind edges.
RAM-CG provides significant accuracy improvements of 2.2%, 6.9% and 6.6% over the state-of-the-art results on the CitationNet, OGBN-arxiv and TWITCH datasets.
arXiv Detail & Related papers (2023-08-16T09:53:20Z)
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
- Knowledge Graph Refinement based on Triplet BERT-Networks [0.0]
This paper adopts a transformer-based triplet network that creates an embedding space clustering the information about an entity or relation in the Knowledge Graph.
It creates textual sequences from facts and fine-tunes a triplet network of pre-trained transformer-based language models.
We show that GilBERT achieves better or comparable results to the state-of-the-art performance on these two refinement tasks.
arXiv Detail & Related papers (2022-11-18T19:01:21Z)
- VEM$^2$L: A Plug-and-play Framework for Fusing Text and Structure Knowledge on Sparse Knowledge Graph Completion [14.537509860565706]
We propose a plug-and-play framework, VEM$^2$L, over sparse Knowledge Graphs that fuses knowledge extracted from text and from structural information into a unified model.
Specifically, we partition knowledge acquired by models into two nonoverlapping parts.
We also propose a new fusion strategy, derived from the Variational EM algorithm, to fuse the generalization abilities of the models.
arXiv Detail & Related papers (2022-07-04T15:50:21Z)
- Knowledge Graph Completion with Text-aided Regularization [2.8361571014635407]
Knowledge Graph Completion is the task of expanding a knowledge graph/base by estimating plausible missing entities.
Traditional approaches mainly focus on using the existing graphical information that is intrinsic to the graph.
We try numerous ways of using extracted or raw textual information to help existing KG embedding frameworks reach better prediction results.
arXiv Detail & Related papers (2021-01-22T06:10:09Z)
- ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge may be more than sufficient, since the output description may cover only the most significant knowledge.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
- Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, motivating their automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, resort to a graph triple's text and triple-level contextualized representations.
arXiv Detail & Related papers (2020-04-30T13:50:34Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
- Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs [96.73259297063619]
We consider a novel formulation, zero-shot learning, to dispense with this cumbersome curation.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
arXiv Detail & Related papers (2020-01-08T01:19:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.