Text-To-KG Alignment: Comparing Current Methods on Classification Tasks
- URL: http://arxiv.org/abs/2306.02871v1
- Date: Mon, 5 Jun 2023 13:45:45 GMT
- Title: Text-To-KG Alignment: Comparing Current Methods on Classification Tasks
- Authors: Sondre Wold, Lilja Øvrelid, and Erik Velldal
- Abstract summary: Knowledge graphs (KGs) provide dense and structured representations of factual information.
Recent work has focused on creating pipeline models that retrieve information from KGs as additional context.
It is not known how current methods compare to a scenario where the aligned subgraph is completely relevant to the query.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In contrast to large text corpora, knowledge graphs (KGs) provide dense and
structured representations of factual information. This makes them attractive
for systems that supplement or ground the knowledge found in pre-trained
language models with an external knowledge source. This has especially been the
case for classification tasks, where recent work has focused on creating
pipeline models that retrieve information from KGs like ConceptNet as
additional context. Many of these models consist of multiple components, and
although they differ in the number and nature of these parts, they all have in
common that for some given text query, they attempt to identify and retrieve a
relevant subgraph from the KG. Due to the noise and idiosyncrasies often found
in KGs, it is not known how current methods compare to a scenario where the
aligned subgraph is completely relevant to the query. In this work, we try to
bridge this knowledge gap by reviewing current approaches to text-to-KG
alignment and evaluating them on two datasets where manually created graphs are
available, providing insights into the effectiveness of current methods.
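To make the shared retrieval step concrete, the sketch below shows a deliberately naive version of text-to-KG alignment: link query tokens to nodes in a ConceptNet-style triple store and expand a k-hop subgraph around them. All triples, names, and functions here are illustrative assumptions for exposition, not code from the paper; actual pipelines use learned entity linking and relevance scoring rather than string matching.

```python
from collections import deque

# Toy ConceptNet-style KG as (head, relation, tail) triples.
# These triples are invented for illustration only.
TRIPLES = [
    ("guitar", "IsA", "instrument"),
    ("instrument", "UsedFor", "music"),
    ("guitar", "AtLocation", "concert"),
    ("music", "Causes", "joy"),
]

def build_adjacency(triples):
    adj = {}
    for h, r, t in triples:
        adj.setdefault(h, []).append((r, t))
        adj.setdefault(t, []).append((r, h))  # treat edges as undirected
    return adj

def link_entities(query, adj):
    # Naive entity linking: keep query tokens that exactly match a KG node.
    return [tok for tok in query.lower().split() if tok in adj]

def retrieve_subgraph(query, triples, hops=2):
    # BFS outward from each linked entity, collecting edges up to `hops` away.
    adj = build_adjacency(triples)
    seeds = link_entities(query, adj)
    seen, edges = set(seeds), set()
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for rel, nbr in adj.get(node, []):
            edges.add((node, rel, nbr))
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return edges

print(retrieve_subgraph("why does a guitar make people happy", TRIPLES))
```

Real systems replace the string matching and uniform breadth-first expansion with learned components; the paper's evaluation asks how much headroom remains between subgraphs retrieved this way and fully relevant, manually created ones.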
Related papers
- iText2KG: Incremental Knowledge Graphs Construction Using Large Language Models
iText2KG is a method for incremental, topic-independent Knowledge Graph construction without post-processing.
Our method demonstrates superior performance compared to baseline methods across three scenarios.
arXiv Detail & Related papers (2024-09-05T06:49:14Z)
- Knowledge Graphs and Pre-trained Language Models enhanced Representation Learning for Conversational Recommender Systems
We introduce the Knowledge-Enhanced Entity Representation Learning (KERL) framework, which uses a knowledge graph and a pre-trained language model to improve the semantic understanding of entities in conversational recommender systems.
KERL achieves state-of-the-art results in both recommendation and response generation tasks.
arXiv Detail & Related papers (2023-12-18T06:41:23Z)
- Universal Preprocessing Operators for Embedding Knowledge Graphs with Literals
Knowledge graph embeddings are dense numerical representations of entities in a knowledge graph (KG).
We propose a set of universal preprocessing operators which can be used to transform KGs with literals for numerical, temporal, textual, and image information.
Experiments on the kgbench dataset with three different embedding methods show promising results.
arXiv Detail & Related papers (2023-09-06T14:08:46Z)
- Knowledge Graph-Augmented Language Models for Knowledge-Grounded Dialogue Generation
We propose SUbgraph Retrieval-augmented GEneration (SURGE), a framework for generating context-relevant and knowledge-grounded dialogues with knowledge graphs (KGs).
Our framework first retrieves the relevant subgraph from the KG and then enforces consistency across facts by perturbing their word embeddings conditioned on the retrieved subgraph.
We validate our SURGE framework on the OpendialKG and KOMODIS datasets, showing that it generates high-quality dialogues that faithfully reflect the knowledge from the KG.
arXiv Detail & Related papers (2023-05-30T08:36:45Z)
- Deep Bidirectional Language-Knowledge Graph Pretraining
DRAGON is a self-supervised approach to pretraining a deeply joint language-knowledge foundation model from text and KG at scale.
Our model takes pairs of text segments and relevant KG subgraphs as input and bidirectionally fuses information from both modalities.
arXiv Detail & Related papers (2022-10-17T18:02:52Z)
- KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs
We propose a novel knowledge-aware language model framework based on a fine-tuning process.
Our model can efficiently incorporate world knowledge from KGs into existing language models such as BERT.
arXiv Detail & Related papers (2021-09-09T12:39:17Z)
- Few-shot Knowledge Graph-to-Text Generation with Pretrained Language Models
This paper studies how to automatically generate a natural language text that describes the facts in a knowledge graph (KG).
Considering the few-shot setting, we leverage the excellent capacities of pretrained language models (PLMs) in language understanding and generation.
arXiv Detail & Related papers (2021-06-03T06:48:00Z)
- ENT-DESC: Entity Description Generation by Exploring Knowledge Graph
In practice, the input knowledge could be more than enough, since the output description may only cover the most significant knowledge.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
- Entity Type Prediction in Knowledge Graphs using Embeddings
Open Knowledge Graphs (such as DBpedia, Wikidata, YAGO) have been recognized as the backbone of diverse applications in the field of data mining and information retrieval.
Most of these KGs are created either via automated information extraction from Wikipedia snapshots or via information accumulated from users.
It has been observed that the type information of these KGs is often noisy, incomplete, and incorrect.
A multi-label classification approach is proposed in this work for entity typing using KG embeddings; a minimal illustration is sketched after this list.
arXiv Detail & Related papers (2020-04-28T17:57:08Z)
- Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs
We consider a novel formulation, zero-shot learning, to avoid cumbersome human curation when new relations are added.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
arXiv Detail & Related papers (2020-01-08T01:19:08Z)
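As a companion to the entity-type-prediction entry above, here is a minimal sketch of multi-label entity typing over pretrained KG embeddings. The embedding values, entities, and type labels are invented for illustration, and the scikit-learn one-vs-rest setup is an assumption for exposition; the paper's own model and features may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Pretrained KG embeddings (one row per entity); the values below are
# made up. In practice they would come from a KG embedding model.
X = np.array([
    [0.12, -0.40, 0.88, 0.05],   # e.g. "Berlin"
    [0.10, -0.35, 0.91, 0.02],   # e.g. "Paris"
    [-0.70, 0.22, -0.10, 0.55],  # e.g. "Einstein"
    [-0.66, 0.30, -0.05, 0.60],  # e.g. "Curie"
])

# Multi-label targets: an entity can hold several types at once.
# Columns: City, PopulatedPlace, Person, Scientist.
Y = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
])

# One binary classifier per type, all sharing the embedding space.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, Y)

new_entity = np.array([[0.11, -0.38, 0.90, 0.03]])  # unseen, city-like
print(clf.predict(new_entity))  # binary indicator vector over the types
```

The design point is that noisy or missing type assertions in the KG become a supervised learning problem: the embedding geometry clusters similar entities, so even a simple linear model per type can recover plausible labels.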