Can Knowledge Graphs Simplify Text?
- URL: http://arxiv.org/abs/2308.06975v3
- Date: Wed, 25 Oct 2023 00:57:54 GMT
- Title: Can Knowledge Graphs Simplify Text?
- Authors: Anthony Colas, Haodi Ma, Xuanli He, Yang Bai, Daisy Zhe Wang
- Abstract summary: KGSimple is a novel approach to unsupervised text simplification.
Our model is capable of simplifying text when starting from a KG by learning to keep important information.
We evaluate various settings of the KGSimple model on currently-available KG-to-text datasets.
- Score: 17.642947840076577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge Graph (KG)-to-Text Generation has seen recent improvements in
generating fluent and informative sentences which describe a given KG. As KGs
are widespread across multiple domains and contain important entity-relation
information, and as text simplification aims to reduce the complexity of a text
while preserving the meaning of the original text, we propose KGSimple, a novel
approach to unsupervised text simplification which infuses KG-established
techniques in order to construct a simplified KG path and generate a concise
text which preserves the original input's meaning. Through an iterative and
sampling KG-first approach, our model is capable of simplifying text when
starting from a KG by learning to keep important information while harnessing
KG-to-text generation to output fluent and descriptive sentences. We evaluate
various settings of the KGSimple model on currently-available KG-to-text
datasets, demonstrating its effectiveness compared to unsupervised text
simplification models which start with a given complex text. Our code is
available on GitHub.
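The abstract describes an iterative, sampling-based KG-first loop but gives no implementation detail; the sketch below is a minimal, hypothetical rendering of such a loop, not the authors' released code. The callables generate_text (a KG-to-text model) and score (a combined fluency/meaning-preservation/simplicity scorer) are assumed placeholders.

```python
import random

def kg_simplify(kg_triples, generate_text, score, n_iters=50, seed=0):
    """Hedged sketch of an iterative, sampling KG-first simplification loop.

    kg_triples: list of (head, relation, tail) tuples.
    generate_text: assumed KG-to-text model, triples -> sentence.
    score: assumed scorer, (triples, text) -> float (higher is better).
    """
    rng = random.Random(seed)
    current = list(kg_triples)
    best_text = generate_text(current)
    best_score = score(current, best_text)

    for _ in range(n_iters):
        if len(current) <= 1:
            break
        # Sample a graph operation; only triple deletion is shown here.
        candidate = list(current)
        candidate.pop(rng.randrange(len(candidate)))
        text = generate_text(candidate)
        s = score(candidate, text)
        # Accept the edit only if the score does not degrade, so the
        # loop learns to keep important information while shrinking the KG.
        if s >= best_score:
            current, best_text, best_score = candidate, text, s
    return current, best_text
```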
Related papers
- FedMKGC: Privacy-Preserving Federated Multilingual Knowledge Graph Completion [21.4302940596294]
Knowledge graph completion (KGC) aims to predict missing facts in knowledge graphs (KGs).
Previous methods that rely on transferring raw data among KGs raise privacy concerns.
We propose a new federated learning framework that implicitly aggregates knowledge from multiple KGs without demanding raw data exchange and entity alignment.
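The summary does not spell out the aggregation protocol; below is a minimal FedAvg-style sketch of one common way locally trained KGC models could be combined without exchanging raw triples. It is an illustrative pattern, not necessarily FedMKGC's actual mechanism.

```python
import torch

def federated_average(client_state_dicts, client_weights=None):
    """FedAvg-style aggregation of locally trained KGC model parameters.

    Each client trains on its own KG and shares only parameters, never
    raw triples, which is the privacy property the paper targets.
    """
    n = len(client_state_dicts)
    if client_weights is None:
        client_weights = [1.0 / n] * n  # uniform weighting by default
    global_state = {}
    for key in client_state_dicts[0]:
        global_state[key] = sum(
            w * sd[key].float()
            for w, sd in zip(client_weights, client_state_dicts)
        )
    return global_state
```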
arXiv Detail & Related papers (2023-12-17T08:09:27Z)
- Select and Augment: Enhanced Dense Retrieval Knowledge Graph Augmentation [0.59829224684009]
We propose a framework that jointly selects a set of text descriptions relevant to KG entities as well as align or augment KG embeddings with text descriptions.
Experiment results for Link Prediction demonstrate 5.5% and 3.5% increases in the Mean Reciprocal Rank (MRR) and Hits@10 scores, respectively.
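For reference, MRR and Hits@k are standard link-prediction metrics computed from the rank of the gold entity among all candidates; a small sketch:

```python
def mrr_and_hits_at_k(gold_ranks, k=10):
    """Mean Reciprocal Rank and Hits@k from 1-based gold-entity ranks."""
    mrr = sum(1.0 / r for r in gold_ranks) / len(gold_ranks)
    hits = sum(r <= k for r in gold_ranks) / len(gold_ranks)
    return mrr, hits

# Example: three test triples whose gold entities ranked 1st, 4th, and 20th.
print(mrr_and_hits_at_k([1, 4, 20]))  # -> (0.433..., 0.666...)
```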
arXiv Detail & Related papers (2023-07-28T19:33:18Z)
- Using Large Language Models for Zero-Shot Natural Language Generation from Knowledge Graphs [4.56877715768796]
We show that ChatGPT achieves near state-of-the-art performance on some measures of the WebNLG 2020 challenge.
We also show that there is a significant connection between what the LLM already knows about the data it is parsing and the quality of the output text.
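A minimal sketch of zero-shot KG-to-text prompting in the spirit of this paper; the prompt wording is an illustrative assumption, not the one evaluated on WebNLG 2020.

```python
def triples_to_prompt(triples):
    """Linearize KG triples into a zero-shot verbalization prompt for an LLM."""
    facts = "\n".join(f"({h} | {r} | {t})" for h, r, t in triples)
    return (
        "Write a short, fluent paragraph that states all of the following "
        "facts and nothing else:\n" + facts
    )

prompt = triples_to_prompt([
    ("Alan_Bean", "occupation", "astronaut"),
    ("Alan_Bean", "mission", "Apollo_12"),
])
# `prompt` would then be sent to a chat LLM such as ChatGPT.
```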
arXiv Detail & Related papers (2023-07-14T12:45:03Z)
- Knowledge Graph-Augmented Language Models for Knowledge-Grounded Dialogue Generation [58.65698688443091]
We propose SUbgraph Retrieval-augmented GEneration (SURGE), a framework for generating context-relevant and knowledge-grounded dialogues with Knowledge Graphs (KGs).
Our framework first retrieves the relevant subgraph from the KG, and then enforces consistency across facts by perturbing their word embeddings conditioned on the retrieved subgraph.
We validate our SURGE framework on the OpendialKG and KOMODIS datasets, showing that it generates high-quality dialogues that faithfully reflect the knowledge from the KG.
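The retrieval step can be pictured as scoring candidate triples against the dialogue context; the sketch below uses embedding cosine similarity with a placeholder encoder (embed, any sentence encoder), which is an assumption rather than SURGE's actual retriever.

```python
import numpy as np

def retrieve_subgraph(context, triples, embed, top_k=5):
    """Rank KG triples by cosine similarity to the dialogue context.

    embed: placeholder text encoder, str -> 1-D numpy array.
    """
    ctx = embed(context)
    ctx = ctx / np.linalg.norm(ctx)
    scored = []
    for h, r, t in triples:
        vec = embed(f"{h} {r} {t}")
        scored.append((float(vec @ ctx / np.linalg.norm(vec)), (h, r, t)))
    scored.sort(reverse=True)
    return [triple for _, triple in scored[:top_k]]
```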
arXiv Detail & Related papers (2023-05-30T08:36:45Z)
- Deep Bidirectional Language-Knowledge Graph Pretraining [159.9645181522436]
DRAGON is a self-supervised approach to pretraining a deeply joint language-knowledge foundation model from text and KG at scale.
Our model takes pairs of text segments and relevant KG subgraphs as input and bidirectionally fuses information from both modalities.
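The summary only names bidirectional fusion of the two modalities; one generic way to realize it is two cross-attention passes, sketched below in PyTorch. Dimensions and module choices are assumptions for illustration, not DRAGON's actual architecture (which couples a language model with a graph neural network).

```python
import torch
import torch.nn as nn

class BidirectionalFusion(nn.Module):
    """Toy two-way cross-attention between text tokens and KG-node states."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.text_to_kg = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.kg_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_states, kg_states):
        # Text tokens attend over KG nodes, and KG nodes attend over text,
        # so information flows in both directions.
        text_fused, _ = self.text_to_kg(text_states, kg_states, kg_states)
        kg_fused, _ = self.kg_to_text(kg_states, text_states, text_states)
        return text_states + text_fused, kg_states + kg_fused

# Example: a batch of 2 pairs, 16 text tokens and 8 KG nodes, dim 256.
text = torch.randn(2, 16, 256)
kg = torch.randn(2, 8, 256)
fused_text, fused_kg = BidirectionalFusion()(text, kg)
```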
arXiv Detail & Related papers (2022-10-17T18:02:52Z)
- Syntax Controlled Knowledge Graph-to-Text Generation with Order and Semantic Consistency [10.7334441041015]
Knowledge graph-to-text (KG-to-text) generation aims to generate easy-to-understand sentences from the knowledge graph.
In this paper, we optimize the prediction of knowledge description order under order supervision extracted from the caption.
We incorporate Part-of-Speech (POS) syntactic tags to constrain the positions from which words are copied from the KG.
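A minimal sketch of such a POS-based copy constraint: copy attention is masked so only entity-bearing tags (here nouns, proper nouns, and numbers, an assumption) remain copyable.

```python
import numpy as np

def constrain_copy_scores(copy_scores, pos_tags,
                          copyable=("NOUN", "PROPN", "NUM")):
    """Zero out copy attention at source positions with non-copyable POS tags.

    copy_scores: (src_len,) unnormalized copy scores from the decoder.
    pos_tags: POS tag for each source token (e.g., Universal POS tags).
    """
    mask = np.array([tag in copyable for tag in pos_tags])
    masked = np.where(mask, copy_scores, -np.inf)
    # Renormalize with a softmax over the remaining positions.
    exp = np.exp(masked - masked.max())
    return exp / exp.sum()

scores = np.array([2.0, 0.5, 1.0, 3.0])
tags = ["PROPN", "ADP", "NOUN", "PROPN"]
print(constrain_copy_scores(scores, tags))  # mass only on PROPN/NOUN slots
```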
arXiv Detail & Related papers (2022-07-02T02:42:14Z)
- BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models [65.51390418485207]
We propose a new approach of harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches the vast entity-pair space to extract diverse, accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
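One simple way to score candidate entity pairs against a relation definition is by a causal LM's log-likelihood of a relation prompt, sketched below; the prompt template and the use of GPT-2 are illustrative assumptions, not BertNet's exact search procedure.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def pair_score(head, tail, template="{h} is the capital of {t}."):
    """Average log-likelihood of the prompt; higher = more plausible pair."""
    ids = tokenizer(template.format(h=head, t=tail), return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return -loss.item()

# Rank candidate pairs for a hypothetical "capital of" relation.
candidates = [("Paris", "France"), ("Berlin", "France")]
print(max(candidates, key=lambda p: pair_score(*p)))
```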
arXiv Detail & Related papers (2022-06-28T19:46:29Z)
- UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models [170.88745906220174]
We propose the UnifiedSKG framework, which unifies 21 structured knowledge grounding (SKG) tasks into a text-to-text format.
We show that UnifiedSKG achieves state-of-the-art performance on almost all of the 21 tasks.
We also use UnifiedSKG to conduct a series of experiments on structured knowledge encoding variants across SKG tasks.
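The unification rests on serializing heterogeneous structured inputs into one text-to-text format; the sketch below shows an illustrative linearization for a table-QA instance (the exact serialization tokens in UnifiedSKG may differ).

```python
def linearize_table_qa(question, table):
    """Serialize a question plus a table into a single seq2seq input string."""
    header = " | ".join(table["header"])
    rows = " ".join("row: " + " | ".join(r) for r in table["rows"])
    return f"question: {question} table: col: {header} {rows}"

example = linearize_table_qa(
    "Which city hosted the 2012 Olympics?",
    {"header": ["Year", "City"],
     "rows": [["2008", "Beijing"], ["2012", "London"]]},
)
# The resulting string can be fed to any text-to-text model such as T5.
```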
arXiv Detail & Related papers (2022-01-16T04:36:18Z)
- Few-shot Knowledge Graph-to-Text Generation with Pretrained Language Models [42.38563175680914]
This paper studies how to automatically generate a natural language text that describes the facts in a knowledge graph (KG).
Considering the few-shot setting, we leverage the excellent capacities of pretrained language models (PLMs) in language understanding and generation.
arXiv Detail & Related papers (2021-06-03T06:48:00Z)
- Language Models are Open Knowledge Graphs [75.48081086368606]
Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training.
In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs.
We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora.
arXiv Detail & Related papers (2020-10-22T18:01:56Z)
- ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge can exceed what is needed, since the output description may cover only the most significant facts.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.