Syntax Controlled Knowledge Graph-to-Text Generation with Order and
Semantic Consistency
- URL: http://arxiv.org/abs/2207.00719v1
- Date: Sat, 2 Jul 2022 02:42:14 GMT
- Title: Syntax Controlled Knowledge Graph-to-Text Generation with Order and
Semantic Consistency
- Authors: Jin Liu and Chongfeng Fan and Fengyu Zhou and Huijuan Xu
- Abstract summary: Knowledge graph-to-text (KG-to-text) generation aims to generate easy-to-understand sentences from the knowledge graph.
In this paper, we optimize the knowledge description order prediction under the order supervision extracted from the caption.
We incorporate the Part-of-Speech (POS) syntactic tags to constrain the positions to copy words from the KG.
- Score: 10.7334441041015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The knowledge graph (KG) stores a large amount of structured knowledge, but
it is not easy for humans to understand directly. Knowledge graph-to-text
(KG-to-text) generation aims to generate easy-to-understand sentences from the
KG while maintaining semantic consistency between the generated
sentences and the KG. Existing KG-to-text generation methods frame this task
as a sequence-to-sequence generation task with a linearized KG as input and
address the consistency of the generated text and the KG through a simple
selection between the decoded sentence word and the KG node word at each time step.
However, the linearized KG order is commonly obtained through a heuristic
search without data-driven optimization. In this paper, we optimize the
knowledge description order prediction under the order supervision extracted
from the caption and further enhance the consistency of the generated sentences
and KG through syntactic and semantic regularization. We incorporate the
Part-of-Speech (POS) syntactic tags to constrain the positions to copy words
from the KG and employ a semantic context scoring function to evaluate the
semantic fitness for each word in its local context when decoding each word in
the generated sentence. Extensive experiments on two datasets, WebNLG and
DART, show that our method achieves state-of-the-art performance.
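The POS-constrained copying and semantic context scoring described above can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the tag set `COPY_TAGS`, the gating scheme, and the cosine-based context score are all assumptions standing in for the paper's learned components.

```python
# Toy sketch of POS-constrained copying with a semantic context score.
# COPY_TAGS, the fixed gate, and the cosine-based score are illustrative
# assumptions, not the paper's actual learned model.

COPY_TAGS = {"NOUN", "PROPN", "NUM"}  # POS tags where copying from the KG is allowed


def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def decode_step(pos_tag, gen_probs, copy_probs, context_vec, embeddings, gate=0.5):
    """Pick the next word by mixing the generation and copy distributions,
    zeroing the copy distribution unless the predicted POS tag permits
    copying, and rescaling by a semantic-fitness score for the local context."""
    if pos_tag not in COPY_TAGS:
        copy_probs = {w: 0.0 for w in copy_probs}  # forbid copying at this position
    scores = {}
    for w in set(gen_probs) | set(copy_probs):
        sem = cosine(embeddings[w], context_vec) if w in embeddings else 0.0
        mixed = gate * gen_probs.get(w, 0.0) + (1 - gate) * copy_probs.get(w, 0.0)
        scores[w] = mixed * (1.0 + sem)  # boost words that fit the local context
    return max(scores, key=scores.get)
```

At a position tagged as a proper noun, a KG entity with high copy probability wins; at a verb position, copying is masked out and the decoder's generated word is chosen instead.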
Related papers
- Optimizing Factual Accuracy in Text Generation through Dynamic Knowledge Selection [71.20871905457174]
Language models (LMs) have revolutionized the way we interact with information, but they often generate nonfactual text.
Previous methods use external knowledge as references for text generation to enhance factuality but often struggle with the knowledge mix-up of irrelevant references.
We present DKGen, which divides text generation into an iterative process.
arXiv Detail & Related papers (2023-08-30T02:22:40Z)
- Can Knowledge Graphs Simplify Text? [17.642947840076577]
KGSimple is a novel approach to unsupervised text simplification.
Our model is capable of simplifying text when starting from a KG by learning to keep important information.
We evaluate various settings of the KGSimple model on currently-available KG-to-text datasets.
arXiv Detail & Related papers (2023-08-14T07:20:49Z)
- Text-To-KG Alignment: Comparing Current Methods on Classification Tasks [2.191505742658975]
Knowledge graphs (KGs) provide dense and structured representations of factual information.
Recent work has focused on creating pipeline models that retrieve information from KGs as additional context.
It is not known how current methods compare to a scenario where the aligned subgraph is completely relevant to the query.
arXiv Detail & Related papers (2023-06-05T13:45:45Z)
- Knowledge Graph-Augmented Language Models for Knowledge-Grounded Dialogue Generation [58.65698688443091]
We propose SUbgraph Retrieval-augmented GEneration (SURGE), a framework for generating context-relevant and knowledge-grounded dialogues with Knowledge Graphs (KGs).
Our framework first retrieves the relevant subgraph from the KG, and then enforces consistency across facts by perturbing their word embeddings conditioned by the retrieved subgraph.
We validate our SURGE framework on OpendialKG and KOMODIS datasets, showing that it generates high-quality dialogues that faithfully reflect the knowledge from KG.
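The "retrieve the relevant subgraph" step in a SURGE-style pipeline can be illustrated with a toy heuristic. This is an assumption-laden sketch: the paper learns a dense retriever over subgraph embeddings, whereas here word overlap with the dialogue context stands in as the relevance score.

```python
# Toy sketch of subgraph retrieval for knowledge-grounded dialogue.
# Word-overlap scoring is an illustrative stand-in for the learned
# dense retrieval used in SURGE-style frameworks.

def retrieve_subgraph(context, triples, k=2):
    """Score each (head, relation, tail) triple by how many of its words
    appear in the dialogue context; return the k best-matching triples."""
    ctx_words = set(context.lower().split())

    def score(triple):
        words = set(" ".join(triple).lower().replace("_", " ").split())
        return len(words & ctx_words)

    return sorted(triples, key=score, reverse=True)[:k]
```

The retrieved triples would then condition the dialogue decoder, so the generated response stays faithful to the KG facts.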
arXiv Detail & Related papers (2023-05-30T08:36:45Z)
- Clustering Semantic Predicates in the Open Research Knowledge Graph [0.0]
We describe our approach tailoring two AI-based clustering algorithms for recommending predicates about resources in the Open Research Knowledge Graph (ORKG).
Our experiments show very promising results: high precision with relatively high recall and linear runtime performance.
This work offers novel insights into predicate groups that loosely accrue as generic semantification patterns for scholarly knowledge spanning 44 research fields.
arXiv Detail & Related papers (2022-10-05T05:48:39Z)
- Collaborative Knowledge Graph Fusion by Exploiting the Open Corpus [59.20235923987045]
It is challenging to enrich a Knowledge Graph with newly harvested triples while maintaining the quality of the knowledge representation.
This paper proposes a system to refine a KG using information harvested from an additional corpus.
arXiv Detail & Related papers (2022-06-15T12:16:10Z)
- CoSe-Co: Text Conditioned Generative CommonSense Contextualizer [13.451001884972033]
Pre-trained Language Models (PTLMs) have been shown to perform well on natural language tasks.
We propose a CommonSense Contextualizer (CoSe-Co) conditioned on sentences as input, making it generically usable across tasks.
arXiv Detail & Related papers (2022-06-12T09:57:32Z)
- UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models [170.88745906220174]
We propose UnifiedSKG, a framework that unifies 21 structured knowledge grounding (SKG) tasks into a text-to-text format.
We show that UnifiedSKG achieves state-of-the-art performance on almost all of the 21 tasks.
We also use UnifiedSKG to conduct a series of experiments on structured knowledge encoding variants across SKG tasks.
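Casting an SKG task into text-to-text form can be sketched as below. The serialization tokens (`[header]`, `[row]`, the `question:`/`table:` prefixes) are illustrative assumptions, not UnifiedSKG's exact scheme; the point is only that structured input is linearized into a flat string a seq2seq model can consume.

```python
# Toy sketch of text-to-text serialization for a table-QA style SKG task.
# The delimiter tokens here are illustrative, not UnifiedSKG's exact format.

def linearize_table(question, header, rows):
    """Flatten a question plus a table into a single input string
    for a text-to-text (seq2seq) model."""
    cols = " | ".join(header)
    body = " ".join("[row] " + " | ".join(r) for r in rows)
    return f"question: {question} table: [header] {cols} {body}"
```

With every task serialized this way, one text-to-text model can be trained across all of them, which is what enables the multi-task unification the paper reports.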
arXiv Detail & Related papers (2022-01-16T04:36:18Z)
- SenSeNet: Neural Keyphrase Generation with Document Structure [42.641790028836795]
We propose a new method called Sentence Selective Network (SenSeNet) to incorporate the meta-sentence inductive bias into Keyphrase Generation (KG).
SenSeNet can consistently improve the performance of major KG models based on seq2seq framework.
arXiv Detail & Related papers (2020-12-12T08:21:08Z)
- Inductive Learning on Commonsense Knowledge Graph Completion [89.72388313527296]
A commonsense knowledge graph (CKG) is a special type of knowledge graph where entities are composed of free-form text.
We propose to study the inductive learning setting for CKG completion where unseen entities may present at test time.
InductivE significantly outperforms state-of-the-art baselines in both standard and inductive settings on ATOMIC and ConceptNet benchmarks.
arXiv Detail & Related papers (2020-09-19T16:10:26Z)
- Cross-lingual Entity Alignment with Incidental Supervision [76.66793175159192]
We propose an incidentally supervised model, JEANS, which jointly represents multilingual KGs and text corpora in a shared embedding scheme.
Experiments on benchmark datasets show that JEANS leads to promising improvement on entity alignment with incidental supervision.
arXiv Detail & Related papers (2020-05-01T01:53:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.