Towards information-rich, logical text generation with
knowledge-enhanced neural models
- URL: http://arxiv.org/abs/2003.00814v1
- Date: Mon, 2 Mar 2020 12:41:02 GMT
- Title: Towards information-rich, logical text generation with
knowledge-enhanced neural models
- Authors: Hao Wang, Bin Guo, Wei Wu, Zhiwen Yu
- Abstract summary: Text generation systems have made substantial progress, driven by deep learning techniques, and have been widely applied in daily life.
Existing end-to-end neural models tend to generate uninformative and generic text because they cannot ground the input context in background knowledge.
This survey gives a comprehensive review of knowledge-enhanced text generation systems, summarizes research progress on the key challenges, and proposes open issues and research directions.
- Score: 15.931791215286879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text generation systems have made substantial progress, driven by
deep learning techniques, and have been widely applied in daily life. However,
existing end-to-end neural models tend to generate uninformative and generic
text because they cannot ground the input context in background knowledge. To
solve this problem, many researchers have begun to incorporate external
knowledge into text generation systems, an approach known as knowledge-enhanced
text generation. The challenges of knowledge-enhanced text generation include
how to select appropriate knowledge from large-scale knowledge bases, how to
read and understand the extracted knowledge, and how to integrate knowledge
into the generation process. This survey gives a comprehensive review of
knowledge-enhanced text generation systems, summarizes research progress on
these challenges, and proposes open issues and research directions.
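The three challenges named in the abstract (selecting, reading, and integrating knowledge) form a pipeline that can be sketched in a few lines. The Python below is a minimal illustration under toy assumptions: the overlap-based scorer, the tiny knowledge base, and all function names are invented for this sketch and are not the method of the survey or any surveyed system.

```python
# Minimal sketch of the three challenges the survey names.
# Everything here (scorer, knowledge base, names) is illustrative only.

def toks(text):
    """Crude tokenizer: lowercase, strip basic punctuation, split on spaces."""
    return set(text.lower().replace(".", "").replace(",", "").split())

def select_knowledge(context, knowledge_base, k=2):
    """Challenge 1: pick the k facts most relevant to the input context.
    Relevance here is naive word overlap; real systems use learned retrieval."""
    ctx_words = toks(context)
    ranked = sorted(
        knowledge_base,
        key=lambda fact: len(ctx_words & toks(fact)),
        reverse=True,
    )
    return ranked[:k]

def read_knowledge(facts):
    """Challenge 2: turn the extracted facts into a form the generator can use.
    Here we simply join them; neural systems encode them into vectors."""
    return " ".join(facts)

def integrate(context, knowledge):
    """Challenge 3: condition generation on both context and knowledge.
    A neural decoder would attend over both; here we only build the prompt."""
    return f"Context: {context}\nKnowledge: {knowledge}\nResponse:"

kb = [
    "The Eiffel Tower is located in Paris.",
    "Paris is the capital of France.",
    "Mount Fuji is the highest mountain in Japan.",
]
context = "Tell me about the famous tower in Paris."
facts = select_knowledge(context, kb)
print(integrate(context, read_knowledge(facts)))
```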
Related papers
- Well Begun is Half Done: Generator-agnostic Knowledge Pre-Selection for
Knowledge-Grounded Dialogue [24.395322923436026]
We focus on the third, under-explored category of study, which can not only select knowledge accurately in advance but also reduce the learning, adjustment, and interpretation burden.
We propose GATE, a generator-agnostic knowledge selection method, to prepare knowledge for subsequent response generation models.
arXiv Detail & Related papers (2023-10-11T17:00:29Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Open-world Story Generation with Structured Knowledge Enhancement: A Comprehensive Survey [38.56791838401675]
We present a systematic taxonomy regarding how existing methods integrate structured knowledge into story generation.
We give multidimensional insights into the challenges of knowledge-enhanced story generation.
arXiv Detail & Related papers (2022-12-09T02:19:07Z)
- Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model [63.461030694700014]
We propose a novel dual knowledge-enhanced generative pretrained language model for multimodal task-oriented dialog systems (DKMD).
The proposed DKMD consists of three key components: dual knowledge selection, dual knowledge-enhanced context learning, and knowledge-enhanced response generation.
Experiments on a public dataset verify the superiority of the proposed DKMD over state-of-the-art competitors.
arXiv Detail & Related papers (2022-07-16T13:02:54Z)
- TegTok: Augmenting Text Generation via Task-specific and Open-world Knowledge [83.55215993730326]
We propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
Our model selects knowledge entries from two types of knowledge sources through dense retrieval and then injects them into the input encoding and output decoding stages, respectively.
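A hedged sketch of this dual-source, retrieve-then-inject recipe follows, assuming a toy character-trigram hashing embedding in place of a trained dense encoder; the knowledge stores, function names, and the "[KNOWLEDGE]" separator are illustrative assumptions, not TegTok's actual components.

```python
# Sketch of dual-source dense retrieval with toy embeddings (all assumed).
import numpy as np

def embed(text, dim=64):
    """Toy hashing embedding over character trigrams; a stand-in for a
    trained dense encoder. hash() is stable within a single process."""
    v = np.zeros(dim)
    t = text.lower()
    for i in range(len(t) - 2):
        v[hash(t[i:i + 3]) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(query_vec, store, k=1):
    """Dense retrieval: rank a store's entries by cosine similarity."""
    return sorted(store, key=lambda e: float(embed(e) @ query_vec),
                  reverse=True)[:k]

task_knowledge = [                      # task-specific source
    "Return policy: items may be returned within 30 days of delivery.",
    "Shipping: standard delivery takes 3-5 business days.",
]
open_knowledge = [                      # open-world source
    "A refund is a repayment of money to a customer.",
    "Paris is the capital of France.",
]

query = "Can I return my order after two weeks?"
q = embed(query)
for_encoder = retrieve(q, task_knowledge)   # injected at input encoding
for_decoder = retrieve(q, open_knowledge)   # injected at output decoding
print(query + " [KNOWLEDGE] " + " ".join(for_encoder))
print("decoder-side knowledge:", for_decoder)
```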
arXiv Detail & Related papers (2022-03-16T10:37:59Z)
- Knowledge-Grounded Dialogue Generation with a Unified Knowledge Representation [78.85622982191522]
Existing systems perform poorly on unseen topics due to limited topics covered in the training data.
We present PLUG, a language model that homogenizes different knowledge sources into a unified knowledge representation.
It achieves performance comparable to state-of-the-art methods under a fully-supervised setting.
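A minimal sketch of the homogenization idea this entry describes: flattening heterogeneous knowledge (KG triples, structured records, plain text) into one textual representation that a single language model can consume. The linearization templates below are assumptions for illustration, not PLUG's actual format.

```python
# Flatten mixed knowledge into one textual form (templates are assumed).

def linearize_triple(subj, rel, obj):
    """KG triple -> sentence, e.g. ('Paris', 'capital_of', 'France')."""
    return f"{subj} {rel.replace('_', ' ')} {obj}."

def linearize_record(record):
    """Structured record / table row -> 'key: value.' clauses."""
    return " ".join(f"{k}: {v}." for k, v in record.items())

def unify(items):
    """Map each knowledge item to text, yielding a unified representation."""
    parts = []
    for item in items:
        if isinstance(item, tuple):      # KG triple
            parts.append(linearize_triple(*item))
        elif isinstance(item, dict):     # structured record
            parts.append(linearize_record(item))
        else:                            # already plain text
            parts.append(item)
    return " ".join(parts)

mixed = [
    ("Paris", "capital_of", "France"),
    {"name": "Eiffel Tower", "height": "330 m"},
    "The Louvre is the world's most-visited museum.",
]
print(unify(mixed))
# -> Paris capital of France. name: Eiffel Tower. height: 330 m. ...
```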
arXiv Detail & Related papers (2021-12-15T07:11:02Z)
- Prediction, Selection, and Generation: Exploration of Knowledge-Driven Conversation System [24.537862151735006]
In open-domain conversational systems, it is important but challenging to leverage background knowledge.
We combine knowledge bases and a pre-trained model to propose a knowledge-driven conversation system.
We study the performance factors that may affect the generation of knowledge-driven dialogue.
arXiv Detail & Related papers (2021-04-23T07:59:55Z)
- Contextualized Knowledge-aware Attentive Neural Network: Enhancing Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns QA sentence representations by considering a tight interaction between the textual information and the external knowledge from the KG.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations via ...
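The entry credits a customized Graph Convolutional Network with injecting KG structure into knowledge representations. The numpy sketch below shows only the standard graph-convolution update such a component builds on, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W); the tiny graph, dimensions, and random weights are toy assumptions, not CKANN's architecture.

```python
# One standard GCN propagation step over a tiny entity graph (toy values).
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8                         # 4 KG entities, 8-dim features
A = np.array([[0, 1, 1, 0],         # adjacency of the entity graph
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = rng.normal(size=(n, d))         # initial entity embeddings
W = rng.normal(size=(d, d))         # weight matrix (learned in practice)

A_hat = A + np.eye(n)                            # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)  # symmetric normalization
H_next = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU
print(H_next.shape)  # (4, 8): structure-aware entity representations
```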
arXiv Detail & Related papers (2021-04-12T05:52:20Z)
- A Survey of Knowledge-Enhanced Text Generation [81.24633231919137]
The goal of text generation is to make machines express themselves in human language.
Various neural encoder-decoder models have been proposed to achieve the goal by learning to map input text to output text.
To address the issue that the input text alone provides limited knowledge, researchers have considered incorporating various forms of knowledge beyond the input text into the generation models.
arXiv Detail & Related papers (2020-10-09T06:46:46Z)