Open-world Story Generation with Structured Knowledge Enhancement: A
Comprehensive Survey
- URL: http://arxiv.org/abs/2212.04634v3
- Date: Tue, 12 Sep 2023 17:38:30 GMT
- Title: Open-world Story Generation with Structured Knowledge Enhancement: A
Comprehensive Survey
- Authors: Yuxin Wang, Jieru Lin, Zhiwei Yu, Wei Hu, Börje F. Karlsson
- Abstract summary: We present a systematic taxonomy regarding how existing methods integrate structured knowledge into story generation.
We give multidimensional insights into the challenges of knowledge-enhanced story generation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Storytelling and narrative are fundamental to human experience, intertwined
with our social and cultural engagement. As such, researchers have long
attempted to create systems that can generate stories automatically. In recent
years, powered by deep learning and massive data resources, automatic story
generation has shown significant advances. However, considerable challenges,
like the need for global coherence in generated stories, still hamper
generative models from reaching the same storytelling ability as human
narrators. To tackle these challenges, many studies seek to inject structured
knowledge into the generation process, which is referred to as structured
knowledge-enhanced story generation. Incorporating external knowledge can
enhance the logical coherence among story events, achieve better knowledge
grounding, and alleviate over-generalization and repetition problems in
stories. This survey provides the latest and comprehensive review of this
research field: (i) we present a systematic taxonomy regarding how existing
methods integrate structured knowledge into story generation; (ii) we summarize
involved story corpora, structured knowledge datasets, and evaluation metrics;
(iii) we give multidimensional insights into the challenges of
knowledge-enhanced story generation and cast light on promising directions for
future study.
Related papers
- SARD: A Human-AI Collaborative Story Generation [0.0]
We propose SARD, a drag-and-drop visual interface for generating a multi-chapter story using large language models.
Our evaluation of the usability of SARD and its creativity support shows that while node-based visualization of the narrative may help writers build a mental model, it imposes unnecessary mental overhead on the writer.
We also found that AI generates stories that are less lexically diverse, irrespective of the complexity of the story.
arXiv Detail & Related papers (2024-03-03T17:48:42Z) - Embedding Knowledge for Document Summarization: A Survey [66.76415502727802]
Previous work has shown that knowledge-embedded document summarizers excel at generating superior digests.
We propose novel taxonomies to recapitulate knowledge and knowledge embeddings from the document summarization perspective.
arXiv Detail & Related papers (2022-04-24T04:36:07Z) - TegTok: Augmenting Text Generation via Task-specific and Open-world
Knowledge [83.55215993730326]
We propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
Our model selects knowledge entries from two types of knowledge sources through dense retrieval and then injects them into the input encoding and output decoding stages respectively.
arXiv Detail & Related papers (2022-03-16T10:37:59Z) - Incorporating Commonsense Knowledge into Story Ending Generation via
Heterogeneous Graph Networks [16.360265861788253]
We propose a Story Heterogeneous Graph Network (SHGN) to explicitly model both the information of story context at different levels and the multi-grained interactive relations among them.
In detail, we consider commonsense knowledge, words and sentences as three types of nodes.
We design two auxiliary tasks to implicitly capture the sentiment trend and key events that lie in the context.
arXiv Detail & Related papers (2022-01-29T09:33:11Z) - Knowledge-Grounded Dialogue Generation with a Unified Knowledge
Representation [78.85622982191522]
Existing systems perform poorly on unseen topics due to the limited topic coverage of their training data.
We present PLUG, a language model that homogenizes different knowledge sources to a unified knowledge representation.
It achieves performance comparable to state-of-the-art methods under a fully-supervised setting.
arXiv Detail & Related papers (2021-12-15T07:11:02Z) - A guided journey through non-interactive automatic story generation [0.0]
The article presents requirements for creative systems, three types of models of creativity (computational, socio-cultural, and individual), and models of human creative writing.
The article concludes that the autonomous generation and adoption of the main idea to be conveyed, and the autonomous design of the criteria that ensure creativity, are possibly two of the most important topics for future research.
arXiv Detail & Related papers (2021-10-08T10:01:36Z) - A Survey of Knowledge-Enhanced Text Generation [81.24633231919137]
The goal of text generation is to make machines express themselves in human language.
Various neural encoder-decoder models have been proposed to achieve the goal by learning to map input text to output text.
To address the limitation that such models draw only on the input text, researchers have considered incorporating various forms of external knowledge into the generation models.
arXiv Detail & Related papers (2020-10-09T06:46:46Z) - Towards information-rich, logical text generation with
knowledge-enhanced neural models [15.931791215286879]
Text generation systems have made promising progress, driven by deep learning techniques, and have been widely applied in daily life.
However, existing end-to-end neural models tend to generate uninformative and generic text because they cannot ground the input context in background knowledge.
This survey gives a comprehensive review of knowledge-enhanced text generation systems, summarizes research progress toward solving these challenges, and proposes open issues and research directions.
arXiv Detail & Related papers (2020-03-02T12:41:02Z) - A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation [98.25464306634758]
We propose to utilize commonsense knowledge from external knowledge bases to generate reasonable stories.
We employ multi-task learning, combining generation with a discriminative objective to distinguish true from fake stories.
Our model can generate more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
arXiv Detail & Related papers (2020-01-15T05:42:27Z)