Evaluating Diversity in Automatic Poetry Generation
- URL: http://arxiv.org/abs/2406.15267v2
- Date: Fri, 08 Nov 2024 14:02:13 GMT
- Title: Evaluating Diversity in Automatic Poetry Generation
- Authors: Yanran Chen, Hannes Gröner, Sina Zarrieß, Steffen Eger
- Abstract summary: We evaluate the diversity of automatically generated poetry along structural, lexical, semantic and stylistic dimensions.
We find that current automatic poetry systems are considerably underdiverse along multiple dimensions.
Our identified limitations may serve as the basis for more genuinely diverse future poetry generation models.
- Score: 25.53206868552533
- Abstract: Natural Language Generation (NLG), and more generally generative AI, are among the currently most impactful research fields. Creative NLG, such as automatic poetry generation, is a fascinating niche in this area. While most previous research has focused on forms of the Turing test when evaluating automatic poetry generation -- can humans distinguish between automatic and human generated poetry -- we evaluate the diversity of automatically generated poetry (with a focus on quatrains), by comparing distributions of generated poetry to distributions of human poetry along structural, lexical, semantic and stylistic dimensions, assessing different model types (word vs. character-level, general purpose LLMs vs. poetry-specific models), including the very recent LLaMA3-8B, and types of fine-tuning (conditioned vs. unconditioned). We find that current automatic poetry systems are considerably underdiverse along multiple dimensions -- they often do not rhyme sufficiently, are semantically too uniform and even do not match the length distribution of human poetry. Our experiments reveal, however, that style-conditioning and character-level modeling clearly increases diversity across virtually all dimensions we explore. Our identified limitations may serve as the basis for more genuinely diverse future poetry generation models.
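The abstract describes comparing distributions of generated and human poetry along lexical and structural dimensions. As a minimal sketch of what two such measures could look like, the snippet below computes distinct-n lexical diversity and a Jensen-Shannon divergence between token-length distributions; these particular metric choices and function names are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter
import math

def distinct_n(poems, n):
    # Fraction of unique n-grams across a corpus (higher = more lexically diverse).
    ngrams = []
    for poem in poems:
        tokens = poem.split()
        ngrams += [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

def length_js_divergence(human_poems, generated_poems):
    # Jensen-Shannon divergence (base 2) between token-length distributions
    # of two corpora; 0 means identical length distributions.
    def dist(poems):
        counts = Counter(len(p.split()) for p in poems)
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}
    p, q = dist(human_poems), dist(generated_poems)
    support = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0) + q.get(k, 0)) for k in support}
    def kl(a, b):
        return sum(a.get(k, 0) * math.log2(a.get(k, 0) / b[k])
                   for k in support if a.get(k, 0) > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

A real evaluation in the paper's spirit would run such measures over large samples of human and model-generated quatrains and compare the resulting scores per model.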
Related papers
- Sonnet or Not, Bot? Poetry Evaluation for Large Models and Datasets [3.0040661953201475]
Large language models (LLMs) can now generate and recognize poetry.
We develop a task to evaluate how well LLMs recognize one aspect of English-language poetry.
We show that state-of-the-art LLMs can successfully identify both common and uncommon fixed poetic forms.
arXiv Detail & Related papers (2024-06-27T05:36:53Z)
- PoetryDiffusion: Towards Joint Semantic and Metrical Manipulation in Poetry Generation [58.36105306993046]
Controllable text generation is a challenging and meaningful field in natural language generation (NLG).
In this paper, we pioneer the use of the Diffusion model for generating sonnets and Chinese SongCi poetry.
Our model outperforms existing models in automatic evaluation of semantic, metrical, and overall performance as well as human evaluation.
arXiv Detail & Related papers (2023-06-14T11:57:31Z)
- Generation of Chinese classical poetry based on pre-trained model [1.6114012813668934]
This paper uses BART and other pre-trained models to generate metrical poetry text.
It develops a set of AI poetry Turing problems, which were reviewed by a group of poets and poetry-writing researchers.
The authors' poetry generation model produces works that cannot be distinguished from those of advanced scholars.
arXiv Detail & Related papers (2022-11-04T16:05:31Z)
- BACON: Deep-Learning Powered AI for Poetry Generation with Author Linguistic Style Transfer [91.3755431537592]
This paper describes BACON, a prototype of an automatic poetry generator with author linguistic style transfer.
It combines concepts and techniques from finite state machinery, probabilistic models, artificial neural networks and deep learning to write original poetry with rich aesthetic qualities in the style of any given author.
arXiv Detail & Related papers (2021-12-14T00:08:36Z)
- Don't Go Far Off: An Empirical Study on Neural Poetry Translation [13.194404923699782]
We present an empirical investigation for poetry translation along several dimensions.
We contribute a parallel dataset of poetry translations for several language pairs.
Our results show that multilingual fine-tuning on poetic text significantly outperforms multilingual fine-tuning on non-poetic text that is 35X larger in size.
arXiv Detail & Related papers (2021-09-07T10:00:44Z)
- Lingxi: A Diversity-aware Chinese Modern Poetry Generation System [43.36560720793425]
Lingxi is a diversity-aware Chinese modern poetry generation system.
We propose a nucleus sampling with randomized head (NS-RH) algorithm.
We find that even when a large portion of the filtered vocabulary is randomized, the system can still generate fluent poetry.
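NS-RH builds on standard nucleus (top-p) sampling, which keeps only the smallest set of highest-probability tokens whose cumulative mass exceeds a threshold before sampling. The sketch below shows plain nucleus sampling only, not the randomized-head variant itself; the function name and interface are assumptions for illustration.

```python
import random

def nucleus_sample(probs, p=0.9, rng=random):
    # Nucleus (top-p) sampling: rank tokens by probability, keep the smallest
    # prefix whose cumulative probability reaches p, then sample from that
    # renormalized set. `probs` maps token -> probability.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cum = [], 0.0
    for token, prob in ranked:
        nucleus.append((token, prob))
        cum += prob
        if cum >= p:
            break
    # Sample proportionally to probability within the nucleus.
    total = sum(prob for _, prob in nucleus)
    r = rng.random() * total
    for token, prob in nucleus:
        r -= prob
        if r <= 0:
            return token
    return nucleus[-1][0]
```

NS-RH, as described in the abstract, additionally randomizes part of the head of this filtered vocabulary to trade a little per-token likelihood for corpus-level diversity.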
arXiv Detail & Related papers (2021-08-27T03:33:28Z)
- CCPM: A Chinese Classical Poetry Matching Dataset [50.90794811956129]
We propose a novel task to assess a model's semantic understanding of poetry by poem matching.
This task requires the model to select one line of Chinese classical poetry among four candidates according to the modern Chinese translation of a line of poetry.
To construct this dataset, we first obtain a set of parallel data of Chinese classical poetry and modern Chinese translation.
arXiv Detail & Related papers (2021-06-03T16:49:03Z)
- My Teacher Thinks The World Is Flat! Interpreting Automatic Essay Scoring Mechanism [71.34160809068996]
Recent work shows that automated scoring systems are vulnerable to even common-sense adversarial samples.
We utilize recent advances in interpretability to find the extent to which features such as coherence, content and relevance are important for automated scoring mechanisms.
We also find that since the models are not semantically grounded with world knowledge and common sense, adding false facts such as "the world is flat" actually increases the score instead of decreasing it.
arXiv Detail & Related papers (2020-12-27T06:19:20Z)
- MixPoet: Diverse Poetry Generation via Learning Controllable Mixed Latent Space [79.70053419040902]
We propose MixPoet, a novel model that absorbs multiple factors to create various styles and promote diversity.
Based on a semi-supervised variational autoencoder, our model disentangles the latent space into some subspaces, with each conditioned on one influence factor by adversarial training.
Experiment results on Chinese poetry demonstrate that MixPoet improves both diversity and quality against three state-of-the-art models.
arXiv Detail & Related papers (2020-03-13T03:31:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.