Let the Poem Hit the Rhythm: Using a Byte-Based Transformer for Beat-Aligned Poetry Generation
- URL: http://arxiv.org/abs/2406.10174v1
- Date: Fri, 14 Jun 2024 16:54:48 GMT
- Title: Let the Poem Hit the Rhythm: Using a Byte-Based Transformer for Beat-Aligned Poetry Generation
- Authors: Mohamad Elzohbi, Richard Zhao
- Abstract summary: This paper explores whether a byte-based language model can generate words that fit specific beat patterns within the context of poetry.
We develop a method to train a transformer model, ByT5, to align poems with beat patterns.
The results demonstrate a high level of beat alignment while maintaining semantic coherence.
- Score: 1.03590082373586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The intersection between poetry and music provides an interesting case for computational creativity, yet remains relatively unexplored. This paper explores the integration of poetry and music through the lens of beat patterns, investigating whether a byte-based language model can generate words that fit specific beat patterns within the context of poetry. Drawing on earlier studies, we developed a method to train a byte-based transformer model, ByT5, to align poems with beat patterns. The results demonstrate a high level of beat alignment while maintaining semantic coherence. Future work will aim to improve the model's ability to create complete beat-aligned poems.
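The abstract describes fine-tuning ByT5 so that generated words match a requested beat pattern. The sketch below is a minimal illustration of that kind of setup, not the authors' released code: the "beat: ... | line: ..." input format, the blank marker, the toy training pairs, and the hyperparameters are all assumptions made for this example.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "google/byt5-small"  # byte-level T5 variant; the paper uses the ByT5 family
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Hypothetical training pairs: a target beat pattern (0 = unstressed, 1 = stressed)
# plus a poem line with a blank, mapped to a word whose stresses fit the pattern.
examples = [
    ("beat: 010 | line: the ___ of night descends", "arrival"),
    ("beat: 10 | line: ___ winds across the moor", "bitter"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for source, target in examples:
    batch = tokenizer(source, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    loss = model(**batch, labels=labels).loss  # standard seq2seq cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# After fine-tuning on real data, generation for a new beat-conditioned prompt:
model.eval()
prompt = tokenizer("beat: 010 | line: the ___ of night descends", return_tensors="pt")
output = model.generate(**prompt, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

In practice the model would be fine-tuned on a corpus of poem lines paired with stress annotations rather than two toy pairs; the point of the sketch is only that byte-level sequence-to-sequence training needs no special vocabulary to condition on a beat string.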
Related papers
- Syllable-level lyrics generation from melody exploiting character-level language model [14.851295355381712]
We propose to exploit fine-tuning character-level language models for syllable-level lyrics generation from symbolic melody.
In particular, our method endeavors to incorporate linguistic knowledge of the language model into the beam search process of a syllable-level Transformer generator network.
arXiv Detail & Related papers (2023-10-02T02:53:29Z)
- PoetryDiffusion: Towards Joint Semantic and Metrical Manipulation in Poetry Generation [58.36105306993046]
Controllable text generation is a challenging and meaningful field in natural language generation (NLG).
In this paper, we pioneer the use of the Diffusion model for generating sonnets and Chinese SongCi poetry.
Our model outperforms existing models in automatic evaluation of semantic, metrical, and overall performance as well as human evaluation.
arXiv Detail & Related papers (2023-06-14T11:57:31Z)
- Unsupervised Melody-to-Lyric Generation [91.29447272400826]
We propose a method for generating high-quality lyrics without training on any aligned melody-lyric data.
We leverage the segmentation and rhythm alignment between melody and lyrics to compile the given melody into decoding constraints.
Our model can generate high-quality lyrics that are more on-topic, singable, intelligible, and coherent than strong baselines.
arXiv Detail & Related papers (2023-05-30T17:20:25Z)
- Unsupervised Melody-Guided Lyrics Generation [84.22469652275714]
We propose to generate pleasantly listenable lyrics without training on melody-lyric aligned data.
We leverage the crucial alignments between melody and lyrics and compile the given melody into constraints to guide the generation process.
arXiv Detail & Related papers (2023-05-12T20:57:20Z)
- PoeLM: A Meter- and Rhyme-Controllable Language Model for Unsupervised Poetry Generation [42.12348554537587]
Formal verse poetry imposes strict constraints on the meter and rhyme scheme of poems.
Most prior work on generating this type of poetry uses existing poems for supervision.
We propose an unsupervised approach to generate poems following any given meter and rhyme scheme.
arXiv Detail & Related papers (2022-05-24T17:09:55Z)
- Zero-shot Sonnet Generation with Discourse-level Planning and Aesthetics Features [37.45490765899826]
We present a novel framework to generate sonnets that does not require training on poems.
Specifically, a content planning module is trained on non-poetic texts to obtain discourse-level coherence.
We also design a constrained decoding algorithm to impose meter-and-rhyme constraints on the generated sonnets (a minimal sketch of this style of constraint filtering appears at the end of this list).
arXiv Detail & Related papers (2022-05-03T23:44:28Z)
- BACON: Deep-Learning Powered AI for Poetry Generation with Author Linguistic Style Transfer [91.3755431537592]
This paper describes BACON, a prototype of an automatic poetry generator with author linguistic style transfer.
It combines concepts and techniques from finite-state machinery, probabilistic models, artificial neural networks, and deep learning to write original poetry with rich aesthetic qualities in the style of any given author.
arXiv Detail & Related papers (2021-12-14T00:08:36Z)
- A pattern recognition approach for distinguishing between prose and poetry [0.8971132850029492]
We propose an automated method to distinguish between poetry and prose based solely on aural and rhythmic properties.
Classifying the considered texts with the extracted feature set yielded a best accuracy of 0.78, obtained with a neural network.
arXiv Detail & Related papers (2021-07-18T18:44:17Z)
- CCPM: A Chinese Classical Poetry Matching Dataset [50.90794811956129]
We propose a novel task to assess a model's semantic understanding of poetry by poem matching.
This task requires the model to select one line of Chinese classical poetry among four candidates according to the modern Chinese translation of a line of poetry.
To construct this dataset, we first obtain a set of parallel data of Chinese classical poetry and modern Chinese translation.
arXiv Detail & Related papers (2021-06-03T16:49:03Z)
- Acrostic Poem Generation [26.604889384391726]
We propose a new task in the area of computational creativity: acrostic poem generation in English.
Acrostic poems are poems that contain a hidden message; typically, the first letter of each line spells out a word or short phrase.
Our experiments show that the acrostic poems generated by our baseline are received well by humans and do not lose much quality due to the additional constraints.
arXiv Detail & Related papers (2020-10-05T18:00:15Z)
- MixPoet: Diverse Poetry Generation via Learning Controllable Mixed Latent Space [79.70053419040902]
We propose MixPoet, a novel model that absorbs multiple factors to create various styles and promote diversity.
Based on a semi-supervised variational autoencoder, our model disentangles the latent space into some subspaces, with each conditioned on one influence factor by adversarial training.
Experiment results on Chinese poetry demonstrate that MixPoet improves both diversity and quality against three state-of-the-art models.
arXiv Detail & Related papers (2020-03-13T03:31:29Z)
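Several of the papers above (PoeLM and the zero-shot sonnet generator in particular) enforce meter at decoding time rather than at training time. The sketch below isolates that general idea: candidate words proposed by any language model are kept only if their stress pattern fits the remaining slots of the target meter. The tiny stress lexicon and the candidate list are illustrative assumptions, not data or code from either paper.

```python
from typing import Dict, List

# Hypothetical stress lexicon: "0" = unstressed syllable, "1" = stressed syllable.
STRESS: Dict[str, str] = {
    "the": "0", "night": "1", "descends": "01",
    "arrival": "010", "bitter": "10", "silence": "10",
}

def fits_meter(word: str, remaining_pattern: str) -> bool:
    """True if the word's stress pattern is a prefix of the meter still to be filled."""
    pattern = STRESS.get(word)
    return pattern is not None and remaining_pattern.startswith(pattern)

def filter_candidates(candidates: List[str], remaining_pattern: str) -> List[str]:
    """Keep only candidates that can legally extend the line under the target meter."""
    return [word for word in candidates if fits_meter(word, remaining_pattern)]

# Example: an iambic line with the pattern "01010" still to be filled.
print(filter_candidates(["arrival", "bitter", "night"], "01010"))  # -> ['arrival']
```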