BACON: Deep-Learning Powered AI for Poetry Generation with Author
Linguistic Style Transfer
- URL: http://arxiv.org/abs/2112.11483v1
- Date: Tue, 14 Dec 2021 00:08:36 GMT
- Title: BACON: Deep-Learning Powered AI for Poetry Generation with Author
Linguistic Style Transfer
- Authors: Alejandro Rodriguez Pascual
- Abstract summary: This paper describes BACON, a prototype of an automatic poetry generator with author linguistic style transfer.
It combines concepts and techniques from finite state machinery, probabilistic models, artificial neural networks and deep learning, to write original poetry with rich aesthetic qualities in the style of any given author.
- Score: 91.3755431537592
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper describes BACON, a basic prototype of an automatic poetry
generator with author linguistic style transfer. It combines concepts and
techniques from finite state machinery, probabilistic models, artificial neural
networks and deep learning, to write original poetry with rich
aesthetic qualities in the style of any given author. Extrinsic evaluation of
the output generated by BACON shows that participants were unable to tell the
difference between human and AI-generated poems in any statistically
significant way.
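The paper itself includes no code; purely as a loose illustration of the "probabilistic models" ingredient the abstract mentions, the sketch below builds a toy bigram Markov chain from a seed text and random-walks it to emit one line. The function names and the seed corpus are invented for illustration and do not reflect BACON's actual implementation.

```python
import random

def build_bigram_model(corpus):
    """Build a bigram transition table: word -> list of observed next words."""
    words = corpus.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate_line(model, start, max_words=8, rng=None):
    """Random-walk the bigram table from a start word to emit one line."""
    rng = rng or random.Random(0)
    line = [start]
    while len(line) < max_words and line[-1] in model:
        line.append(rng.choice(model[line[-1]]))
    return " ".join(line)

# Tiny seed text standing in for an author's corpus.
corpus = ("shall i compare thee to a summer day "
          "thou art more lovely and more temperate")
model = build_bigram_model(corpus)
print(generate_line(model, "shall"))
```

A real system in BACON's vein would condition such transition statistics on a specific author's corpus and combine them with neural components; this sketch only shows the bare probabilistic-generation idea.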
Related papers
- StyleSpeech: Self-supervised Style Enhancing with VQ-VAE-based
Pre-training for Expressive Audiobook Speech Synthesis [63.019962126807116]
The expressive quality of synthesized speech for audiobooks is limited by generalized model architecture and unbalanced style distribution.
We propose a self-supervised style enhancing method with VQ-VAE-based pre-training for expressive audiobook speech synthesis.
arXiv Detail & Related papers (2023-12-19T14:13:26Z)
- Evaluating the Efficacy of Hybrid Deep Learning Models in Distinguishing AI-Generated Text [0.0]
My research investigates the use of cutting-edge hybrid deep learning models to accurately differentiate between AI-generated text and human writing.
I applied a robust methodology, utilising a carefully selected dataset comprising AI and human texts from various sources, each tagged with instructions.
arXiv Detail & Related papers (2023-11-27T06:26:53Z)
- Erato: Automatizing Poetry Evaluation [6.5990719141691825]
We present Erato, a framework designed to facilitate the automated evaluation of poetry.
Using Erato, we compare and contrast human-authored poetry with automatically-generated poetry.
arXiv Detail & Related papers (2023-10-31T10:06:37Z)
- PoetryDiffusion: Towards Joint Semantic and Metrical Manipulation in Poetry Generation [58.36105306993046]
Controllable text generation is a challenging and meaningful field in natural language generation (NLG).
In this paper, we pioneer the use of the Diffusion model for generating sonnets and Chinese SongCi poetry.
Our model outperforms existing models in automatic evaluation of semantic, metrical, and overall performance as well as human evaluation.
arXiv Detail & Related papers (2023-06-14T11:57:31Z)
- From Textual Experiments to Experimental Texts: Expressive Repetition in "Artificial Intelligence Literature" [0.0]
AI literature integrates primitive problems including machine thinking, text generation, and machine creativity.
In the early stage, attempts at mutual support between technological paths and artistic ideas turned out to be a failure.
arXiv Detail & Related papers (2022-01-07T02:44:58Z)
- Prose2Poem: The Blessing of Transformers in Translating Prose to Persian Poetry [2.15242029196761]
We introduce a novel Neural Machine Translation (NMT) approach to translate prose to ancient Persian poetry.
We trained a Transformer model from scratch to obtain initial translations and pretrained different variations of BERT to obtain final translations.
arXiv Detail & Related papers (2021-09-30T09:04:11Z)
- Metaphor Generation with Conceptual Mappings [58.61307123799594]
We aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.
We propose to control the generation process by encoding conceptual mappings between cognitive domains.
We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems.
arXiv Detail & Related papers (2021-06-02T15:27:05Z)
- GTAE: Graph-Transformer based Auto-Encoders for Linguistic-Constrained Text Style Transfer [119.70961704127157]
Non-parallel text style transfer has attracted increasing research interests in recent years.
Current approaches still lack the ability to preserve the content and even logic of original sentences.
We propose a method called Graph-Transformer based Auto-Encoder (GTAE), which models a sentence as a linguistic graph and performs feature extraction and style transfer at the graph level.
arXiv Detail & Related papers (2021-02-01T11:08:45Z)
- Artificial Intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry [0.0]
Natural language generation algorithms (NLG) have spurred much public attention and debate.
One reason lies in the algorithms' purported ability to generate human-like text across various domains.
We conducted two experiments assessing behavioral reactions to the state-of-the-art Natural Language Generation algorithm GPT-2.
We discuss what these results convey about the performance of NLG algorithms to produce human-like text and propose methodologies to study such learning algorithms in human-agent experimental settings.
arXiv Detail & Related papers (2020-05-20T11:52:28Z)
- PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation [92.7366819044397]
Self-supervised pre-training has emerged as a powerful technique for natural language understanding and generation.
This work presents PALM with a novel scheme that jointly pre-trains an autoencoding and autoregressive language model on a large unlabeled corpus.
An extensive set of experiments show that PALM achieves new state-of-the-art results on a variety of language generation benchmarks.
arXiv Detail & Related papers (2020-04-14T06:25:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.