Bits of Grass: Does GPT already know how to write like Whitman?
- URL: http://arxiv.org/abs/2305.11064v1
- Date: Wed, 10 May 2023 09:02:34 GMT
- Title: Bits of Grass: Does GPT already know how to write like Whitman?
- Authors: Piotr Sawicki, Marek Grzes, Fabricio Goes, Dan Brown, Max Peeperkorn,
Aisha Khatun
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study examines the ability of GPT-3.5, GPT-3.5-turbo (ChatGPT) and GPT-4
models to generate poems in the style of specific authors using zero-shot and
many-shot prompts (which use the maximum context length of 8192 tokens). We
assess the performance of models that are not fine-tuned for generating poetry
in the style of specific authors, via automated evaluation. Our findings
indicate that without fine-tuning, even when provided with the maximum number
of 17 poem examples (8192 tokens) in the prompt, these models do not generate
poetry in the desired style.
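As an illustration of the many-shot setup, below is a minimal sketch that packs example poems into a single prompt under the 8192-token budget and queries a chat model. The library calls follow the public `openai` (>=1.0) and `tiktoken` APIs, but the instruction wording and placeholder poems are illustrative assumptions, not the authors' actual prompts.

```python
# Minimal sketch of many-shot style prompting, assuming the openai (>=1.0)
# and tiktoken libraries. Instruction text and poems are illustrative only.
import tiktoken
from openai import OpenAI

MAX_PROMPT_TOKENS = 8192  # context budget referenced in the abstract

def build_many_shot_prompt(poems: list[str], instruction: str) -> str:
    """Pack example poems into the prompt until the token budget is spent."""
    enc = tiktoken.get_encoding("cl100k_base")
    parts = [instruction]
    used = len(enc.encode(instruction))
    for poem in poems:
        cost = len(enc.encode(poem))
        if used + cost > MAX_PROMPT_TOKENS:
            break
        parts.append(poem)
        used += cost
    return "\n\n".join(parts)

# Stand-ins for real example poems from the target author.
poems = ["I celebrate myself, and sing myself, ...", "..."]
prompt = build_many_shot_prompt(
    poems, "Write a new poem in the style of the examples below."
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```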
Related papers
- Does ChatGPT Have a Poetic Style?
We prompt the GPT-3.5 and GPT-4 models to generate English-language poems in 24 different poetic forms and styles.
We analyze the resulting 5.7k poems, comparing them to a sample of 3.7k poems from the Poetry Foundation and the Academy of American Poets.
We find that the GPT models, especially GPT-4, can successfully produce poems in a range of both common and uncommon English-language forms.
arXiv Detail & Related papers (2024-10-20T06:01:34Z)
- Identifying the style by a qualified reader on a short fragment of generated poetry
I used three character-based LSTM models to assess style reproduction.
All three models were trained on a corpus of texts by famous Russian-speaking poets.
The accuracy of style identification increases if the assessor can quote the poet by heart.
arXiv Detail & Related papers (2023-06-05T10:55:15Z)
- Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense
We present a paraphrase generation model (DIPPER) that can paraphrase paragraphs, condition on surrounding context, and control lexical diversity and content reordering.
Using DIPPER to paraphrase text generated by three large language models (including GPT-3.5's text-davinci-003) successfully evades several detectors, including watermarking.
We introduce a simple defense that relies on retrieving semantically-similar generations and must be maintained by a language model API provider.
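To picture that defense, here is a minimal sketch in which the provider stores an embedding of every generation and flags candidate text whose nearest stored generation is highly similar, even after paraphrasing. The encoder choice and the 0.8 threshold are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a retrieval-based detector in the spirit of the defense described
# above. The sentence encoder and the similarity threshold are illustrative
# assumptions, not the paper's actual system.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

class GenerationStore:
    def __init__(self):
        self.embeddings: list[np.ndarray] = []

    def add(self, text: str) -> None:
        """Record an embedding of every text the API returns."""
        self.embeddings.append(model.encode(text, normalize_embeddings=True))

    def is_probably_ours(self, text: str, threshold: float = 0.8) -> bool:
        """Flag the text if its nearest stored generation is very similar."""
        if not self.embeddings:
            return False
        query = model.encode(text, normalize_embeddings=True)
        sims = np.stack(self.embeddings) @ query  # cosine sims (unit vectors)
        return float(sims.max()) >= threshold

store = GenerationStore()
store.add("The model wrote this paragraph about autumn leaves drifting down.")
paraphrase = "This passage, produced by the model, concerns fall foliage falling."
print(store.is_probably_ours(paraphrase))  # True when similarity clears 0.8
```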
arXiv Detail & Related papers (2023-03-23T16:29:27Z)
- A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models
GPT series models have gained considerable attention due to their exceptional natural language processing capabilities.
We select six representative models, comprising two GPT-3 series models and four GPT-3.5 series models.
We evaluate their performance on nine natural language understanding (NLU) tasks using 21 datasets.
Our experiments reveal that the overall ability of GPT series models on NLU tasks does not increase gradually as the models evolve.
arXiv Detail & Related papers (2023-03-18T14:02:04Z)
- ByGPT5: End-to-End Style-conditioned Poetry Generation with Token-free Language Models
In this work, we investigate end-to-end poetry generation conditioned on styles such as rhyme, meter, and alliteration.
We successfully pre-train ByGPT5, a new token-free decoder-only language model, and fine-tune it on a large custom corpus of English and German quatrains annotated with our styles.
We show that ByGPT5 outperforms other models such as mT5, ByT5, GPT-2 and ChatGPT, while also being more parameter efficient and performing favorably compared to humans.
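"Token-free" here means the model consumes raw bytes rather than subword tokens; the short sketch below contrasts the two encodings (an illustration, not ByGPT5's actual preprocessing).

```python
# Byte-level ("token-free") encoding versus subword tokenization, sketched
# for illustration; this is not ByGPT5's preprocessing code.
import tiktoken

text = "Rose is a rose is a rose."

# Token-free: one integer per UTF-8 byte, fixed vocabulary of 256 symbols.
byte_ids = list(text.encode("utf-8"))
print(len(byte_ids))  # one id per byte of the text

# Subword tokenization for comparison (GPT-2's BPE vocabulary).
enc = tiktoken.get_encoding("gpt2")
token_ids = enc.encode(text)
print(len(token_ids))  # fewer ids, drawn from a ~50k-entry vocabulary
```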
arXiv Detail & Related papers (2022-12-20T17:49:49Z)
- Generation of Chinese classical poetry based on pre-trained model
This paper primarily uses BART and other pre-trained models to generate metrical poetry.
It develops a set of AI-poetry Turing problems, which were reviewed by a group of poets and poetry-writing researchers.
The poetry-generation model studied by the authors generates works that cannot be distinguished from those of advanced scholars.
arXiv Detail & Related papers (2022-11-04T16:05:31Z)
- SP-GPT2: Semantics Improvement in Vietnamese Poetry Generation
Generative Pretraining Transformer 2 (GPT2) is a state-of-the-art approach with excellent successes.
In this paper, we took the first step to investigate the power of GPT2 in traditional Vietnamese poetry generation.
We release the first computational scoring module for generated poems, based on a template containing a style-rule dictionary.
arXiv Detail & Related papers (2021-10-10T14:31:08Z)
- Reframing Instructional Prompts to GPTk's Language
We propose reframing techniques for model designers to create effective prompts for language models.
Our results show that reframing improves few-shot learning performance by 14% while reducing sample complexity.
The performance gains are particularly important for large language models such as GPT-3, where tuning models or prompts on large datasets is not feasible.
arXiv Detail & Related papers (2021-09-16T09:44:43Z)
- CCPM: A Chinese Classical Poetry Matching Dataset
We propose a novel task to assess a model's semantic understanding of poetry by poem matching.
This task requires the model to select one line of Chinese classical poetry among four candidates according to the modern Chinese translation of a line of poetry.
To construct this dataset, we first obtain a set of parallel data of Chinese classical poetry and modern Chinese translation.
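A simple baseline for this matching task can be sketched as choosing the candidate whose sentence embedding is closest to the translation; the multilingual encoder named below is an assumption for illustration, not the dataset's official baseline.

```python
# Sketch of an embedding-similarity baseline for the poem-matching task
# described above. The encoder choice is illustrative, not the official one.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def match_poem_line(translation: str, candidates: list[str]) -> str:
    """Return the candidate with the highest cosine similarity to the translation."""
    trans_emb = model.encode(translation, convert_to_tensor=True)
    cand_embs = model.encode(candidates, convert_to_tensor=True)
    scores = util.cos_sim(trans_emb, cand_embs)[0]
    return candidates[int(scores.argmax())]

translation = "明亮的月光洒在床前"  # modern-Chinese gloss of one candidate (illustrative)
candidates = ["床前明月光", "疑是地上霜", "举头望明月", "低头思故乡"]
print(match_poem_line(translation, candidates))
```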
arXiv Detail & Related papers (2021-06-03T16:49:03Z)
- TuringAdvice: A Generative and Dynamic Evaluation of Language Use
We propose TuringAdvice, a new challenge task and dataset for language understanding models.
Given a written situation that a real person is currently facing, a model must generate helpful advice in natural language.
Empirical results show that today's models struggle at TuringAdvice.
arXiv Detail & Related papers (2020-04-07T18:00:03Z)
- Generating Major Types of Chinese Classical Poetry in a Uniformed Framework
We propose a GPT-2 based framework for generating major types of Chinese classical poems.
Preliminary results show this enhanced model can generate Chinese classical poems of major types with high quality in both form and content.
arXiv Detail & Related papers (2020-03-13T14:16:25Z)