How good is GPT at writing political speeches for the White House?
- URL: http://arxiv.org/abs/2412.14617v1
- Date: Thu, 19 Dec 2024 08:06:09 GMT
- Title: How good is GPT at writing political speeches for the White House?
- Authors: Jacques Savoy
- Abstract summary: Using large language models (LLMs), computers are able to generate a written text in response to a user request. This study analyses the written style of one LLM called GPT by comparing its generated speeches with those of the recent US presidents.
- Score: 1.7948767405202701
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Using large language models (LLMs), computers are able to generate a written text in response to a user request. As this pervasive technology can be applied in numerous contexts, this study analyses the written style of one LLM called GPT by comparing its generated speeches with those of the recent US presidents. To achieve this objective, the State of the Union (SOTU) addresses written from Reagan to Biden are contrasted with those produced by both the GPT-3.5 and GPT-4o versions. Compared to US presidents, GPT tends to overuse the lemma "we" and produce shorter messages with, on average, longer sentences. Moreover, GPT adopts an optimistic tone, opting more often for political (e.g., president, Congress), symbolic (e.g., freedom), and abstract terms (e.g., freedom). Even when imposing an author's style on GPT, the resulting speech remains distinct from addresses written by the target author. Finally, the two GPT versions present distinct characteristics, but both appear overall dissimilar to true presidential messages.
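The abstract's comparison rests on simple corpus statistics. Below is a minimal sketch, assuming plain regex tokenization rather than the paper's actual pipeline, of how three of the measures it mentions (frequency of "we", message length, average sentence length) could be computed; a faithful replication would lemmatize the texts with a proper NLP tool and run over the full SOTU corpus.

```python
# Minimal stylometric sketch (not the paper's code): relative frequency
# of "we", message length in tokens, and average sentence length.
# Regex tokenization is used for brevity; real lemma counts would need
# a lemmatizer such as spaCy, applied to the actual SOTU addresses.
import re

def style_profile(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return {
        "tokens": len(tokens),                         # message length
        "we_per_1000": 1000 * tokens.count("we") / n,  # overuse of "we"
        "avg_sentence_len": len(tokens) / max(len(sentences), 1),
    }

# Hypothetical inputs: a presidential excerpt vs. a GPT-style draft.
print(style_profile("We the people face great challenges. We will meet them."))
print(style_profile("We stand united, and we believe we will deliver progress."))
```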
Related papers
- ChatGPT for President! Presupposed content in politicians versus GPT-generated texts [0.0]
This study examines ChatGPT-4's capability to replicate linguistic strategies used in political discourse.
Using a corpus-based pragmatic analysis, this study assesses how well ChatGPT can mimic these persuasive strategies.
arXiv Detail & Related papers (2025-03-03T07:48:04Z) - ChatGPT as speechwriter for the French presidents [2.3895981099137535]
We analyze the written style of one large language model called ChatGPT by comparing its generated messages with those of the recent French presidents. We found that ChatGPT tends to overuse nouns, possessive determiners, and numbers. In addition, when a short text is provided as an example to ChatGPT, the machine can generate a short message with a style close to the original wording.
arXiv Detail & Related papers (2024-11-27T14:29:10Z) - GPT as ghostwriter at the White House [1.7948767405202701]
We analyze the written style of one large language model called ChatGPT 3.5 by comparing its generated messages with those of the recent US presidents. We found that ChatGPT tends to overuse the lemma "we" as well as nouns and commas. We show that GPT's style exhibits distinct features compared to real presidential addresses.
arXiv Detail & Related papers (2024-11-27T14:12:36Z) - Quantifying the Uniqueness of Donald Trump in Presidential Discourse [51.76056700705539]
This paper introduces a novel metric of uniqueness based on large language models.
We find considerable evidence that Trump's speech patterns diverge from those of all major party nominees for the presidency in recent history.
arXiv Detail & Related papers (2024-01-02T19:00:17Z) - A ripple in time: a discontinuity in American history [49.84018914962972]
We suggest a novel approach to discover temporal (related and unrelated to language dilation) and personality (authorship attribution) aspects in historical datasets.
We exemplify our approach on the State of the Union addresses given by the past 42 US presidents.
arXiv Detail & Related papers (2023-12-02T17:24:17Z) - RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text [81.33699837678229]
We introduce RecurrentGPT, a language-based simulacrum of the recurrence mechanism in RNNs.
At each timestep, RecurrentGPT generates a paragraph of text and updates its language-based long short-term memory; a toy sketch of this loop appears after this list.
RecurrentGPT is an initial step towards next-generation computer-assisted writing systems.
arXiv Detail & Related papers (2023-05-22T17:58:10Z) - Collaborative Generative AI: Integrating GPT-k for Efficient Editing in Text-to-Image Generation [114.80518907146792]
We investigate the potential of utilizing large-scale language models, such as GPT-k, to improve the prompt editing process for text-to-image generation.
We compare the common edits made by humans and GPT-k, evaluate the performance of GPT-k in prompting T2I, and examine factors that may influence this process.
arXiv Detail & Related papers (2023-05-18T21:53:58Z) - AI, write an essay for me: A large-scale comparison of human-written versus ChatGPT-generated essays [66.36541161082856]
ChatGPT and similar generative AI models have attracted hundreds of millions of users.
This study compares human-written versus ChatGPT-generated argumentative student essays.
arXiv Detail & Related papers (2023-04-24T12:58:28Z) - Can GPT-3 Perform Statutory Reasoning? [37.66486350122862]
We explore the most capable GPT-3 model, text-davinci-003, on an established statutory-reasoning dataset called SARA.
We find GPT-3 performs poorly at answering straightforward questions about simple synthetic statutes.
arXiv Detail & Related papers (2023-02-13T04:56:11Z) - The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation [0.0]
OpenAI introduced ChatGPT, a state-of-the-art dialogue model that can converse with its human counterparts.
This paper focuses on one of democratic society's most important decision-making processes: political elections.
We uncover ChatGPT's pro-environmental, left-libertarian ideology.
arXiv Detail & Related papers (2023-01-05T07:13:13Z) - On Prosody Modeling for ASR+TTS based Voice Conversion [82.65378387724641]
In voice conversion, an approach showing promising results in the latest voice conversion challenge (VCC) 2020 is to first use an automatic speech recognition (ASR) model to transcribe the source speech into the underlying linguistic contents.
Such a paradigm, referred to as ASR+TTS, overlooks the modeling of prosody, which plays an important role in speech naturalness and conversion similarity.
We propose to directly predict prosody from the linguistic representation in a target-speaker-dependent manner, referred to as target text prediction (TTP).
arXiv Detail & Related papers (2021-07-20T13:30:23Z) - Investigating African-American Vernacular English in Transformer-Based Text Generation [55.53547556060537]
Social media has encouraged the written use of African American Vernacular English (AAVE).
We investigate the performance of GPT-2 on AAVE text by creating a dataset of intent-equivalent parallel AAVE/SAE tweet pairs.
We find that while AAVE text results in more classifications of negative sentiment than SAE, the use of GPT-2 generally increases occurrences of positive sentiment for both.
arXiv Detail & Related papers (2020-10-06T06:27:02Z) - Text-Based Ideal Points [26.981303055207267]
We introduce the text-based ideal point model (TBIP), an unsupervised probabilistic topic model that analyzes texts to quantify the political positions of their authors.
The TBIP separates lawmakers by party, learns interpretable politicized topics, and infers ideal points close to the classical vote-based ideal points.
It can estimate ideal points of anyone who authors political texts, including non-voting actors.
arXiv Detail & Related papers (2020-05-08T21:16:42Z)
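As referenced in the RecurrentGPT entry above, its recurrence is easy to picture in code. The following is a toy sketch of the loop described in that summary, not the authors' implementation; the `llm` callable is a hypothetical stand-in for any prompt-in/text-out generation API.

```python
# Toy sketch of RecurrentGPT's loop (not the authors' implementation):
# each step generates a paragraph conditioned on a plain-text memory,
# then rewrites that memory to fold in the new paragraph. `llm` is a
# hypothetical stand-in for any text-generation backend.
from typing import Callable

def recurrent_generate(llm: Callable[[str], str], plan: str, steps: int) -> str:
    memory = f"Plan: {plan}"  # language-based long short-term memory
    paragraphs = []
    for _ in range(steps):
        paragraphs.append(llm(f"Memory:\n{memory}\n\nWrite the next paragraph."))
        memory = llm(
            f"Memory:\n{memory}\n\nNew paragraph:\n{paragraphs[-1]}\n\n"
            "Rewrite the memory to include what just happened."
        )
    return "\n\n".join(paragraphs)
```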
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.