GPT as ghostwriter at the White House
- URL: http://arxiv.org/abs/2411.18365v1
- Date: Wed, 27 Nov 2024 14:12:36 GMT
- Title: GPT as ghostwriter at the White House
- Authors: Jacques Savoy
- Abstract summary: We analyze the written style of one large language model called ChatGPT 3.5 by comparing its generated messages with those of the recent US presidents. We found that ChatGPT tends to overuse the lemma "we" as well as nouns and commas. We show that GPT's style exhibits distinct features compared to real presidential addresses.
- Score: 1.7948767405202701
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, several large language models (LLMs) have demonstrated their capability to generate a message in response to a user request. Such scientific breakthroughs open new perspectives but also raise some fears. The main focus of this study is to analyze the written style of one LLM called ChatGPT 3.5 by comparing its generated messages with those of the recent US presidents. To achieve this objective, we compare the State of the Union addresses written by Reagan to Obama with those automatically produced by ChatGPT. We found that ChatGPT tends to overuse the lemma "we" as well as nouns and commas. On the other hand, the generated speeches employ fewer verbs and include, on average, longer sentences. Even when imposing a given style on ChatGPT, the resulting speech remains distinct from messages written by the target author. Moreover, ChatGPT opts for a neutral tone with mainly positive emotional expressions and symbolic terms (e.g., freedom, nation). Finally, we show that GPT's style exhibits distinct features compared to real presidential addresses.
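The features behind this comparison (rates of the lemma "we", nouns, verbs, and commas, plus mean sentence length) are simple to compute. The sketch below, which is not the paper's own code, illustrates one way to do so with spaCy; the file names and exact feature definitions are hypothetical.

```python
# A minimal sketch of the stylometric profile described in the abstract,
# assuming spaCy's "en_core_web_sm" model is installed
# (python -m spacy download en_core_web_sm). Feature definitions are
# illustrative, not necessarily the paper's exact ones.
import spacy

nlp = spacy.load("en_core_web_sm")

def style_profile(text: str) -> dict:
    doc = nlp(text)
    tokens = [t for t in doc if not t.is_space]
    n = len(tokens)
    sents = list(doc.sents)
    return {
        "we_rate": sum(t.lemma_.lower() == "we" for t in tokens) / n,
        "noun_rate": sum(t.pos_ == "NOUN" for t in tokens) / n,
        "verb_rate": sum(t.pos_ == "VERB" for t in tokens) / n,
        "comma_rate": sum(t.text == "," for t in tokens) / n,
        "mean_sentence_len": n / len(sents),
    }

# Hypothetical files: one real address, one ChatGPT-generated speech.
human = style_profile(open("reagan_1985_sotu.txt").read())
machine = style_profile(open("chatgpt_sotu.txt").read())
for key in human:
    print(f"{key:18s} human={human[key]:.4f} gpt={machine[key]:.4f}")
```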
Related papers
- ChatGPT for President! Presupposed content in politicians versus GPT-generated texts [0.0]
This study examines ChatGPT-4's capability to replicate linguistic strategies used in political discourse.
Using a corpus-based pragmatic analysis, it assesses how well ChatGPT can mimic these persuasive strategies.
arXiv Detail & Related papers (2025-03-03T07:48:04Z) - How good is GPT at writing political speeches for the White House? [1.7948767405202701]
Using large language models (LLMs), computers are able to generate a written text in response to a user request.
This study analyses the written style of one LLM called GPT by comparing its generated speeches with those of the recent US presidents.
arXiv Detail & Related papers (2024-12-19T08:06:09Z) - ChatGPT as speechwriter for the French presidents [2.3895981099137535]
We analyze the written style of one large language model called ChatGPT by comparing its generated messages with those of the recent French presidents. We found that ChatGPT tends to overuse nouns, possessive determiners, and numbers. In addition, when a short text is provided as an example to ChatGPT, the machine can generate a short message with a style close to the original wording.
arXiv Detail & Related papers (2024-11-27T14:29:10Z) - Primacy Effect of ChatGPT [69.49920102917598]
We study the primacy effect of ChatGPT: the tendency to select labels at earlier positions as the answer.
We hope that our experiments and analyses provide additional insights into building more reliable ChatGPT-based solutions.
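As a rough illustration (not the paper's protocol), one way to quantify such a position bias is to present the same question with the candidate labels in shuffled orders and check how often each display position is chosen; the helper below assumes the model's choices have already been collected.

```python
# A minimal sketch for quantifying position bias, assuming you have already
# collected (label_order_shown, label_chosen) pairs from the model; all names
# and data here are hypothetical.
from collections import Counter

def position_bias(trials: list[tuple[list[str], str]]) -> dict[int, float]:
    """For each display position, the fraction of trials in which the model
    picked the label shown at that position."""
    counts = Counter(order.index(chosen) for order, chosen in trials)
    k = max(len(order) for order, _ in trials)
    return {pos: counts[pos] / len(trials) for pos in range(k)}

# With uniformly shuffled orders, an unbiased model picks each position
# ~1/k of the time; a spike at position 0 suggests a primacy effect.
trials = [(["yes", "no", "maybe"], "yes"),
          (["no", "maybe", "yes"], "no"),
          (["maybe", "yes", "no"], "maybe")]
print(position_bias(trials))  # -> {0: 1.0, 1: 0.0, 2: 0.0}
```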
arXiv Detail & Related papers (2023-10-20T00:37:28Z) - Is ChatGPT Involved in Texts? Measure the Polish Ratio to Detect
ChatGPT-Generated Text [48.36706154871577]
We introduce a novel dataset termed HPPT (ChatGPT-polished academic abstracts)
It diverges from extant corpora by comprising pairs of human-written and ChatGPT-polished abstracts instead of purely ChatGPT-generated texts.
We also propose the "Polish Ratio" method, an innovative measure of the degree of modification made by ChatGPT compared to the original human-written text.
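The paper defines its own "Polish Ratio"; as a hedged illustration of the underlying idea only, the sketch below scores how far a polished abstract diverges from its human-written original using a standard-library edit-similarity proxy, which is an assumption, not the paper's measure.

```python
# A hedged proxy for a "polish ratio": 0.0 means the texts are identical,
# values near 1.0 mean a near-complete rewrite. This character-level
# edit-similarity measure is an illustrative stand-in, not the HPPT paper's
# actual definition.
from difflib import SequenceMatcher

def polish_ratio(original: str, polished: str) -> float:
    return 1.0 - SequenceMatcher(None, original, polished).ratio()

original = "We study the writing style of large language models."
polished = "We analyze the written style of large language models in detail."
print(f"polish ratio ~= {polish_ratio(original, polished):.2f}")
```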
arXiv Detail & Related papers (2023-07-21T06:38:37Z) - ChatGPT vs Human-authored Text: Insights into Controllable Text
Summarization and Sentence Style Transfer [8.64514166615844]
We conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks.
We evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts.
We observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
arXiv Detail & Related papers (2023-06-13T14:21:35Z) - ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text
Ambiguation to Expand Mental Health Care Delivery [52.73936514734762]
ChatGPT has gained popularity for its ability to generate human-like dialogue.
Data-sensitive domains face challenges in using ChatGPT due to privacy and data-ownership concerns.
We propose a text ambiguation framework that preserves user privacy.
arXiv Detail & Related papers (2023-05-19T02:09:52Z) - Is ChatGPT A Good Keyphrase Generator? A Preliminary Study [51.863368917344864]
ChatGPT has recently garnered significant attention from the computational linguistics community.
We evaluate its performance in various aspects, including keyphrase generation prompts, keyphrase generation diversity, and long document understanding.
We find that ChatGPT performs exceptionally well on all six candidate prompts, with minor performance differences observed across the datasets.
arXiv Detail & Related papers (2023-03-23T02:50:38Z) - Can ChatGPT Understand Too? A Comparative Study on ChatGPT and
Fine-tuned BERT [103.57103957631067]
ChatGPT has attracted great attention, as it can generate fluent and high-quality responses to human inquiries.
We assess ChatGPT's understanding ability by evaluating it on the popular GLUE benchmark and comparing it with 4 representative fine-tuned BERT-style models.
We find that: 1) ChatGPT falls short in handling paraphrase and similarity tasks; 2) ChatGPT outperforms all BERT models on inference tasks by a large margin; 3) ChatGPT achieves performance comparable to BERT on sentiment analysis and question answering tasks.
arXiv Detail & Related papers (2023-02-19T12:29:33Z) - Is ChatGPT better than Human Annotators? Potential and Limitations of
ChatGPT in Explaining Implicit Hate Speech [8.761064812847078]
We examine whether ChatGPT can be used for providing natural language explanations (NLEs) for implicit hateful speech detection.
We design our prompt to elicit concise ChatGPT-generated NLEs and conduct user studies to evaluate their qualities.
We discuss the potential and limitations of ChatGPT in the context of implicit hateful speech research.
arXiv Detail & Related papers (2023-02-11T03:13:54Z) - Is ChatGPT A Good Translator? Yes With GPT-4 As The Engine [97.8609714773255]
We evaluate ChatGPT for machine translation, including translation prompt, multilingual translation, and translation robustness.
ChatGPT performs competitively with commercial translation products but lags behind significantly on low-resource or distant languages.
With the launch of the GPT-4 engine, the translation performance of ChatGPT is significantly boosted.
arXiv Detail & Related papers (2023-01-20T08:51:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.