ChatGPT for President! Presupposed content in politicians versus GPT-generated texts
- URL: http://arxiv.org/abs/2503.01269v1
- Date: Mon, 03 Mar 2025 07:48:04 GMT
- Title: ChatGPT for President! Presupposed content in politicians versus GPT-generated texts
- Authors: Davide Garassino, Nicola Brocca, Viviana Masia
- Abstract summary: This study examines ChatGPT-4's capability to replicate linguistic strategies used in political discourse. Using a corpus-based pragmatic analysis, this study assesses how well ChatGPT can mimic these persuasive strategies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study examines ChatGPT-4's capability to replicate linguistic strategies used in political discourse, focusing on its potential for manipulative language generation. As large language models become increasingly popular for text generation, concerns have grown regarding their role in spreading fake news and propaganda. This research compares real political speeches with those generated by ChatGPT, emphasizing presuppositions (a rhetorical device that subtly influences audiences by packaging some content as already known at the moment of utterance, thus swaying opinions without explicit argumentation). Using a corpus-based pragmatic analysis, this study assesses how well ChatGPT can mimic these persuasive strategies. The findings reveal that although ChatGPT-generated texts contain many manipulative presuppositions, key differences emerge in their frequency, form, and function compared with those of politicians. For instance, ChatGPT often relies on change-of-state verbs used in fixed phrases, whereas politicians use presupposition triggers in more varied and creative ways. Such differences, however, are challenging to detect with the naked eye, underscoring the potential risks posed by large language models in political and public discourse.
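The abstract's central device, presupposition triggers such as change-of-state verbs, can be illustrated with a toy frequency count over a text. The trigger lists and the function below are hypothetical illustrations of the general idea of corpus-based trigger counting, not the study's actual coding scheme or corpus:

```python
# Toy sketch of a trigger-based presupposition scan. The trigger lists
# here are illustrative examples, not the study's actual annotation scheme.
import re
from collections import Counter

TRIGGERS = {
    # Change-of-state verbs presuppose a prior state ("stop X" implies X held before).
    "change_of_state": ["stop", "stopped", "continue", "continued",
                        "begin", "began", "resume", "resumed"],
    # Factive predicates presuppose the truth of their complement.
    "factive": ["know", "knows", "knew", "regret", "regrets",
                "realize", "realized"],
    # Iteratives presuppose an earlier occurrence of the event.
    "iterative": ["again", "anymore", "still"],
}

def count_triggers(text: str) -> Counter:
    """Count candidate presupposition triggers per category in a text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for category, words in TRIGGERS.items():
        counts[category] += sum(tokens.count(w) for w in words)
    return counts

speech = "We will stop the decline and begin to rebuild, again putting families first."
print(count_triggers(speech))
```

Comparing such per-category counts between two corpora (real speeches versus generated ones) is the simplest version of the frequency comparison the abstract describes; the actual study also analyzes form and function, which a word list cannot capture.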
Related papers
- Politicians vs ChatGPT. A study of presuppositions in French and Italian political communication [0.0]
This study focuses on implicit communication, in particular on presuppositions and their functions in discourse.
This study also aims to contribute to the emerging literature on the pragmatic competences of Large Language Models.
arXiv Detail & Related papers (2024-11-27T14:46:41Z)
- GPT as ghostwriter at the White House [1.7948767405202701]
We analyze the written style of one large language model called ChatGPT 3.5 by comparing its generated messages with those of the recent US presidents.
We found that ChatGPT tends to overuse the lemma "we" as well as nouns and commas.
We show that GPT's style exhibits distinct features compared with real presidential addresses.
arXiv Detail & Related papers (2024-11-27T14:12:36Z)
- Measuring Bullshit in the Language Games played by ChatGPT [41.94295877935867]
Generative large language models (LLMs) create text without direct correspondence to truth value.
LLMs resemble the uses of language described in Frankfurt's popular monograph On Bullshit.
We show that a statistical model of the language of bullshit can reliably relate the Frankfurtian artificial bullshit of ChatGPT to the political and workplace functions of bullshit.
arXiv Detail & Related papers (2024-11-22T18:55:21Z)
- Comprehensive Assessment of Toxicity in ChatGPT [49.71090497696024]
We evaluate the toxicity in ChatGPT by utilizing instruction-tuning datasets.
Prompts in creative writing tasks can be 2x more likely to elicit toxic responses.
Certain deliberately toxic prompts, designed in earlier studies, no longer yield harmful responses.
arXiv Detail & Related papers (2023-11-03T14:37:53Z)
- Playing with Words: Comparing the Vocabulary and Lexical Richness of ChatGPT and Humans [3.0059120458540383]
Generative language models such as ChatGPT have triggered a revolution that can transform how text is generated.
Will the use of tools such as ChatGPT increase or reduce the vocabulary used or the lexical richness?
This has implications for vocabulary: words not included in AI-generated content may become less and less popular and may eventually fall out of use.
arXiv Detail & Related papers (2023-08-14T21:19:44Z)
- Is ChatGPT Involved in Texts? Measure the Polish Ratio to Detect ChatGPT-Generated Text [48.36706154871577]
We introduce a novel dataset termed HPPT (ChatGPT-polished academic abstracts).
It diverges from extant corpora by comprising pairs of human-written and ChatGPT-polished abstracts instead of purely ChatGPT-generated texts.
We also propose the "Polish Ratio" method, an innovative measure of the degree of modification made by ChatGPT compared to the original human-written text.
arXiv Detail & Related papers (2023-07-21T06:38:37Z)
- ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer [13.83503100145004]
We conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks.
We evaluate the faithfulness of the generated text and compare the model's performance with human-authored texts.
We observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
arXiv Detail & Related papers (2023-06-13T14:21:35Z)
- Simple Linguistic Inferences of Large Language Models (LLMs): Blind Spots and Blinds [59.71218039095155]
We evaluate language understanding capacities on simple inference tasks that most humans find trivial.
We target (i) grammatically-specified entailments, (ii) premises with evidential adverbs of uncertainty, and (iii) monotonicity entailments.
The models exhibit moderate to low performance on these evaluation sets.
arXiv Detail & Related papers (2023-05-24T06:41:09Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations [52.992875653864076]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
- Uncovering the Potential of ChatGPT for Discourse Analysis in Dialogue: An Empirical Study [51.079100495163736]
This paper systematically inspects ChatGPT's performance in two discourse analysis tasks: topic segmentation and discourse parsing.
ChatGPT demonstrates proficiency in identifying topic structures in general-domain conversations yet struggles considerably in specific-domain conversations.
Our deeper investigation indicates that ChatGPT can give more reasonable topic structures than human annotations but only linearly parses the hierarchical rhetorical structures.
arXiv Detail & Related papers (2023-05-15T07:14:41Z)
- AI, write an essay for me: A large-scale comparison of human-written versus ChatGPT-generated essays [66.36541161082856]
ChatGPT and similar generative AI models have attracted hundreds of millions of users.
This study compares human-written versus ChatGPT-generated argumentative student essays.
arXiv Detail & Related papers (2023-04-24T12:58:28Z)
- To ChatGPT, or not to ChatGPT: That is the question! [78.407861566006]
This study provides a comprehensive and contemporary assessment of the most recent techniques in ChatGPT detection.
We have curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains.
Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content.
arXiv Detail & Related papers (2023-04-04T03:04:28Z)
- Consistency Analysis of ChatGPT [65.268245109828]
This paper investigates the trustworthiness of ChatGPT and GPT-4 regarding logically consistent behaviour.
Our findings suggest that while both models appear to show an enhanced language understanding and reasoning ability, they still frequently fall short of generating logically consistent predictions.
arXiv Detail & Related papers (2023-03-11T01:19:01Z)
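Among the related papers above, the "Polish Ratio" quantifies how much ChatGPT modified a human-written text. A minimal sketch of that general idea, using normalized token-level similarity from Python's `difflib` rather than the paper's actual definition:

```python
# Hedged sketch: a simple modification ratio between an original text and a
# polished version, via difflib sequence similarity. This illustrates the
# general idea only; it is not the paper's actual "Polish Ratio" formula.
from difflib import SequenceMatcher

def modification_ratio(original: str, polished: str) -> float:
    """Return 1 - similarity: 0.0 means identical, values near 1.0 mean heavily rewritten."""
    sim = SequenceMatcher(None, original.split(), polished.split()).ratio()
    return 1.0 - sim

print(modification_ratio("the cat sat on the mat", "the cat sat on the mat"))  # 0.0
print(modification_ratio("we will stop the decline", "we will reverse the decline"))
```

A detector trained on such a continuous score, instead of a binary human/machine label, is what lets the HPPT setting handle the intermediate case of human text merely polished by ChatGPT.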
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.