Classifying populist language in American presidential and governor speeches using automatic text analysis
- URL: http://arxiv.org/abs/2408.15213v1
- Date: Tue, 27 Aug 2024 17:19:57 GMT
- Title: Classifying populist language in American presidential and governor speeches using automatic text analysis
- Authors: Olaf van der Veen, Semir Dzebo, Levi Littvay, Kirk Hawkins, Oren Dar,
- Abstract summary: We develop a pipeline to train and validate an automated classification model to estimate the use of populist language.
We find that these models classify most speeches correctly, including 84% of governor speeches and 89% of presidential speeches.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Populism is a concept that is often used but notoriously difficult to measure. Common qualitative measurements like holistic grading or content analysis require great amounts of time and labour, making it difficult to quickly scope out which politicians should be classified as populist and which should not, while quantitative methods show mixed results when it comes to classifying populist rhetoric. In this paper, we develop a pipeline to train and validate an automated classification model to estimate the use of populist language. We train models based on sentences that were identified as populist and pluralist in 300 US governors' speeches from 2010 to 2018 and in 45 speeches of presidential candidates in 2016. We find that these models classify most speeches correctly, including 84% of governor speeches and 89% of presidential speeches. These results extend to different time periods (with 92% accuracy on more recent American governors), different amounts of data (with as few as 70 training sentences per category achieving similar results), and when classifying politicians instead of individual speeches. This pipeline is thus an effective tool that can optimise the systematic and swift classification of the use of populist language in politicians' speeches.
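The pipeline described in the abstract — train on hand-coded populist/pluralist sentences, then classify whole speeches — could be sketched in highly simplified form as a naive Bayes sentence classifier whose per-sentence labels are aggregated to the speech level. This is an illustrative reconstruction, not the authors' actual model; the training phrases and the majority-share aggregation rule are assumptions for demonstration.

```python
from collections import Counter
import math

def train(sentences_by_label):
    """Fit per-class word log-probabilities with add-one smoothing.
    sentences_by_label: dict mapping a label ("populist"/"pluralist")
    to a list of toy training sentences standing in for the paper's
    hand-coded data."""
    counts = {lab: Counter(w for s in sents for w in s.lower().split())
              for lab, sents in sentences_by_label.items()}
    vocab = {w for c in counts.values() for w in c}
    model = {}
    for lab, c in counts.items():
        total = sum(c.values()) + len(vocab)  # add-one smoothing denominator
        model[lab] = {w: math.log((c[w] + 1) / total) for w in vocab}
        model[lab]["__default__"] = math.log(1 / total)  # unseen words
    return model

def score(model, sentence):
    """Return the label whose summed word log-probability is highest."""
    words = sentence.lower().split()
    def loglik(lab):
        probs = model[lab]
        return sum(probs.get(w, probs["__default__"]) for w in words)
    return max(model, key=loglik)

def classify_speech(model, speech_sentences, threshold=0.5):
    """Label a speech populist if the share of populist sentences
    exceeds the threshold (an illustrative aggregation rule, not
    necessarily the one used in the paper)."""
    labels = [score(model, s) for s in speech_sentences]
    share = labels.count("populist") / len(labels)
    return "populist" if share > threshold else "pluralist"
```

With a handful of labeled sentences per category, `train` builds the model and `classify_speech` applies the sentence-then-speech aggregation the abstract describes; the paper reports that as few as 70 training sentences per category already achieve similar accuracy.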
Related papers
- Speechworthy Instruction-tuned Language Models [71.8586707840169]
We show that both prompting and preference learning increase the speech-suitability of popular instruction-tuned LLMs.
We share lexical, syntactical, and qualitative analyses to showcase how each method contributes to improving the speech-suitability of generated responses.
arXiv Detail & Related papers (2024-09-23T02:34:42Z) - Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z) - Quantifying the Uniqueness of Donald Trump in Presidential Discourse [51.76056700705539]
This paper introduces a novel metric of uniqueness based on large language models.
We find considerable evidence that Trump's speech patterns diverge from those of all major party nominees for the presidency in recent history.
arXiv Detail & Related papers (2024-01-02T19:00:17Z) - PopBERT - Detecting populism and its host ideologies in the German Bundestag [0.0]
This paper aims to provide a reliable, valid, and scalable approach to measure populist stances.
We label moralizing references to the virtuous people or the corrupt elite as core dimensions of populist language.
In addition, to identify how the thin ideology of populism is thickened, we annotate whether populist statements are attached to left-wing or right-wing host ideologies.
arXiv Detail & Related papers (2023-09-22T14:48:02Z) - The Face of Populism: Examining Differences in Facial Emotional Expressions of Political Leaders Using Machine Learning [50.24983453990065]
We use a deep-learning approach to process a sample of 220 YouTube videos of political leaders from 15 different countries.
We observe statistically significant differences in the average score of negative emotions between groups of leaders with varying degrees of populist rhetoric.
arXiv Detail & Related papers (2023-04-19T18:32:49Z) - Design and analysis of tweet-based election models for the 2021 Mexican legislative election [55.41644538483948]
We use a dataset of 15 million election-related tweets in the six months preceding election day.
We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods.
arXiv Detail & Related papers (2023-01-02T12:40:05Z) - United States Politicians' Tone Became More Negative with 2016 Primary Campaigns [11.712441267029092]
We apply psycholinguistic tools to a novel, comprehensive corpus of 24 million quotes from online news attributed to 18,627 US politicians.
We show that, whereas the frequency of negative emotion words had decreased continuously during Obama's tenure, it suddenly and lastingly increased with the 2016 primary campaigns.
This work provides the first large-scale data-driven evidence of a drastic shift toward a more negative political tone following Trump's campaign start as a catalyst.
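The negative-tone analysis summarized above rests on counting emotion words in quotes. A minimal lexicon-based version might look like the following, where the mini-lexicon is hypothetical and stands in for the established psycholinguistic categories (LIWC-style) that the study actually used.

```python
# Hypothetical mini-lexicon of negative emotion words; the study used
# established psycholinguistic tools, not this exact list.
NEGATIVE = {"fear", "hate", "crisis", "failure", "corrupt", "disaster"}

def negative_share(quote):
    """Fraction of tokens in one quote that are negative emotion words."""
    words = quote.lower().split()
    return sum(w in NEGATIVE for w in words) / len(words) if words else 0.0

def mean_negative_share(quotes):
    """Average negative-word share over a set of quotes, e.g. all quotes
    attributed to one politician in one month; tracking this mean over
    time is how a sudden, lasting shift in tone would show up."""
    return sum(negative_share(q) for q in quotes) / len(quotes)
```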
arXiv Detail & Related papers (2022-07-17T08:41:14Z) - Prediction of Listener Perception of Argumentative Speech in a Crowdsourced Data Using (Psycho-)Linguistic and Fluency Features [24.14001104126045]
We aim to predict TED talk-style affective ratings in a crowdsourced dataset of argumentative speech.
We present an effective approach to the classification task of predicting these categories through fine-tuning a model pre-trained on a large dataset of TED talks public speeches.
arXiv Detail & Related papers (2021-11-13T15:07:13Z) - Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models [104.41668491794974]
We quantify the usage of adjectives and verbs generated by language models surrounding the names of politicians as a function of their gender.
We find that while some words such as "dead" and "designated" are associated with both male and female politicians, a few specific words such as "beautiful" and "divorced" are predominantly associated with female politicians.
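The kind of quantification described above — counting which words surround politicians' names, split by gender — can be sketched as a co-occurrence window count. This is an illustrative simplification over raw token windows; the paper itself analyses adjectives and verbs in language-model generations, and the function and parameter names here are assumptions.

```python
from collections import Counter, defaultdict

def cooccurrence(texts_by_gender, target_names, window=3):
    """Count words appearing within `window` tokens of any target name,
    separately per gender group. Returns dict: gender -> Counter."""
    out = defaultdict(Counter)
    for gender, texts in texts_by_gender.items():
        for text in texts:
            toks = text.lower().split()
            for i, tok in enumerate(toks):
                if tok in target_names:
                    lo, hi = max(0, i - window), i + window + 1
                    # count neighbours, skipping the name itself
                    out[gender].update(w for j, w in
                                       enumerate(toks[lo:hi], lo) if j != i)
    return out
```

Comparing the resulting per-gender counters (e.g. by relative frequency) is one simple way to surface words that attach predominantly to one group.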
arXiv Detail & Related papers (2021-04-15T15:03:26Z) - How Metaphors Impact Political Discourse: A Large-Scale Topic-Agnostic Study Using Neural Metaphor Detection [29.55309950026882]
We present a large-scale data-driven study of metaphors used in political discourse.
We show that metaphor use correlates with ideological leanings in complex ways that depend on concurrent political events such as winning or losing elections.
We show that posts with metaphors elicit more engagement from their audience overall even after controlling for various socio-political factors such as gender and political party affiliation.
arXiv Detail & Related papers (2021-04-08T17:16:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.