Can LLMs Help Predict Elections? (Counter)Evidence from the World's Largest Democracy
- URL: http://arxiv.org/abs/2405.07828v1
- Date: Mon, 13 May 2024 15:13:23 GMT
- Title: Can LLMs Help Predict Elections? (Counter)Evidence from the World's Largest Democracy
- Authors: Pratik Gujral, Kshitij Awaldhi, Navya Jain, Bhavuk Bhandula, Abhijnan Chakraborty
- Abstract summary: The study of how social media affects the formation of public opinion and its influence on political results has been a popular field of inquiry.
We introduce a new method: harnessing the capabilities of Large Language Models (LLMs) to examine social media data and forecast election outcomes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The study of how social media affects the formation of public opinion and its influence on political results has been a popular field of inquiry. However, current approaches frequently offer a limited comprehension of complex political phenomena, yielding inconsistent outcomes. In this work, we introduce a new method: harnessing the capabilities of Large Language Models (LLMs) to examine social media data and forecast election outcomes. Our research diverges from traditional methodologies in two crucial respects. First, we utilize the sophisticated capabilities of foundational LLMs, which can comprehend the complex linguistic subtleties and contextual details present in social media data. Second, we focus on data from X (Twitter) in India to predict state assembly election outcomes. Our method entails sentiment analysis of election-related tweets through LLMs to forecast the actual election results, and we demonstrate the superiority of our LLM-based method over traditional exit and opinion polls. Overall, our research offers valuable insights into the unique dynamics of Indian politics and the remarkable impact of social media in molding public attitudes within this context.
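The abstract describes a pipeline of LLM-based sentiment analysis over election-related tweets, aggregated into a forecast. A minimal sketch of that aggregation step is below; the paper does not publish its implementation, so `classify_sentiment` is a toy stand-in for a real LLM call, and the party names and tweets are invented for illustration.

```python
def classify_sentiment(tweet: str) -> str:
    """Toy stand-in for an LLM sentiment call. A real system would prompt
    a foundation model with the tweet text and parse its returned label."""
    positive = ("great", "support", "win")
    negative = ("fail", "corrupt", "lose")
    text = tweet.lower()
    if any(w in text for w in positive):
        return "positive"
    if any(w in text for w in negative):
        return "negative"
    return "neutral"

def forecast(tweets_by_party: dict[str, list[str]]) -> dict[str, float]:
    """Turn per-party tweet sentiment into normalised forecast shares."""
    net = {}
    for party, tweets in tweets_by_party.items():
        labels = [classify_sentiment(t) for t in tweets]
        net[party] = labels.count("positive") - labels.count("negative")
    # Shift net scores so all are positive, then normalise to shares.
    shift = min(net.values())
    shifted = {p: s - shift + 1 for p, s in net.items()}
    total = sum(shifted.values())
    return {p: s / total for p, s in shifted.items()}

shares = forecast({"Party A": ["great rally", "will win"],
                   "Party B": ["corrupt deal"]})
```

The shift-and-normalise step is one of many possible ways to map raw net-sentiment counts onto vote-share-like numbers; the paper's actual mapping is not specified in the abstract.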
Related papers
- Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z) - Sampled Datasets Risk Substantial Bias in the Identification of Political Polarization on Social Media [34.192291430580454]
We study the structural polarization of the Polish political debate on Twitter over a 24-hour period.
Large samples can be representative of the whole political discussion on a platform, but small samples consistently fail to accurately reflect the true structure of polarization online.
arXiv Detail & Related papers (2024-06-28T12:13:29Z) - Assessing Political Bias in Large Language Models [0.624709220163167]
We evaluate the political bias of open-source Large Language Models (LLMs) concerning political issues within the European Union (EU) from a German voter's perspective.
We show that larger models, such as Llama3-70B, tend to align more closely with left-leaning political parties, while smaller models often remain neutral.
arXiv Detail & Related papers (2024-05-17T15:30:18Z) - Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z) - Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z) - Inducing Political Bias Allows Language Models Anticipate Partisan Reactions to Controversies [5.958974943807783]
This study addresses the challenge of understanding political bias in digitized discourse using Large Language Models (LLMs)
We present a comprehensive analytical framework, consisting of Partisan Bias Divergence Assessment and Partisan Class Tendency Prediction.
Our findings reveal the model's effectiveness in capturing emotional and moral nuances, albeit with some challenges in stance detection.
arXiv Detail & Related papers (2023-11-16T08:57:53Z) - Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z) - Prediction of the 2023 Turkish Presidential Election Results Using Social Media Data [0.5156484100374059]
We aim to predict the vote shares of parties participating in the 2023 elections in Turkey by combining social media data with traditional polling data.
Our approach is a volume-based approach that considers the number of social media interactions rather than content.
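A volume-based predictor of this kind, using interaction counts rather than content, reduces to a proportionality assumption: each party's predicted vote share is its fraction of total election-related interactions. A minimal sketch with invented counts:

```python
def volume_vote_shares(interactions: dict[str, int]) -> dict[str, float]:
    """Estimate vote shares from relative social-media interaction volume.

    Assumes interaction volume is proportional to voter support, which is
    the core (and contestable) premise of volume-based election models.
    """
    total = sum(interactions.values())
    if total == 0:
        raise ValueError("no interactions observed")
    return {party: count / total for party, count in interactions.items()}

# Invented counts for illustration only.
shares = volume_vote_shares({"Party X": 600_000,
                             "Party Y": 300_000,
                             "Party Z": 100_000})
```

Real volume-based models typically also correct for bot activity, duplicate users, and the mismatch between the platform's user base and the electorate; none of that is shown here.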
arXiv Detail & Related papers (2023-05-28T13:17:51Z) - The Face of Populism: Examining Differences in Facial Emotional Expressions of Political Leaders Using Machine Learning [57.70351255180495]
We apply a deep-learning-based computer-vision algorithm to a sample of 220 YouTube videos depicting political leaders from 15 different countries.
We observe statistically significant differences in the average score of expressed negative emotions between groups of leaders with varying degrees of populist rhetoric.
arXiv Detail & Related papers (2023-04-19T18:32:49Z) - Design and analysis of tweet-based election models for the 2021 Mexican legislative election [55.41644538483948]
We use a dataset of 15 million election-related tweets in the six months preceding election day.
We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods.
arXiv Detail & Related papers (2023-01-02T12:40:05Z) - Inferring Political Preferences from Twitter [0.0]
Political sentiment analysis of social media helps political strategists scrutinize the performance of a party or candidate.
During elections, social networks are flooded with blogs, chats, debates and discussions about the prospects of political parties and politicians.
In this work, we identify the inclination of political opinions expressed in tweets by modelling the task as a text classification problem using classical machine learning.
arXiv Detail & Related papers (2020-07-21T05:20:43Z)
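The classical text-classification setup described in the last entry above can be sketched with a multinomial Naive Bayes classifier over a bag-of-words representation, a common baseline for this kind of task. The listing names no specific algorithm, so this choice and the toy labelled tweets are assumptions for illustration.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTextClassifier:
    """Multinomial Naive Bayes over bag-of-words, with add-one smoothing."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)  # per-class word frequencies
        self.class_counts = Counter(labels)      # class priors
        self.vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = text.lower().split()
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label, doc_count in self.class_counts.items():
            score = math.log(doc_count / total_docs)  # log class prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:  # add-one-smoothed log likelihoods
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Invented toy training data; a real system would use labelled tweets.
clf = NaiveBayesTextClassifier().fit(
    ["vote for party a", "party a will win", "party b is great", "support party b"],
    ["a", "a", "b", "b"],
)
```

In practice such work usually relies on a library implementation (e.g. TF-IDF features plus a linear classifier) rather than hand-rolled code; this sketch only shows the underlying mechanics.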
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.