Demonstrations of the Potential of AI-based Political Issue Polling
- URL: http://arxiv.org/abs/2307.04781v2
- Date: Sat, 26 Aug 2023 16:32:26 GMT
- Title: Demonstrations of the Potential of AI-based Political Issue Polling
- Authors: Nathan E. Sanders, Alex Ulinich, Bruce Schneier
- Abstract summary: We develop a prompt engineering methodology for eliciting human-like survey responses from ChatGPT.
We execute large-scale experiments, querying for thousands of simulated responses at a cost far lower than human surveys.
We find that ChatGPT is effective at anticipating both the mean level and distribution of public opinion on a variety of policy issues,
but it is less successful at anticipating demographic-level differences.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Political polling is a multi-billion dollar industry with outsized influence
on the societal trajectory of the United States and nations around the world.
However, it has been challenged by factors that stress its cost, availability,
and accuracy. At the same time, artificial intelligence (AI) chatbots have
become compelling stand-ins for human behavior, powered by increasingly
sophisticated large language models (LLMs). Could AI chatbots be an effective
tool for anticipating public opinion on controversial issues to the extent that
they could be used by campaigns, interest groups, and polling firms? We have
developed a prompt engineering methodology for eliciting human-like survey
responses from ChatGPT: each simulated response answers a policy question as a
person described by a set of demographic factors would, and includes both an
ordinal numeric response score and a textual justification. We execute
large-scale experiments, querying for thousands of simulated responses at a cost far lower
than human surveys. We compare simulated data to human issue polling data from
the Cooperative Election Study (CES). We find that ChatGPT is effective at
anticipating both the mean level and distribution of public opinion on a
variety of policy issues such as abortion bans and approval of the US Supreme
Court, particularly in their ideological breakdown (correlation typically
>85%). However, it is less successful at anticipating demographic-level
differences. Moreover, ChatGPT tends to overgeneralize to new policy issues
that arose after its training data was collected, such as US support for
involvement in the war in Ukraine. Our work has implications for our
understanding of the strengths and limitations of the current generation of AI
chatbots as virtual publics or online listening platforms, future directions
for LLM development, and applications of AI tools to the political domain.
(Abridged)
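A minimal sketch of how such demographic-conditioned survey prompts might be issued and parsed, assuming the `openai` Python client; the persona fields, prompt wording, model name, 1-5 answer scale, and parsing step are illustrative assumptions, not the authors' exact protocol:

```python
# Illustrative sketch only: persona fields, prompt wording, model, and scale
# are assumptions, not the paper's published protocol.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simulate_response(persona: dict, question: str):
    """Answer a policy question as the described persona, returning an
    ordinal score (1-5) and the model's textual justification."""
    description = ", ".join(f"{k}: {v}" for k, v in persona.items())
    prompt = (
        f"Answer a public opinion survey as a person with these characteristics: {description}.\n"
        f"Question: {question}\n"
        "Reply in the form 'Score: <1-5>. Justification: <one sentence>', "
        "where 1 means strongly oppose and 5 means strongly support."
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    match = re.search(r"Score:\s*([1-5])", reply)
    score = int(match.group(1)) if match else None
    return score, reply

# Example persona; thousands of such calls, with personas drawn to match a
# survey frame, yield a simulated sample whose mean and distribution can be
# compared against human benchmarks such as the CES.
persona = {"age": 45, "gender": "woman", "ideology": "conservative",
           "education": "college degree", "state": "Ohio"}
score, justification = simulate_response(
    persona, "Do you support a ban on abortion after six weeks of pregnancy?")
```

Leaving the sampling temperature at its nonzero default means repeated queries for the same persona vary, which is what lets a distribution of simulated opinion, rather than a single deterministic answer, be estimated.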
Related papers
- Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in AI language models on political decision-making.
We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z) - It is Time to Develop an Auditing Framework to Promote Value Aware Chatbots [3.539967259383779]
We argue that the speed of advancement of this technology requires us to mobilize and develop a values-based auditing framework.
We identify responses from GPT-3.5 and GPT-4 that are both consistent and inconsistent with values derived from existing law.
We conclude this paper with recommendations for value-based strategies for improving the technologies.
arXiv Detail & Related papers (2024-09-03T02:15:34Z) - Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z) - Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z) - In Generative AI we Trust: Can Chatbots Effectively Verify Political Information? [39.58317527488534]
This article presents a comparative analysis of the ability of two large language model (LLM)-based chatbots, ChatGPT and Bing Chat, to detect the veracity of political information.
We use AI auditing methodology to investigate how chatbots evaluate true, false, and borderline statements on five topics: COVID-19, Russian aggression against Ukraine, the Holocaust, climate change, and LGBTQ+ related debates.
The results show high performance of ChatGPT for the baseline veracity evaluation task, with 72 percent of the cases evaluated correctly on average across languages without pre-training.
arXiv Detail & Related papers (2023-12-20T15:17:03Z) - Characteristics of ChatGPT users from Germany: implications for the digital divide from web tracking data [2.638878351659023]
We examine user characteristics that predict usage of the AI-powered conversational agent ChatGPT.
We find full-time employment and more children to be barriers to ChatGPT activity.
arXiv Detail & Related papers (2023-09-05T11:31:54Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to a deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation [0.0]
OpenAI introduced ChatGPT, a state-of-the-art dialogue model that can converse with its human counterparts.
This paper focuses on one of democratic society's most important decision-making processes: political elections.
We uncover ChatGPT's pro-environmental, left-libertarian ideology.
arXiv Detail & Related papers (2023-01-05T07:13:13Z) - Design and analysis of tweet-based election models for the 2021 Mexican legislative election [55.41644538483948]
We use a dataset of 15 million election-related tweets in the six months preceding election day.
We find that models using data with geographical attributes predict the results of the election with better precision and accuracy than conventional polling methods.
arXiv Detail & Related papers (2023-01-02T12:40:05Z) - Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using the Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) to explain the ML model's predictions (a minimal usage sketch of this combination follows the related-papers list below).
arXiv Detail & Related papers (2021-12-08T14:12:24Z) - The Threats of Artificial Intelligence Scale (TAI). Development, Measurement and Test Over Three Application Domains [0.0]
Opinion polls frequently query the public's fear of autonomous robots and artificial intelligence (FARAI).
We propose a fine-grained scale to measure threat perceptions of AI that accounts for four functional classes of AI systems and is applicable to various domains of AI applications.
The data support the dimensional structure of the proposed Threats of AI (TAI) scale as well as the internal consistency and factorial validity of the indicators.
arXiv Detail & Related papers (2020-06-12T14:15:02Z)
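The bot-identification entry above names a concrete pipeline: XGBoost for supervised classification plus SHAP for explanation. A minimal sketch of that combination, assuming the `xgboost`, `shap`, `pandas`, and `scikit-learn` Python packages; the input file and feature names are hypothetical, and only the library usage pattern is illustrated:

```python
# Hypothetical data file and features; only the XGBoost + SHAP usage pattern
# named in the entry above is meant to be illustrative.
import pandas as pd
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Account-level features (e.g., follower count, tweet rate, account age)
# with a binary bot/human label.
df = pd.read_csv("twitter_accounts.csv")          # hypothetical path
X, y = df.drop(columns=["is_bot"]), df["is_bot"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Supervised classifier: Extreme Gradient Boosting.
model = xgb.XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# SHAP decomposes each prediction into additive per-feature contributions,
# giving a global view of which features drive the bot/human decision.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```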
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.