Large Language Models can impersonate politicians and other public figures
- URL: http://arxiv.org/abs/2407.12855v1
- Date: Tue, 9 Jul 2024 11:16:19 GMT
- Title: Large Language Models can impersonate politicians and other public figures
- Authors: Steffen Herbold, Alexander Trautsch, Zlata Kikteva, Annette Hautli-Janisz
- Abstract summary: Modern AI technology such as large language models (LLMs) has the potential to pollute the public information sphere with made-up content.
We present the results of a study based on a cross-section of British society.
The study shows that LLMs can generate responses to debate questions from a broadcast political debate programme in the UK, and that the impersonated responses are judged more authentic and relevant than the originals.
- Score: 47.2573979612036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern AI technology like large language models (LLMs) has the potential to pollute the public information sphere with made-up content, which poses a significant threat to the cohesion of societies at large. A wide range of research has shown that LLMs are capable of generating text of impressive quality, including persuasive political speech, text with a pre-defined style, and role-specific content. But there is a crucial gap in the literature: we lack large-scale and systematic studies of how capable LLMs are in impersonating political and societal representatives and how the general public judges these impersonations in terms of authenticity, relevance and coherence. We present the results of a study based on a cross-section of British society that shows that LLMs are able to generate responses to debate questions that were part of a broadcast political debate programme in the UK. The impersonated responses are judged to be more authentic and relevant than the original responses given by people who were impersonated. This shows two things: (1) LLMs can be made to contribute meaningfully to the public political debate and (2) there is a dire need to inform the general public of the potential harm this can have on society.
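As an illustration of the kind of impersonation the abstract describes, below is a minimal sketch of a persona prompt sent to an LLM. The paper does not publish its prompts or model configuration; the `openai` client, the model name, and the prompt wording here are assumptions for illustration only.

```python
# Minimal sketch: ask an LLM to answer a debate question in the voice of a
# public figure. All specifics (model, prompt, API) are illustrative
# assumptions, not the study's published setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def impersonate(figure: str, question: str) -> str:
    """Generate a debate answer in the voice of the given public figure."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the study's models are not named here
        messages=[
            {
                "role": "system",
                "content": (
                    f"You are {figure}. Answer the following debate question "
                    "as they would on a live political debate programme: "
                    "first person, their known positions, their rhetorical "
                    "style, roughly 150 words."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Hypothetical question in the style of a UK debate programme
print(impersonate("a senior UK politician",
                  "Should the government raise taxes to fund the NHS?"))
```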
Related papers
- Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z)
- Stars, Stripes, and Silicon: Unravelling the ChatGPT's All-American, Monochrome, Cis-centric Bias [0.0]
The paper calls for interdisciplinary efforts to address these challenges.
It highlights the need for collaboration between researchers, practitioners, and stakeholders to establish governance frameworks.
arXiv Detail & Related papers (2024-10-02T08:55:00Z)
- Assessing Political Bias in Large Language Models [0.624709220163167]
We evaluate the political bias of open-source Large Language Models (LLMs) concerning political issues within the European Union (EU) from a German voter's perspective.
We show that larger models, such as Llama3-70B, tend to align more closely with left-leaning political parties, while smaller models often remain neutral.
arXiv Detail & Related papers (2024-05-17T15:30:18Z)
- Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political alignment of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z)
- Large Language Models: A Survey [69.72787936480394]
Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks.
LLMs acquire their general-purpose language understanding and generation abilities by training billions of model parameters on massive amounts of text data.
arXiv Detail & Related papers (2024-02-09T05:37:09Z)
- Generative Echo Chamber? Effects of LLM-Powered Search Systems on Diverse Information Seeking [49.02867094432589]
Large language models (LLMs) powered conversational search systems have already been used by hundreds of millions of people.
We investigate whether and how LLMs with opinion biases that either reinforce or challenge the user's view change the effect.
arXiv Detail & Related papers (2024-02-08T18:14:33Z)
- Vox Populi, Vox ChatGPT: Large Language Models, Education and Democracy [0.0]
This paper explores the potential transformative impact of large language models (LLMs) on democratic societies.
The discussion emphasizes the essence of authorship, rooted in the unique human capacity for reason.
We advocate for an emphasis on education as a means to mitigate risks.
arXiv Detail & Related papers (2023-11-10T17:47:46Z)
- Opportunities and Risks of LLMs for Scalable Deliberation with Polis [7.211025984598187]
Polis is a platform that leverages machine intelligence to scale up deliberative processes.
This paper explores the opportunities and risks associated with applying Large Language Models (LLMs) towards challenges with facilitating, moderating and summarizing the results of Polis engagements.
arXiv Detail & Related papers (2023-06-20T22:52:51Z)
- Leveraging Large Language Models for Topic Classification in the Domain of Public Affairs [65.9077733300329]
Large Language Models (LLMs) have the potential to greatly enhance the analysis of public affairs documents.
They can be of great use in processing such domain-specific documents; a minimal sketch of this idea follows the list below.
arXiv Detail & Related papers (2023-06-05T13:35:01Z)
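In the spirit of the last entry above, here is a minimal sketch of zero-shot topic classification of a public-affairs document with an LLM. The topic labels, prompt wording, model choice, and the `openai` client usage are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch: zero-shot topic classification with an LLM. The label set
# and prompt are hypothetical, not the paper's taxonomy or pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical public-affairs topic labels for illustration only
TOPICS = ["healthcare", "taxation", "education", "defence", "environment"]

def classify(document: str) -> str:
    """Return the single topic label the model judges most fitting."""
    prompt = (
        "Classify the following public-affairs document into exactly one of "
        f"these topics: {', '.join(TOPICS)}.\n\nDocument:\n{document}\n\n"
        "Answer with the topic name only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip().lower()

print(classify("The bill proposes additional funding for hospital staffing."))
```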
This list is automatically generated from the titles and abstracts of the papers on this site.