Framing Political Bias in Multilingual LLMs Across Pakistani Languages
- URL: http://arxiv.org/abs/2506.00068v2
- Date: Thu, 31 Jul 2025 04:41:18 GMT
- Title: Framing Political Bias in Multilingual LLMs Across Pakistani Languages
- Authors: Afrozah Nadeem, Mark Dras, Usman Naseem
- Abstract summary: We present a systematic evaluation of political bias in 13 state-of-the-art Large Language Models (LLMs) across five Pakistani languages. Our framework integrates a culturally adapted Political Compass Test (PCT) with multi-level framing analysis. Results show that while LLMs predominantly reflect liberal-left orientations consistent with Western training data, they exhibit more authoritarian framing in regional languages.
- Score: 6.5137518437747
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) increasingly shape public discourse, yet most evaluations of political and economic bias have focused on high-resource, Western languages and contexts. This leaves critical blind spots in low-resource, multilingual regions such as Pakistan, where linguistic identity is closely tied to political, religious, and regional ideologies. We present a systematic evaluation of political bias in 13 state-of-the-art LLMs across five Pakistani languages: Urdu, Punjabi, Sindhi, Pashto, and Balochi. Our framework integrates a culturally adapted Political Compass Test (PCT) with multi-level framing analysis, capturing both ideological stance (economic/social axes) and stylistic framing (content, tone, emphasis). Prompts are aligned with 11 socio-political themes specific to the Pakistani context. Results show that while LLMs predominantly reflect liberal-left orientations consistent with Western training data, they exhibit more authoritarian framing in regional languages, highlighting language-conditioned ideological modulation. We also identify consistent model-specific bias patterns across languages. These findings show the need for culturally grounded, multilingual bias auditing frameworks in global NLP.
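A minimal sketch of how such a PCT-style probe might be administered and scored per language is given below. The statement set, the `query_model` wrapper, and the axis/direction annotations are illustrative assumptions for exposition, not the authors' released code.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Likert replies mapped to a numeric agreement score.
LIKERT = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}

@dataclass
class PCTStatement:
    text: str       # statement translated/culturally adapted for the target language
    axis: str       # "economic" or "social"
    direction: int  # +1 if agreement maps to right/authoritarian, -1 otherwise

def score_pct(statements: List[PCTStatement],
              query_model: Callable[[str], str]) -> Dict[str, float]:
    """Return the mean ideological position per axis, in [-2, 2]."""
    totals = {"economic": 0.0, "social": 0.0}
    counts = {"economic": 0, "social": 0}
    for s in statements:
        reply = query_model(
            "Respond with exactly one of: strongly agree, agree, "
            f"disagree, strongly disagree.\nStatement: {s.text}"
        ).strip().lower()
        if reply in LIKERT:  # ignore refusals or off-format answers
            totals[s.axis] += s.direction * LIKERT[reply]
            counts[s.axis] += 1
    return {axis: totals[axis] / max(counts[axis], 1) for axis in totals}
```

Under this sign convention, negative values correspond to left/libertarian positions and positive values to right/authoritarian ones; a language-conditioned comparison then amounts to running `score_pct` with the same adapted statements in each language and comparing the resulting (economic, social) points.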
Related papers
- Do Political Opinions Transfer Between Western Languages? An Analysis of Unaligned and Aligned Multilingual LLMs [8.34389139211844]
Cross-cultural differences in political opinions may translate to cross-lingual differences in multilingual large language models (MLLMs). We analyze whether opinions transfer between languages or whether there are separate opinions for each language in MLLMs of various sizes across five Western languages. We conclude that in Western language contexts, political opinions transfer between languages, demonstrating the challenges in achieving explicit socio-linguistic, cultural, and political alignment of MLLMs.
arXiv Detail & Related papers (2025-08-07T16:33:45Z) - MyCulture: Exploring Malaysia's Diverse Culture under Low-Resource Language Constraints [7.822567458977689]
MyCulture is a benchmark designed to comprehensively evaluate Large Language Models (LLMs) on Malaysian culture. Unlike conventional benchmarks, MyCulture employs a novel open-ended multiple-choice question format without predefined options. We analyze structural bias by comparing model performance on structured versus free-form outputs, and assess language bias through multilingual prompt variations.
arXiv Detail & Related papers (2025-08-07T14:17:43Z) - Multilingual Political Views of Large Language Models: Identification and Steering [9.340686908318776]
Large language models (LLMs) are increasingly used in everyday tools and applications, raising concerns about their potential influence on political views. We evaluate seven models across 14 languages using the Political Compass Test with 11 semantically equivalent paraphrases per statement to ensure robust measurement. Our results reveal that larger models consistently shift toward libertarian-left positions, with significant variations across languages and model families.
arXiv Detail & Related papers (2025-07-30T12:42:35Z) - Democratic or Authoritarian? Probing a New Dimension of Political Biases in Large Language Models [72.89977583150748]
We propose a novel methodology to assess how Large Language Models align with broader geopolitical value systems. We find that LLMs generally favor democratic values and leaders, but exhibit increased favorability toward authoritarian figures when prompted in Mandarin.
arXiv Detail & Related papers (2025-06-15T07:52:07Z) - Geopolitical biases in LLMs: what are the "good" and the "bad" countries according to contemporary language models [52.00270888041742]
We introduce a novel dataset with neutral event descriptions and contrasting viewpoints from different countries. Our findings show significant geopolitical biases, with models favoring specific national narratives. Simple debiasing prompts had a limited effect on reducing these biases.
arXiv Detail & Related papers (2025-06-07T10:45:17Z) - Measuring South Asian Biases in Large Language Models [1.5903891569492878]
This work addresses gaps by conducting a multilingual and intersectional analysis of Large Language Models (LLMs). We construct a culturally grounded bias lexicon capturing previously unexplored intersectional dimensions including gender, religion, marital status, and number of children. We evaluate two self-debiasing strategies to measure their effectiveness in reducing culturally specific bias in Indo-Aryan and Dravidian languages.
arXiv Detail & Related papers (2025-05-24T02:18:17Z) - KOKKAI DOC: An LLM-driven framework for scaling parliamentary representatives [0.0]
This paper introduces an LLM-driven framework designed to accurately scale the political issue stances of parliamentary representatives. By leveraging advanced natural language processing techniques and large language models, the proposed methodology refines and enhances previous approaches. The framework incorporates three major innovations: (1) de-noising parliamentary speeches via summarization to produce cleaner, more consistent opinion embeddings; (2) automatic extraction of axes of political controversy from legislators' speech summaries; and (3) a diachronic analysis that tracks the evolution of party positions over time (see the pipeline sketch after this list).
arXiv Detail & Related papers (2025-05-11T21:03:53Z) - Language-Dependent Political Bias in AI: A Study of ChatGPT and Gemini [0.0]
This study investigates the political tendencies of large language models and whether those tendencies differ with the query language. ChatGPT and Gemini were subjected to a political axis test in 14 different languages. A comparative analysis revealed that Gemini exhibited a more pronounced liberal and left-wing tendency than ChatGPT.
arXiv Detail & Related papers (2025-04-08T21:13:01Z) - Mapping Geopolitical Bias in 11 Large Language Models: A Bilingual, Dual-Framing Analysis of U.S.-China Tensions [2.8202443616982884]
This study systematically analyzes geopolitical bias across 11 prominent Large Language Models (LLMs). We generated 19,712 prompts designed to detect ideological leanings in model outputs. U.S.-based models predominantly favored Pro-U.S. stances, while Chinese-origin models exhibited pronounced Pro-China biases.
arXiv Detail & Related papers (2025-03-31T03:38:17Z) - Beyond Partisan Leaning: A Comparative Analysis of Political Bias in Large Language Models [6.549047699071195]
This study adopts a persona-free, topic-specific approach to evaluate political behavior in large language models. We analyze responses from 43 large language models developed in the U.S., Europe, China, and the Middle East. Findings show most models lean center-left or left ideologically and vary in their nonpartisan engagement patterns.
arXiv Detail & Related papers (2024-12-21T19:42:40Z) - Large Language Models Reflect the Ideology of their Creators [71.65505524599888]
Large language models (LLMs) are trained on vast amounts of data to generate natural language. This paper shows that the ideological stance of an LLM appears to reflect the worldview of its creators.
arXiv Detail & Related papers (2024-10-24T04:02:30Z) - Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z) - Language Model Alignment in Multilingual Trolley Problems [138.5684081822807]
Building on the Moral Machine experiment, we develop a cross-lingual corpus of moral dilemma vignettes in over 100 languages called MultiTP. Our analysis explores the alignment of 19 different LLMs with human judgments, capturing preferences across six moral dimensions. We discover significant variance in alignment across languages, challenging the assumption of uniform moral reasoning in AI systems.
arXiv Detail & Related papers (2024-07-02T14:02:53Z) - Inducing Political Bias Allows Language Models Anticipate Partisan Reactions to Controversies [5.958974943807783]
This study addresses the challenge of understanding political bias in digitized discourse using Large Language Models (LLMs).
We present a comprehensive analytical framework, consisting of Partisan Bias Divergence Assessment and Partisan Class Tendency Prediction.
Our findings reveal the model's effectiveness in capturing emotional and moral nuances, albeit with some challenges in stance detection.
arXiv Detail & Related papers (2023-11-16T08:57:53Z) - Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z) - Multi-EuP: The Multilingual European Parliament Dataset for Analysis of Bias in Information Retrieval [62.82448161570428]
This dataset is designed to investigate fairness in a multilingual information retrieval context.
It boasts an authentic multilingual corpus, featuring topics translated into all 24 languages.
It offers rich demographic information associated with its documents, facilitating the study of demographic bias.
arXiv Detail & Related papers (2023-11-03T12:29:11Z) - Quantifying the Dialect Gap and its Correlates Across Languages [69.18461982439031]
This work will lay the foundation for furthering the field of dialectal NLP by laying out evident disparities and identifying possible pathways for addressing them through mindful data collection.
arXiv Detail & Related papers (2023-10-23T17:42:01Z)
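Pipeline sketch referenced in the KOKKAI DOC entry above: a rough skeleton, under stated assumptions, of the three stages that summary describes (de-noise speeches by summarization, extract an axis of controversy, track party positions over time). `summarize` and `embed` are placeholders for any LLM summarizer and sentence-embedding model, and the axis-extraction step is shown here as a simple principal-component direction over summary embeddings, which is only one way to realize it; this is not the paper's actual implementation.

```python
from collections import defaultdict
from typing import Callable, Dict, List, Sequence, Tuple
import numpy as np

def scale_speeches(
    speeches: Sequence[Tuple[str, str, int]],   # (party, speech_text, year)
    summarize: Callable[[str], str],            # assumed: any LLM summarizer
    embed: Callable[[str], np.ndarray],         # assumed: any sentence-embedding model
) -> Dict[Tuple[str, int], float]:
    """Return one scalar position per (party, year) along the main axis of controversy."""
    parties, years, vectors = [], [], []
    for party, text, year in speeches:
        # Stage 1: de-noise each speech via summarization before embedding.
        vectors.append(embed(summarize(text)))
        parties.append(party)
        years.append(year)
    X = np.vstack(vectors).astype(float)
    X -= X.mean(axis=0)
    # Stage 2: first principal component as the extracted axis of controversy.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    positions = X @ vt[0]
    # Stage 3: diachronic analysis, i.e. per-year party means along the axis.
    grouped: Dict[Tuple[str, int], List[float]] = defaultdict(list)
    for p, y, pos in zip(parties, years, positions):
        grouped[(p, y)].append(float(pos))
    return {key: float(np.mean(vals)) for key, vals in grouped.items()}
```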
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.