Large Language Models are Geographically Biased
- URL: http://arxiv.org/abs/2402.02680v1
- Date: Mon, 5 Feb 2024 02:32:09 GMT
- Title: Large Language Models are Geographically Biased
- Authors: Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon
- Abstract summary: We study what Large Language Models (LLMs) know about the world we live in through the lens of geography.
We show various problematic geographic biases, which we define as systemic errors in geospatial predictions.
- Score: 51.37609528538606
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) inherently carry the biases contained in their
training corpora, which can lead to the perpetuation of societal harm. As the
impact of these foundation models grows, understanding and evaluating their
biases becomes crucial to achieving fairness and accuracy. We propose to study
what LLMs know about the world we live in through the lens of geography. This
approach is particularly powerful as there is ground truth for the numerous
aspects of human life that are meaningfully projected onto geographic space
such as culture, race, language, politics, and religion. We show various
problematic geographic biases, which we define as systemic errors in geospatial
predictions. Initially, we demonstrate that LLMs are capable of making accurate
zero-shot geospatial predictions in the form of ratings that show strong
monotonic correlation with ground truth (Spearman's $\rho$ of up to 0.89). We
then show that LLMs exhibit common biases across a range of objective and
subjective topics. In particular, LLMs are clearly biased against locations
with lower socioeconomic conditions (e.g. most of Africa) on a variety of
sensitive subjective topics such as attractiveness, morality, and intelligence
(Spearman's $\rho$ of up to 0.70). Finally, we introduce a bias score to
quantify this and find that there is significant variation in the magnitude of
bias across existing LLMs.
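As a concrete illustration of the evaluation described in the abstract, the sketch below shows one plausible way to score zero-shot LLM ratings of locations against ground truth with Spearman's rho and to compute a simple group-level bias score. It is an illustrative sketch only: the helper names (`monotonic_agreement`, `simple_bias_score`, `group_mask`), the toy numbers, and the exact bias-score definition are assumptions for exposition and do not reproduce the authors' implementation.

```python
# Illustrative sketch (not the authors' code): scoring zero-shot LLM ratings of
# locations against ground truth, in the spirit of the evaluation described above.
import numpy as np
from scipy.stats import spearmanr


def monotonic_agreement(llm_ratings, ground_truth):
    """Spearman's rho between per-location LLM ratings and ground-truth values.

    A rho near 1 means the LLM's ratings preserve the ground-truth ordering.
    """
    rho, p_value = spearmanr(llm_ratings, ground_truth)
    return rho, p_value


def simple_bias_score(llm_ratings, ground_truth, group_mask):
    """Mean signed rank error for one group of locations (e.g. one region).

    Ranks are computed over all locations and normalized to [0, 1]; the score is
    averaged only over the group. Positive values mean the group is rated higher
    than ground truth warrants, negative values mean it is rated lower. This is
    one plausible bias measure, not necessarily the paper's own definition.
    """
    llm_ratings = np.asarray(llm_ratings, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    group_mask = np.asarray(group_mask, dtype=bool)
    n = len(llm_ratings)
    pred_ranks = np.argsort(np.argsort(llm_ratings)) / (n - 1)
    true_ranks = np.argsort(np.argsort(ground_truth)) / (n - 1)
    return float(np.mean((pred_ranks - true_ranks)[group_mask]))


# Toy example with made-up numbers: ratings on a 0-10 scale for five locations
# versus an objective ground-truth indicator, with the last two locations
# forming the group of interest.
ratings = [7.5, 3.0, 8.0, 4.5, 5.0]
truth = [0.81, 0.42, 0.90, 0.55, 0.35]
group = [False, False, False, True, True]
print(monotonic_agreement(ratings, truth))       # (rho, p-value)
print(simple_bias_score(ratings, truth, group))  # positive => group over-rated
```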
Related papers
- Investigating Bias in LLM-Based Bias Detection: Disparities between LLMs and Human Perception [13.592532358127293]
We investigate the presence and nature of bias within Large Language Models (LLMs).
We probe whether LLMs exhibit biases, particularly in political bias prediction and text continuation tasks.
We propose debiasing strategies, including prompt engineering and model fine-tuning.
arXiv Detail & Related papers (2024-03-22T00:59:48Z) - Exploring Value Biases: How LLMs Deviate Towards the Ideal [57.99044181599786]
Large Language Models (LLMs) are deployed in a wide range of applications, and their responses have an increasing social impact.
We show that value bias is strong in LLMs across different categories, similar to the results found in human studies.
arXiv Detail & Related papers (2024-02-16T18:28:43Z) - GPTBIAS: A Comprehensive Framework for Evaluating Bias in Large Language Models [83.30078426829627]
Large language models (LLMs) have gained popularity and are being widely adopted by a large user community.
The existing evaluation methods have many constraints, and their results exhibit a limited degree of interpretability.
We propose a bias evaluation framework named GPTBIAS that leverages the high performance of LLMs to assess bias in models.
arXiv Detail & Related papers (2023-12-11T12:02:14Z) - Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z) - Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs [67.51906565969227]
We study the unintended side-effects of persona assignment on the ability of LLMs to perform basic reasoning tasks.
Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse personas (e.g. an Asian person) spanning 5 socio-demographic groups.
arXiv Detail & Related papers (2023-11-08T18:52:17Z) - Do LLMs exhibit human-like response biases? A case study in survey design [66.1850490474361]
We investigate the extent to which large language models (LLMs) reflect human response biases, if at all.
We design a dataset and framework to evaluate whether LLMs exhibit human-like response biases in survey questionnaires.
Our comprehensive evaluation of nine models shows that popular open and commercial LLMs generally fail to reflect human-like behavior.
arXiv Detail & Related papers (2023-11-07T15:40:43Z) - Investigating Subtler Biases in LLMs: Ageism, Beauty, Institutional, and Nationality Bias in Generative Models [0.0]
This paper investigates bias along less-studied but still consequential dimensions, such as age and beauty.
We ask whether LLMs hold wide-reaching biases of positive or negative sentiment for specific social groups similar to the "what is beautiful is good" bias found in people in experimental psychology.
arXiv Detail & Related papers (2023-09-16T07:07:04Z) - Gender bias and stereotypes in Large Language Models [0.6882042556551611]
This paper investigates Large Language Models' behavior with respect to gender stereotypes.
We use a simple paradigm to test the presence of gender bias, building on but differing from WinoBias.
Our contributions in this paper are as follows: (a) LLMs are 3-6 times more likely to choose an occupation that stereotypically aligns with a person's gender; (b) these choices align with people's perceptions better than with the ground truth as reflected in official job statistics; (d) LLMs ignore crucial ambiguities in sentence structure 95% of the time in our study items, but when explicitly prompted, they recognize them.
arXiv Detail & Related papers (2023-08-28T22:32:05Z) - The Unequal Opportunities of Large Language Models: Revealing Demographic Bias through Job Recommendations [5.898806397015801]
We propose a simple method for analyzing and comparing demographic bias in Large Language Models (LLMs).
We demonstrate the effectiveness of our method by measuring intersectional biases within ChatGPT and LLaMA.
We identify distinct biases in both models toward various demographic identities; for example, both models consistently suggest low-paying jobs for Mexican workers.
arXiv Detail & Related papers (2023-08-03T21:12:54Z) - Geographic and Geopolitical Biases of Language Models [43.62238334380897]
We propose an approach to study the geographic bias (and knowledge) present in pretrained language models (PLMs).
Our findings suggest PLMs' representations map surprisingly well to the physical world in terms of country-to-country associations.
Last, we explain how large PLMs, despite exhibiting notions of geographical proximity, over-amplify geopolitical favoritism at inference time.
arXiv Detail & Related papers (2022-12-20T16:32:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.