Divine LLaMAs: Bias, Stereotypes, Stigmatization, and Emotion Representation of Religion in Large Language Models
- URL: http://arxiv.org/abs/2407.06908v1
- Date: Tue, 9 Jul 2024 14:45:15 GMT
- Title: Divine LLaMAs: Bias, Stereotypes, Stigmatization, and Emotion Representation of Religion in Large Language Models
- Authors: Flor Miriam Plaza-del-Arco, Amanda Cercas Curry, Susanna Paoli, Alba Curry, Dirk Hovy,
- Abstract summary: Religion as a socio-cultural system prescribes a set of beliefs and values for its followers.
Unlike gender, which says little about our values, religion prescribes a set of beliefs and values for its followers.
Major religions in the US and European countries are represented with more nuance.
Eastern religions like Hinduism and Buddhism are strongly stereotyped.
- Score: 19.54202714712677
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emotions play important epistemological and cognitive roles in our lives, revealing our values and guiding our actions. Previous work has shown that LLMs display biases in emotion attribution along gender lines. However, unlike gender, which says little about our values, religion, as a socio-cultural system, prescribes a set of beliefs and values for its followers. Religions, therefore, cultivate certain emotions. Moreover, these rules are explicitly laid out and interpreted by religious leaders. Using emotion attribution, we explore how different religions are represented in LLMs. We find that: Major religions in the US and European countries are represented with more nuance, displaying a more shaded model of their beliefs. Eastern religions like Hinduism and Buddhism are strongly stereotyped. Judaism and Islam are stigmatized -- the models' refusal skyrocket. We ascribe these to cultural bias in LLMs and the scarcity of NLP literature on religion. In the rare instances where religion is discussed, it is often in the context of toxic language, perpetuating the perception of these religions as inherently toxic. This finding underscores the urgent need to address and rectify these biases. Our research underscores the crucial role emotions play in our lives and how our values influence them.
Related papers
- An Empirical Study of Gendered Stereotypes in Emotional Attributes for Bangla in Multilingual Large Language Models [2.98683507969764]
Women are associated with emotions like empathy, fear, and guilt, while men are linked to anger, bravado, and authority.
We show the existence of gender bias in the context of emotions in Bangla through analytical methods.
All of our resources including code and data are made publicly available to support future research on Bangla NLP.
arXiv Detail & Related papers (2024-07-08T22:22:15Z) - Multilingual Trolley Problems for Language Models [138.0995992619116]
This study is inspired by a large-scale cross-cultural study of human moral preferences, "The Moral Machine Experiment"
We show that large language models (LLMs) are more aligned with human preferences in languages such as English, Korean, Hungarian, and Chinese, but less aligned in languages such as Hindi and Somali (in Africa)
We also characterize the explanations LLMs give for their moral choices and find that fairness is the most dominant supporting reason behind GPT-4's decisions and utilitarianism by GPT-3.
arXiv Detail & Related papers (2024-07-02T14:02:53Z) - CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting [68.37589899302161]
We uncover culture perceptions of three SOTA models on 110 countries and regions on 8 culture-related topics through culture-conditioned generations.
We discover that culture-conditioned generation consist of linguistic "markers" that distinguish marginalized cultures apart from default cultures.
arXiv Detail & Related papers (2024-04-16T00:50:43Z) - Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution [20.21748776472278]
We investigate whether emotions are gendered, and whether these variations are based on societal stereotypes.
We find that all models consistently exhibit gendered emotions, influenced by gender stereotypes.
Our study sheds light on the complex societal interplay between language, gender, and emotion.
arXiv Detail & Related papers (2024-03-05T17:04:05Z) - Large Language Models are Geographically Biased [51.37609528538606]
We study what Large Language Models (LLMs) know about the world we live in through the lens of geography.
We show various problematic geographic biases, which we define as systemic errors in geospatial predictions.
arXiv Detail & Related papers (2024-02-05T02:32:09Z) - Western, Religious or Spiritual: An Evaluation of Moral Justification in
Large Language Models [5.257719744958368]
This paper aims to find out which values and principles are embedded in Large Language Models (LLMs) in the process of moral justification.
We come up with three different moral perspective categories: Western tradition perspective (WT), Abrahamic tradition perspective (AT), and Spiritualist/Mystic tradition perspective (SMT)
arXiv Detail & Related papers (2023-11-13T23:01:19Z) - Multilingual Language Models are not Multicultural: A Case Study in
Emotion [8.73324795579955]
We investigate whether the widely-used multilingual LMs in 2023 reflect differences in emotional expressions across cultures and languages.
We find that embeddings obtained from LMs are Anglocentric, and generative LMs reflect Western norms, even when responding to prompts in other languages.
arXiv Detail & Related papers (2023-07-03T21:54:28Z) - Religion and Spirituality on Social Media in the Aftermath of the Global
Pandemic [59.930429668324294]
We analyse the sudden change in religious activities twofold: we create and deliver a questionnaire, as well as analyse Twitter data.
Importantly, we also analyse the temporal variations in this process by analysing a period of 3 months: July-September 2020.
arXiv Detail & Related papers (2022-12-11T18:41:02Z) - A Moral- and Event- Centric Inspection of Gender Bias in Fairy Tales at
A Large Scale [50.92540580640479]
We computationally analyze gender bias in a fairy tale dataset containing 624 fairy tales from 7 different cultures.
We find that the number of male characters is two times that of female characters, showing a disproportionate gender representation.
Female characters turn out more associated with care-, loyalty- and sanctity- related moral words, while male characters are more associated with fairness- and authority- related moral words.
arXiv Detail & Related papers (2022-11-25T19:38:09Z) - Annotators with Attitudes: How Annotator Beliefs And Identities Bias
Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z) - Persistent Anti-Muslim Bias in Large Language Models [13.984800635696566]
GPT-3, a state-of-the-art contextual language model, captures persistent Muslim-violence bias.
We probe GPT-3 in various ways, including prompt completion, analogical reasoning, and story generation.
For instance, "Muslim" is analogized to "terrorist" in 23% of test cases, while "Jewish" is mapped to "money" in 5% of test cases.
arXiv Detail & Related papers (2021-01-14T18:41:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.