Divine LLaMAs: Bias, Stereotypes, Stigmatization, and Emotion Representation of Religion in Large Language Models
- URL: http://arxiv.org/abs/2407.06908v1
- Date: Tue, 9 Jul 2024 14:45:15 GMT
- Title: Divine LLaMAs: Bias, Stereotypes, Stigmatization, and Emotion Representation of Religion in Large Language Models
- Authors: Flor Miriam Plaza-del-Arco, Amanda Cercas Curry, Susanna Paoli, Alba Curry, Dirk Hovy
- Abstract summary: Unlike gender, which says little about our values, religion as a socio-cultural system prescribes a set of beliefs and values for its followers.
Major religions in the US and European countries are represented with more nuance.
Eastern religions like Hinduism and Buddhism are strongly stereotyped.
- Score: 19.54202714712677
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emotions play important epistemological and cognitive roles in our lives, revealing our values and guiding our actions. Previous work has shown that LLMs display biases in emotion attribution along gender lines. However, unlike gender, which says little about our values, religion, as a socio-cultural system, prescribes a set of beliefs and values for its followers. Religions, therefore, cultivate certain emotions. Moreover, these rules are explicitly laid out and interpreted by religious leaders. Using emotion attribution, we explore how different religions are represented in LLMs. We find that: Major religions in the US and European countries are represented with more nuance, displaying a more shaded model of their beliefs. Eastern religions like Hinduism and Buddhism are strongly stereotyped. Judaism and Islam are stigmatized -- the models' refusals skyrocket. We ascribe these to cultural bias in LLMs and the scarcity of NLP literature on religion. In the rare instances where religion is discussed, it is often in the context of toxic language, perpetuating the perception of these religions as inherently toxic. This finding underscores the urgent need to address and rectify these biases. Our research underscores the crucial role emotions play in our lives and how our values influence them.
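The emotion-attribution setup described above is essentially a prompting probe: ask a model which emotion a persona with a given religious identity would feel in a fixed situation, then compare the attributed emotions and refusal rates across religions. Below is a minimal sketch of such a probe; the religion list, the event, the prompt wording, and the model name are illustrative assumptions, not the authors' actual experimental setup.

```python
# Minimal emotion-attribution probe (illustrative only; prompts, event,
# religion list, and model are assumptions, not the paper's actual setup).
from transformers import pipeline

RELIGIONS = ["Christian", "Jewish", "Muslim", "Hindu", "Buddhist"]
EVENT = "finding out that a close friend is moving abroad"

# Any instruction-following causal LM can be swapped in here.
generator = pipeline("text-generation", model="gpt2")

for religion in RELIGIONS:
    prompt = (
        f"Imagine you are a {religion} person. "
        f"What is the main emotion you would feel when {EVENT}? "
        "Answer with a single emotion word."
    )
    completion = generator(prompt, max_new_tokens=8, do_sample=False)[0]["generated_text"]
    answer = completion[len(prompt):].strip()
    # An empty or deflecting answer is logged as a refusal, mirroring the
    # refusal-rate comparison described in the abstract.
    print(f"{religion}: {answer or '[no answer / refusal]'}")
```

Aggregating the attributed emotions and refusal counts per religion over many events would yield the kind of cross-religion comparison the abstract reports.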
Related papers
- From Anger to Joy: How Nationality Personas Shape Emotion Attribution in Large Language Models [4.362338454684645]
We investigate how different countries are represented in pre-trained Large Language Models (LLMs) through emotion attributions. Our analysis reveals significant nationality-based differences, with emotions such as shame, fear, and joy being disproportionately assigned across regions.
arXiv Detail & Related papers (2025-06-03T04:35:51Z) - Beauty and the Bias: Exploring the Impact of Attractiveness on Multimodal Large Language Models [51.590283139444814]
Physical attractiveness has been shown to influence human perception and decision-making. The role that attractiveness plays in the assessments and decisions made by multimodal large language models (MLLMs) is unknown. We conduct an empirical study with 7 diverse open-source MLLMs evaluated on 91 socially relevant scenarios and a diverse dataset of 924 face images.
arXiv Detail & Related papers (2025-04-16T16:02:55Z) - Analyzing Islamophobic Discourse Using Semi-Coded Terms and LLMs [2.5081530863229307]
This paper performs a large-scale analysis of specialized, semi-coded Islamophobic terms such as muzrat, pislam, mudslime, mohammedan, and muzzies circulated on extremist social platforms.
Using the Google Perspective API, we also find that Islamophobic text is more toxic than other kinds of hate speech (a minimal sketch of such an API call appears after this list).
arXiv Detail & Related papers (2025-03-24T01:41:24Z) - Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z) - Measuring Spiritual Values and Bias of Large Language Models [28.892254056685008]
Large language models (LLMs) have become integral tools for users from various backgrounds.
These models reflect linguistic and cultural nuances embedded in pre-training data.
The values and perspectives inherent in this data can influence the behavior of LLMs, leading to potential biases.
arXiv Detail & Related papers (2024-10-15T14:33:23Z) - Exploring Bengali Religious Dialect Biases in Large Language Models with Evaluation Perspectives [5.648318448953635]
Large Language Models (LLMs) can produce output that contains stereotypes and biases.
We explore bias from a religious perspective in Bengali, focusing specifically on two main religious dialects: Hindu and Muslim-majority dialects.
arXiv Detail & Related papers (2024-07-25T20:19:29Z) - An Empirical Study of Gendered Stereotypes in Emotional Attributes for Bangla in Multilingual Large Language Models [2.98683507969764]
Women are associated with emotions like empathy, fear, and guilt, while men are linked to anger, bravado, and authority.
We show the existence of gender bias in the context of emotions in Bangla through analytical methods.
All of our resources including code and data are made publicly available to support future research on Bangla NLP.
arXiv Detail & Related papers (2024-07-08T22:22:15Z) - See It from My Perspective: How Language Affects Cultural Bias in Image Understanding [60.70852566256668]
Vision-language models (VLMs) can respond to queries about images in many languages.
We characterize the Western bias of VLMs in image understanding and investigate the role that language plays in this disparity.
arXiv Detail & Related papers (2024-06-17T15:49:51Z) - CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting [73.94059188347582]
We uncover culture perceptions of three SOTA models on 110 countries and regions on 8 culture-related topics through culture-conditioned generations.
We discover that culture-conditioned generations contain linguistic "markers" that distinguish marginalized cultures from default cultures.
arXiv Detail & Related papers (2024-04-16T00:50:43Z) - Western, Religious or Spiritual: An Evaluation of Moral Justification in Large Language Models [5.257719744958368]
This paper investigates which values and principles are embedded in Large Language Models (LLMs) in the process of moral justification.
We identify three moral perspective categories: the Western tradition perspective (WT), the Abrahamic tradition perspective (AT), and the Spiritualist/Mystic tradition perspective (SMT).
arXiv Detail & Related papers (2023-11-13T23:01:19Z) - Multilingual Language Models are not Multicultural: A Case Study in Emotion [8.73324795579955]
We investigate whether the widely-used multilingual LMs in 2023 reflect differences in emotional expressions across cultures and languages.
We find that embeddings obtained from LMs are Anglocentric, and generative LMs reflect Western norms, even when responding to prompts in other languages.
arXiv Detail & Related papers (2023-07-03T21:54:28Z) - The Face of Populism: Examining Differences in Facial Emotional Expressions of Political Leaders Using Machine Learning [50.24983453990065]
We use a deep-learning approach to process a sample of 220 YouTube videos of political leaders from 15 different countries.
We observe statistically significant differences in the average score of negative emotions between groups of leaders with varying degrees of populist rhetoric.
arXiv Detail & Related papers (2023-04-19T18:32:49Z) - Religion and Spirituality on Social Media in the Aftermath of the Global Pandemic [59.930429668324294]
We analyse the sudden change in religious activities in two ways: we create and distribute a questionnaire, and we analyse Twitter data.
Importantly, we also examine the temporal variation in this process over a period of three months: July-September 2020.
arXiv Detail & Related papers (2022-12-11T18:41:02Z) - A Moral- and Event-Centric Inspection of Gender Bias in Fairy Tales at A Large Scale [50.92540580640479]
We computationally analyze gender bias in a fairy tale dataset containing 624 fairy tales from 7 different cultures.
We find that the number of male characters is two times that of female characters, showing a disproportionate gender representation.
Female characters turn out to be more associated with care-, loyalty-, and sanctity-related moral words, while male characters are more associated with fairness- and authority-related moral words.
arXiv Detail & Related papers (2022-11-25T19:38:09Z) - Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z)
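As referenced in the Islamophobic-discourse entry above, toxicity scoring of that kind is typically obtained from Google's Perspective API. The following is a minimal sketch of such a call, assuming a valid API key; the key and the example text are placeholders, and this is not the study's actual code or data.

```python
# Minimal Perspective API toxicity query (illustrative; the API key and
# example text are placeholders, not the study's data or code).
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; a real Perspective API key is required
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)

payload = {
    "comment": {"text": "example sentence to score"},
    "requestedAttributes": {"TOXICITY": {}},
    "languages": ["en"],
}

response = requests.post(URL, json=payload, timeout=30)
response.raise_for_status()
toxicity = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity: {toxicity:.3f}")  # summary score in [0, 1]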