Multilingual Language Models are not Multicultural: A Case Study in Emotion
- URL: http://arxiv.org/abs/2307.01370v2
- Date: Sun, 9 Jul 2023 15:21:22 GMT
- Title: Multilingual Language Models are not Multicultural: A Case Study in Emotion
- Authors: Shreya Havaldar, Sunny Rai, Bhumika Singhal, Langchen Liu, Sharath
Chandra Guntuku, Lyle Ungar
- Score: 8.73324795579955
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emotions are experienced and expressed differently across the world. In order
to use Large Language Models (LMs) for multilingual tasks that require
emotional sensitivity, LMs must reflect this cultural variation in emotion. In
this study, we investigate whether the widely-used multilingual LMs in 2023
reflect differences in emotional expressions across cultures and languages. We
find that embeddings obtained from LMs (e.g., XLM-RoBERTa) are Anglocentric,
and generative LMs (e.g., ChatGPT) reflect Western norms, even when responding
to prompts in other languages. Our results show that multilingual LMs do not
successfully learn the culturally appropriate nuances of emotion, and we
highlight possible research directions for correcting this.
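The paper's embedding finding can be illustrated with a minimal, hypothetical sketch: given sentence embeddings of emotion expressions in several languages (in the paper's setting these would come from a multilingual LM such as XLM-RoBERTa), one can check whether non-English emotion vectors cluster around their English counterparts, an Anglocentric geometry. The vectors below are toy stand-ins, not the authors' data or code.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for multilingual LM embeddings of expressions of "pride".
# Real vectors would be extracted from a model like XLM-RoBERTa.
emb = {
    "en": np.array([0.9, 0.1, 0.0]),
    "ja": np.array([0.8, 0.3, 0.1]),    # hypothetical Japanese embedding
    "es": np.array([0.85, 0.2, 0.05]),  # hypothetical Spanish embedding
}

# Anglocentricity check: are non-English embeddings more similar to the
# English embedding than to each other?
sim_ja_en = cosine(emb["ja"], emb["en"])
sim_es_en = cosine(emb["es"], emb["en"])
sim_ja_es = cosine(emb["ja"], emb["es"])
print(sim_ja_en, sim_es_en, sim_ja_es)
```

If the English-anchored similarities dominate across many emotion terms and languages, the embedding space is Anglocentric in the sense the paper describes.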
Related papers
- BRIGHTER: BRIdging the Gap in Human-Annotated Textual Emotion Recognition Datasets for 28 Languages [93.92804151830744]
We present BRIGHTER, a collection of emotion-annotated datasets in 28 different languages.
We describe the data collection and annotation processes and the challenges of building these datasets.
We show that BRIGHTER datasets are a step towards bridging the gap in text-based emotion recognition.
arXiv Detail & Related papers (2025-02-17T15:39:50Z)
- Evaluating the Capabilities of Large Language Models for Multi-label Emotion Understanding [20.581470997286146]
We present EthioEmo, a multi-label emotion classification dataset for four Ethiopian languages.
We perform extensive experiments with an additional English multi-label emotion dataset from SemEval 2018 Task 1.
The results show that accurate multi-label emotion classification is still insufficient even for high-resource languages.
arXiv Detail & Related papers (2024-12-17T07:42:39Z)
- Analyzing Cultural Representations of Emotions in LLMs through Mixed Emotion Survey [2.9213203896291766]
This study analyzes the cultural representations of emotions in Large Language Models (LLMs).
Our methodology is based on the studies of Miyamoto et al. (2010), which identified distinctive emotional indicators in Japanese and American human responses.
We find that models have limited alignment with the evidence in the literature.
arXiv Detail & Related papers (2024-08-04T20:56:05Z)
- Cultural Value Differences of LLMs: Prompt, Language, and Model Size [35.176429953825924]
Our study aims to identify behavior patterns in the cultural values exhibited by large language models (LLMs).
The studied variants include question ordering, prompting language, and model size.
Our experiments reveal that the prompting language and the model size of the LLM are the main factors driving cultural value differences.
arXiv Detail & Related papers (2024-06-17T12:35:33Z)
- The Echoes of Multilinguality: Tracing Cultural Value Shifts during LM Fine-tuning [23.418656688405605]
We study how languages can exert influence on the cultural values encoded for different test languages, by studying how such values are revised during fine-tuning.
Lastly, we use a training data attribution method to find patterns in the fine-tuning examples, and the languages that they come from, that tend to instigate value shifts.
arXiv Detail & Related papers (2024-05-21T12:55:15Z)
- Is Translation All You Need? A Study on Solving Multilingual Tasks with Large Language Models [79.46179534911019]
Large language models (LLMs) have demonstrated multilingual capabilities; yet, they are mostly English-centric due to imbalanced training corpora.
This work extends the evaluation from NLP tasks to real user queries.
For culture-related tasks that need deep language understanding, prompting in the native language tends to be more promising.
arXiv Detail & Related papers (2024-03-15T12:47:39Z)
- Sociolinguistically Informed Interpretability: A Case Study on Hinglish Emotion Classification [8.010713141364752]
We study the effect of language on emotion prediction across 3 PLMs on a Hinglish emotion classification dataset.
We find that models do learn these associations between language choice and emotional expression.
Having code-mixed data present in the pre-training can augment that learning when task-specific data is scarce.
arXiv Detail & Related papers (2024-02-05T16:05:32Z)
- Divergences between Language Models and Human Brains [59.100552839650774]
We systematically explore the divergences between human and machine language processing.
We identify two domains that LMs do not capture well: social/emotional intelligence and physical commonsense.
Our results show that fine-tuning LMs on these domains can improve their alignment with human brain responses.
arXiv Detail & Related papers (2023-11-15T19:02:40Z)
- Not All Countries Celebrate Thanksgiving: On the Cultural Dominance in Large Language Models [89.94270049334479]
This paper identifies a cultural dominance issue within large language models (LLMs).
LLMs often provide inappropriate English-culture-related answers that are not relevant to the expected culture when users ask in non-English languages.
arXiv Detail & Related papers (2023-10-19T05:38:23Z)
- Are Multilingual LLMs Culturally-Diverse Reasoners? An Investigation into Multicultural Proverbs and Sayings [73.48336898620518]
Large language models (LLMs) are highly adept at question answering and reasoning tasks.
We study the ability of a wide range of state-of-the-art multilingual LLMs to reason with proverbs and sayings in a conversational context.
arXiv Detail & Related papers (2023-09-15T17:45:28Z)
- Multi-lingual and Multi-cultural Figurative Language Understanding [69.47641938200817]
Figurative language permeates human communication, but is relatively understudied in NLP.
We create a dataset for seven diverse languages associated with a variety of cultures: Hindi, Indonesian, Javanese, Kannada, Sundanese, Swahili and Yoruba.
Our dataset reveals that each language relies on cultural and regional concepts for figurative expressions, with the highest overlap between languages originating from the same region.
All languages exhibit a significant deficiency compared to English, with variations in performance reflecting the availability of pre-training and fine-tuning data.
arXiv Detail & Related papers (2023-05-25T15:30:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.