Rethinking AI Cultural Alignment
- URL: http://arxiv.org/abs/2501.07751v2
- Date: Fri, 07 Mar 2025 21:15:07 GMT
- Title: Rethinking AI Cultural Alignment
- Authors: Michal Bravansky, Filip Trhlik, Fazl Barez
- Abstract summary: We show that humans' cultural values must be understood within the context of specific AI systems. We argue that cultural alignment should be reframed as a bidirectional process.
- Score: 1.8434042562191815
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As general-purpose artificial intelligence (AI) systems become increasingly integrated with diverse human communities, cultural alignment has emerged as a crucial element in their deployment. Most existing approaches treat cultural alignment as one-directional, embedding predefined cultural values from standardized surveys and repositories into AI systems. To challenge this perspective, we highlight research showing that humans' cultural values must be understood within the context of specific AI systems. We then use a GPT-4o case study to demonstrate that AI systems' cultural alignment depends on how humans structure their interactions with the system. Drawing on these findings, we argue that cultural alignment should be reframed as a bidirectional process: rather than merely imposing standardized values on AIs, we should query the human cultural values most relevant to each AI-based system and align it to these values through interaction frameworks shaped by human users.
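Before the related work, a minimal sketch of the kind of probe the GPT-4o case study describes: the same survey-style value question is posed under differently structured interactions and the answers are compared. It assumes the `openai` Python client and an API key in the environment; the survey item and framings are illustrative stand-ins, not the paper's materials.

```python
# Probe how interaction framing shifts a model's expressed cultural values.
# Illustrative sketch only; not the paper's exact protocol or materials.
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

# A World Values Survey-style item (hypothetical, for illustration).
SURVEY_ITEM = (
    "On a scale from 1 (never justifiable) to 10 (always justifiable), how "
    "justifiable is it for adult children to place elderly parents in a care "
    "home? Answer with a single number."
)

# The same question embedded in differently structured interactions.
FRAMINGS = {
    "bare": [
        {"role": "user", "content": SURVEY_ITEM},
    ],
    "casual_chat": [
        {"role": "user", "content": "hey, settling a debate with a friend :)"},
        {"role": "assistant", "content": "Sure, happy to weigh in!"},
        {"role": "user", "content": SURVEY_ITEM},
    ],
    "formal_advisor": [
        {"role": "system", "content": "You are a formal policy advisor."},
        {"role": "user", "content": SURVEY_ITEM},
    ],
}

# If the numbers drift across framings, the expressed values depend on how
# the human structures the interaction, which is the paper's point.
for name, messages in FRAMINGS.items():
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(f"{name}: {reply.choices[0].message.content.strip()}")
```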
Related papers
- CultureVLM: Characterizing and Improving Cultural Understanding of Vision-Language Models for over 100 Countries [63.00147630084146]
Vision-language models (VLMs) have advanced human-AI interaction but struggle with cultural understanding.
CultureVerse is a large-scale multimodal benchmark covering 19,682 cultural concepts, 188 countries/regions, 15 cultural topics, and 3 question types.
We propose CultureVLM, a series of VLMs fine-tuned on our dataset to achieve significant performance improvement in cultural understanding.
arXiv Detail & Related papers (2025-01-02T14:42:37Z) - ValuesRAG: Enhancing Cultural Alignment Through Retrieval-Augmented Contextual Learning [1.1343849658875087]
ValuesRAG is a framework that integrates cultural and demographic knowledge dynamically during text generation.
It consistently outperforms baseline methods in the main experiment and in the ablation study.
It could foster culturally aligned AI systems and enhance the inclusivity of AI-driven applications (the retrieval step is sketched after this list).
arXiv Detail & Related papers (2025-01-02T03:26:13Z) - Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation [50.38159901496538]
Cultural biases in multilingual datasets pose significant challenges for their effectiveness as global benchmarks.
We show that progress on MMLU depends heavily on learning Western-centric concepts, with 28% of all questions requiring culturally sensitive knowledge.
We release Global-MMLU, an improved MMLU with evaluation coverage across 42 languages.
arXiv Detail & Related papers (2024-12-04T13:27:09Z) - Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have produced technology that can support humans in scientific discovery and decision-making, but may also disrupt democracies and target individuals.
The responsible use of AI increasingly calls for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z) - LLM-GLOBE: A Benchmark Evaluating the Cultural Values Embedded in LLM Output [8.435090588116973]
We propose the LLM-GLOBE benchmark for evaluating the cultural value systems of LLMs.
We then leverage the benchmark to compare the values of Chinese and US LLMs.
Our methodology includes a novel "LLMs-as-a-Jury" pipeline which automates the evaluation of open-ended content (a minimal jury loop is sketched after this list).
arXiv Detail & Related papers (2024-11-09T01:38:55Z) - Extrinsic Evaluation of Cultural Competence in Large Language Models [53.626808086522985]
We focus on extrinsic evaluation of cultural competence in two text generation tasks.
We evaluate model outputs when an explicit cue of culture, specifically nationality, is perturbed in the prompts.
We find weak correlations between the text similarity of outputs for different countries and the cultural values of those countries (the perturb-and-compare setup is sketched after this list).
arXiv Detail & Related papers (2024-06-17T14:03:27Z) - Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clear definitions and scopes for human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z) - CulturePark: Boosting Cross-cultural Understanding in Large Language Models [63.452948673344395]
This paper introduces CulturePark, an LLM-powered multi-agent communication framework for cultural data collection.
It generates high-quality cross-cultural dialogues encapsulating human beliefs, norms, and customs.
We evaluate models fine-tuned on CulturePark-generated data across three downstream tasks: content moderation, cultural alignment, and cultural education.
arXiv Detail & Related papers (2024-05-24T01:49:02Z) - How Culture Shapes What People Want From AI [0.0]
There is an urgent need to incorporate the perspectives of culturally diverse groups into AI developments.
We present a novel conceptual framework for research that aims to expand, reimagine, and reground mainstream visions of AI.
arXiv Detail & Related papers (2024-03-08T07:08:19Z) - Culturally-Attuned Moral Machines: Implicit Learning of Human Value Systems by AI through Inverse Reinforcement Learning [11.948092546676687]
We argue that the value system of an AI should be culturally attuned.
How AI systems might acquire such culturally attuned value systems from human observation and interaction has remained an open question.
We show that an AI agent learning from the average behavior of a particular cultural group can acquire altruistic characteristics reflective of that group's behavior.
arXiv Detail & Related papers (2023-12-29T05:39:10Z) - Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties [68.66719970507273]
Value pluralism is the view that multiple correct values may be held in tension with one another.
As statistical learners, AI systems fit to averages by default, washing out potentially irreducible value conflicts (a small numeric illustration appears after this list).
We introduce ValuePrism, a large-scale dataset of 218k values, rights, and duties connected to 31k human-written situations.
arXiv Detail & Related papers (2023-09-02T01:24:59Z) - Cultural Incongruencies in Artificial Intelligence [5.817158625734485]
We describe a set of cultural dependencies and incongruencies in the context of AI-based language and vision technologies.
Problems arise when these technologies interact with globally diverse societies and cultures, with different values and interpretive practices.
arXiv Detail & Related papers (2022-11-19T18:45:02Z) - An Analytics of Culture: Modeling Subjectivity, Scalability, Contextuality, and Temporality [13.638494941763637]
There is a bidirectional relationship between culture and AI: AI models are increasingly used to analyse culture, thereby shaping our understanding of it.
Conversely, the models are trained on collections of cultural artifacts, thereby implicitly, and not always correctly, encoding expressions of culture.
This creates a tension that both limits the use of AI for analysing culture and leads to problems in AI with respect to culturally complex issues such as bias.
arXiv Detail & Related papers (2022-11-14T15:42:27Z) - Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
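The retrieval idea behind the ValuesRAG entry above, as a dependency-free toy: value summaries are keyed by demographic profile, the closest match to a user profile is retrieved, and it is prepended to the prompt before generation. The corpus, similarity measure, and prompt template are assumptions for illustration, not the framework's actual components.

```python
# Toy retrieve-then-generate step in the spirit of ValuesRAG (illustrative).

# Stand-in corpus of cultural-value summaries keyed by demographic profile.
VALUE_CORPUS = {
    "JP urban, age 30-40": "Emphasis on group harmony and indirect disagreement.",
    "US rural, age 50-60": "Emphasis on self-reliance and direct speech.",
    "BR urban, age 20-30": "Emphasis on family ties and relational warmth.",
}

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity; a real system would use dense embeddings."""
    wa = set(a.lower().replace(",", " ").split())
    wb = set(b.lower().replace(",", " ").split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def retrieve_values(user_profile: str, k: int = 1) -> list[str]:
    """Return the k value summaries whose profiles best match the user."""
    ranked = sorted(VALUE_CORPUS, key=lambda p: jaccard(p, user_profile), reverse=True)
    return [VALUE_CORPUS[p] for p in ranked[:k]]

def build_prompt(user_profile: str, question: str) -> str:
    """Prepend retrieved cultural context to the question before generation."""
    context = "\n".join(retrieve_values(user_profile))
    return f"Relevant cultural context:\n{context}\n\nQuestion: {question}"

print(build_prompt("urban JP respondent, age 35",
                   "Is it rude to decline a dinner invitation?"))
```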
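A minimal version of the "LLMs-as-a-Jury" loop from the LLM-GLOBE entry: several model jurors score the same open-ended answer against a rubric and the votes are pooled. The juror call is stubbed with canned scores; the rubric, the 1-5 scale, and median aggregation are assumptions, not the paper's specification.

```python
# Stubbed LLMs-as-a-Jury evaluation loop (illustrative).
from statistics import median

def ask_juror(juror_model: str, answer: str, rubric: str) -> int:
    """Stub: a real pipeline would prompt `juror_model` with the rubric and
    the answer, then parse a 1-5 score from its reply."""
    canned = {"juror-a": 4, "juror-b": 3, "juror-c": 4}
    return canned[juror_model]

def jury_score(answer: str, rubric: str, jurors: list[str]) -> float:
    """Pool independent juror votes; the median resists one eccentric juror."""
    votes = [ask_juror(j, answer, rubric) for j in jurors]
    return median(votes)

rubric = "Rate 1-5 how strongly the answer reflects collectivist values."
print(jury_score("We should decide together as a family.", rubric,
                 ["juror-a", "juror-b", "juror-c"]))  # -> 4
```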
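The perturb-and-compare setup from the extrinsic-evaluation entry, using only the standard library: the nationality cue in a prompt is varied, the outputs are compared pairwise, and output similarity is correlated with a cultural-value distance. All outputs and value scores below are made up for illustration.

```python
# Correlate output similarity with cultural-value distance (illustrative data).
from difflib import SequenceMatcher
from statistics import correlation  # Python 3.10+

# Hypothetical model outputs for the same prompt with the nationality swapped.
outputs = {
    "Japan": "A quiet family dinner at home with seasonal dishes.",
    "Mexico": "A big family gathering with music and shared dishes.",
    "Germany": "A small dinner at home, planned well in advance.",
}
# Made-up one-dimensional cultural-value scores (e.g., an individualism index).
value_index = {"Japan": 46, "Mexico": 30, "Germany": 67}

pairs = [("Japan", "Mexico"), ("Japan", "Germany"), ("Mexico", "Germany")]
text_sims = [SequenceMatcher(None, outputs[a], outputs[b]).ratio() for a, b in pairs]
value_dists = [abs(value_index[a] - value_index[b]) for a, b in pairs]

# A strong negative correlation would mean outputs diverge as cultures do;
# the paper reports only weak correlations on real data.
print(correlation(text_sims, value_dists))
```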
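Finally, a small numeric illustration of the "fitting to averages" failure mode raised in the Value Kaleidoscope entry: when stances on a contested question are bimodal, a loss-minimizing point estimate lands near zero, a position almost no respondent actually holds. The data are synthetic.

```python
# How averaging washes out a genuine value conflict (synthetic data).
from statistics import mean

# Stances on a contested question: -1.0 = strongly oppose, +1.0 = strongly
# support. The population is split into two opposed clusters, not centrist.
stances = [-0.9, -0.8, -0.85, 0.9, 0.8, 0.95]

point_estimate = mean(stances)  # what a squared-error learner converges to
nearest = min(stances, key=lambda s: abs(s - point_estimate))

print(f"fitted stance:      {point_estimate:+.2f}")  # near zero
print(f"nearest respondent: {nearest:+.2f}")         # still ~0.8 away
# The fitted value is far from every actual respondent: the conflict between
# the two clusters is averaged away rather than represented.
```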
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.