Cross-cultural value alignment frameworks for responsible AI governance: Evidence from China-West comparative analysis
- URL: http://arxiv.org/abs/2511.17256v1
- Date: Fri, 21 Nov 2025 14:02:33 GMT
- Title: Cross-cultural value alignment frameworks for responsible AI governance: Evidence from China-West comparative analysis
- Authors: Haijiang Liu, Jinguang Gu, Xun Wu, Daniel Hershcovich, Qiaoling Xiao,
- Abstract summary: This study systematically evaluates cross-cultural value alignment in China-origin and Western-origin Large Language Models (LLMs). Our comparative analysis of leading models, such as Qwen, GPT-4o, Claude, LLaMA, and DeepSeek, reveals universal challenges (fundamental instability in value systems, systematic under-representation of younger demographics, and non-linear relationships between model scale and alignment quality) alongside divergent regional development trajectories.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Large Language Models (LLMs) increasingly influence high-stakes decision-making across global contexts, ensuring their alignment with diverse cultural values has become a critical governance challenge. This study presents a Multi-Layered Auditing Platform for Responsible AI that systematically evaluates cross-cultural value alignment in China-origin and Western-origin LLMs through four integrated methodologies: Ethical Dilemma Corpus for assessing temporal stability, Diversity-Enhanced Framework (DEF) for quantifying cultural fidelity, First-Token Probability Alignment for distributional accuracy, and Multi-stAge Reasoning frameworK (MARK) for interpretable decision-making. Our comparative analysis of 20+ leading models, such as Qwen, GPT-4o, Claude, LLaMA, and DeepSeek, reveals universal challenges (fundamental instability in value systems, systematic under-representation of younger demographics, and non-linear relationships between model scale and alignment quality) alongside divergent regional development trajectories. While China-origin models increasingly emphasize multilingual data integration for context-specific optimization, Western models demonstrate greater architectural experimentation but persistent U.S.-centric biases. Neither paradigm achieves robust cross-cultural generalization. We establish that Mistral-series architectures significantly outperform LLaMA3-series in cross-cultural alignment, and that Full-Parameter Fine-Tuning on diverse datasets surpasses Reinforcement Learning from Human Feedback in preserving cultural variation...
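First-Token Probability Alignment is only named in the abstract, but the general recipe for this family of methods is to pose a survey question with lettered options, read the model's next-token distribution restricted to the option labels, and compare it with the human response distribution. A minimal sketch, assuming a Hugging Face causal LM, single-token option labels, and a Jensen-Shannon comparison; the model, prompt, and human shares are illustrative stand-ins, not the paper's setup:

```python
# Sketch of first-token probability alignment: compare the model's first-token
# distribution over survey answer options with the human response distribution.
# Model, prompt, and human shares are illustrative assumptions.
import torch
from scipy.spatial.distance import jensenshannon
from transformers import AutoModelForCausalLM, AutoTokenizer

def first_token_distribution(model, tokenizer, prompt, options):
    """Probability mass on each option label as the model's first output token."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token logits
    probs = logits.softmax(dim=-1)
    # Assumes each option label encodes to a single token.
    option_ids = [tokenizer.encode(o, add_special_tokens=False)[0] for o in options]
    mass = probs[option_ids]
    return (mass / mass.sum()).tolist()  # renormalize over the option labels

tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
lm = AutoModelForCausalLM.from_pretrained("gpt2")
prompt = ("How important is family in your life?\n"
          "A. Very important\nB. Rather important\n"
          "C. Not very important\nD. Not at all important\nAnswer:")
model_dist = first_token_distribution(lm, tok, prompt, [" A", " B", " C", " D"])
human_dist = [0.62, 0.28, 0.07, 0.03]  # e.g., one country's WVS response shares
print("Jensen-Shannon distance:", jensenshannon(model_dist, human_dist))
```

A lower distance to a given population's survey shares would indicate closer distributional alignment for that population; repeating the comparison across demographic slices is what surfaces gaps like the under-representation of younger respondents noted above.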
Related papers
- LiveCultureBench: a Multi-Agent, Multi-Cultural Benchmark for Large Language Models in Dynamic Social Simulations [63.478832978278014]
Large language models (LLMs) are increasingly deployed as autonomous agents, yet evaluations focus primarily on task success rather than cultural appropriateness or evaluator reliability. We introduce LiveCultureBench, a multi-cultural, dynamic benchmark that embeds LLMs as agents in a simulated town and evaluates them on both task completion and adherence to socio-cultural norms.
arXiv Detail & Related papers (2026-03-02T15:04:16Z)
- Toward Culturally Aligned LLMs through Ontology-Guided Multi-Agent Reasoning [6.102462703832761]
We propose OG-MAR, an Ontology-Guided Multi-Agent Reasoning framework. OG-MAR summarizes respondent-specific values from the World Values Survey (WVS). It constructs a global cultural ontology by eliciting relations over a fixed taxonomy via competency questions. At inference time, it retrieves demographically similar profiles to instantiate multiple value-persona agents; a minimal retrieval sketch follows this entry.
arXiv Detail & Related papers (2026-01-29T13:31:45Z)
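OG-MAR's retrieval of demographically similar profiles is described only at a high level; one plausible minimal reading is nearest-neighbor search over normalized demographic attributes, with each retrieved respondent seeding one value-persona agent. A sketch under that assumption; the attribute encoding and toy records are invented, not the paper's pipeline:

```python
# Hypothetical nearest-neighbor retrieval of demographically similar survey
# profiles, in the spirit of OG-MAR's value-persona instantiation. Feature
# encoding and example records are assumptions for illustration.
import numpy as np

PROFILES = [  # toy stand-ins for WVS respondent records
    {"age": 24, "education": 3, "urban": 1, "values": "secular-rational, self-expression"},
    {"age": 58, "education": 1, "urban": 0, "values": "traditional, survival"},
    {"age": 35, "education": 2, "urban": 1, "values": "mixed traditional/secular"},
]

def encode(p):
    # Normalize each demographic attribute to a comparable scale.
    return np.array([p["age"] / 100.0, p["education"] / 4.0, p["urban"]])

def retrieve_similar(query, profiles, k=2):
    """Return the k profiles closest to the query in demographic space."""
    q = encode(query)
    dists = [np.linalg.norm(encode(p) - q) for p in profiles]
    return [profiles[i] for i in np.argsort(dists)[:k]]

query = {"age": 29, "education": 3, "urban": 1}
for p in retrieve_similar(query, PROFILES):
    # Each retrieved profile would seed one value-persona agent's system prompt.
    print(f"Persona seeded with values: {p['values']}")
```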
- Diverse Human Value Alignment for Large Language Models via Ethical Reasoning [13.406831056051034]
Large Language Models (LLMs) must align with diverse human values across different regions and cultures. Current alignment approaches yield superficial conformity rather than genuine ethical understanding. We propose a novel ethical reasoning paradigm for LLMs inspired by well-established ethical decision-making models.
arXiv Detail & Related papers (2025-11-01T03:26:24Z)
- MMA-ASIA: A Multilingual and Multimodal Alignment Framework for Culturally-Grounded Evaluation [91.22008265721952]
MMA-ASIA centers on a human-curated, multilingual, and multimodally aligned benchmark covering 8 Asian countries and 10 languages. This is the first dataset aligned at the input level across three modalities: text, image (visual question answering), and speech. We propose a five-dimensional evaluation protocol that measures: (i) cultural-awareness disparities across countries, (ii) cross-lingual consistency, (iii) cross-modal consistency, (iv) cultural knowledge generalization, and (v) grounding validity; a minimal cross-lingual consistency sketch follows this entry.
arXiv Detail & Related papers (2025-10-07T14:12:12Z)
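Among MMA-ASIA's five dimensions, cross-lingual consistency has the most mechanical shape to sketch: ask the same culturally grounded question in each language and measure how often the answers agree. A minimal sketch assuming multiple-choice answers and pairwise agreement; the benchmark's actual metric may be defined differently:

```python
# Hypothetical pairwise-agreement measure of cross-lingual consistency:
# fraction of language pairs that yield the same answer to the same question.
# The exact metric used by MMA-ASIA may differ; this is an illustration.
from itertools import combinations

def cross_lingual_consistency(answers_by_lang: dict[str, str]) -> float:
    """answers_by_lang maps language code -> the model's chosen option."""
    pairs = list(combinations(answers_by_lang.values(), 2))
    if not pairs:
        return 1.0
    agree = sum(a == b for a, b in pairs)
    return agree / len(pairs)

# One question asked in four languages; the model flips its answer in Korean.
answers = {"en": "B", "zh": "B", "ja": "B", "ko": "C"}
print(f"Consistency: {cross_lingual_consistency(answers):.2f}")  # 3/6 = 0.50
```

Averaging this score over the benchmark's questions gives a per-model consistency figure that can be compared across models or language subsets.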
- A Game-Theoretic Negotiation Framework for Cross-Cultural Consensus in LLMs [10.655783463895325]
Large language models (LLMs) exhibit a pronounced WEIRD (Western, Educated, Industrialized, Rich, Democratic) cultural bias. This monocultural perspective may reinforce dominant values and marginalize diverse cultural viewpoints. We introduce a systematic framework designed to boost fair and robust cross-cultural consensus; a minimal bargaining sketch follows this entry.
arXiv Detail & Related papers (2025-06-16T08:42:39Z)
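The abstract does not name the negotiation mechanism, but a standard game-theoretic device for fair consensus is the Nash bargaining solution: pick the option that maximizes the product of each party's utility gain over its disagreement payoff. A minimal sketch under that assumption, with invented culture-agent utilities that are not from the paper:

```python
# Hypothetical Nash-bargaining selection of a cross-cultural consensus answer:
# choose the candidate maximizing the product of utility gains over each
# culture-agent's disagreement point. All numbers here are invented.

CANDIDATES = ["individual choice", "family consultation", "community norms"]

# UTILITY[agent][candidate]: how acceptable each candidate is to each agent.
UTILITY = {
    "agent_west": {"individual choice": 0.9, "family consultation": 0.6, "community norms": 0.3},
    "agent_east": {"individual choice": 0.3, "family consultation": 0.8, "community norms": 0.7},
}
DISAGREEMENT = {"agent_west": 0.2, "agent_east": 0.2}  # payoff if no consensus

def nash_consensus(candidates, utility, disagreement):
    """Return the candidate maximizing the Nash product of utility gains."""
    def nash_product(c):
        prod = 1.0
        for agent, u in utility.items():
            gain = u[c] - disagreement[agent]
            if gain <= 0:  # candidate is worse than no deal for someone
                return float("-inf")
            prod *= gain
        return prod
    return max(candidates, key=nash_product)

print(nash_consensus(CANDIDATES, UTILITY, DISAGREEMENT))  # -> "family consultation"
```

The Nash product penalizes lopsided outcomes: an option one agent loves but another barely tolerates scores lower than a moderately acceptable compromise, which is the fairness property such a framework presumably targets.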
- Multimodal Cultural Safety: Evaluation Frameworks and Alignment Strategies [58.88053690412802]
Large vision-language models (LVLMs) are increasingly deployed in globally distributed applications, such as tourism assistants. CROSS is a benchmark designed to assess the cultural safety reasoning capabilities of LVLMs. We evaluate 21 leading LVLMs, including mixture-of-experts models and reasoning models.
arXiv Detail & Related papers (2025-05-20T23:20:38Z)
- WorldView-Bench: A Benchmark for Evaluating Global Cultural Perspectives in Large Language Models [1.094065133109559]
Large Language Models (LLMs) are predominantly trained and aligned in ways that reinforce Western-centric epistemologies and socio-cultural norms. We introduce WorldView-Bench, a benchmark designed to evaluate Global Cultural Inclusivity (GCI) in LLMs by analyzing their ability to accommodate diverse worldviews.
arXiv Detail & Related papers (2025-05-14T17:43:40Z)
- A Survey on Post-training of Large Language Models [185.51013463503946]
Large Language Models (LLMs) have fundamentally transformed natural language processing, making them indispensable across domains ranging from conversational systems to scientific exploration. Remaining challenges, such as restricted reasoning capacities, ethical uncertainties, and suboptimal domain-specific performance, necessitate advanced post-training language models (PoLMs). This paper presents the first comprehensive survey of PoLMs, systematically tracing their evolution across five core paradigms: Fine-tuning, which enhances task-specific accuracy; Alignment, which ensures ethical coherence and alignment with human preferences; Reasoning, which advances multi-step inference despite challenges in reward design; Integration and Adaptation, which...
arXiv Detail & Related papers (2025-03-08T05:41:42Z)
- ValuesRAG: Enhancing Cultural Alignment Through Retrieval-Augmented Contextual Learning [1.1343849658875087]
ValuesRAG is a novel framework that integrates cultural and demographic knowledge dynamically during text generation. We evaluate ValuesRAG using 6 diverse regional datasets and show that it consistently outperforms baselines. Our findings underscore the potential of dynamic retrieval-based methods to bridge the gap between global LLM capabilities and localized cultural values; a minimal retrieval-augmented prompting sketch follows this entry.
arXiv Detail & Related papers (2025-01-02T03:26:13Z)
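ValuesRAG's core move, per the abstract, is injecting retrieved cultural and demographic knowledge at generation time. A minimal retrieval-augmented prompting sketch; the corpus, the exact-lookup retriever, and the generate() placeholder are illustrative assumptions, not the paper's components:

```python
# Hypothetical retrieval-augmented prompting in the spirit of ValuesRAG:
# fetch the cultural-context snippet most relevant to the user's region,
# then prepend it to the prompt. Retriever, corpus, and generate() are
# placeholders, not the paper's actual components.

CULTURAL_CORPUS = {
    "japan": "Survey data: strong emphasis on group harmony and indirect refusal.",
    "brazil": "Survey data: high value on warmth, flexibility with schedules.",
    "germany": "Survey data: directness and punctuality are strongly valued.",
}

def retrieve_context(region: str, corpus: dict[str, str]) -> str:
    """Toy retriever: exact region lookup; a real system would embed and rank."""
    return corpus.get(region.lower(), "")

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (hosted or local)."""
    return f"[model response conditioned on: {prompt[:60]}...]"

def values_rag_answer(question: str, region: str) -> str:
    context = retrieve_context(region, CULTURAL_CORPUS)
    prompt = (f"Cultural context for {region}:\n{context}\n\n"
              f"Answer in a way consistent with that context.\nQ: {question}\nA:")
    return generate(prompt)

print(values_rag_answer("How should I decline a dinner invitation?", "Japan"))
```

Because the cultural knowledge lives in the retrieval corpus rather than the model weights, it can be updated per region without retraining, which is presumably what lets such methods "dynamically" localize a single global model.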
- Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation [71.59208664920452]
Cultural biases in multilingual datasets pose significant challenges for their effectiveness as global benchmarks. We show that progress on MMLU depends heavily on learning Western-centric concepts, with 28% of all questions requiring culturally sensitive knowledge. We release Global MMLU, an improved MMLU with evaluation coverage across 42 languages.
arXiv Detail & Related papers (2024-12-04T13:27:09Z)
- CulturalTeaming: AI-Assisted Interactive Red-Teaming for Challenging LLMs' (Lack of) Multicultural Knowledge [69.82940934994333]
We introduce CulturalTeaming, an interactive red-teaming system that leverages human-AI collaboration to build challenging evaluation datasets.
Our study reveals that CulturalTeaming's various modes of AI assistance support annotators in creating cultural questions.
CULTURALBENCH-V0.1 is a compact yet high-quality evaluation dataset built from users' red-teaming attempts.
arXiv Detail & Related papers (2024-04-10T00:25:09Z)