Grounding Multilingual Multimodal LLMs With Cultural Knowledge
- URL: http://arxiv.org/abs/2508.07414v2
- Date: Tue, 12 Aug 2025 05:39:35 GMT
- Title: Grounding Multilingual Multimodal LLMs With Cultural Knowledge
- Authors: Jean de Dieu Nyandwi, Yueqi Song, Simran Khanuja, Graham Neubig
- Abstract summary: We propose a data-centric approach that grounds MLLMs in cultural knowledge. CulturalGround comprises 22 million high-quality, culturally-rich VQA pairs spanning 42 countries and 39 languages. We train an open-source MLLM, CulturalPangea, on CulturalGround, interleaving standard multilingual instruction-tuning data to preserve general abilities.
- Score: 48.95126394270723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multimodal Large Language Models (MLLMs) excel in high-resource settings, but often misinterpret long-tail cultural entities and underperform in low-resource languages. To address this gap, we propose a data-centric approach that directly grounds MLLMs in cultural knowledge. Leveraging a large-scale knowledge graph from Wikidata, we collect images that represent culturally significant entities and generate synthetic multilingual visual question answering data. The resulting dataset, CulturalGround, comprises 22 million high-quality, culturally-rich VQA pairs spanning 42 countries and 39 languages. We train an open-source MLLM, CulturalPangea, on CulturalGround, interleaving standard multilingual instruction-tuning data to preserve general abilities. CulturalPangea achieves state-of-the-art performance among open models on various culture-focused multilingual multimodal benchmarks, outperforming prior models by an average of 5.0 points without degrading results on mainstream vision-language tasks. Our findings show that our targeted, culturally grounded approach can substantially narrow the cultural gap in MLLMs and offers a practical path towards globally inclusive multimodal systems.
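The pipeline the abstract describes (Wikidata entity, image, multilingual question-answer pairs) can be sketched as a small template-based generator. This is a minimal illustration, not the paper's actual code: the field names, templates, and QID are placeholders invented for the example.

```python
def make_vqa_pairs(entity):
    """Generate simple template-based VQA pairs for one cultural entity.

    Mimics the entity -> image -> multilingual QA step sketched in the
    abstract; templates and labels here are illustrative placeholders.
    """
    templates = {
        "en": ("What is the name of this {etype} from {country}?",
               entity["label_en"]),
        "sw": ("Hii ni {etype} gani kutoka {country}?",
               entity["label_sw"]),
    }
    pairs = []
    for lang, (question, answer) in templates.items():
        pairs.append({
            "image": entity["image_url"],
            "lang": lang,
            "question": question.format(etype=entity["type"],
                                        country=entity["country"]),
            "answer": answer,
        })
    return pairs


# Hypothetical Wikidata-style record (QID and URL are placeholders).
entity = {
    "qid": "Q0000000",
    "type": "dish",
    "country": "Tanzania",
    "label_en": "Ugali",
    "label_sw": "Ugali",
    "image_url": "https://commons.wikimedia.org/example.jpg",
}
pairs = make_vqa_pairs(entity)
print(len(pairs))  # 2
```

In the real pipeline such templates would be replaced by LLM-generated questions and the records would come from a SPARQL query over Wikidata, but the per-entity fan-out into multiple languages follows the same shape.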
Related papers
- LLMs as Cultural Archives: Cultural Commonsense Knowledge Graph Extraction [57.23766971626989]
Large language models (LLMs) encode rich cultural knowledge learned from diverse web-scale data. We present an iterative, prompt-based framework for constructing a Cultural Commonsense Knowledge Graph (CCKG). We find that the cultural knowledge graphs are better realized in English, even when the target culture is non-English.
arXiv Detail & Related papers (2026-01-25T20:05:04Z) - From Facts to Folklore: Evaluating Large Language Models on Bengali Cultural Knowledge [7.322034156204158]
We show that large language models (LLMs) struggle with cultural knowledge, though performance improves when context is provided. Our work addresses these limitations through a Bengali Language Cultural Knowledge dataset including folk traditions, culinary arts, and regional dialects. Our investigation of several multilingual language models shows that while these models perform well in non-cultural categories, they struggle significantly with cultural knowledge, and performance improves substantially when context is provided.
arXiv Detail & Related papers (2025-10-22T21:42:59Z) - CultureScope: A Dimensional Lens for Probing Cultural Understanding in LLMs [57.653830744706305]
CultureScope is the most comprehensive evaluation framework to date for assessing cultural understanding in large language models. Inspired by the cultural iceberg theory, we design a novel dimensional schema for cultural knowledge classification. Experimental results demonstrate that our method can effectively evaluate cultural understanding.
arXiv Detail & Related papers (2025-09-19T17:47:48Z) - MELLA: Bridging Linguistic Capability and Cultural Groundedness for Low-Resource Language MLLMs [24.075526141969625]
Multimodal Large Language Models (MLLMs) have shown remarkable performance in high-resource languages. However, their effectiveness diminishes significantly in the contexts of low-resource languages. We propose a dual-source strategy that guides the collection of data tailored to each goal, sourcing native web alt-text for culture and MLLM-generated captions for linguistics. Experiment results show that after fine-tuning on MELLA, there is a general performance improvement for the eight languages on various MLLM backbones.
arXiv Detail & Related papers (2025-08-07T15:36:24Z) - MAKIEval: A Multilingual Automatic WiKidata-based Framework for Cultural Awareness Evaluation for LLMs [26.806566827956875]
MAKIEval is an automatic multilingual framework for evaluating cultural awareness in large language models. It automatically identifies cultural entities in model outputs and links them to structured knowledge. We assess 7 LLMs developed from different parts of the world, encompassing both open-source and proprietary systems.
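The entity-spotting step this summary describes can be sketched as a lexicon lookup over model output. This is a hedged illustration only: the lexicon, matching rule, and QIDs below are placeholders, whereas the actual framework links entities to Wikidata with more robust methods.

```python
# Toy lexicon of cultural entities; QIDs are placeholders, not real ones.
LEXICON = {
    "kimono": {"qid": "Q-placeholder-1", "culture": "Japanese"},
    "hanbok": {"qid": "Q-placeholder-2", "culture": "Korean"},
}


def link_entities(text):
    """Return (surface form, record) for each lexicon entity found.

    Case-insensitive substring matching stands in for the real
    framework's entity identification and linking.
    """
    lowered = text.lower()
    return [(name, rec) for name, rec in LEXICON.items() if name in lowered]


hits = link_entities("She wore a traditional Hanbok to the festival.")
print(hits[0][0])  # hanbok
```

A production system would use proper entity linking (disambiguation, multilingual surface forms) rather than substring matching, but the output shape, surface form plus structured record, is the same.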
arXiv Detail & Related papers (2025-05-27T19:29:40Z) - Multilingual != Multicultural: Evaluating Gaps Between Multilingual Capabilities and Cultural Alignment in LLMs [2.5212698425008377]
Large Language Models (LLMs) are becoming increasingly capable across global languages. However, the ability to communicate across languages does not necessarily translate to appropriate cultural representations. We compare two families of models: Google's Gemma models and OpenAI's turbo-series. We find no consistent relationships between language capabilities and cultural alignment.
arXiv Detail & Related papers (2025-02-23T11:02:41Z) - Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages [55.36534539177367]
This paper introduces Pangea, a multilingual multimodal large language model (MLLM) trained on a diverse 6M instruction dataset spanning 39 languages. Pangea significantly outperforms existing open-source models in multilingual settings and diverse cultural contexts. We fully open-source our data, code, and trained checkpoints to facilitate the development of inclusive and robust multilingual MLLMs.
arXiv Detail & Related papers (2024-10-21T16:19:41Z) - CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark [68.21939124278065]
CVQA is a culturally-diverse multilingual Visual Question Answering benchmark designed to cover a rich set of languages and cultures.
CVQA includes culturally-driven images and questions from across 30 countries on four continents, covering 31 languages with 13 scripts, providing a total of 10k questions.
We benchmark several Multimodal Large Language Models (MLLMs) on CVQA, and show that the dataset is challenging for the current state-of-the-art models.
arXiv Detail & Related papers (2024-06-10T01:59:00Z) - Massively Multi-Cultural Knowledge Acquisition & LM Benchmarking [48.21982147529661]
This paper introduces a novel approach for massively multicultural knowledge acquisition.
Our method strategically navigates from densely informative Wikipedia documents on cultural topics to an extensive network of linked pages.
Our work marks an important step towards a deeper understanding of cultural disparities in AI and towards bridging them.
arXiv Detail & Related papers (2024-02-14T18:16:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.