Culture Affordance Atlas: Reconciling Object Diversity Through Functional Mapping
- URL: http://arxiv.org/abs/2512.03173v1
- Date: Tue, 02 Dec 2025 19:16:39 GMT
- Title: Culture Affordance Atlas: Reconciling Object Diversity Through Functional Mapping
- Authors: Joan Nwatu, Longju Bai, Oana Ignat, Rada Mihalcea
- Abstract summary: Vision-Language (VL) datasets exhibit cultural biases, disproportionately favoring higher-income, Western contexts. We propose a novel function-centric framework that categorizes objects by the functions they fulfill across diverse cultural and economic contexts.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Culture shapes the objects people use and for what purposes, yet mainstream Vision-Language (VL) datasets frequently exhibit cultural biases, disproportionately favoring higher-income, Western contexts. This imbalance reduces model generalizability and perpetuates performance disparities, especially impacting lower-income and non-Western communities. To address these disparities, we propose a novel function-centric framework that categorizes objects by the functions they fulfill across diverse cultural and economic contexts. We implement this framework by creating the Culture Affordance Atlas, a re-annotated and culturally grounded restructuring of the Dollar Street dataset spanning 46 functions and 288 objects, publicly available at https://lit.eecs.umich.edu/CultureAffordance-Atlas/index.html. Through extensive empirical analyses using the CLIP model, we demonstrate that function-centric labels substantially reduce socioeconomic performance gaps between high- and low-income groups by a median of 6 pp (statistically significant), improving model effectiveness for lower-income contexts. Furthermore, our analyses reveal numerous culturally essential objects that are frequently overlooked in prominent VL datasets. Our contributions offer a scalable pathway toward building inclusive VL datasets and equitable AI systems.
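The core idea of the function-centric relabeling can be sketched in a few lines: object-level labels that serve the same purpose across cultures collapse into a single function label before evaluation. The mapping below is a hypothetical toy example, not the paper's actual atlas (which spans 46 functions and 288 objects).

```python
# Illustrative sketch of function-centric relabeling: objects that fulfill
# the same function across cultures map to one shared function label.
# This mapping is invented for illustration; it is not the published atlas.
OBJECT_TO_FUNCTION = {
    "gas stove": "cooking heat source",
    "wood fire": "cooking heat source",
    "refrigerator": "food preservation",
    "clay pot": "food preservation",
}

def to_function_labels(object_labels: list[str]) -> list[str]:
    """Map object-level labels to function-level labels; unknown objects pass through."""
    return [OBJECT_TO_FUNCTION.get(obj, obj) for obj in object_labels]

print(to_function_labels(["gas stove", "wood fire", "bicycle"]))
# ['cooking heat source', 'cooking heat source', 'bicycle']
```

With labels collapsed this way, a zero-shot classifier such as CLIP distinguishes a smaller set of function classes, so culturally distinct objects serving the same role no longer compete as separate categories.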
Related papers
- Evaluating and Improving Cultural Awareness of Reward Models for LLM Alignment [38.24188183584244]
Reward models (RMs) are crucial for aligning large language models with diverse cultures. Existing RM evaluations fall short in assessing cultural awareness due to the scarcity of culturally relevant datasets. We propose the Cultural Awareness Reward modeling Benchmark (CARB), covering 10 distinct cultures across 4 cultural domains.
arXiv Detail & Related papers (2025-09-26T02:56:06Z) - From Surveys to Narratives: Rethinking Cultural Value Adaptation in LLMs [62.9861554207279]
Adapting cultural values in Large Language Models (LLMs) presents significant challenges. Prior work primarily aligns LLMs with different cultural values using World Values Survey (WVS) data. We investigate WVS-based training for cultural value adaptation and find that relying solely on survey data can homogenize cultural norms and interfere with factual knowledge.
arXiv Detail & Related papers (2025-05-22T09:00:01Z) - RAVENEA: A Benchmark for Multimodal Retrieval-Augmented Visual Culture Understanding [79.44246283490665]
We introduce RAVENEA, a new benchmark designed to advance visual culture understanding through retrieval. RAVENEA focuses on two tasks: culture-focused visual question answering (cVQA) and culture-informed image captioning (cIC). We train and evaluate seven multimodal retrievers for each image query, and measure the downstream impact of retrieval-augmented inputs across fourteen state-of-the-art vision-language models.
arXiv Detail & Related papers (2025-05-20T14:57:16Z) - CAReDiO: Cultural Alignment of LLM via Representativeness and Distinctiveness Guided Data Optimization [50.90288681622152]
Large Language Models (LLMs) are becoming more deeply integrated into human life across various regions. Existing approaches develop culturally aligned LLMs through fine-tuning with culture-specific corpora. We introduce CAReDiO, a novel cultural data construction framework.
arXiv Detail & Related papers (2025-04-09T13:40:13Z) - Cultural Learning-Based Culture Adaptation of Language Models [70.1063219524999]
Adapting large language models (LLMs) to diverse cultural values is a challenging task. We present CLCA, a novel framework for enhancing LLM alignment with cultural values based on cultural learning.
arXiv Detail & Related papers (2025-04-03T18:16:26Z) - CultureVLM: Characterizing and Improving Cultural Understanding of Vision-Language Models for over 100 Countries [63.00147630084146]
Vision-language models (VLMs) have advanced human-AI interaction but struggle with cultural understanding. CultureVerse is a large-scale multimodal benchmark covering 19,682 cultural concepts, 188 countries/regions, 15 cultural categories, and 3 question types. We propose CultureVLM, a series of VLMs fine-tuned on our dataset to achieve significant performance improvement in cultural understanding.
arXiv Detail & Related papers (2025-01-02T14:42:37Z) - ValuesRAG: Enhancing Cultural Alignment Through Retrieval-Augmented Contextual Learning [1.1343849658875087]
ValuesRAG is a novel framework that integrates cultural and demographic knowledge dynamically during text generation. We evaluate ValuesRAG using 6 diverse regional datasets and show that it consistently outperforms baselines. Our findings underscore the potential of dynamic retrieval-based methods to bridge the gap between global LLM capabilities and localized cultural values.
arXiv Detail & Related papers (2025-01-02T03:26:13Z) - Uplifting Lower-Income Data: Strategies for Socioeconomic Perspective Shifts in Large Multi-modal Models [28.3552578648979]
We propose and evaluate several prompting strategies using non-English, geographic, and socioeconomic attributes.
We show that these geographic and socioeconomic integrated prompts favor retrieving topic appearances commonly found in data from low-income households across different countries.
arXiv Detail & Related papers (2024-07-02T19:27:00Z) - No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models [38.932610459192105]
We study cultural and socioeconomic diversity in contrastive vision-language models (VLMs).
Our work underscores the value of using diverse data to create more inclusive multimodal systems.
arXiv Detail & Related papers (2024-05-22T16:04:22Z) - D3CODE: Disentangling Disagreements in Data across Cultures on Offensiveness Detection and Evaluation [5.9053106775634685]
We introduce D3CODE, a large-scale cross-cultural dataset of parallel annotations for offensive language in over 4.5K sentences, annotated by a pool of over 4K annotators.
The dataset contains annotators' moral values captured along six moral foundations: care, equality, proportionality, authority, loyalty, and purity.
Our analyses reveal substantial regional variations in annotators' perceptions that are shaped by individual moral values.
arXiv Detail & Related papers (2024-04-16T19:12:03Z) - One-Shot Open Affordance Learning with Foundation Models [54.15857111929812]
We introduce One-shot Open Affordance Learning (OOAL), where a model is trained with just one example per base object category.
We propose a vision-language framework with simple and effective designs that boost the alignment between visual features and affordance text embeddings.
Experiments on two affordance segmentation benchmarks show that the proposed method outperforms state-of-the-art models with less than 1% of the full training data.
arXiv Detail & Related papers (2023-11-29T16:23:06Z) - Bridging the Digital Divide: Performance Variation across Socio-Economic
Factors in Vision-Language Models [31.868468221653025]
We evaluate the performance of a vision-language model (CLIP) on a geo-diverse dataset containing household images associated with different income values.
Our results indicate that performance for the poorer groups is consistently lower than for the wealthier groups across various topics and countries.
arXiv Detail & Related papers (2023-11-09T21:10:52Z)
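The income-stratified evaluation described in this entry, and the median percentage-point gap reported in the main abstract, reduce to comparing per-group accuracies. A minimal sketch, assuming per-topic accuracies for hypothetical "high" and "low" income groups have already been computed (the dictionary structure and key names are illustrative, not the paper's code):

```python
from statistics import median

def median_income_gap_pp(per_topic_acc: dict[str, dict[str, float]]) -> float:
    """Median high-minus-low accuracy gap in percentage points across topics.

    per_topic_acc maps topic -> {"high": acc, "low": acc}, accuracies in [0, 1].
    Key names and structure are hypothetical, for illustration only.
    """
    gaps = [100.0 * (acc["high"] - acc["low"]) for acc in per_topic_acc.values()]
    return median(gaps)

print(median_income_gap_pp({
    "cooking":  {"high": 0.82, "low": 0.70},
    "lighting": {"high": 0.90, "low": 0.86},
    "storage":  {"high": 0.75, "low": 0.69},
}))  # median of the per-topic gaps, here about 6 pp
```

A smaller median gap under function-centric labels than under object-centric labels is the kind of evidence the main abstract reports.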
This list is automatically generated from the titles and abstracts of the papers on this site.