Back to the Communities: A Mixed-Methods and Community-Driven Evaluation of Cultural Sensitivity in Text-to-Image Models
- URL: http://arxiv.org/abs/2510.27361v1
- Date: Fri, 31 Oct 2025 10:46:26 GMT
- Title: Back to the Communities: A Mixed-Methods and Community-Driven Evaluation of Cultural Sensitivity in Text-to-Image Models
- Authors: Sarah Kiden, Oriane Peter, Gisela Reyes-Cruz, Maira Klyshbekova, Sena Choi, Aislinn Gomez Bergin, Maria Waheed, Damian Eke, Tayyaba Azim, Sarvapali Ramchurn, Sebastian Stein, Elvira Perez Vallejos, Kate Devlin, Joel E Fischer
- Abstract summary: This paper draws on a state-of-the-art review and co-creation workshops involving 59 individuals from 19 different countries. We developed and validated a mixed-methods community-based evaluation methodology to assess cultural sensitivity in T2I models.
- Score: 6.504576283410799
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Evidence shows that text-to-image (T2I) models disproportionately reflect Western cultural norms, amplifying misrepresentation and harms to minority groups. However, evaluating cultural sensitivity is inherently complex due to its fluid and multifaceted nature. This paper draws on a state-of-the-art review and co-creation workshops involving 59 individuals from 19 different countries. We developed and validated a mixed-methods community-based evaluation methodology to assess cultural sensitivity in T2I models, which embraces first-person methods. Quantitative scores and qualitative inquiries expose convergence and disagreement within and across communities, illuminate the downstream consequences of misrepresentation, and trace how training data shaped by unequal power relations distort depictions. Extensive assessments are constrained by high resource requirements and the dynamic nature of culture, a tension we alleviate through a context-based and iterative methodology. The paper provides actionable recommendations for stakeholders, highlighting pathways to investigate the sources, mechanisms, and impacts of cultural (mis)representation in T2I models.
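The quantitative side of the methodology scores model outputs and compares agreement within and across communities. As a toy sketch only (the rating scale, disagreement threshold, community names, and data below are illustrative assumptions, not the paper's actual protocol), cross-community disagreement could be surfaced like this:

```python
# Hypothetical sketch: aggregating community ratings of T2I outputs to surface
# convergence and disagreement. All names, scales, and thresholds are illustrative.
from statistics import mean

# Each image is rated for cultural sensitivity (1-5) by members of two communities.
ratings = {
    "img_wedding": {"community_a": [5, 4, 5], "community_b": [2, 1, 2]},
    "img_market":  {"community_a": [4, 4, 5], "community_b": [4, 5, 4]},
}

def summarize(ratings, disagreement_threshold=1.5):
    """Return per-image community means and a cross-community disagreement flag."""
    summary = {}
    for image, by_community in ratings.items():
        means = {c: mean(rs) for c, rs in by_community.items()}
        spread = max(means.values()) - min(means.values())
        summary[image] = {
            "community_means": means,
            "disagreement": spread >= disagreement_threshold,
        }
    return summary

report = summarize(ratings)
# img_wedding: means ~4.67 vs ~1.67 -> flagged as cross-community disagreement
# img_market:  means ~4.33 vs ~4.33 -> convergence, not flagged
```

In the paper's methodology such flags would only be a starting point: the qualitative inquiries are what explain why a community rated an image as misrepresentative.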
Related papers
- CURE: Cultural Understanding and Reasoning Evaluation - A Framework for "Thick" Culture Alignment Evaluation in LLMs [24.598338950728234]
Large language models (LLMs) are increasingly deployed in culturally diverse environments. Existing methods focus on de-contextualized correctness or forced-choice judgments. We introduce a set of benchmarks that present models with realistic situational contexts.
arXiv Detail & Related papers (2025-11-15T03:39:13Z)
- Hire Your Anthropologist! Rethinking Culture Benchmarks Through an Anthropological Lens [9.000522371422628]
We introduce a four-part framework that categorizes how benchmarks frame culture. We qualitatively examine 20 cultural benchmarks and identify six recurring methodological issues. Our aim is to guide the development of cultural benchmarks that go beyond static recall tasks.
arXiv Detail & Related papers (2025-10-07T13:42:44Z)
- Culture is Everywhere: A Call for Intentionally Cultural Evaluation [36.20861746863831]
We argue for intentionally cultural evaluation: an approach that systematically examines the cultural assumptions embedded in all aspects of evaluation. We discuss implications and future directions for moving beyond current benchmarking practices.
arXiv Detail & Related papers (2025-09-01T09:39:21Z)
- CAIRe: Cultural Attribution of Images by Retrieval-Augmented Evaluation [61.130639734982395]
We introduce CAIRe, a novel evaluation metric that assesses the degree of cultural relevance of an image. Our framework grounds entities and concepts in the image to a knowledge base and uses factual information to give independent graded judgments for each culture label.
arXiv Detail & Related papers (2025-06-10T17:16:23Z)
- CulturalFrames: Assessing Cultural Expectation Alignment in Text-to-Image Models and Evaluation Metrics [23.567641319277943]
We quantify the alignment of text-to-image (T2I) models and evaluation metrics. CulturalFrames is a novel benchmark for rigorous human evaluation of cultural representation. We find that across models and countries, cultural expectations are missed an average of 44% of the time.
arXiv Detail & Related papers (2025-06-10T14:21:46Z)
- From Word to World: Evaluate and Mitigate Culture Bias in LLMs via Word Association Test [50.51344198689069]
We extend the human-centered word association test (WAT) to assess the alignment of large language models with cross-cultural cognition. To address cultural preferences, we propose CultureSteer, an innovative approach that embeds culture-specific semantic associations directly within the model's internal representation space.
arXiv Detail & Related papers (2025-05-24T07:05:10Z)
- Deconstructing Bias: A Multifaceted Framework for Diagnosing Cultural and Compositional Inequities in Text-to-Image Generative Models [3.6335172274433414]
This paper benchmarks the Component Inclusion Score (CIS), a metric designed to evaluate the fidelity of image generation across cultural contexts. We quantify biases in terms of compositional fragility and contextual misalignment, revealing significant performance gaps between Western and non-Western cultural prompts.
arXiv Detail & Related papers (2025-04-05T06:17:43Z)
- Cultural Learning-Based Culture Adaptation of Language Models [70.1063219524999]
Adapting large language models (LLMs) to diverse cultural values is a challenging task. We present CLCA, a novel framework for enhancing LLM alignment with cultural values based on cultural learning.
arXiv Detail & Related papers (2025-04-03T18:16:26Z)
- Extrinsic Evaluation of Cultural Competence in Large Language Models [53.626808086522985]
We focus on extrinsic evaluation of cultural competence in two text generation tasks.
We evaluate model outputs when an explicit cue of culture, specifically nationality, is perturbed in the prompts.
We find weak correlations between text similarity of outputs for different countries and the cultural values of these countries.
arXiv Detail & Related papers (2024-06-17T14:03:27Z)
- T-HITL Effectively Addresses Problematic Associations in Image Generation and Maintains Overall Visual Quality [52.5529784801908]
We focus on addressing the generation of problematic associations between demographic groups and semantic concepts.
We propose a new methodology with twice-human-in-the-loop (T-HITL) that promises improvements in both reducing problematic associations and maintaining visual quality.
arXiv Detail & Related papers (2024-02-27T00:29:33Z)
- On the Cultural Gap in Text-to-Image Generation [75.69755281031951]
One challenge in text-to-image (T2I) generation is the inadvertent reflection of culture gaps present in the training data.
There is no benchmark to systematically evaluate a T2I model's ability to generate cross-cultural images.
We propose a Challenging Cross-Cultural (C3) benchmark with comprehensive evaluation criteria, which can assess how well-suited a model is to a target culture.
arXiv Detail & Related papers (2023-07-06T13:17:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all the information) and is not responsible for any consequences.