Global urban visual perception varies across demographics and personalities
- URL: http://arxiv.org/abs/2505.12758v3
- Date: Thu, 17 Jul 2025 09:28:28 GMT
- Title: Global urban visual perception varies across demographics and personalities
- Authors: Matias Quintana, Youlong Gu, Xiucheng Liang, Yujun Hou, Koichi Ito, Yihan Zhu, Mahmoud Abdelrahman, Filip Biljecki,
- Abstract summary: We conducted a large-scale urban visual perception survey of streetscapes worldwide using street view imagery. We examined how demographics -- including gender, age, income, education, race and ethnicity, and, for the first time, personality traits -- shape perceptions. This dataset reveals demographic- and personality-based differences across six traditional indicators.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding people's preferences is crucial for urban planning, yet current approaches often combine responses from multi-cultural populations, obscuring demographic differences and risking amplifying biases. We conducted a large-scale urban visual perception survey of streetscapes worldwide using street view imagery, examining how demographics -- including gender, age, income, education, race and ethnicity, and, for the first time, personality traits -- shape perceptions among 1,000 participants with balanced demographics from five countries and 45 nationalities. This dataset, Street Perception Evaluation Considering Socioeconomics (SPECS), reveals demographic- and personality-based differences across six traditional indicators (safe, lively, wealthy, beautiful, boring, depressing) and four new ones (live nearby, walk, cycle, green). Location-based sentiments further shape these preferences. Machine learning models trained on existing global datasets tend to overestimate positive indicators and underestimate negative ones compared to human responses, underscoring the need for local context. Our study aspires to rectify the myopic treatment of street perception, which rarely considers demographics or personality traits.
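The abstract reports that models trained on existing global datasets overestimate positive indicators and underestimate negative ones relative to human responses. A minimal sketch of how such a per-indicator gap could be quantified is shown below; the indicator names follow the abstract, but the scores, scale, and data layout are purely illustrative assumptions, not the paper's actual data or method.

```python
# Sketch: measuring a model's per-indicator bias against human survey scores.
# All numbers are hypothetical; a positive gap means the model overestimates.
from statistics import mean

# Hypothetical per-image perception scores on a 0-10 scale.
human = {
    "safe":       [6.0, 5.5, 7.0],
    "depressing": [4.0, 5.0, 3.5],
}
model = {
    "safe":       [7.5, 6.5, 8.0],
    "depressing": [2.5, 3.0, 2.0],
}

def bias_per_indicator(model_scores, human_scores):
    """Mean (model - human) gap for each indicator."""
    return {
        ind: mean(m - h for m, h in zip(model_scores[ind], human_scores[ind]))
        for ind in human_scores
    }

gaps = bias_per_indicator(model, human)
for ind, gap in gaps.items():
    direction = "overestimates" if gap > 0 else "underestimates"
    print(f"{ind}: model {direction} by {gap:+.2f}")
```

With the toy numbers above, the model scores the positive indicator ("safe") higher than humans and the negative one ("depressing") lower, mirroring the pattern the abstract describes.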
Related papers
- Cultural Awareness in Vision-Language Models: A Cross-Country Exploration [5.921976812527759]
Vision-Language Models (VLMs) are increasingly deployed in diverse cultural contexts. We propose a novel framework to evaluate how VLMs encode cultural differences and biases related to race, gender, and physical traits across countries.
arXiv Detail & Related papers (2025-05-23T18:47:52Z) - Vision-Language Models under Cultural and Inclusive Considerations [53.614528867159706]
Large vision-language models (VLMs) can assist visually impaired people by describing images from their daily lives.
Current evaluation datasets may not reflect diverse cultural user backgrounds or the situational context of this use case.
We create a survey to determine caption preferences and propose a culture-centric evaluation benchmark by filtering VizWiz, an existing dataset with images taken by people who are blind.
We then evaluate several VLMs, investigating their reliability as visual assistants in a culturally diverse setting.
arXiv Detail & Related papers (2024-07-08T17:50:00Z) - Towards Geographic Inclusion in the Evaluation of Text-to-Image Models [25.780536950323683]
We study how much annotators in Africa, Europe, and Southeast Asia vary in their perception of geographic representation, visual appeal, and consistency in real and generated images.
For example, annotators in different locations often disagree on whether exaggerated, stereotypical depictions of a region are considered geographically representative.
We recommend steps for improved automatic and human evaluations.
arXiv Detail & Related papers (2024-05-07T16:23:06Z) - The PRISM Alignment Dataset: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models [67.38144169029617]
We map the sociodemographics and stated preferences of 1,500 diverse participants from 75 countries, to their contextual preferences and fine-grained feedback in 8,011 live conversations with 21 Large Language Models (LLMs). With PRISM, we contribute (i) wider geographic and demographic participation in feedback; (ii) census-representative samples for two countries (UK, US); and (iii) individualised ratings that link to detailed participant profiles, permitting personalisation and attribution of sample artefacts. We use PRISM in three case studies to demonstrate the need for careful consideration of which humans provide what alignment data.
arXiv Detail & Related papers (2024-04-24T17:51:36Z) - D3CODE: Disentangling Disagreements in Data across Cultures on Offensiveness Detection and Evaluation [5.9053106775634685]
We introduce D3CODE, a large-scale cross-cultural dataset of parallel annotations for offensive language in over 4.5K sentences, annotated by a pool of over 4K annotators.
The dataset contains annotators' moral values captured along six moral foundations: care, equality, proportionality, authority, loyalty, and purity.
Our analyses reveal substantial regional variations in annotators' perceptions that are shaped by individual moral values.
arXiv Detail & Related papers (2024-04-16T19:12:03Z) - Synthetic Data for the Mitigation of Demographic Biases in Face Recognition [10.16490522214987]
This study investigates the possibility of mitigating the demographic biases that affect face recognition technologies through the use of synthetic data.
We use synthetic datasets generated with GANDiffFace, a novel framework able to synthesize datasets for face recognition with controllable demographic distribution and realistic intra-class variations.
Our results support the proposed approach and the use of synthetic data to mitigate demographic biases in face recognition.
arXiv Detail & Related papers (2024-02-02T14:57:42Z) - On the steerability of large language models toward data-driven personas [98.9138902560793]
Large language models (LLMs) are known to generate biased responses where the opinions of certain groups and populations are underrepresented.
Here, we present a novel approach to achieve controllable generation of specific viewpoints using LLMs.
arXiv Detail & Related papers (2023-11-08T19:01:13Z) - Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems [103.416202777731]
We study "persona biases", which we define to be the sensitivity of dialogue models' harmful behaviors contingent upon the personas they adopt.
We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement.
arXiv Detail & Related papers (2023-10-08T21:03:18Z) - Towards Fair Face Verification: An In-depth Analysis of Demographic Biases [11.191375513738361]
Deep learning-based person identification and verification systems have remarkably improved in terms of accuracy in recent years.
However, such systems have been found to exhibit significant biases related to race, age, and gender.
This paper presents an in-depth analysis, with a particular emphasis on the intersectionality of these demographic factors.
arXiv Detail & Related papers (2023-07-19T14:49:14Z) - Spatiotemporal gender differences in urban vibrancy [0.0]
We show that there are differences between males and females in terms of urban vibrancy.
We also find that there are both positive and negative spatial spillovers existing across each city.
Our results increase our understanding of inequality in cities and how we can make future cities fairer.
arXiv Detail & Related papers (2023-04-25T14:12:58Z) - City-Wide Perceptions of Neighbourhood Quality using Street View Images [5.340189314359048]
This paper describes our methodology, based in London, including collection of images and ratings, web development, model training and mapping.
Perceived neighbourhood quality is a core component of urban vitality, influencing social cohesion, sense of community, safety, activity and mental health of residents.
arXiv Detail & Related papers (2022-11-22T10:16:35Z) - Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) can be potentially dangerous in manifesting undesirable representational biases.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z) - Investigating Bias in Deep Face Analysis: The KANFace Dataset and Empirical Study [67.3961439193994]
We introduce the most comprehensive, large-scale dataset of facial images and videos to date.
The data are manually annotated in terms of identity, exact age, gender and kinship.
A method to debias network embeddings is introduced and tested on the proposed benchmarks.
arXiv Detail & Related papers (2020-05-15T00:14:39Z) - From Paris to Berlin: Discovering Fashion Style Influences Around the World [108.58097776743331]
We propose to quantify fashion influences from everyday images of people wearing clothes.
We introduce an approach that detects which cities influence which other cities in terms of propagating their styles.
We then leverage the discovered influence patterns to inform a forecasting model that predicts the popularity of any given style at any given city into the future.
arXiv Detail & Related papers (2020-04-03T00:54:23Z) - Indexical Cities: Articulating Personal Models of Urban Preference with Geotagged Data [0.0]
This research characterizes personal preference in urban spaces and predicts a spectrum of unknown likeable places for a specific observer.
Unlike most urban perception studies, our intention is not by any means to provide an objective measure of urban quality, but rather to portray personal views of the city or Cities of Cities.
arXiv Detail & Related papers (2020-01-23T11:00:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.