Street Review: A Participatory AI-Based Framework for Assessing Streetscape Inclusivity
- URL: http://arxiv.org/abs/2508.11708v1
- Date: Thu, 14 Aug 2025 02:40:56 GMT
- Title: Street Review: A Participatory AI-Based Framework for Assessing Streetscape Inclusivity
- Authors: Rashid Mushkani, Shin Koseki
- Abstract summary: This study presents Street Review, a mixed-methods approach that combines participatory research with AI-based analysis to assess streetscape inclusivity. In Montréal, Canada, 28 residents participated in semi-directed interviews and image evaluations, supported by the analysis of 45,000 street-view images from Mapillary. Findings reveal variations in perceptions of inclusivity and accessibility across demographic groups, demonstrating that incorporating diverse user feedback can enhance machine learning models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Urban centers undergo social, demographic, and cultural changes that shape public street use and require systematic evaluation of public spaces. This study presents Street Review, a mixed-methods approach that combines participatory research with AI-based analysis to assess streetscape inclusivity. In Montréal, Canada, 28 residents participated in semi-directed interviews and image evaluations, supported by the analysis of approximately 45,000 street-view images from Mapillary. The approach produced visual analytics, such as heatmaps, to correlate subjective user ratings with physical attributes like sidewalks, maintenance, greenery, and seating. Findings reveal variations in perceptions of inclusivity and accessibility across demographic groups, demonstrating that incorporating diverse user feedback can enhance machine learning models through careful data-labeling and co-production strategies. The Street Review framework offers a systematic method for urban planners and policy analysts to inform planning, policy development, and management of public streets.
Related papers
- Exploring Sidewalk Sheds in New York City through Chatbot Surveys and Human Computer Interaction [47.311965900698084]
We develop an AI-based survey that collects image-based annotations and route choices from pedestrians. This paper conducts a grid-based analysis of entrance annotations and applies logistic mixed-effects modeling to assess sidewalk choice patterns. By integrating generative AI into urban research, this study demonstrates a novel method for evaluating sidewalk shed designs.
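The logistic choice model mentioned above can be sketched in a heavily simplified form — fixed effects only (no per-participant random effects), one predictor, and toy route-choice data invented for illustration:

```python
import math

# Toy binary route-choice data: did a pedestrian walk under the shed (1)
# or avoid it (0)? Single predictor: a shed "visual quality" score.
# A fixed-effects-only simplification of a mixed-effects model.
X = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
y = [0, 0, 0, 1, 1, 1]

# Fit logistic regression by plain stochastic gradient descent on log-loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    for xi, yi in zip(X, y):
        p = 1 / (1 + math.exp(-(w * xi + b)))
        w -= lr * (p - yi) * xi
        b -= lr * (p - yi)

def prob(x):
    """Predicted probability of choosing the shed at quality score x."""
    return 1 / (1 + math.exp(-(w * x + b)))

print(f"P(choose shed | quality=0.8) = {prob(0.8):.2f}")
```

A mixed-effects version would add a random intercept per pedestrian to absorb individual-level variation; statistical packages handle that fitting in practice.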
arXiv Detail & Related papers (2026-01-30T15:41:44Z)
- Evaluating Cognitive-Behavioral Fixation via Multimodal User Viewing Patterns on Social Media [52.313084466769375]
We propose a novel framework for assessing cognitive-behavioral fixation by analyzing users' multimodal social media engagement patterns. Experiments on existing benchmarks and a newly curated multimodal dataset demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2025-09-05T05:50:00Z)
- A Multidimensional AI-powered Framework for Analyzing Tourist Perception in Historic Urban Quarters: A Case Study in Shanghai [5.077286019454655]
This study proposes a multidimensional AI-powered framework for analyzing tourist perception in historic urban quarters. Applied to twelve historic quarters in central Shanghai, the framework integrates focal point extraction, color theme analysis, and sentiment mining.
arXiv Detail & Related papers (2025-09-04T02:35:14Z)
- Street View Sociability: Interpretable Analysis of Urban Social Behavior Across 15 Cities [1.256245863497516]
We analyzed 2,998 street view images from 15 cities using a multimodal large language model. Results align with long-standing urban planning theory. Further research could establish street view imagery as a scalable, privacy-preserving tool for studying urban sociability.
arXiv Detail & Related papers (2025-08-08T14:15:58Z)
- StreetLens: Enabling Human-Centered AI Agents for Neighborhood Assessment from Street View Imagery [5.987690246378683]
We present StreetLens, a researcher-configurable AI system for neighborhood studies. StreetLens embeds relevant social science expertise in a vision-language model for scalable neighborhood environmental assessments. It generates a wide spectrum of semantic annotations from objective features to subjective perceptions.
arXiv Detail & Related papers (2025-06-17T16:06:03Z)
- Identifying Aspects in Peer Reviews [61.374437855024844]
We develop a data-driven schema for deriving aspects from a corpus of peer reviews. We introduce a dataset of peer reviews augmented with aspects and show how it can be used for community-level review analysis.
arXiv Detail & Related papers (2025-04-09T14:14:42Z)
- Negotiative Alignment: Embracing Disagreement to Achieve Fairer Outcomes -- Insights from Urban Studies [3.510270856154939]
We present findings from a community-centered study in Montreal involving 35 residents with diverse demographic and social identities. We propose negotiative alignment, an AI framework that treats disagreement as an essential input to be preserved, analyzed, and addressed.
arXiv Detail & Related papers (2025-03-16T18:55:54Z)
- InclusiViz: Visual Analytics of Human Mobility Data for Understanding and Mitigating Urban Segregation [41.758626973743525]
InclusiViz is a novel visual analytics system for multi-level analysis of urban segregation. We developed a deep learning model to predict mobility patterns across social groups using environmental features, augmented with explainable AI. The system integrates innovative visualizations that allow users to explore segregation patterns from broad overviews to fine-grained detail.
arXiv Detail & Related papers (2025-01-07T07:50:36Z)
- Vision-Language Models under Cultural and Inclusive Considerations [53.614528867159706]
Large vision-language models (VLMs) can assist visually impaired people by describing images from their daily lives.
Current evaluation datasets may not reflect diverse cultural user backgrounds or the situational context of this use case.
We create a survey to determine caption preferences and propose a culture-centric evaluation benchmark by filtering VizWiz, an existing dataset with images taken by people who are blind.
We then evaluate several VLMs, investigating their reliability as visual assistants in a culturally diverse setting.
arXiv Detail & Related papers (2024-07-08T17:50:00Z)
- City-Wide Perceptions of Neighbourhood Quality using Street View Images [5.340189314359048]
This paper describes our methodology, based in London, including collection of images and ratings, web development, model training and mapping.
Perceived neighbourhood quality is a core component of urban vitality, influencing social cohesion, sense of community, safety, activity and mental health of residents.
arXiv Detail & Related papers (2022-11-22T10:16:35Z)
- Evaluation of Self-taught Learning-based Representations for Facial Emotion Recognition [62.30451764345482]
This work describes different strategies to generate unsupervised representations obtained through the concept of self-taught learning for facial emotion recognition.
The idea is to create complementary representations promoting diversity by varying the autoencoders' initialization, architecture, and training data.
Experimental results on Jaffe and Cohn-Kanade datasets using a leave-one-subject-out protocol show that FER methods based on the proposed diverse representations compare favorably against state-of-the-art approaches.
arXiv Detail & Related papers (2022-04-26T22:48:15Z)
- BEV-Net: Assessing Social Distancing Compliance by Joint People Localization and Geometric Reasoning [77.08836528980248]
Social distancing, an essential public health measure, has gained significant attention since the outbreak of the COVID-19 pandemic.
In this work, the problem of visual social distancing compliance assessment in busy public areas with wide field-of-view cameras is considered.
A dataset of crowd scenes with people annotations under a bird's eye view (BEV) and ground truth for metric distances is introduced.
A multi-branch network, BEV-Net, is proposed to localize individuals in world coordinates and identify high-risk regions where social distancing is violated.
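Once individuals are localized in world coordinates, the compliance check itself reduces to pairwise distance thresholding. A hand-rolled sketch of that final step — not BEV-Net itself, with invented positions:

```python
import math
from itertools import combinations

# Toy bird's-eye-view check: given people localized in world coordinates
# (metres), flag pairs closer than a 2 m social-distancing threshold.
people = {"A": (0.0, 0.0), "B": (1.2, 0.5), "C": (5.0, 5.0)}

THRESHOLD_M = 2.0
violations = [
    (p, q)
    for (p, (x1, y1)), (q, (x2, y2)) in combinations(people.items(), 2)
    if math.hypot(x2 - x1, y2 - y1) < THRESHOLD_M
]
print(violations)  # → [('A', 'B')]
```

The hard part the network solves is producing those world coordinates from a wide field-of-view camera image; the geometric reasoning above is the easy downstream step.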
arXiv Detail & Related papers (2021-10-10T23:56:37Z)
- Predicting Livelihood Indicators from Community-Generated Street-Level Imagery [70.5081240396352]
We propose an inexpensive, scalable, and interpretable approach to predict key livelihood indicators from public crowd-sourced street-level imagery.
By comparing our results against ground data collected in nationally-representative household surveys, we demonstrate the performance of our approach in accurately predicting indicators of poverty, population, and health.
arXiv Detail & Related papers (2020-06-15T18:12:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.