Accessibility Scout: Personalized Accessibility Scans of Built Environments
- URL: http://arxiv.org/abs/2507.23190v1
- Date: Thu, 31 Jul 2025 02:07:31 GMT
- Title: Accessibility Scout: Personalized Accessibility Scans of Built Environments
- Authors: William Huang, Xia Su, Jon E. Froehlich, Yang Zhang
- Abstract summary: Assessment of accessibility of unfamiliar built environments is critical for people with disabilities. Recent advances in Large Language Models (LLMs) enable novel approaches to this problem. We present Accessibility Scout, an LLM-based accessibility scanning system that identifies accessibility concerns from photos of built environments.
- Score: 10.083187958861812
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Assessing the accessibility of unfamiliar built environments is critical for people with disabilities. However, manual assessments, performed by users or their personal health professionals, are laborious and unscalable, while automatic machine learning methods often neglect an individual user's unique needs. Recent advances in Large Language Models (LLMs) enable novel approaches to this problem, balancing personalization with scalability to enable more adaptive and context-aware assessments of accessibility. We present Accessibility Scout, an LLM-based accessibility scanning system that identifies accessibility concerns from photos of built environments. With use, Accessibility Scout becomes an increasingly capable "accessibility scout", tailoring accessibility scans to an individual's mobility level, preferences, and specific environmental interests through collaborative Human-AI assessments. We present findings from three studies: a formative study with six participants to inform the design of Accessibility Scout, a technical evaluation of 500 images of built environments, and a user study with 10 participants of varying mobility. Results from our technical evaluation and user study show that Accessibility Scout can generate personalized accessibility scans that extend beyond traditional ADA considerations. Finally, we conclude with a discussion on the implications of our work and future steps for building more scalable and personalized accessibility assessments of the physical world.
Related papers
- Stochastic Encodings for Active Feature Acquisition [100.47043816019888]
Active Feature Acquisition is an instance-wise, sequential decision making problem. The aim is to dynamically select which feature to measure based on current observations, independently for each test instance. Common approaches either use Reinforcement Learning, which experiences training difficulties, or greedily maximize the conditional mutual information of the label and unobserved features, which is myopic. We introduce a latent variable model, trained in a supervised manner. Acquisitions are made by reasoning about the features across many possible unobserved realizations in a latent space.
arXiv Detail & Related papers (2025-08-03T23:48:46Z) - AI-based Wearable Vision Assistance System for the Visually Impaired: Integrating Real-Time Object Recognition and Contextual Understanding Using Large Vision-Language Models [0.0]
This paper introduces a novel wearable vision assistance system with artificial intelligence (AI) technology to deliver real-time feedback to a user through a sound beep mechanism. The system provides detailed descriptions of objects in the user's environment using a large vision language model (LVLM).
arXiv Detail & Related papers (2024-12-28T07:26:39Z) - A Survey of Accessible Explainable Artificial Intelligence Research [0.0]
This paper presents a systematic literature review of the research on the accessibility of Explainable Artificial Intelligence (XAI).
Our methodology includes searching several academic databases with search terms to capture intersections between XAI and accessibility.
We stress the importance of including the disability community in XAI development to promote digital inclusion and accessibility.
arXiv Detail & Related papers (2024-07-02T21:09:46Z) - Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z) - Predicting the Intention to Interact with a Service Robot:the Role of Gaze Cues [51.58558750517068]
Service robots need to perceive as early as possible that an approaching person intends to interact.
We solve this perception task with a sequence-to-sequence classifier of a potential user intention to interact.
Our main contribution is a study of the benefit of features representing the person's gaze in this context.
arXiv Detail & Related papers (2024-04-02T14:22:54Z) - How Human-Centered Explainable AI Interface Are Designed and Evaluated: A Systematic Survey [48.97104365617498]
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z) - AccessLens: Auto-detecting Inaccessibility of Everyday Objects [17.269659576368536]
We introduce AccessLens, an end-to-end system designed to identify inaccessible interfaces in daily objects.
Our approach involves training a detector using the novel AccessDB dataset designed to automatically recognize 21 distinct Inaccessibility Classes.
AccessMeta serves as a robust way to build a comprehensive dictionary linking these accessibility classes to open-source 3D augmentation designs.
arXiv Detail & Related papers (2024-01-29T09:27:55Z) - Driving Towards Inclusion: A Systematic Review of AI-powered Accessibility Enhancements for People with Disability in Autonomous Vehicles [4.080497848091375]
We review inclusive human-computer interaction (HCI) within autonomous vehicles (AVs) and human-driven cars with partial autonomy. Key technologies discussed include brain-computer interfaces, anthropomorphic interaction, virtual reality, augmented reality, mode adaptation, voice-activated interfaces, haptic feedback, etc. Building on these findings, we propose an end-to-end design framework that addresses accessibility requirements across diverse user demographics.
arXiv Detail & Related papers (2024-01-26T00:06:08Z) - Can Foundation Models Watch, Talk and Guide You Step by Step to Make a Cake? [62.59699229202307]
Despite advances in AI, it remains a significant challenge to develop interactive task guidance systems.
We created a new multimodal benchmark dataset, Watch, Talk and Guide (WTaG) based on natural interaction between a human user and a human instructor.
We leveraged several foundation models to study to what extent these models can be quickly adapted to perceptually enabled task guidance.
arXiv Detail & Related papers (2023-11-01T15:13:49Z) - Revisiting the Reliability of Psychological Scales on Large Language Models [62.57981196992073]
This study aims to determine the reliability of applying personality assessments to Large Language Models.
Analysis of 2,500 settings per model, including GPT-3.5, GPT-4, Gemini-Pro, and LLaMA-3.1, reveals that various LLMs show consistency in responses to the Big Five Inventory.
arXiv Detail & Related papers (2023-05-31T15:03:28Z) - Integrating Accessibility in a Mobile App Development Course [0.0]
The course introduced three accessibility-related topics using various interventions: Accessibility Awareness (a guest lecture by a legal expert), Technical Knowledge (lectures on Android accessibility guidelines and testing practices), and Empathy (an activity that required students to blindfold themselves and interact with their phones using a screen-reader).
All students could correctly identify at least one accessibility issue in the user interface of a real-world app given its screenshot, and 90% of them could provide a correct solution to fix it.
arXiv Detail & Related papers (2022-10-12T12:44:33Z) - ASHA: Assistive Teleoperation via Human-in-the-Loop Reinforcement Learning [91.58711082348293]
Reinforcement learning from online user feedback on the system's performance presents a natural solution to this problem.
This approach tends to require a large amount of human-in-the-loop training data, especially when feedback is sparse.
We propose a hierarchical solution that learns efficiently from sparse user feedback.
arXiv Detail & Related papers (2022-02-05T02:01:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.