OmniAcc: Personalized Accessibility Assistant Using Generative AI
- URL: http://arxiv.org/abs/2509.07220v1
- Date: Mon, 08 Sep 2025 21:03:48 GMT
- Authors: Siddhant Karki, Ethan Han, Nadim Mahmud, Suman Bhunia, John Femiani, Vaskar Raychoudhury
- Abstract summary: This paper presents OmniAcc, an AI-powered interactive navigation system. It identifies wheelchair-accessible features such as ramps and crosswalks in the built environment. With a crosswalk detection accuracy of 97.5%, OmniAcc highlights the transformative potential of AI in improving navigation and fostering more inclusive urban spaces.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Individuals with ambulatory disabilities often encounter significant barriers when navigating urban environments due to the lack of accessible information and tools. This paper presents OmniAcc, an AI-powered interactive navigation system that utilizes GPT-4, satellite imagery, and OpenStreetMap data to identify, classify, and map wheelchair-accessible features such as ramps and crosswalks in the built environment. OmniAcc offers personalized route planning, real-time hands-free navigation, and instant query responses regarding physical accessibility. By using zero-shot learning and customized prompts, the system ensures precise detection of accessibility features, while supporting validation through structured workflows. This paper introduces OmniAcc and explores its potential to assist urban planners and mobility-aid users, demonstrated through a case study on crosswalk detection. With a crosswalk detection accuracy of 97.5%, OmniAcc highlights the transformative potential of AI in improving navigation and fostering more inclusive urban spaces.
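The abstract describes combining GPT-4, satellite imagery, and OpenStreetMap data, but the paper's implementation is not reproduced here. As a minimal sketch of the OpenStreetMap side only, the snippet below shows one way accessibility-related features (marked crossings, lowered kerbs, wheelchair-tagged nodes) could be fetched near a location through the public Overpass API; the function name, search radius, and tag selection are illustrative assumptions, not the authors' code.

```python
def build_overpass_query(lat: float, lon: float, radius_m: int = 200) -> str:
    """Build an Overpass QL query for accessibility-related OSM features
    within radius_m metres of (lat, lon)."""
    around = f"(around:{radius_m},{lat},{lon})"
    return f"""
[out:json][timeout:25];
(
  node["highway"="crossing"]{around};
  node["kerb"="lowered"]{around};
  node["wheelchair"="yes"]{around};
);
out body;
""".strip()

# Example: accessibility features near a point; the resulting query string
# could then be POSTed to an Overpass endpoint such as
# https://overpass-api.de/api/interpreter.
query = build_overpass_query(39.5070, -84.7452)
```

A system like OmniAcc would presumably merge such OSM results with features detected in satellite imagery before route planning.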
Related papers
- PEDESTRIAN: An Egocentric Vision Dataset for Obstacle Detection on Pavements [3.2069702190300617]
The PEDESTRIAN dataset comprises egocentric data for 29 different obstacles commonly found on urban sidewalks. A total of 340 videos were collected using mobile phone cameras, capturing a pedestrian's point of view. We present the results of a series of experiments that involved training several state-of-the-art deep learning algorithms.
arXiv Detail & Related papers (2025-12-22T09:28:23Z) - MR.NAVI: Mixed-Reality Navigation Assistant for the Visually Impaired [42.45301319345154]
We present MR.NAVI, a mixed reality system that enhances spatial awareness for visually impaired users. Our system combines computer vision algorithms for object detection and depth estimation with natural language processing to provide contextual scene descriptions.
arXiv Detail & Related papers (2025-05-28T14:02:56Z) - Mobile Robot Navigation Using Hand-Drawn Maps: A Vision Language Model Approach [5.009635912655658]
Hand-drawn maps can often contain inaccuracies such as scale distortions and missing landmarks. This paper introduces a novel Hand-drawn Map Navigation (HAM-Nav) architecture that leverages pre-trained vision language models. HAM-Nav integrates a unique Selective Visual Association Prompting approach for topological map-based position estimation and navigation planning.
arXiv Detail & Related papers (2025-01-31T19:03:33Z) - IN-Sight: Interactive Navigation through Sight [20.184155117341497]
IN-Sight is a novel approach to self-supervised path planning.
It calculates traversability scores and incorporates them into a semantic map.
To precisely navigate around obstacles, IN-Sight employs a local planner.
arXiv Detail & Related papers (2024-08-01T07:27:54Z) - Learning Robust Autonomous Navigation and Locomotion for Wheeled-Legged Robots [50.02055068660255]
Navigating urban environments poses unique challenges for robots, necessitating innovative solutions for locomotion and navigation.
This work introduces a fully integrated system comprising adaptive locomotion control, mobility-aware local navigation planning, and large-scale path planning within the city.
Using model-free reinforcement learning (RL) techniques and privileged learning, we develop a versatile locomotion controller.
Our controllers are integrated into a large-scale urban navigation system and validated by autonomous, kilometer-scale navigation missions conducted in Zurich, Switzerland, and Seville, Spain.
arXiv Detail & Related papers (2024-05-03T00:29:20Z) - Floor extraction and door detection for visually impaired guidance [78.94595951597344]
Finding obstacle-free paths in unknown environments is a major navigation challenge for visually impaired people and autonomous robots.
New devices based on computer vision systems can help visually impaired people navigate unknown environments safely.
This work proposes a combination of sensors and algorithms that can form the basis of a navigation system for visually impaired people.
arXiv Detail & Related papers (2024-01-30T14:38:43Z) - MSight: An Edge-Cloud Infrastructure-based Perception System for Connected Automated Vehicles [58.461077944514564]
This paper presents MSight, a cutting-edge roadside perception system specifically designed for automated vehicles.
MSight offers real-time vehicle detection, localization, tracking, and short-term trajectory prediction.
Evaluations underscore the system's capability to uphold lane-level accuracy with minimal latency.
arXiv Detail & Related papers (2023-10-08T21:32:30Z) - ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments [56.194988818341976]
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments.
We propose ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability of obstacle-avoiding control in continuous environments.
ETPNav yields more than 10% and 20% improvements over the prior state of the art on the R2R-CE and RxR-CE datasets, respectively.
arXiv Detail & Related papers (2023-04-06T13:07:17Z) - OASIS: Automated Assessment of Urban Pedestrian Paths at Scale [16.675093530600154]
We develop a free and open-source automated mapping system to extract sidewalk network data using mobile physical devices.
We describe a prototype system trained and tested with imagery collected in real-world settings, alongside human surveyors who are part of the local transit pathway review team.
arXiv Detail & Related papers (2023-03-04T01:32:59Z) - Augmented reality navigation system for visual prosthesis [67.09251544230744]
We propose an augmented reality navigation system for visual prosthesis that incorporates reactive navigation and path planning software.
It consists of four steps: locating the subject on a map, planning the subject's trajectory, showing it to the subject, and re-planning to avoid obstacles.
Results show that our augmented reality navigation system improves navigation performance by reducing the time and distance needed to reach goals, and significantly reduces the number of obstacle collisions.
arXiv Detail & Related papers (2021-09-30T09:41:40Z) - AI in Smart Cities: Challenges and approaches to enable road vehicle automation and smart traffic control [56.73750387509709]
SCC envisions a data-centered society that aims to improve efficiency by automating and optimizing activities and utilities.
This paper describes AI perspectives in SCC and gives an overview of AI-based technologies used in traffic to enable road vehicle automation and smart traffic control.
arXiv Detail & Related papers (2021-04-07T14:31:08Z)