"We are at the mercy of others' opinion": Supporting Blind People in Recreational Window Shopping with AI-infused Technology
- URL: http://arxiv.org/abs/2405.06611v1
- Date: Fri, 10 May 2024 17:15:24 GMT
- Title: "We are at the mercy of others' opinion": Supporting Blind People in Recreational Window Shopping with AI-infused Technology
- Authors: Rie Kamikubo, Hernisa Kacorri, Chieko Asakawa
- Abstract summary: We investigate the information needs, challenges, and current approaches blind people have to recreational window shopping.
We find that there is a desire for push notifications of promotional information and pull notifications about shops of interest.
We translate these findings into specific information modalities and rendering in the context of two existing AI-infused assistive applications.
- Score: 14.159425426253577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Engaging in recreational activities in public spaces poses challenges for blind people, often involving dependency on sighted help. Window shopping is a key recreational activity that remains inaccessible. In this paper, we investigate the information needs, challenges, and current approaches blind people have to recreational window shopping to inform the design of existing wayfinding and navigation technology for supporting blind shoppers in exploration and serendipitous discovery. We conduct a formative study with a total of 18 blind participants, comprising both focus groups (N=8) and requirements-analysis interviews (N=10). We find a desire for push notifications of promotional information and pull notifications about shops of interest, such as the targeted audience of a brand. Information about obstacles and points of interest required customization depending on one's mobility aid as well as the presence of a crowd, children, and wheelchair users. We translate these findings into specific information modalities and renderings in the context of two existing AI-infused assistive applications: NavCog (a turn-by-turn navigation app) and Cabot (a navigation robot).
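The push/pull distinction and the per-user customization above suggest a simple rendering policy. The following Python sketch is illustrative only: the class and function names are hypothetical, the paper does not publish code, and neither NavCog nor Cabot exposes such an API.

```python
# Hypothetical sketch of the push/pull split and per-user customization
# described above. No names here come from NavCog or Cabot; they only
# illustrate the information-modality design derived from the formative study.
from dataclasses import dataclass, field

@dataclass
class Shop:
    name: str
    category: str
    target_audience: str
    promotions: list[str] = field(default_factory=list)

@dataclass
class ShopperProfile:
    interests: set[str]   # shop categories the shopper asked to follow
    mobility_aid: str     # e.g. "white cane" or "guide dog"
    avoid_crowds: bool    # whether crowd warnings were requested

def render_shop(shop: Shop, profile: ShopperProfile, crowded: bool) -> list[str]:
    # Push: promotional information is announced without being asked for.
    messages = [f"{shop.name}: {promo}" for promo in shop.promotions]
    # Pull: details such as a brand's targeted audience are available on
    # request for categories the shopper follows.
    if shop.category in profile.interests:
        messages.append(f"{shop.name} ({shop.category}) targets {shop.target_audience}")
    # Customization: obstacle/crowd detail depends on the user's settings.
    if crowded and profile.avoid_crowds:
        messages.append(f"Heads up: a crowd near {shop.name}"
                        f" (navigating with a {profile.mobility_aid})")
    return messages
```

In this sketch, promotions are always pushed, while detail such as a brand's targeted audience is pulled only for followed categories; the mobility-aid and crowd fields mark where the customization the study calls for would hook in.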
Related papers
- Floor extraction and door detection for visually impaired guidance [78.94595951597344]
Finding obstacle-free paths in unknown environments is a major navigation challenge for visually impaired people and autonomous robots.
New devices based on computer vision systems can help visually impaired people overcome the difficulties of navigating unknown environments safely.
This work proposes a combination of sensors and algorithms that can serve as the basis of a navigation system for visually impaired people.
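The abstract does not name the specific algorithms, so the sketch below only illustrates one common ingredient of such systems: extracting the floor plane from a depth-sensor point cloud with a small hand-rolled RANSAC. Everything here is an assumption, not the authors' method.

```python
# A hedged illustration: fit the dominant plane in a depth point cloud with
# RANSAC and treat its inliers as the floor. Thresholds are illustrative.
import numpy as np

def fit_floor_plane(points: np.ndarray, iters: int = 200, tol: float = 0.02):
    """points: (N, 3) array from a depth sensor. Returns (normal, d, inlier
    mask) for the plane n.x + d = 0 with the most inliers, a floor proxy."""
    rng = np.random.default_rng(0)
    best_mask, best_plane = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample; try again
        normal /= norm
        d = -normal @ sample[0]
        mask = np.abs(points @ normal + d) < tol  # points within tol meters
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (normal, d)
    return best_plane[0], best_plane[1], best_mask
```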
arXiv Detail & Related papers (2024-01-30T14:38:43Z)
- Robot Patrol: Using Crowdsourcing and Robotic Systems to Provide Indoor Navigation Guidance to The Visually Impaired [5.973995274784383]
We develop an integrated system that employs a combination of crowdsourcing, computer vision, and robotic frameworks to provide contextual information to the visually impaired.
The system is designed to provide information to the visually impaired about 1) potential obstacles on the route to their indoor destination, 2) information about indoor events on their route which they may wish to avoid or attend, and 3) any other contextual information that might support them to navigate to their indoor destinations safely and effectively.
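As a hedged illustration of the three categories of route context listed above, the sketch below models them as a small schema; the field names are hypothetical, since the paper does not publish a message format.

```python
# A minimal sketch of the three kinds of route context the abstract lists:
# obstacles, indoor events, and other contextual information.
from dataclasses import dataclass
from enum import Enum

class ContextKind(Enum):
    OBSTACLE = "obstacle"  # e.g. a wet-floor sign on the route
    EVENT = "event"        # e.g. a pop-up event the user may attend or avoid
    OTHER = "other"        # anything else supporting safe, effective travel

@dataclass
class RouteContext:
    kind: ContextKind
    distance_m: float      # distance along the route, in meters
    description: str

def announce(contexts: list[RouteContext]) -> list[str]:
    """Order context by distance so it can be spoken as the user advances."""
    return [f"In {c.distance_m:.0f} m ({c.kind.value}): {c.description}"
            for c in sorted(contexts, key=lambda c: c.distance_m)]
```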
arXiv Detail & Related papers (2023-06-05T12:49:52Z)
- Emergence of Maps in the Memories of Blind Navigation Agents [68.41901534985575]
Animal navigation research posits that organisms build and maintain internal spatial representations, or maps, of their environment.
We ask if machines -- specifically, artificial intelligence (AI) navigation agents -- also build implicit (or 'mental') maps.
Unlike animal navigation, we can judiciously design the agent's perceptual system and control the learning paradigm to nullify alternative navigation mechanisms.
arXiv Detail & Related papers (2023-01-30T20:09:39Z)
- Exploring and Improving the Accessibility of Data Privacy-related Information for People Who Are Blind or Low-vision [22.66113008033347]
We present a study of privacy attitudes and behaviors of people who are blind or low vision.
Our study involved in-depth interviews with 21 US participants.
One objective of the study is to better understand this user group's needs for more accessible privacy tools.
arXiv Detail & Related papers (2022-08-21T20:54:40Z)
- Augmented reality navigation system for visual prosthesis [67.09251544230744]
We propose an augmented reality navigation system for visual prosthesis that incorporates software for reactive navigation and path planning.
It consists of four steps: locating the subject on a map, planning the subject's trajectory, showing it to the subject, and re-planning when obstacles appear.
Results show that the augmented navigation system improves navigation performance by reducing the time and distance needed to reach goals, and it significantly reduces the number of obstacle collisions.
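The four steps read naturally as a loop. The following sketch is a paraphrase of that loop, not the authors' code; every helper passed in is a placeholder callable.

```python
# A hedged paraphrase of the four steps the abstract names
# (locate, plan, display, re-plan). All helpers are caller-supplied stubs.
def navigation_loop(map_, goal, localize, plan, display, detect_obstacles):
    position = localize(map_)                    # 1. locate the subject on the map
    path = plan(map_, position, goal, avoid=())  # 2. plan the trajectory
    while position != goal:
        display(path, position)                  # 3. show the path to the subject
        obstacles = detect_obstacles(position)
        if obstacles:                            # 4. re-plan around obstacles
            path = plan(map_, position, goal, avoid=obstacles)
        position = localize(map_)
```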
arXiv Detail & Related papers (2021-09-30T09:41:40Z)
- VisBuddy -- A Smart Wearable Assistant for the Visually Challenged [0.0]
VisBuddy is a voice-based assistant, where the user can give voice commands to perform specific tasks.
It uses the techniques of image captioning for describing the user's surroundings, optical character recognition (OCR) for reading the text in the user's view, object detection to search and find the objects in a room and web scraping to give the user the latest news.
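That capability list implies a dispatcher from voice commands to vision and web modules. The sketch below illustrates such routing with stub handlers; none of the names come from VisBuddy itself.

```python
# A sketch of the voice-command routing the abstract implies. VisBuddy's
# actual implementation is not published here, so every handler is a stub.
def describe_scene(image): ...     # image captioning of the surroundings
def read_text(image): ...          # OCR for text in the user's view
def find_object(image, name): ...  # object detection to locate `name`
def latest_news(): ...             # web scraping for current headlines

def handle_command(command: str, image, target: str = ""):
    """Route a spoken command to the matching vision/web capability."""
    command = command.lower()
    if "describe" in command:
        return describe_scene(image)
    if "read" in command:
        return read_text(image)
    if "find" in command:
        return find_object(image, target)
    if "news" in command:
        return latest_news()
    return "Sorry, I did not understand that command."
```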
arXiv Detail & Related papers (2021-08-17T17:15:23Z)
- Deep Learning for Embodied Vision Navigation: A Survey [108.13766213265069]
"Embodied visual navigation" problem requires an agent to navigate in a 3D environment mainly rely on its first-person observation.
This paper attempts to establish an outline of the current works in the field of embodied visual navigation by providing a comprehensive literature survey.
arXiv Detail & Related papers (2021-07-07T12:09:04Z)
- Diagnosing Vision-and-Language Navigation: What Really Matters [61.72935815656582]
Vision-and-language navigation (VLN) is a multimodal task where an agent follows natural language instructions and navigates in visual environments.
Recent studies report a slowdown in performance improvements on both indoor and outdoor VLN tasks.
In this work, we conduct a series of diagnostic experiments to unveil agents' focus during navigation.
arXiv Detail & Related papers (2021-03-30T17:59:07Z)
- Active Visual Information Gathering for Vision-Language Navigation [115.40768457718325]
Vision-language navigation (VLN) is the task in which an agent carries out navigation instructions inside photo-realistic environments.
One of the key challenges in VLN is how to conduct a robust navigation by mitigating the uncertainty caused by ambiguous instructions and insufficient observation of the environment.
This work draws inspiration from human navigation behavior and endows an agent with an active information gathering ability for a more intelligent VLN policy.
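One way to make that idea concrete is to trigger information gathering when the policy is uncertain. The sketch below uses the entropy of the action distribution as the trigger; the threshold and action names are illustrative assumptions, not the paper's model.

```python
# A hedged sketch: when the policy's action distribution is too uncertain
# (high entropy), take an explicit information-gathering action instead of
# committing to a move. Threshold and action names are illustrative.
import math

def entropy(probs: list[float]) -> float:
    return -sum(p * math.log(p) for p in probs if p > 0)

def choose_action(action_probs: dict[str, float], threshold: float = 1.0) -> str:
    """Return a navigation action, or gather information when too uncertain."""
    if entropy(list(action_probs.values())) > threshold:
        return "gather_information"  # e.g. turn to look around before moving
    return max(action_probs, key=action_probs.get)
```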
arXiv Detail & Related papers (2020-07-15T23:54:20Z)
- Pedestrian Detection with Wearable Cameras for the Blind: A Two-way Perspective [21.32921014061283]
Blind people have limited access to information about their surroundings, which is important for ensuring one's safety, managing social interactions, and identifying approaching pedestrians.
With advances in computer vision, wearable cameras can provide equitable access to such information.
The always-on nature of these assistive technologies poses privacy concerns for parties that may get recorded.
We explore this tension from both perspectives, those of passersby and blind users, taking into account camera visibility, in-person versus remote experience, and extracted visual information.
arXiv Detail & Related papers (2020-03-26T19:34:54Z)