Pedestrian Detection with Wearable Cameras for the Blind: A Two-way
Perspective
- URL: http://arxiv.org/abs/2003.12122v2
- Date: Fri, 22 May 2020 19:17:38 GMT
- Authors: Kyungjun Lee, Daisuke Sato, Saki Asakawa, Hernisa Kacorri, Chieko
Asakawa
- Abstract summary: Blind people have limited access to information about their surroundings, which is important for ensuring one's safety, managing social interactions, and identifying approaching pedestrians.
With advances in computer vision, wearable cameras can provide equitable access to such information.
The always-on nature of these assistive technologies poses privacy concerns for parties that may get recorded.
We explore this tension from both perspectives, those of sighted passersby and blind users, taking into account camera visibility, in-person versus remote experience, and extracted visual information.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blind people have limited access to information about their surroundings,
which is important for ensuring one's safety, managing social interactions, and
identifying approaching pedestrians. With advances in computer vision, wearable
cameras can provide equitable access to such information. However, the
always-on nature of these assistive technologies poses privacy concerns for
parties that may get recorded. We explore this tension from both perspectives,
those of sighted passersby and blind users, taking into account camera
visibility, in-person versus remote experience, and extracted visual
information. We conduct two studies: an online survey with MTurkers (N=206) and
an in-person experience study between pairs of blind (N=10) and sighted (N=40)
participants, where blind participants wear a working prototype for pedestrian
detection and pass by sighted participants. Our results suggest that both of
the perspectives of users and bystanders and the several factors mentioned
above need to be carefully considered to mitigate potential social tensions.
Related papers
- AccessShare: Co-designing Data Access and Sharing with Blind People [13.405455952573005]
Blind people are often called to contribute image data to datasets for AI innovation.
Yet, the visual inspection of the contributed images is inaccessible.
To address this gap, we engage 10 blind participants in a scenario where they wear smartglasses and collect image data using an AI-infused application in their homes.
arXiv Detail & Related papers (2024-07-27T23:39:58Z)
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- "We are at the mercy of others' opinion": Supporting Blind People in Recreational Window Shopping with AI-infused Technology [14.159425426253577]
We investigate the information needs, challenges, and current approaches blind people have to recreational window shopping.
We find that there is a desire for push notifications of promotional information and pull notifications about shops of interest.
We translate these findings into specific information modalities and rendering in the context of two existing AI-infused assistive applications.
arXiv Detail & Related papers (2024-05-10T17:15:24Z)
- Floor extraction and door detection for visually impaired guidance [78.94595951597344]
Finding obstacle-free paths in unknown environments is a major navigation challenge for visually impaired people and autonomous robots.
New devices based on computer vision can help visually impaired people navigate unknown environments safely.
This work proposes a combination of sensors and algorithms that can lead to a navigation system for visually impaired people.
arXiv Detail & Related papers (2024-01-30T14:38:43Z)
- Eye-tracked Virtual Reality: A Comprehensive Survey on Methods and Privacy Challenges [33.50215933003216]
This survey focuses on eye tracking in virtual reality (VR) and its privacy implications.
We first cover major works in eye tracking, VR, and privacy areas between the years 2012 and 2022.
We focus on eye-based authentication as well as computational methods to preserve the privacy of individuals and their eye-tracking data in VR.
arXiv Detail & Related papers (2023-05-23T14:02:38Z)
- A Survey on Computer Vision based Human Analysis in the COVID-19 Era [58.79053747159797]
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals.
Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications.
These developments triggered the need for novel and improved computer vision techniques capable of (i) supporting the prevention measures through automated analysis of visual data and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication.
arXiv Detail & Related papers (2022-11-07T17:20:39Z)
- Exploring and Improving the Accessibility of Data Privacy-related Information for People Who Are Blind or Low-vision [22.66113008033347]
We present a study of privacy attitudes and behaviors of people who are blind or low vision.
Our study involved in-depth interviews with 21 US participants.
One objective of the study is to better understand this user group's needs for more accessible privacy tools.
arXiv Detail & Related papers (2022-08-21T20:54:40Z)
- Do Pedestrians Pay Attention? Eye Contact Detection in the Wild [75.54077277681353]
In urban environments, humans rely on eye contact for fast and efficient communication with nearby people.
In this paper, we focus on eye contact detection in the wild, i.e., real-world scenarios for autonomous vehicles with no control over the environment or the distance of pedestrians.
We introduce a model that leverages semantic keypoints to detect eye contact and show that this high-level representation achieves state-of-the-art results on the publicly-available dataset JAAD.
To study domain adaptation, we create LOOK: a large-scale dataset for eye contact detection in the wild, which focuses on diverse and un...
arXiv Detail & Related papers (2021-12-08T10:21:28Z)
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
- Onfocus Detection: Identifying Individual-Camera Eye Contact from Unconstrained Images [81.64699115587167]
Onfocus detection aims at identifying whether the focus of the individual captured by a camera is on the camera or not.
We build a large-scale onfocus detection dataset, named OnFocus Detection In the Wild (OFDIW).
We propose a novel end-to-end deep model, i.e., the eye-context interaction inferring network (ECIIN) for onfocus detection.
arXiv Detail & Related papers (2021-03-29T03:29:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.