Detect and Approach: Close-Range Navigation Support for People with
Blindness and Low Vision
- URL: http://arxiv.org/abs/2208.08477v1
- Date: Wed, 17 Aug 2022 18:38:20 GMT
- Title: Detect and Approach: Close-Range Navigation Support for People with
Blindness and Low Vision
- Authors: Yu Hao, Junchi Feng, John-Ross Rizzo, Yao Wang, Yi Fang
- Abstract summary: People with blindness and low vision (pBLV) experience significant challenges when locating final destinations or targeting specific objects in unfamiliar environments.
We develop a novel wearable navigation solution to provide real-time guidance for a user to approach a target object of interest efficiently and effectively in unfamiliar environments.
- Score: 13.478275180547925
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: People with blindness and low vision (pBLV) experience significant challenges
when locating final destinations or targeting specific objects in unfamiliar
environments. Furthermore, besides initially locating and orienting oneself to
a target object, approaching the final target from one's present position is
often frustrating and challenging, especially when one drifts away from the
initial planned path to avoid obstacles. In this paper, we develop a novel
wearable navigation solution to provide real-time guidance for a user to
approach a target object of interest efficiently and effectively in unfamiliar
environments. Our system contains two key visual computing functions: initial
target object localization in 3D and continuous estimation of the user's
trajectory, both based on the 2D video captured by a low-cost monocular camera
mounted in front of the user's chest. These functions enable the system
to suggest an initial navigation path, continuously update the path as the user
moves, and offer timely recommendations for correcting the user's path.
Our experiments demonstrate that the system operates with an error of
less than 0.5 meters both outdoors and indoors. The system is entirely
vision-based, requires no other sensors for navigation, and its
computation runs on the Jetson processor in the wearable system to
facilitate real-time navigation assistance.
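To make the two visual computing functions concrete, here is a minimal, illustrative sketch of the guidance geometry the abstract describes: a pinhole-model depth estimate for the detected target and a heading correction computed from the user's estimated pose. The constants, function names, and the depth heuristic are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the guidance geometry described above: a pinhole-model
# depth estimate for the detected target and a heading correction from the
# user's estimated pose. Constants and the depth heuristic are illustrative
# assumptions, not the authors' implementation.
import math

KNOWN_HEIGHT_M = 0.9      # assumed real-world height of the target object
FOCAL_LENGTH_PX = 700.0   # assumed camera focal length in pixels

def estimate_target_depth(bbox_height_px: float) -> float:
    """Rough monocular depth from apparent size (pinhole camera model)."""
    return KNOWN_HEIGHT_M * FOCAL_LENGTH_PX / bbox_height_px

def guidance(user_xy, user_heading_rad, target_xy):
    """Return (turn_deg, distance_m) steering the user toward the target."""
    dx = target_xy[0] - user_xy[0]
    dy = target_xy[1] - user_xy[1]
    bearing = math.atan2(dy, dx)
    # smallest signed angle between the current heading and the target bearing
    turn = (bearing - user_heading_rad + math.pi) % (2 * math.pi) - math.pi
    return math.degrees(turn), math.hypot(dx, dy)

# Example: a 180-pixel-tall detection implies the target is ~3.5 m away;
# the user at the origin faces +x, the target sits 3 m ahead, 1 m left.
print(estimate_target_depth(bbox_height_px=180.0))        # ~3.5
turn_deg, dist_m = guidance((0.0, 0.0), 0.0, (3.0, 1.0))
print(f"turn {turn_deg:+.0f} deg, walk {dist_m:.1f} m")   # turn +18 deg, walk 3.2 m
```

In the full system, the user pose would come from the continuous trajectory estimator and the target position from the initial 3D localization step.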
Related papers
- TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation [5.484041860401147]
TOP-Nav is a novel legged navigation framework that integrates a comprehensive path planner with Terrain awareness, Obstacle avoidance and closed-loop Proprioception.
We show that TOP-Nav achieves open-world navigation, in which the robot can handle terrains and disturbances beyond the distribution of its prior knowledge.
arXiv Detail & Related papers (2024-04-23T17:42:45Z)
- Floor extraction and door detection for visually impaired guidance [78.94595951597344]
Finding obstacle-free paths in unknown environments is a major navigation challenge for visually impaired people and autonomous robots.
New devices based on computer vision can help visually impaired people overcome the difficulties of navigating unknown environments safely.
This work proposes a combination of sensors and algorithms that can lead to the construction of a navigation system for visually impaired people.
arXiv Detail & Related papers (2024-01-30T14:38:43Z)
- ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments [56.194988818341976]
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments.
We propose ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) obstacle-avoiding control in continuous environments (see the planning sketch below).
ETPNav yields more than 10% and 20% improvements over prior state-of-the-art on R2R-CE and RxR-CE datasets.
arXiv Detail & Related papers (2023-04-06T13:07:17Z)
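As a rough illustration of long-range planning over a topological map in the spirit of ETPNav, the sketch below runs Dijkstra over a hand-built waypoint graph. ETPNav constructs its map online from observations, so the nodes and edge costs here are invented for the example.

```python
# Dijkstra over a toy topological map: {node: [(neighbor, cost), ...]}.
# The graph below is invented; a real system would build it from perception.
import heapq

def shortest_path(graph, start, goal):
    """Return (total_cost, node_sequence) from start to goal."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, step in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return float("inf"), []

topo = {"hall": [("door", 2.0), ("stairs", 4.0)],
        "door": [("office", 3.0)],
        "stairs": [("office", 1.5)]}
print(shortest_path(topo, "hall", "office"))  # (5.0, ['hall', 'door', 'office'])
```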
- UNav: An Infrastructure-Independent Vision-Based Navigation System for People with Blindness and Low Vision [4.128685217530067]
We propose a vision-based localization pipeline for navigation support for end-users with blindness and low vision.
Given a query image taken by an end-user on a mobile application, the pipeline leverages a visual place recognition (VPR) algorithm to find similar images in a reference image database (see the retrieval sketch below).
A customized user interface projects a 3D reconstructed sparse map, built from a sequence of images, onto the corresponding a priori 2D floor plan.
arXiv Detail & Related papers (2022-09-22T22:21:37Z)
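The VPR retrieval step described above can be pictured as a nearest-neighbor search over image descriptors. The random vectors below stand in for a real embedding model (e.g., a NetVLAD-style descriptor), which is an assumption for illustration rather than UNav's exact method.

```python
# Nearest-neighbor retrieval over (stand-in) image descriptors.
import numpy as np

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 256))              # reference image descriptors
db /= np.linalg.norm(db, axis=1, keepdims=True)

def vpr_topk(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k reference images most similar to the query."""
    q = query / np.linalg.norm(query)
    sims = db @ q                               # cosine similarity
    return np.argsort(-sims)[:k]

query = rng.normal(size=256)                    # descriptor of the query image
print(vpr_topk(query))                          # indices into the reference DB
```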
- ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints [94.60414567852536]
Long-range navigation requires both planning and reasoning about local traversability.
We propose an approach that integrates learning and planning, using geographic hints as side information.
ViKiNG can leverage its image-based learned controller and goal-directed heuristic to navigate to goals up to 3 kilometers away (see the sketch below).
arXiv Detail & Related papers (2022-02-23T02:14:23Z)
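A hedged sketch of goal-directed subgoal selection in the spirit of ViKiNG: candidate subgoals are scored by a learned cost plus a geographic heuristic. The stubbed learned_cost and the straight-line heuristic are assumptions for illustration; in the paper these roles are played by an image-based learned model and geographic hints.

```python
# Score candidate subgoals by learned cost + geographic heuristic.
import math

def euclid(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def learned_cost(subgoal):
    """Stand-in for a learned traversability/reachability estimate."""
    return 1.0

def pick_subgoal(candidates, goal_xy):
    """Choose the candidate minimizing learned cost plus distance-to-goal."""
    return min(candidates, key=lambda c: learned_cost(c) + euclid(c, goal_xy))

waypoints = [(10.0, 0.0), (8.0, 5.0), (12.0, -3.0)]    # candidate subgoals
print(pick_subgoal(waypoints, goal_xy=(100.0, 20.0)))  # -> (12.0, -3.0)
```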
- Explore before Moving: A Feasible Path Estimation and Memory Recalling Framework for Embodied Navigation [117.26891277593205]
We focus on navigation and address the problem that existing navigation algorithms lack experience and common sense.
Inspired by the human ability to think twice before moving and to conceive several feasible paths toward a goal in unfamiliar scenes, we present a route planning method named the Path Estimation and Memory Recalling (PEMR) framework.
We show strong experimental results of PEMR on the EmbodiedQA navigation task.
arXiv Detail & Related papers (2021-10-16T13:30:55Z)
- Augmented reality navigation system for visual prosthesis [67.09251544230744]
We propose an augmented reality navigation system for visual prosthesis that incorporates software for reactive navigation and path planning.
It consists of four steps: locating the subject on a map, planning the subject's trajectory, showing it to the subject, and re-planning around obstacles (see the loop skeleton below).
Results show that our augmented navigation system improves navigation performance by reducing the time and distance needed to reach goals, and significantly reduces the number of obstacle collisions.
arXiv Detail & Related papers (2021-09-30T09:41:40Z)
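The four-step loop described in this entry can be summarized with a skeleton like the one below; every function is a stub invented for illustration (the actual system renders the planned path in the prosthetic view and re-plans reactively around obstacles).

```python
# Skeleton of the locate -> plan -> display -> re-plan loop (all stubs).
def locate():            return (0.0, 0.0)             # 1. subject position on the map
def plan(pos, goal):     return [pos, goal]            # 2. trajectory waypoints
def display(path):       print("showing path:", path)  # 3. render for the subject
def obstacle_on(path):   return False                  # 4. reactive obstacle check

def navigate(goal, max_replans=3):
    for _ in range(max_replans):
        pos = locate()
        path = plan(pos, goal)
        display(path)
        if not obstacle_on(path):   # keep the path unless an obstacle appears,
            return path             # otherwise loop and re-plan
    return None

navigate(goal=(5.0, 2.0))
```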
- Pushing it out of the Way: Interactive Visual Navigation [62.296686176988125]
We study the problem of interactive navigation, where agents learn to change the environment to navigate more efficiently to their goals.
We introduce the Neural Interaction Engine (NIE) to explicitly predict the change in the environment caused by the agent's actions.
By modeling the changes while planning, we find that agents exhibit significant improvements in their navigational capabilities (see the sketch below).
arXiv Detail & Related papers (2021-04-28T22:46:41Z)
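As a toy illustration of the idea behind the Neural Interaction Engine, the sketch below uses a hand-written forward model to predict how an action moves an obstacle, and the planner picks the action with the best predicted outcome. NIE learns this forward model from data, so the dynamics and scoring here are assumptions.

```python
# Toy forward model + one-step action selection.
def forward_model(obstacle_x, action):
    """Predict the obstacle's lateral offset after an action (toy dynamics)."""
    return obstacle_x + 1.0 if action == "push_right" else obstacle_x

def best_action(obstacle_x, actions=("push_right", "wait")):
    """Prefer the action whose predicted outcome moves the obstacle
    farthest from the agent's lane at offset 0."""
    return max(actions, key=lambda a: abs(forward_model(obstacle_x, a)))

print(best_action(0.2))   # -> 'push_right'
```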
- Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform [6.646253877148766]
We have pioneered the Where-You-Look-Is-Where-You-Go approach to controlling mobility platforms.
We present a new solution that uses deep computer vision to understand what object a user is looking at in their field of view (see the sketch below).
Our decoding system ultimately determines whether the user wants to drive to, e.g., a door, or is just looking at it.
arXiv Detail & Related papers (2021-03-04T14:52:06Z)
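A minimal sketch of the gaze-contingent decoding outlined above: intersect the gaze point with detected object boxes, then apply a dwell-time rule to separate a drive command from a casual glance. The Detection class, the dwell threshold, and the rule itself are illustrative assumptions, not the paper's learned decoder.

```python
# Gaze point vs. detection boxes, plus a dwell-time intention rule.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    box: tuple   # (x0, y0, x1, y1) in image coordinates

def gazed_object(gaze_xy, detections):
    """Return the detected object the gaze point falls on, if any."""
    for d in detections:
        x0, y0, x1, y1 = d.box
        if x0 <= gaze_xy[0] <= x1 and y0 <= gaze_xy[1] <= y1:
            return d
    return None

def decode_intent(dwell_s, threshold_s=1.5):
    """Toy rule: a sustained fixation is treated as a drive command."""
    return "drive_to" if dwell_s >= threshold_s else "just_looking"

dets = [Detection("door", (100, 50, 220, 400))]
target = gazed_object((150, 200), dets)
print(target.label, decode_intent(dwell_s=2.0))   # door drive_to
```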
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.