Augmented reality navigation system for visual prosthesis
- URL: http://arxiv.org/abs/2109.14957v1
- Date: Thu, 30 Sep 2021 09:41:40 GMT
- Title: Augmented reality navigation system for visual prosthesis
- Authors: Melani Sanchez-Garcia, Alejandro Perez-Yus, Ruben Martinez-Cantin,
Jose J. Guerrero
- Abstract summary: We propose an augmented reality navigation system for visual prosthesis that incorporates reactive navigation and path planning software.
It consists of four steps: locating the subject on a map, planning the subject's trajectory, showing it to the subject, and re-planning to avoid obstacles.
Results show that our augmented reality navigation system improves navigation performance by reducing the time and distance needed to reach the goals, and significantly reduces the number of obstacle collisions.
- Score: 67.09251544230744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The visual functions of visual prostheses, such as field of view, resolution
and dynamic range, seriously restrict the person's ability to navigate in
unknown environments. Implanted patients still require constant assistance for
navigating from one location to another. Hence, there is a need for a system
that can assist them safely during their journey. In this work, we propose an
augmented reality navigation system for visual prosthesis that incorporates
reactive navigation and path planning software to guide the subject along a
convenient, obstacle-free route. It consists of four steps: locating the
subject on a map, planning the subject's trajectory, showing it to the subject,
and re-planning to avoid obstacles. We have also designed a simulated
prosthetic vision environment which allows us to systematically study
navigation performance. Twelve subjects participated in the experiment. Guided
by the augmented reality navigation system, subjects were instructed to
navigate through different environments until they reached two goals, crossing
a door and finding an object (a bin), as quickly and accurately as possible.
Results show that our augmented reality navigation system improves navigation
performance, reducing the time and distance needed to reach the goals and
significantly reducing the number of obstacle collisions compared to baseline
methods.
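The abstract describes the four-step pipeline but not its implementation. As a rough illustration only, here is a minimal Python sketch of that loop on a 2D occupancy grid, with a textbook A* planner standing in for whatever planner the authors used; `localize`, `display`, and `sense_obstacles` are hypothetical hooks (pose tracking, the prosthesis rendering, and obstacle detection), not the paper's API.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = obstacle), 4-connected."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    frontier = [(h(start), 0, start)]                          # (f, g, cell)
    came_from, best_g = {start: None}, {start: 0}
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        if cell == goal:                                       # walk parents back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        if g > best_g[cell]:                                   # stale queue entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                came_from[nxt] = cell
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None                                                # no obstacle-free route

def navigate(grid, localize, display, sense_obstacles, goal):
    """Four-step loop: locate the subject, plan, show the route, re-plan."""
    pose = localize()                            # step 1: locate subject on the map
    path = astar(grid, pose, goal)               # step 2: plan the trajectory
    while path and pose != goal:
        display(path)                            # step 3: show the route to the subject
        pose = localize()                        # subject moves; pose is re-estimated
        for cell in sense_obstacles():           # step 4: re-plan when a newly sensed
            grid[cell[0]][cell[1]] = 1           #         obstacle blocks the route
            if cell in path:
                path = astar(grid, pose, goal)
                break
    return pose == goal
```

The essential behavior is the re-plan trigger: when a newly sensed obstacle lands on the current route, the route is recomputed from the subject's present position.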
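The simulated prosthetic vision environment is likewise left unspecified in the abstract. A common way to build one, and the assumption behind this sketch, is to sample the camera image on a coarse grid and render each sample as a brightness-quantized dot (a "phosphene"), which caps the effective field of view, resolution and dynamic range at once; the function below is illustrative NumPy, not the authors' simulator.

```python
import numpy as np

def simulate_phosphenes(image, grid=(32, 32), levels=8):
    """Render a grayscale image as a coarse grid of quantized phosphenes.

    image: 2D uint8 array at least as large as the grid;
    grid: phosphene resolution (rows, cols);
    levels: number of distinguishable brightness levels (dynamic range).
    """
    h, w = image.shape
    gh, gw = grid
    ys = np.linspace(0, h, gh + 1, dtype=int)     # patch boundaries
    xs = np.linspace(0, w, gw + 1, dtype=int)
    out = np.zeros((h, w))
    radius = max(1, min(h // gh, w // gw) // 3)   # dot radius in pixels
    for i in range(gh):
        for j in range(gw):
            patch = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            # quantize the patch's mean brightness to the available levels
            level = round(patch.mean() / 255 * (levels - 1)) / (levels - 1)
            cy = (ys[i] + ys[i + 1]) // 2         # phosphene center
            cx = (xs[j] + xs[j + 1]) // 2
            yy, xx = np.ogrid[-cy:h - cy, -cx:w - cx]
            out[yy * yy + xx * xx <= radius * radius] = level * 255
    return out.astype(np.uint8)
```

Varying `grid` and `levels` gives a systematic handle on the field-of-view, resolution and dynamic-range constraints the abstract mentions.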
Related papers
- Visuospatial navigation without distance, prediction, or maps [1.3812010983144802]
We show the sufficiency of a minimal feedforward framework in a classic visual navigation task.
While visual distance enables direct trajectories to the goal, two distinct algorithms emerge that navigate robustly using visual angles alone.
Each of the three strategies confers unique contextual tradeoffs and aligns with movement behavior observed in rodents, insects, fish, and sperm cells.
arXiv Detail & Related papers (2024-07-18T14:07:44Z) - TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation [5.484041860401147]
TOP-Nav is a novel legged navigation framework that integrates a comprehensive path planner with Terrain awareness, Obstacle avoidance and closed-loop Proprioception.
We show that TOP-Nav achieves open-world navigation in which the robot can handle terrains and disturbances beyond the distribution of its prior knowledge.
arXiv Detail & Related papers (2024-04-23T17:42:45Z) - Floor extraction and door detection for visually impaired guidance [78.94595951597344]
Finding obstacle-free paths in unknown environments is a major navigation challenge for visually impaired people and autonomous robots.
New devices based on computer vision systems can help visually impaired people overcome the difficulties of navigating unknown environments safely.
This work proposes a combination of sensors and algorithms that can lead to a navigation system for visually impaired people.
arXiv Detail & Related papers (2024-01-30T14:38:43Z) - Learning Navigational Visual Representations with Semantic Map
Supervision [85.91625020847358]
We propose a navigation-specific visual representation learning method by contrasting the agent's egocentric views and semantic maps (a generic sketch of such a contrastive objective appears after this list).
Ego$2$-Map learning transfers the compact and rich information from a map, such as objects, structure and transition, to the agent's egocentric representations for navigation.
arXiv Detail & Related papers (2023-07-23T14:01:05Z) - Real-time Vision-based Navigation for a Robot in an Indoor Environment [0.0]
The system utilizes vision-based techniques and advanced path-planning algorithms to enable the robot to navigate toward the destination while avoiding obstacles.
The findings contribute to the advancement of indoor robot navigation, showcasing the potential of vision-based techniques for real-time, autonomous navigation.
arXiv Detail & Related papers (2023-07-02T21:01:56Z) - Detect and Approach: Close-Range Navigation Support for People with
Blindness and Low Vision [13.478275180547925]
People with blindness and low vision (pBLV) experience significant challenges when locating final destinations or targeting specific objects in unfamiliar environments.
We develop a novel wearable navigation solution to provide real-time guidance for a user to approach a target object of interest efficiently and effectively in unfamiliar environments.
arXiv Detail & Related papers (2022-08-17T18:38:20Z) - Explore before Moving: A Feasible Path Estimation and Memory Recalling
Framework for Embodied Navigation [117.26891277593205]
We focus on navigation and address the problem that existing navigation algorithms lack experience and common sense.
Inspired by the human ability to think twice before moving and conceive several feasible paths to seek a goal in unfamiliar scenes, we present a route planning method named Path Estimation and Memory Recalling framework.
We show strong experimental results of PEMR on the EmbodiedQA navigation task.
arXiv Detail & Related papers (2021-10-16T13:30:55Z) - Deep Learning for Embodied Vision Navigation: A Survey [108.13766213265069]
"Embodied visual navigation" problem requires an agent to navigate in a 3D environment mainly rely on its first-person observation.
This paper attempts to establish an outline of the current works in the field of embodied visual navigation by providing a comprehensive literature survey.
arXiv Detail & Related papers (2021-07-07T12:09:04Z) - Active Visual Information Gathering for Vision-Language Navigation [115.40768457718325]
Vision-language navigation (VLN) is the task in which an agent must carry out navigational instructions inside photo-realistic environments.
One of the key challenges in VLN is how to conduct a robust navigation by mitigating the uncertainty caused by ambiguous instructions and insufficient observation of the environment.
This work draws inspiration from human navigation behavior and endows an agent with an active information gathering ability for a more intelligent VLN policy.
arXiv Detail & Related papers (2020-07-15T23:54:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The quality of the information on this site is not guaranteed, and the site is not responsible for any consequences of its use.