Floor extraction and door detection for visually impaired guidance
- URL: http://arxiv.org/abs/2401.17056v1
- Date: Tue, 30 Jan 2024 14:38:43 GMT
- Title: Floor extraction and door detection for visually impaired guidance
- Authors: Bruno Berenguel-Baeta, Manuel Guerrero-Viu, Alejandro de Nova, Jesus
Bermudez-Cameo, Alejandro Perez-Yus, Jose J. Guerrero
- Abstract summary: Finding obstacle-free paths in unknown environments is a major navigation challenge for visually impaired people and autonomous robots.
New devices based on computer vision can help visually impaired people overcome the difficulties of navigating unknown environments safely.
This work proposes a combination of sensors and algorithms that can form the basis of a navigation system for visually impaired people.
- Score: 78.94595951597344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Finding obstacle-free paths in unknown environments is a major navigation
challenge for visually impaired people and autonomous robots. Previous works focus on
obstacle avoidance, but they lack a general view of the environment being traversed.
New devices based on computer vision can help visually impaired people overcome the
difficulties of navigating unknown environments safely. This work proposes a
combination of sensors and algorithms that can form the basis of a navigation system
for visually impaired people. Building on traditional systems that use RGB-D cameras
for obstacle avoidance, we add and combine the information of a fish-eye camera, which
gives a better understanding of the user's surroundings. The combination makes the
system robust and reliable, and its wide field of view allows much information to be
obtained from the environment. This combination of sensors is inspired by human
vision, where the center of the retina (the fovea) provides more accurate information
than the periphery, which in turn covers a wider field of view. The proposed system is
mounted on a wearable device that provides the obstacle-free zones of the
scene, allowing the planning of trajectories for guiding people.
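The foveal idea described above can be sketched as a simple mask fusion on a common ground-plane grid: trust the accurate, narrow-FOV RGB-D estimate where it is available, and fall back to the wide but coarser fisheye estimate elsewhere. This is a minimal illustrative sketch, not the paper's actual pipeline; the function name, the grid representation, and the toy masks are all assumptions.

```python
import numpy as np

def fuse_free_space(rgbd_free, fisheye_free, rgbd_region):
    """Fuse binary free-space masks defined on a common ground-plane grid.

    Inside the region covered by the RGB-D camera (the accurate "fovea"),
    its estimate wins; elsewhere we fall back to the wide but coarser
    fisheye estimate (the "periphery"). All inputs are hypothetical
    boolean arrays of the same shape.
    """
    return np.where(rgbd_region, rgbd_free, fisheye_free)

# Toy 2x4 grid: the RGB-D camera covers columns 1-2 and flags column 2
# as blocked; the fisheye sees the whole grid as free.
rgbd_region  = np.zeros((2, 4), dtype=bool)
rgbd_region[:, 1:3] = True
rgbd_free    = np.zeros((2, 4), dtype=bool)
rgbd_free[:, 1] = True
fisheye_free = np.ones((2, 4), dtype=bool)

fused = fuse_free_space(rgbd_free, fisheye_free, rgbd_region)
```

In the fused map, column 2 stays blocked (the accurate sensor overrides the optimistic fisheye view there), while the periphery columns remain free, which is the behavior the fovea/periphery analogy suggests.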
Related papers
- Multimodal Perception System for Real Open Environment [0.0]
The proposed system includes an embedded computation platform, cameras, ultrasonic sensors, GPS, and IMU devices.
Unlike the traditional frameworks, our system integrates multiple sensors with advanced computer vision algorithms to help users walk outside reliably.
arXiv Detail & Related papers (2024-10-10T13:53:42Z)
- Robot Patrol: Using Crowdsourcing and Robotic Systems to Provide Indoor Navigation Guidance to The Visually Impaired [5.973995274784383]
We develop an integrated system that employs a combination of crowdsourcing, computer vision, and robotic frameworks to provide contextual information to the visually impaired.
The system is designed to provide information to the visually impaired about 1) potential obstacles on the route to their indoor destination, 2) information about indoor events on their route which they may wish to avoid or attend, and 3) any other contextual information that might support them to navigate to their indoor destinations safely and effectively.
arXiv Detail & Related papers (2023-06-05T12:49:52Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Detect and Approach: Close-Range Navigation Support for People with Blindness and Low Vision [13.478275180547925]
People with blindness and low vision (pBLV) experience significant challenges when locating final destinations or targeting specific objects in unfamiliar environments.
We develop a novel wearable navigation solution to provide real-time guidance for a user to approach a target object of interest efficiently and effectively in unfamiliar environments.
arXiv Detail & Related papers (2022-08-17T18:38:20Z)
- Augmented reality navigation system for visual prosthesis [67.09251544230744]
We propose an augmented reality navigation system for visual prosthesis that incorporates a software of reactive navigation and path planning.
It consists of four steps: locating the subject on a map, planning the subject's trajectory, showing it to the subject, and re-planning to avoid obstacles.
Results show that our augmented reality navigation system improves navigation performance by reducing the time and distance needed to reach goals, and significantly reduces the number of obstacle collisions.
arXiv Detail & Related papers (2021-09-30T09:41:40Z)
- Vision-Based Mobile Robotics Obstacle Avoidance With Deep Reinforcement Learning [49.04274612323564]
Obstacle avoidance is a fundamental and challenging problem for autonomous navigation of mobile robots.
In this paper, we consider the problem of obstacle avoidance in simple 3D environments where the robot has to solely rely on a single monocular camera.
We tackle the obstacle avoidance problem as a data-driven end-to-end deep learning approach.
arXiv Detail & Related papers (2021-03-08T13:05:46Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- BADGR: An Autonomous Self-Supervised Learning-Based Navigation System [158.6392333480079]
BADGR is an end-to-end learning-based mobile robot navigation system.
It can be trained with self-supervised off-policy data gathered in real-world environments.
BADGR can navigate real-world urban and off-road environments with geometrically distracting obstacles.
arXiv Detail & Related papers (2020-02-13T18:40:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.