MagicEye: An Intelligent Wearable Towards Independent Living of Visually Impaired
- URL: http://arxiv.org/abs/2303.13863v1
- Date: Fri, 24 Mar 2023 08:59:35 GMT
- Title: MagicEye: An Intelligent Wearable Towards Independent Living of Visually Impaired
- Authors: Sibi C. Sethuraman, Gaurav R. Tadkapally, Saraju P. Mohanty, Gautam Galada and Anitha Subramanian
- Abstract summary: Vision impairment can severely limit a person's ability to work, navigate, and live independently.
We present MagicEye, a state-of-the-art intelligent wearable device designed to assist visually impaired individuals.
The neural network employed by MagicEye covers 35 object classes and is designed for efficient, precise object detection.
- Score: 0.17499351967216337
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Individuals with visual impairments face many challenging
obstacles in their daily lives. Vision impairment can severely limit a
person's ability to work, navigate, and live independently, leading to
educational limitations, a higher risk of accidents, and a range of other
issues.
To address these challenges, we present MagicEye, a state-of-the-art
intelligent wearable device designed to assist visually impaired individuals.
MagicEye employs a custom-trained CNN-based object detection model, capable of
recognizing a wide range of indoor and outdoor objects frequently encountered
in daily life. With a total of 35 classes, the neural network employed by
MagicEye has been specifically designed to achieve high levels of efficiency
and precision in object detection. The device is also equipped with facial
recognition and currency identification modules, providing invaluable
assistance to the visually impaired. In addition, MagicEye features a GPS
sensor for navigation, allowing users to move about with ease, as well as a
proximity sensor for detecting nearby objects without physical contact. In
summary, MagicEye is an advanced wearable device designed to address the many
challenges faced by individuals with visual impairments; its object detection
and navigation capabilities are tailored to their needs, making it a promising
aid for independent living.
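The paper does not include an implementation; the sketch below is a hypothetical stand-in for how a 35-class detector of this kind might be set up and queried, assuming a torchvision Faster R-CNN backbone (the backbone choice, confidence threshold, and frame size are assumptions, not details from the paper).

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical sketch: swap the stock predictor for a 35-class head.
# torchvision reserves label 0 for background, hence the +1.
NUM_CLASSES = 35 + 1

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
model.eval()

# One camera frame as a normalized CHW float tensor (placeholder data).
frame = torch.rand(3, 480, 640)
with torch.no_grad():
    detections = model([frame])[0]

for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.5:  # assumed confidence threshold
        print(f"class {label.item()} at {box.tolist()} ({score:.2f})")
```

In a deployed wearable, the detected labels would feed the speech and haptic feedback modules rather than being printed.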
Related papers
- Improve accessibility for Low Vision and Blind people using Machine Learning and Computer Vision [0.0]
This project explores how machine learning and computer vision can be used to improve accessibility for people with visual impairments.
It concentrates on building a mobile application that helps blind people orient themselves in space through audio and haptic feedback.
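As a small illustration of the audio-feedback side, here is a minimal sketch assuming the off-the-shelf pyttsx3 text-to-speech library; the paper does not name the actual engine, and the cue text is invented.

```python
import pyttsx3

# Offline text-to-speech suits an assistive mobile app (no network needed).
engine = pyttsx3.init()
engine.setProperty("rate", 160)  # words per minute; an assumed setting

def speak(cue: str) -> None:
    """Announce a spatial cue aloud."""
    engine.say(cue)
    engine.runAndWait()

speak("Door ahead, two meters, slightly to your left.")
```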
- Floor extraction and door detection for visually impaired guidance [78.94595951597344] (2024-01-30)
Finding obstacle-free paths in unknown environments is a major navigation challenge for visually impaired people and autonomous robots.
New devices based on computer vision can help visually impaired people navigate unknown environments safely.
This work proposes a combination of sensors and algorithms for building a navigation system for visually impaired people.
- See, Hear, and Feel: Smart Sensory Fusion for Robotic Manipulation [49.925499720323806] (2022-12-07)
We study how visual, auditory, and tactile perception can jointly help robots to solve complex manipulation tasks.
We build a robot system that can see with a camera, hear with a contact microphone, and feel with a vision-based tactile sensor.
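The paper's actual architecture is not reproduced here; as a toy illustration of late sensory fusion, one can concatenate per-modality embeddings before a shared action head (all dimensions below are invented):

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Toy late-fusion head: concatenate vision, audio, and touch
    embeddings, then predict an action. Dimensions are assumptions."""
    def __init__(self, d_vision=128, d_audio=32, d_touch=64, n_actions=6):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(d_vision + d_audio + d_touch, 256),
            nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, vision, audio, touch):
        return self.head(torch.cat([vision, audio, touch], dim=-1))

model = LateFusion()
logits = model(torch.randn(1, 128), torch.randn(1, 32), torch.randn(1, 64))
print(logits.shape)  # torch.Size([1, 6])
```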
- Augmented reality navigation system for visual prosthesis [67.09251544230744] (2021-09-30)
We propose an augmented reality navigation system for visual prosthesis that incorporates software for reactive navigation and path planning.
It consists of four steps: locating the subject on a map, planning the subject's trajectory, showing it to the subject, and re-planning around obstacles.
Results show that the augmented navigation system improves navigation performance by reducing the time and distance needed to reach goals, and significantly reduces the number of obstacle collisions.
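The localize/plan/display/re-plan loop can be sketched generically; the grid map and BFS planner below are hypothetical stand-ins for the system's real map and planner.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest obstacle-free path on a 4-connected grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [0, 1, 0],   # a detected obstacle in the middle of the map
        [0, 0, 0]]
pos, goal = (0, 0), (2, 2)
while pos != goal:
    path = bfs_path(grid, pos, goal)  # re-plan from the current cell
    assert path, "no obstacle-free path"
    pos = path[1]                     # take one step, then re-plan
    print("moved to", pos)
```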
- VisBuddy -- A Smart Wearable Assistant for the Visually Challenged [0.0] (2021-08-17)
VisBuddy is a voice-based assistant, where the user can give voice commands to perform specific tasks.
It combines image captioning to describe the user's surroundings, optical character recognition (OCR) to read text in the user's view, object detection to find objects in a room, and web scraping to deliver the latest news.
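Of these components, the OCR step is the simplest to illustrate. A minimal sketch assuming the pytesseract wrapper (the paper does not state which OCR engine VisBuddy uses, and the file name is a placeholder):

```python
from PIL import Image
import pytesseract

# Hypothetical OCR step: read text from a frame captured by the
# wearable camera. Requires the Tesseract binary to be installed.
frame = Image.open("camera_frame.jpg")  # placeholder file name
text = pytesseract.image_to_string(frame)
print("Text in view:", text.strip())
```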
- Deep Learning for Embodied Vision Navigation: A Survey [108.13766213265069] (2021-07-07)
"Embodied visual navigation" problem requires an agent to navigate in a 3D environment mainly rely on its first-person observation.
This paper attempts to establish an outline of the current works in the field of embodied visual navigation by providing a comprehensive literature survey.
- Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform [6.646253877148766] (2021-03-04)
We have pioneered the Where-You-Look-Is Where-You-Go approach to controlling mobility platforms.
We present a new solution that uses deep computer vision to understand which object a user is looking at in their field of view.
Our decoding system ultimately determines whether the user wants to drive to, e.g., a door or is merely looking at it.
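A minimal sketch of the decoding idea, assuming a 2-D gaze point and detected bounding boxes are already available; the dwell-time rule below is an invented stand-in for the paper's learned decoder.

```python
import time

class DwellDecoder:
    """Toy intention decoder: a fixation held on one detected object
    for longer than `dwell_s` is decoded as 'drive there'."""
    def __init__(self, dwell_s=1.5):
        self.dwell_s = dwell_s
        self.fixated = None
        self.since = None

    @staticmethod
    def object_under_gaze(gaze, boxes):
        x, y = gaze
        for label, (x1, y1, x2, y2) in boxes:
            if x1 <= x <= x2 and y1 <= y <= y2:
                return label
        return None

    def update(self, gaze, boxes, now=None):
        now = time.monotonic() if now is None else now
        label = self.object_under_gaze(gaze, boxes)
        if label != self.fixated:
            self.fixated, self.since = label, now
            return None  # new fixation target: not yet an intention
        if label is not None and now - self.since > self.dwell_s:
            return label  # decoded intention: drive to this object
        return None

decoder = DwellDecoder()
boxes = [("door", (100, 50, 200, 300))]  # label, (x1, y1, x2, y2) in pixels
print(decoder.update((150, 120), boxes, now=0.0))  # None: fixation just began
print(decoder.update((150, 120), boxes, now=2.0))  # 'door': held > 1.5 s
```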
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748] (2021-01-17)
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
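Feature denoising is architecture-specific, but the adversarial-training half of the recipe can be sketched generically. Below is a single FGSM adversarial training step on a toy classifier; the epsilon and model are assumptions, and the paper's multi-sensor setting is reduced to one modality.

```python
import torch
import torch.nn.functional as F

def fgsm_train_step(model, optimizer, images, labels, eps=4 / 255):
    """One adversarial training step: perturb inputs with FGSM,
    then update the model on the perturbed batch."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad, = torch.autograd.grad(loss, images)
    adv = (images + eps * grad.sign()).clamp(0, 1).detach()

    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(adv), labels)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()

# Toy usage on random data.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(fgsm_train_step(model, opt, x, y))
```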
- Reactive Human-to-Robot Handovers of Arbitrary Objects [57.845894608577495] (2020-11-17)
We present a vision-based system that enables human-to-robot handovers of unknown objects.
Our approach combines closed-loop motion planning with real-time, temporally-consistent grasp generation.
We demonstrate the generalizability, usability, and robustness of our approach on a novel benchmark set of 26 diverse household objects.
- Towards Hardware-Agnostic Gaze-Trackers [0.5512295869673146] (2020-10-11)
We present a deep neural network architecture as an appearance-based method for constrained gaze-tracking.
Our system achieved an error of 1.8073 cm on the GazeCapture dataset without any calibration or device-specific fine-tuning.
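The 1.8073 cm figure is presumably a mean Euclidean gaze error on screen; here is a sketch of that metric under that assumption.

```python
import numpy as np

def mean_gaze_error_cm(pred, true):
    """Mean Euclidean distance (cm) between predicted and ground-truth
    gaze points, both given as (N, 2) arrays in screen centimetres."""
    pred, true = np.asarray(pred), np.asarray(true)
    return float(np.linalg.norm(pred - true, axis=1).mean())

pred = [[1.0, 2.0], [0.5, -0.5]]
true = [[1.5, 2.0], [0.0, 0.0]]
print(f"{mean_gaze_error_cm(pred, true):.4f} cm")
```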