DISHA: Low-Energy Sparse Transformer at Edge for Outdoor Navigation for the Visually Impaired Individuals
- URL: http://arxiv.org/abs/2406.15864v1
- Date: Sat, 22 Jun 2024 14:49:02 GMT
- Title: DISHA: Low-Energy Sparse Transformer at Edge for Outdoor Navigation for the Visually Impaired Individuals
- Authors: Praveen Nagil, Sumit K. Mandal
- Abstract summary: We propose an end-to-end technology deployed on an edge device to assist visually impaired people.
Specifically, we propose a novel pruning technique for a transformer-based algorithm that detects sidewalks.
Our proposed technology provides up to 32.49% improvement in accuracy and extends battery life by 1.4 hours.
- Score: 0.09217021281095907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Assistive technology for visually impaired individuals is extremely useful: it makes them independent of another human being in performing day-to-day chores and instills confidence in them. One important aspect of assistive technology is outdoor navigation for visually impaired people. While several techniques for outdoor navigation exist in the literature, they are mainly limited to obstacle detection. However, guiding a visually impaired person along the sidewalk (while the person is walking outside) is important too. Moreover, assistive technology should ensure low-energy operation to extend the battery life of the device. Therefore, in this work, we propose an end-to-end technology deployed on an edge device to assist visually impaired people. Specifically, we propose a novel pruning technique for a transformer-based algorithm that detects sidewalks. The pruning technique ensures low execution latency and low energy consumption when the pruned transformer is deployed on the edge device. Extensive experimental evaluation shows that our proposed technology provides up to 32.49% improvement in accuracy and a 1.4-hour extension in battery life with respect to a baseline technique.
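The abstract does not describe the specific pruning criterion used in DISHA, so as a hedged illustration, the sketch below shows the common magnitude-based heuristic often used to sparsify transformer weight matrices for edge deployment: the smallest-magnitude entries are zeroed out so that sparse kernels can skip them at inference time. The function name and the 50% sparsity target are assumptions for illustration, not the paper's method.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight matrix.

    Illustrative sketch only: DISHA's actual pruning criterion is not
    given in the abstract; this shows the common magnitude heuristic.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of entries to prune
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))          # stand-in for one attention/MLP weight
pruned = magnitude_prune(w, 0.5)     # target 50% sparsity
achieved = 1.0 - np.count_nonzero(pruned) / pruned.size
```

On an edge device, the zeroed entries translate into skipped multiply-accumulates, which is where the latency and energy savings reported in the abstract would come from.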
Related papers
- Towards Real-Time Fast Unmanned Aerial Vehicle Detection Using Dynamic Vision Sensors [6.03212980984729]
Unmanned Aerial Vehicles (UAVs) are gaining popularity in civil and military applications.
Prevention and detection of UAVs are pivotal to guaranteeing confidentiality and safety.
This paper presents F-UAV-D (Fast Unmanned Aerial Vehicle Detector), an embedded system that enables fast-moving drone detection.
arXiv Detail & Related papers (2024-03-18T15:27:58Z) - Floor extraction and door detection for visually impaired guidance [78.94595951597344]
Finding obstacle-free paths in unknown environments is a major navigation challenge for visually impaired people and autonomous robots.
New devices based on computer vision systems can help impaired people to overcome the difficulties of navigating in unknown environments in safe conditions.
This work proposes a combination of sensors and algorithms that can lead to a navigation system for visually impaired people.
arXiv Detail & Related papers (2024-01-30T14:38:43Z) - Compressing Vision Transformers for Low-Resource Visual Learning [7.662469543657508]
Vision transformer (ViT) and its variants offer state-of-the-art accuracy in tasks such as image classification, object detection, and semantic segmentation.
These models are large and computation-heavy, making their deployment on mobile and edge scenarios limited.
We aim to take a step toward bringing vision transformers to the edge by utilizing popular model compression techniques such as distillation, pruning, and quantization.
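Of the compression techniques the blurb names (distillation, pruning, quantization), quantization is the most mechanical to sketch. Below is a minimal, assumed example of post-training affine quantization of a float weight tensor to uint8; real deployments typically use per-channel scales and calibration data, neither of which is shown here.

```python
import numpy as np

def quantize_uint8(x):
    """Affine post-training quantization of a float tensor to uint8.

    Minimal sketch: maps [min, max] of the tensor onto [0, 255] with a
    single scale and zero point (per-tensor, not per-channel).
    """
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((x - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover an approximate float tensor from the uint8 codes."""
    return q.astype(np.float32) * scale + lo

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale, lo = quantize_uint8(w)
w_hat = dequantize(q, scale, lo)
max_err = float(np.abs(w - w_hat).max())  # bounded by half a quantization step
```

Storing uint8 instead of float32 cuts weight memory by 4x, which is the kind of footprint reduction that makes ViT deployment on mobile and edge hardware feasible.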
arXiv Detail & Related papers (2023-09-05T23:33:39Z) - MagicEye: An Intelligent Wearable Towards Independent Living of Visually Impaired [0.17499351967216337]
Vision impairment can severely impair a person's ability to work, navigate, and retain independence.
We present MagicEye, a state-of-the-art intelligent wearable device designed to assist visually impaired individuals.
With a total of 35 classes, the neural network employed by MagicEye has been specifically designed to achieve high levels of efficiency and precision in object detection.
arXiv Detail & Related papers (2023-03-24T08:59:35Z) - TransVisDrone: Spatio-Temporal Transformer for Vision-based Drone-to-Drone Detection in Aerial Videos [57.92385818430939]
Drone-to-drone detection using visual feed has crucial applications, such as detecting drone collisions, detecting drone attacks, or coordinating flight with other drones.
Existing methods are computationally costly, follow non-end-to-end optimization, and have complex multi-stage pipelines, making them less suitable for real-time deployment on edge devices.
We propose a simple yet effective framework, TransVisDrone, that provides an end-to-end solution with higher computational efficiency.
arXiv Detail & Related papers (2022-10-16T03:05:13Z) - A Novel Three-Dimensional Navigation Method for the Visually Impaired [0.0]
The visually impaired must currently rely on navigational aids to replace their sense of sight, like a white cane or GPS based navigation, both of which fail to work well indoors.
This research seeks to develop a 3D-imaging solution that enables contactless navigation through a complex indoor environment.
The device can pinpoint a user's position and orientation with 31% less error compared to previous approaches.
arXiv Detail & Related papers (2022-06-20T23:59:43Z) - Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully-autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located in poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z) - Augmented reality navigation system for visual prosthesis [67.09251544230744]
We propose an augmented reality navigation system for visual prosthesis that incorporates reactive navigation and path-planning software.
It consists of four steps: locating the subject on a map, planning the subject's trajectory, showing it to the subject, and re-planning without obstacles.
Results show how our augmented navigation system helps navigation performance by reducing the time and distance needed to reach goals, and even significantly reduces the number of obstacle collisions.
arXiv Detail & Related papers (2021-09-30T09:41:40Z) - The Surprising Effectiveness of Visual Odometry Techniques for Embodied PointGoal Navigation [100.08270721713149]
PointGoal navigation has been introduced in simulated Embodied AI environments.
Recent advances solve this PointGoal navigation task with near-perfect accuracy (99.6% success).
We show that integrating visual odometry techniques into navigation policies improves the state-of-the-art on the popular Habitat PointNav benchmark by a large margin.
arXiv Detail & Related papers (2021-08-26T02:12:49Z) - Deep Learning for Embodied Vision Navigation: A Survey [108.13766213265069]
"Embodied visual navigation" problem requires an agent to navigate in a 3D environment mainly rely on its first-person observation.
This paper attempts to establish an outline of the current works in the field of embodied visual navigation by providing a comprehensive literature survey.
arXiv Detail & Related papers (2021-07-07T12:09:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.