GreenEye: Development of Real-Time Traffic Signal Recognition System for Visual Impairments
- URL: http://arxiv.org/abs/2410.19840v1
- Date: Mon, 21 Oct 2024 06:27:22 GMT
- Title: GreenEye: Development of Real-Time Traffic Signal Recognition System for Visual Impairments
- Authors: Danu Kim
- Abstract summary: The GreenEye system recognizes the traffic signals' color and tells the time left for pedestrians to cross the crosswalk in real-time.
Data imbalance initially caused low precision; extra labeling and database formation were performed to balance the number of images across classes.
- Score: 0.6216023343793144
- License:
- Abstract: Recognizing a traffic signal, determining whether it is green or red, and figuring out the time left to cross the crosswalk are significant challenges for visually impaired people. Previous research has focused on recognizing only two traffic signals, green and red lights, using machine learning techniques. This work developed the GreenEye system, which recognizes the traffic signal's color and tells pedestrians the time left to cross the crosswalk in real time. GreenEye's first training run showed a highest precision of 74.6%, with four classes at 40% or lower recognition precision. Data imbalance caused the low precision; thus, extra labeling and database formation were performed to balance the number of images across classes. After this stabilization, all 14 classes achieved an excellent precision of 99.5%.
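The class-stabilization step described in the abstract can be sketched as a naive oversampling pass. This is an illustration only, not the paper's actual pipeline; the class labels and the `balance_classes` helper are hypothetical.

```python
import random
from collections import Counter

def balance_classes(labels, images, seed=0):
    """Oversample minority classes so every class has as many samples
    as the largest one -- a simple stand-in for the extra labeling /
    database formation step the abstract describes."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    by_class = {}
    for lbl, img in zip(labels, images):
        by_class.setdefault(lbl, []).append(img)
    out_labels, out_images = [], []
    for lbl, imgs in by_class.items():
        # Pad each class with random repeats up to the target count.
        picks = imgs + [rng.choice(imgs) for _ in range(target - len(imgs))]
        out_labels += [lbl] * target
        out_images += picks
    return out_labels, out_images
```

In practice the paper collected and labeled new images rather than duplicating existing ones; the sketch only shows the balancing goal.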
Related papers
- Weakly-supervised Camera Localization by Ground-to-satellite Image Registration [52.54992898069471]
We propose a weakly supervised learning strategy for ground-to-satellite image registration.
It derives positive and negative satellite images for each ground image.
We also propose a self-supervision strategy for cross-view image relative rotation estimation.
arXiv Detail & Related papers (2024-09-10T12:57:16Z)
- Voice-Assisted Real-Time Traffic Sign Recognition System Using Convolutional Neural Network [0.0]
This study presents a voice-assisted real-time traffic sign recognition system which is capable of assisting drivers.
The detection and recognition of the traffic signs are carried out using a trained Convolutional Neural Network (CNN)
After recognizing the specific traffic sign, it is narrated to the driver as a voice message using a text-to-speech engine.
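The narration step (recognized class to spoken message) can be sketched as a lookup handed to a speech callback. The `SIGN_PHRASES` table, its phrases, and the `narrate` helper are hypothetical; a real system would pass the phrase to a text-to-speech engine instead of `print`.

```python
# Map a CNN's predicted class name to the phrase handed to a
# text-to-speech engine. Class names and phrases are illustrative.
SIGN_PHRASES = {
    "stop": "Stop sign ahead.",
    "speed_limit_50": "Speed limit fifty kilometres per hour.",
    "yield": "Yield to oncoming traffic.",
}

def narrate(predicted_class, speak=print):
    """Look up the phrase for a recognized sign and pass it to the
    speech callback (print stands in for a real TTS engine)."""
    phrase = SIGN_PHRASES.get(predicted_class, "Unrecognized sign.")
    speak(phrase)
    return phrase
```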
arXiv Detail & Related papers (2024-04-11T14:51:12Z)
- Real-Time Traffic Sign Detection: A Case Study in a Santa Clara Suburban Neighborhood [2.4087090457198435]
The project's primary objectives are to train the YOLOv5 model on a diverse dataset of traffic sign images and deploy the model on a suitable hardware platform.
The performance of the deployed system will be evaluated based on its accuracy in detecting traffic signs, real-time processing speed, and overall reliability.
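Detection accuracy for a system like this is typically scored by matching predicted boxes to ground truth via intersection-over-union. A minimal sketch, assuming `(x1, y1, x2, y2)` boxes; the `iou` and `detection_accuracy` helpers are illustrative, not the project's evaluation code.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def detection_accuracy(preds, truths, thresh=0.5):
    """Fraction of ground-truth boxes matched by some prediction
    with IoU at or above the threshold (a simplified recall-style
    metric, not full mAP)."""
    hits = sum(any(iou(p, t) >= thresh for p in preds) for t in truths)
    return hits / len(truths) if truths else 0.0
```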
arXiv Detail & Related papers (2023-10-14T17:52:28Z)
- Automatic Signboard Recognition in Low Quality Night Images [1.6795461001108096]
This paper addresses the challenges of recognizing traffic signs from images captured in low light, noise, and blurriness.
The proposed method achieved a 5.40% improvement in mAP@0.5 for low-quality images with YOLOv4.
It has also attained mAP@0.5 of 100% on the GTSDB dataset.
arXiv Detail & Related papers (2023-08-17T12:26:06Z)
- Efficient Federated Learning with Spike Neural Networks for Traffic Sign Recognition [70.306089187104]
We introduce powerful Spike Neural Networks (SNNs) into traffic sign recognition for energy-efficient and fast model training.
Numerical results indicate that the proposed federated SNN outperforms traditional federated convolutional neural networks in accuracy, noise immunity, and energy efficiency.
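The server-side aggregation step of federated learning (FedAvg-style averaging) can be sketched in a few lines. This shows only the weight averaging a central server would run each round, not the spiking dynamics or the paper's actual training loop; `fed_avg` is a hypothetical helper operating on flat weight vectors.

```python
def fed_avg(client_weights):
    """Federated averaging: element-wise mean of the clients'
    flattened weight vectors. Each inner list is one client's
    model parameters after local training."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]
```

Clients keep their raw images local and only ship weights, which is what makes the scheme attractive for privacy-sensitive traffic data.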
arXiv Detail & Related papers (2022-05-28T03:11:48Z)
- Rolling Colors: Adversarial Laser Exploits against Traffic Light Recognition [18.271698365826552]
We study the feasibility of fooling traffic light recognition mechanisms by shedding laser interference on the camera.
By exploiting the rolling shutter of CMOS sensors, we inject a color stripe overlapped on the traffic light in the image, which can cause a red light to be recognized as a green light or vice versa.
Our evaluation reports maximum success rates of 30% for Red-to-Green attacks and 86.25% for Green-to-Red attacks.
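The stripe-injection effect described above can be simulated on an image array. This is a software approximation only; the paper's attack is physical laser interference that exploits the row-by-row exposure of a CMOS rolling shutter, and `inject_stripe` is an illustrative helper.

```python
import numpy as np

def inject_stripe(image, row_start, row_end, color):
    """Overwrite a horizontal band of rows with a solid color,
    mimicking the stripe a rolling-shutter camera records when a
    timed laser pulse hits only part of the exposure."""
    out = image.copy()
    out[row_start:row_end, :] = color
    return out
```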
arXiv Detail & Related papers (2022-04-06T08:57:25Z)
- Driving-Signal Aware Full-Body Avatars [49.89791440532946]
We present a learning-based method for building driving-signal aware full-body avatars.
Our model is a conditional variational autoencoder that can be animated with incomplete driving signals.
We demonstrate the efficacy of our approach on the challenging problem of full-body animation for virtual telepresence.
arXiv Detail & Related papers (2021-05-21T16:22:38Z)
- Road images augmentation with synthetic traffic signs using neural networks [3.330229314824913]
We consider the task of rare traffic sign detection and classification.
We aim to solve that problem by using synthetic training data.
We propose three methods for making synthetic signs consistent with a scene in appearance.
arXiv Detail & Related papers (2021-01-13T08:10:33Z)
- Deep traffic light detection by overlaying synthetic context on arbitrary natural images [49.592798832978296]
We propose a method to generate artificial traffic-related training data for deep traffic light detectors.
This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds.
It also tackles the intrinsic data imbalance problem in traffic light datasets, caused mainly by the low amount of samples of the yellow state.
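The blending idea above (a fake traffic scene composited onto an arbitrary background) can be sketched as per-pixel alpha compositing. The `blend_patch` helper and its mask convention are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def blend_patch(background, patch, mask, top, left):
    """Alpha-blend a small synthetic traffic-light patch onto an
    arbitrary background image. `mask` holds a per-pixel alpha in
    [0, 1]: 1 keeps the patch, 0 keeps the background."""
    out = background.astype(np.float64).copy()
    h, w = patch.shape[:2]
    region = out[top:top + h, left:left + w]
    a = mask[..., None]  # broadcast alpha over the color channels
    out[top:top + h, left:left + w] = a * patch + (1 - a) * region
    return out.astype(background.dtype)
```

Because a yellow patch can be pasted as often as a red or green one, this kind of generation also addresses the yellow-state scarcity the abstract mentions.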
arXiv Detail & Related papers (2020-11-07T19:57:22Z)
- Deep Traffic Sign Detection and Recognition Without Target Domain Real Images [52.079665469286496]
We propose a novel database generation method that requires no real images from the target domain, only templates of the traffic signs.
The method does not aim to outperform training with real data, but to serve as a compatible alternative when real data is unavailable.
On large data sets, training with a fully synthetic data set almost matches the performance of training with a real one.
arXiv Detail & Related papers (2020-07-30T21:06:47Z)
- Learning by Cheating [72.9701333689606]
We show that this challenging learning problem can be simplified by decomposing it into two stages.
We use the presented approach to train a vision-based autonomous driving system that substantially outperforms the state of the art.
Our approach achieves, for the first time, 100% success rate on all tasks in the original CARLA benchmark, sets a new record on the NoCrash benchmark, and reduces the frequency of infractions by an order of magnitude compared to the prior state of the art.
arXiv Detail & Related papers (2019-12-27T18:59:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.