Automatic Signboard Recognition in Low Quality Night Images
- URL: http://arxiv.org/abs/2308.08941v1
- Date: Thu, 17 Aug 2023 12:26:06 GMT
- Title: Automatic Signboard Recognition in Low Quality Night Images
- Authors: Manas Kagde, Priyanka Choudhary, Rishi Joshi and Somnath Dey
- Abstract summary: This paper addresses the challenges of recognizing traffic signs in images degraded by low light, noise, and blur.
The proposed method achieves a 5.40% increase in mAP@0.5 for low-quality images with Yolov4.
It also attains an mAP@0.5 of 100% on the GTSDB dataset.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An essential requirement for driver assistance systems and autonomous driving
technology is implementing a robust system for detecting and recognizing
traffic signs. This system enables the vehicle to autonomously analyze the
environment and make appropriate decisions regarding its movement, even when
operating at higher frame rates. However, traffic sign images captured in
inadequate lighting and adverse weather conditions are poorly visible, blurred,
faded, and damaged. Consequently, the recognition of traffic signs in such
circumstances becomes inherently difficult. This paper addresses the challenge
of recognizing traffic signs in images degraded by low light, noise, and
blur. To achieve this goal, a two-step methodology is employed. The
first step involves enhancing traffic sign images by applying a modified MIRNet
model and producing enhanced images. In the second step, the Yolov4 model
recognizes the traffic signs in an unconstrained environment. The proposed
method achieves a 5.40% increase in mAP@0.5 for low-quality images with
Yolov4, an overall mAP@0.5 of 96.75% on the GTSRB dataset, and an mAP@0.5 of
100% for the broad categories on the GTSDB dataset, comparable with
state-of-the-art work.
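The two-step pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: a simple gamma correction stands in for the modified MIRNet enhancement model, and the IoU function shows the overlap criterion behind the mAP@0.5 metric reported in the abstract (a detection counts as a true positive when its IoU with a ground-truth box is at least 0.5).

```python
import numpy as np

def enhance_low_light(img, gamma=0.5):
    """Step 1 stand-in: brighten a low-light image with gamma correction.
    The paper uses a modified MIRNet model for this step instead."""
    scaled = img.astype(np.float32) / 255.0
    return (np.power(scaled, gamma) * 255.0).astype(np.uint8)

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes --
    the overlap criterion behind the mAP@0.5 metric."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Step 1: enhance a synthetic "night" image, then (step 2) a detector
# such as Yolov4 would run on the enhanced frame.
dark = np.full((8, 8, 3), 30, dtype=np.uint8)
bright = enhance_low_light(dark)
print(bright[0, 0, 0])                        # brighter than the input value 30
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))    # below the 0.5 match threshold
```

In the real system the enhanced frames, not the raw night images, are fed to the Yolov4 detector, which is what yields the reported mAP gain on low-quality inputs.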
Related papers
- Human-in-the-loop Reasoning For Traffic Sign Detection: Collaborative Approach Yolo With Video-llava [0.0]
This paper proposes a method that combines video analysis and reasoning, using human-in-the-loop prompting to guide a large vision model and improve YOLO's accuracy.
It is hypothesized that the guided prompting and reasoning abilities of Video-LLava can enhance YOLO's traffic sign detection capabilities.
arXiv Detail & Related papers (2024-10-07T14:50:56Z) - YOLO-PPA based Efficient Traffic Sign Detection for Cruise Control in Autonomous Driving [10.103731437332693]
Detecting traffic signs efficiently and accurately is essential in autonomous driving systems.
Existing object detection algorithms struggle to detect such small-scale signs.
This paper proposes a YOLO-PPA-based traffic sign detection algorithm.
arXiv Detail & Related papers (2024-09-05T07:49:21Z) - Cross-domain Few-shot In-context Learning for Enhancing Traffic Sign Recognition [49.20086587208214]
We propose a cross-domain few-shot in-context learning method based on the MLLM for enhancing traffic sign recognition.
By using description texts, our method reduces the cross-domain differences between template and real traffic signs.
Our approach requires only simple and uniform textual indications, without the need for large-scale traffic sign images and labels.
arXiv Detail & Related papers (2024-07-08T10:51:03Z) - Low-Light Image Enhancement Framework for Improved Object Detection in Fisheye Lens Datasets [4.170227455727819]
This study addresses the evolving challenges in urban traffic monitoring systems based on fisheye lens cameras.
Fisheye lenses provide wide and omnidirectional coverage in a single frame, making them a transformative solution.
Motivated by these challenges, this study proposes a novel approach that combines a transformer-based image enhancement framework and an ensemble learning technique.
arXiv Detail & Related papers (2024-04-15T18:32:52Z) - A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z) - Real-Time Traffic Sign Detection: A Case Study in a Santa Clara Suburban Neighborhood [2.4087090457198435]
The project's primary objectives are to train the YOLOv5 model on a diverse dataset of traffic sign images and deploy the model on a suitable hardware platform.
The performance of the deployed system will be evaluated based on its accuracy in detecting traffic signs, real-time processing speed, and overall reliability.
arXiv Detail & Related papers (2023-10-14T17:52:28Z) - Unsupervised Foggy Scene Understanding via Self Spatial-Temporal Label Diffusion [51.11295961195151]
We exploit the characteristics of the foggy image sequence of driving scenes to densify the confident pseudo labels.
Based on the two discoveries of local spatial similarity and adjacent temporal correspondence of the sequential image data, we propose a novel Target-Domain driven pseudo label Diffusion scheme.
Our scheme helps the adaptive model achieve 51.92% and 53.84% mean intersection-over-union (mIoU) on two publicly available natural foggy datasets.
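The mean intersection-over-union (mIoU) figures quoted above are the standard semantic segmentation metric: per-class IoU averaged over the classes present. A minimal sketch (the label maps and the two-class setup are illustrative, not taken from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Average per-class IoU over classes present in prediction or target."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:                 # class absent from both maps: skip it
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x4 label maps with two classes (e.g. road vs. not-road).
pred   = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
target = np.array([[0, 0, 0, 1],
                   [0, 0, 1, 1]])
print(mean_iou(pred, target, num_classes=2))   # average of 4/5 and 3/4
```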
arXiv Detail & Related papers (2022-06-10T05:16:50Z) - Towards Real-time Traffic Sign and Traffic Light Detection on Embedded Systems [0.6143225301480709]
We propose a simple deep learning based end-to-end detection framework to tackle challenges inherent to traffic sign and traffic light detection.
The overall system achieves a high inference speed of 63 frames per second, demonstrating the capability of our system to perform in real-time.
CeyRo is the first ever large-scale traffic sign and traffic light detection dataset for the Sri Lankan context.
arXiv Detail & Related papers (2022-05-05T03:46:19Z) - Vision in adverse weather: Augmentation using CycleGANs with various object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach of using synthesised adverse condition datasets in autonomous racing (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors.
arXiv Detail & Related papers (2022-01-10T10:02:40Z) - Driving-Signal Aware Full-Body Avatars [49.89791440532946]
We present a learning-based method for building driving-signal aware full-body avatars.
Our model is a conditional variational autoencoder that can be animated with incomplete driving signals.
We demonstrate the efficacy of our approach on the challenging problem of full-body animation for virtual telepresence.
arXiv Detail & Related papers (2021-05-21T16:22:38Z) - Deep traffic light detection by overlaying synthetic context on arbitrary natural images [49.592798832978296]
We propose a method to generate artificial traffic-related training data for deep traffic light detectors.
This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds.
It also tackles the intrinsic data imbalance problem in traffic light datasets, caused mainly by the low amount of samples of the yellow state.
arXiv Detail & Related papers (2020-11-07T19:57:22Z)
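The synthetic-context idea in the last entry, blending rendered traffic objects onto arbitrary backgrounds, can be sketched with simple alpha compositing. This is an illustrative sketch under assumed array shapes; the `paste_object` helper and the solid-color sprite are placeholders, not the paper's actual rendering pipeline:

```python
import numpy as np

def paste_object(background, sprite, alpha, top, left):
    """Alpha-blend a small rendered object (e.g. a traffic light sprite)
    onto an arbitrary background image, producing synthetic training data."""
    h, w = sprite.shape[:2]
    out = background.astype(np.float32).copy()
    region = out[top:top + h, left:left + w]
    a = alpha[..., None]                       # (h, w, 1) mask in [0, 1]
    out[top:top + h, left:left + w] = a * sprite + (1 - a) * region
    return out.astype(np.uint8)

bg = np.zeros((16, 16, 3), dtype=np.uint8)          # arbitrary background
sprite = np.full((4, 4, 3), 255, dtype=np.uint8)    # solid white "object"
alpha = np.ones((4, 4), dtype=np.float32)           # fully opaque mask
composited = paste_object(bg, sprite, alpha, top=6, left=6)
print(composited[7, 7])    # sprite pixels replace the background here
```

Because the compositor controls which sprites it pastes, such a pipeline can also oversample rare classes (like the yellow light state mentioned above) to counter dataset imbalance.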
This list is automatically generated from the titles and abstracts of the papers on this site.