Automatic Signboard Recognition in Low Quality Night Images
- URL: http://arxiv.org/abs/2308.08941v1
- Date: Thu, 17 Aug 2023 12:26:06 GMT
- Title: Automatic Signboard Recognition in Low Quality Night Images
- Authors: Manas Kagde, Priyanka Choudhary, Rishi Joshi and Somnath Dey
- Abstract summary: This paper addresses the challenges of recognizing traffic signs in images degraded by low light, noise, and blur.
The proposed method achieves a 5.40% increase in mAP@0.5 over Yolov4 on low-quality images.
It also attains an mAP@0.5 of 100% on the GTSDB dataset.
- Score: 1.6795461001108096
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An essential requirement for driver assistance systems and autonomous driving
technology is implementing a robust system for detecting and recognizing
traffic signs. This system enables the vehicle to autonomously analyze the
environment and make appropriate decisions regarding its movement, even when
operating at higher frame rates. However, traffic sign images captured in
inadequate lighting and adverse weather conditions are poorly visible, blurred,
faded, and damaged. Consequently, the recognition of traffic signs in such
circumstances becomes inherently difficult. This paper addresses the challenges
of recognizing traffic signs in images degraded by low light, noise, and blur.
To achieve this goal, a two-step methodology is employed. The
first step involves enhancing traffic sign images by applying a modified MIRNet
model and producing enhanced images. In the second step, the Yolov4 model
recognizes the traffic signs in an unconstrained environment. The proposed
method achieves a 5.40% increase in mAP@0.5 over Yolov4 on low-quality images.
An overall mAP@0.5 of 96.75% is achieved on the GTSRB dataset. It also attains
an mAP@0.5 of 100% on the GTSDB dataset for the broad categories, comparable
with state-of-the-art work.
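The reported mAP@0.5 metric counts a detection as a true positive only when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal sketch of that matching criterion (the box coordinates below are illustrative, not from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts toward mAP@0.5 only if IoU >= 0.5 with a ground-truth box.
gt = (10, 10, 50, 50)      # hypothetical ground-truth sign box
pred = (15, 15, 55, 55)    # hypothetical detection; IoU ≈ 0.62 here
is_true_positive = iou(gt, pred) >= 0.5
```

mAP@0.5 then averages, over classes, the area under the precision-recall curve built from these per-detection true/false-positive decisions.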
Related papers
- Learning Traffic Anomalies from Generative Models on Real-Time Observations [49.1574468325115]
We use the Spatiotemporal Generative Adversarial Network (STGAN) framework to capture complex spatial and temporal dependencies in traffic data.
We apply STGAN to real-time, minute-by-minute observations from 42 traffic cameras across Gothenburg, Sweden, collected over several months in 2020.
Our results demonstrate that the model effectively detects traffic anomalies with high precision and low false positive rates.
arXiv Detail & Related papers (2025-02-03T14:23:23Z)
- Generative Adversarial Network on Motion-Blur Image Restoration [0.0]
We focus on leveraging Generative Adversarial Networks (GANs) to effectively deblur images affected by motion blur.
A GAN-based adversarial model is defined, then trained and evaluated on the GoPro dataset.
Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) are the two evaluation metrics used to provide quantitative measures of image quality.
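For reference, PSNR compares a restored image against its ground truth via mean squared error (higher is better). A minimal sketch for 8-bit images follows; the pixel values are illustrative, and SSIM is omitted because it additionally needs windowed luminance/contrast/structure statistics (e.g. via scikit-image):

```python
import math

def psnr(reference, restored, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between two equal-sized flattened images."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * math.log10(max_val ** 2 / mse)

# Hypothetical flattened pixels: the restored image is off by 16 everywhere,
# so MSE = 256 and PSNR = 10 * log10(255^2 / 256) ≈ 24.05 dB.
ref = [100, 150, 200, 250]
out = [p - 16 for p in ref]
print(round(psnr(ref, out), 2))  # → 24.05
```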
arXiv Detail & Related papers (2024-12-27T06:12:50Z)
- Traffic Co-Simulation Framework Empowered by Infrastructure Camera Sensing and Reinforcement Learning [4.336971448707467]
Multi-agent reinforcement learning (MARL) is particularly effective for learning control strategies for traffic lights in a network using iterative simulations.
This study proposes a co-simulation framework integrating CARLA and SUMO, which combines high-fidelity 3D modeling with large-scale traffic flow simulation.
Experiments in the test-bed demonstrate the effectiveness of the proposed MARL approach in enhancing traffic conditions using real-time camera-based detection.
arXiv Detail & Related papers (2024-12-05T07:01:56Z)
- YOLO-PPA based Efficient Traffic Sign Detection for Cruise Control in Autonomous Driving [10.103731437332693]
Detecting traffic signs efficiently and accurately is essential in autonomous driving systems.
Existing object detection algorithms struggle to detect such small-scale signs.
A YOLO-PPA based traffic sign detection algorithm is proposed in this paper.
arXiv Detail & Related papers (2024-09-05T07:49:21Z)
- Cross-domain Few-shot In-context Learning for Enhancing Traffic Sign Recognition [49.20086587208214]
We propose a cross-domain few-shot in-context learning method based on the MLLM for enhancing traffic sign recognition.
By using description texts, our method reduces the cross-domain differences between template and real traffic signs.
Our approach requires only simple and uniform textual indications, without the need for large-scale traffic sign images and labels.
arXiv Detail & Related papers (2024-07-08T10:51:03Z)
- Low-Light Image Enhancement Framework for Improved Object Detection in Fisheye Lens Datasets [4.170227455727819]
This study addresses the evolving challenges in urban traffic monitoring systems based on fisheye lens cameras.
Fisheye lenses provide wide and omnidirectional coverage in a single frame, making them a transformative solution.
Motivated by these challenges, this study proposes a novel approach that combines a transformer-based image enhancement framework with an ensemble learning technique.
arXiv Detail & Related papers (2024-04-15T18:32:52Z)
- A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion that leads to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- Real-Time Traffic Sign Detection: A Case Study in a Santa Clara Suburban Neighborhood [2.4087090457198435]
The project's primary objectives are to train the YOLOv5 model on a diverse dataset of traffic sign images and deploy the model on a suitable hardware platform.
The performance of the deployed system will be evaluated based on its accuracy in detecting traffic signs, real-time processing speed, and overall reliability.
arXiv Detail & Related papers (2023-10-14T17:52:28Z)
- Vision in adverse weather: Augmentation using CycleGANs with various object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach of using synthesised adverse condition datasets in autonomous racing (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors.
arXiv Detail & Related papers (2022-01-10T10:02:40Z)
- Driving-Signal Aware Full-Body Avatars [49.89791440532946]
We present a learning-based method for building driving-signal aware full-body avatars.
Our model is a conditional variational autoencoder that can be animated with incomplete driving signals.
We demonstrate the efficacy of our approach on the challenging problem of full-body animation for virtual telepresence.
arXiv Detail & Related papers (2021-05-21T16:22:38Z)
- Deep traffic light detection by overlaying synthetic context on arbitrary natural images [49.592798832978296]
We propose a method to generate artificial traffic-related training data for deep traffic light detectors.
This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds.
It also tackles the intrinsic data imbalance problem in traffic light datasets, caused mainly by the low amount of samples of the yellow state.
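The blending step described above can be sketched as simple alpha compositing, where a mask decides which pixels come from the synthetic scene and which from the natural background (the arrays and mask below are toy illustrations, not the paper's actual renderer):

```python
def alpha_blend(background, foreground, alpha):
    """Composite a synthetic foreground onto a background, pixel by pixel.

    `alpha` is 1.0 where the rendered traffic scene occupies the pixel
    and 0.0 where the natural background should show through.
    """
    return [
        fg * a + bg * (1.0 - a)
        for bg, fg, a in zip(background, foreground, alpha)
    ]

# Toy 4-pixel grayscale example: the middle two pixels come from the
# synthetic traffic scene, the outer two from the natural background.
bg = [10.0, 20.0, 30.0, 40.0]
fg = [200.0, 210.0, 220.0, 230.0]
mask = [0.0, 1.0, 1.0, 0.0]
print(alpha_blend(bg, fg, mask))  # → [10.0, 210.0, 220.0, 40.0]
```

Fractional alpha values at the mask edges would soften the seam between the fake scene and the background.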
arXiv Detail & Related papers (2020-11-07T19:57:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.