Thermal infrared image based vehicle detection in low-level illumination
conditions using multi-level GANs
- URL: http://arxiv.org/abs/2209.09808v2
- Date: Sun, 25 Jun 2023 07:42:50 GMT
- Title: Thermal infrared image based vehicle detection in low-level illumination
conditions using multi-level GANs
- Authors: Shivom Bhargava, Sanjita Prajapati, and Pranamesh Chakraborty
- Abstract summary: Vehicle detection is fairly accurate under good-illumination conditions but degrades markedly under low-light conditions.
The combined effect of low light and glare from vehicle headlights or tail-lights makes missed detections more likely for state-of-the-art object detection models.
State-of-the-art GAN models have attempted to improve night-time vehicle detection accuracy by converting infrared images to day-time RGB images.
- Score: 3.3223482500639845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vehicle detection is fairly accurate in good-illumination conditions
but degrades under low-light conditions. The combined effect of low light and
glare from vehicle headlights or tail-lights makes missed detections more
likely for state-of-the-art object detection models. However, thermal infrared
images are based on thermal radiation and are therefore robust to illumination
changes. Recently, Generative Adversarial Networks (GANs) have been extensively
used in image domain transfer tasks. State-of-the-art GAN models have attempted
to improve night-time vehicle detection accuracy by converting infrared images
to day-time RGB images. However, these models have been found to under-perform
during night-time conditions compared to day-time conditions, as day-time
infrared images look different from night-time infrared images. Therefore, this
study attempts to alleviate this shortcoming by proposing three approaches,
based on combinations of GAN models at two different levels, that reduce the
feature distribution gap between day-time and night-time infrared images.
Quantitative analysis comparing the proposed models with the state-of-the-art
models was carried out by testing them with state-of-the-art object detection
models. Both the quantitative and qualitative analyses show that the proposed
models outperform state-of-the-art GAN models for vehicle detection in
night-time conditions, demonstrating the efficacy of the proposed models.
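The abstract describes chaining GAN models at two levels: first narrowing the distribution gap between night-time and day-time infrared images, then translating to day-time RGB for a standard detector. A minimal PyTorch sketch of that two-level composition is below; the generator architecture, names, and layer sizes are illustrative stand-ins, not the paper's actual models.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Minimal image-to-image generator (a stand-in for a full GAN
    generator; a real model would be trained adversarially)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, out_ch, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

# Level 1 (hypothetical): map a night-time IR frame toward a day-time
# IR style, reducing the feature distribution gap the abstract describes.
g_night2day_ir = TinyGenerator(in_ch=1, out_ch=1)

# Level 2 (hypothetical): translate the day-style IR image into a
# day-time RGB image on which a standard object detector can run.
g_ir2rgb = TinyGenerator(in_ch=1, out_ch=3)

night_ir = torch.rand(1, 1, 64, 64)      # dummy single-channel IR frame
day_style_ir = g_night2day_ir(night_ir)  # level-1 domain shift
day_rgb = g_ir2rgb(day_style_ir)         # level-2 modality translation
print(day_rgb.shape)                     # torch.Size([1, 3, 64, 64])
```

The output tensor has the shape a conventional RGB object detector expects, which is the point of the translation step; the detector itself is left out of the sketch.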
Related papers
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - SAR to Optical Image Translation with Color Supervised Diffusion Model [5.234109158596138]
This paper introduces an innovative generative model designed to transform SAR images into more intelligible optical images.
We employ SAR images as conditional guides in the sampling process and integrate color supervision to counteract color shift issues.
arXiv Detail & Related papers (2024-07-24T01:11:28Z) - Feature Corrective Transfer Learning: End-to-End Solutions to Object Detection in Non-Ideal Visual Conditions [11.90136900277127]
"Feature Corrective Transfer Learning" is a novel approach to facilitate the end-to-end detection of objects in challenging scenarios.
Non-ideal images are processed by comparing their feature maps against those from the initial ideal RGB model.
This approach refines the model's ability to perform object detection across varying conditions through direct feature map correction.
arXiv Detail & Related papers (2024-04-17T09:58:53Z) - Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and
Visible Images [49.75771095302775]
We propose an Adaptive Multi-scale Fusion network (AMFusion) with infrared and visible images.
First, we separately fuse spatial and semantic features from infrared and visible images, where the former are used for the adjustment of light distribution.
Second, we utilize detection features extracted by a pre-trained backbone that guide the fusion of semantic features.
Third, we propose a new illumination loss to constrain fusion image with normal light intensity.
arXiv Detail & Related papers (2024-03-02T03:52:07Z) - Simulating Nighttime Visible Satellite Imagery of Tropical Cyclones Using Conditional Generative Adversarial Networks [10.76837828367292]
Visible (VIS) imagery is important for monitoring Tropical Cyclones (TCs) but is unavailable at night.
This study presents a Conditional Generative Adversarial Networks (CGAN) model to generate nighttime VIS imagery.
arXiv Detail & Related papers (2024-01-22T03:44:35Z) - Thermal to Visible Image Synthesis under Atmospheric Turbulence [67.99407460140263]
In biometrics and surveillance, thermal imaging modalities are often used to capture images in low-light and nighttime conditions.
Such imaging systems often suffer from atmospheric turbulence, which introduces severe blur and deformation artifacts to the captured images.
An end-to-end reconstruction method is proposed which can directly transform thermal images into visible-spectrum images.
arXiv Detail & Related papers (2022-04-06T19:47:41Z) - Validation of object detection in UAV-based images using synthetic data [9.189702268557483]
Machine learning (ML) models for UAV-based detection are often validated using data curated for tasks unrelated to the UAV application.
Such errors arise due to differences in imaging conditions between images from UAVs and images in training.
Our work is focused on understanding the impact of different UAV-based imaging conditions on detection performance by using synthetic data generated using a game engine.
arXiv Detail & Related papers (2022-01-17T20:56:56Z) - Vision in adverse weather: Augmentation using CycleGANs with various
object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach of using synthesised adverse condition datasets in autonomous racing (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors.
arXiv Detail & Related papers (2022-01-10T10:02:40Z) - Object Detection in Thermal Spectrum for Advanced Driver-Assistance
Systems (ADAS) [0.5156484100374058]
Object detection in thermal infrared spectrum provides more reliable data source in low-lighting conditions and different weather conditions.
This paper explores and adapts a state-of-the-art object detection and vision framework for thermal vision with seven distinct classes for advanced driver-assistance systems (ADAS).
The trained network variants on public datasets are validated on test data with three different test approaches.
The efficacy of trained networks is tested on locally gathered novel test-data captured with an uncooled LWIR prototype thermal camera.
arXiv Detail & Related papers (2021-09-20T21:38:55Z) - Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of
Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z) - Sparse Needlets for Lighting Estimation with Spherical Transport Loss [89.52531416604774]
NeedleLight is a new lighting estimation model that represents illumination with needlets and allows lighting estimation in both frequency domain and spatial domain jointly.
Extensive experiments show that NeedleLight achieves superior lighting estimation consistently across multiple evaluation metrics as compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-06-24T15:19:42Z) - Exploring Thermal Images for Object Detection in Underexposure Regions
for Autonomous Driving [67.69430435482127]
Underexposure regions are vital to construct a complete perception of the surroundings for safe autonomous driving.
The availability of thermal cameras has provided an essential alternate to explore regions where other optical sensors lack in capturing interpretable signals.
This work proposes a domain adaptation framework which employs a style transfer technique for transfer learning from visible spectrum images to thermal images.
arXiv Detail & Related papers (2020-06-01T09:59:09Z) - Bayesian Fusion for Infrared and Visible Images [26.64101343489016]
In this paper, a novel Bayesian fusion model is established for infrared and visible images.
We aim at making the fused image satisfy human visual system.
Compared with the previous methods, the novel model can generate better fused images with high-light targets and rich texture details.
arXiv Detail & Related papers (2020-05-12T14:57:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.