SDNIA-YOLO: A Robust Object Detection Model for Extreme Weather Conditions
- URL: http://arxiv.org/abs/2406.12395v1
- Date: Tue, 18 Jun 2024 08:36:44 GMT
- Title: SDNIA-YOLO: A Robust Object Detection Model for Extreme Weather Conditions
- Authors: Yuexiong Ding, Xiaowei Luo
- Abstract summary: This study proposes a stylization data-driven neural-image-adaptive YOLO (SDNIA-YOLO).
The developed SDNIA-YOLO achieves significant mAP@.5 improvements of at least 15% on the real-world foggy (RTTS) and lowlight (ExDark) test sets.
The experiments also highlight the outstanding potential of stylization data in simulating extreme weather conditions.
- Score: 1.4579344926652846
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Though current object detection models based on deep learning have achieved excellent results on many conventional benchmark datasets, their performance declines dramatically on real-world images taken under extreme conditions. Existing methods either use image augmentation based on traditional image processing algorithms or apply customized, scene-limited image adaptation techniques for robust modeling. This study thus proposes a stylization data-driven neural-image-adaptive YOLO (SDNIA-YOLO), which improves the model's robustness by enhancing image quality adaptively and by learning valuable information related to extreme weather conditions from images synthesized by neural style transfer (NST). Experiments show that the developed SDNIA-YOLOv3 achieves significant mAP@.5 improvements of at least 15% on the real-world foggy (RTTS) and lowlight (ExDark) test sets compared with the baseline model. In addition, the experiments highlight the outstanding potential of stylization data in simulating extreme weather conditions. The developed SDNIA-YOLO largely retains the excellent characteristics of native YOLO, such as its end-to-end one-stage design, data-driven training, and speed.
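The core idea above is to synthesize extreme-weather training images by transferring the style of weather-degraded images onto clean ones. As a minimal illustration of statistics-based stylization, the sketch below uses AdaIN-style mean/std matching; this is a stand-in for NST and is not necessarily the exact transfer method the authors use.

```python
import numpy as np

def adain_stylize(content, style, alpha=1.0):
    """Transfer per-channel mean/std statistics from a style image to a
    content image (AdaIN), a common building block of neural style transfer.
    Both arrays are float32 HxWx3 in [0, 1]."""
    c_mean = content.mean(axis=(0, 1), keepdims=True)
    c_std = content.std(axis=(0, 1), keepdims=True) + 1e-6
    s_mean = style.mean(axis=(0, 1), keepdims=True)
    s_std = style.std(axis=(0, 1), keepdims=True)
    stylized = (content - c_mean) / c_std * s_std + s_mean
    # alpha blends between the original content (0) and full stylization (1).
    return np.clip(alpha * stylized + (1 - alpha) * content, 0.0, 1.0)

# Synthesize a "foggy" training sample by borrowing the statistics of a
# fog-like style image (bright, low contrast).
rng = np.random.default_rng(0)
clear = rng.uniform(0.0, 1.0, (32, 32, 3)).astype(np.float32)
fog = (0.8 + rng.normal(0.0, 0.02, (32, 32, 3))).astype(np.float32)
foggy_sample = adain_stylize(clear, fog)
```

Samples produced this way can then be mixed into the detector's training set so the model sees weather-like degradations without real labeled foggy data.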
Related papers
- Generative AI-based Pipeline Architecture for Increasing Training Efficiency in Intelligent Weed Control Systems [0.0]
This study presents a new approach for generating synthetic images to improve deep learning-based object detection models for intelligent weed control.
Our GenAI-based image generation pipeline integrates the Segment Anything Model (SAM) for zero-shot domain adaptation with a text-to-image Stable Diffusion Model.
We evaluate these synthetic datasets using lightweight YOLO models, measuring data efficiency with mAP50 and mAP50-95 scores.
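The mAP50 score referenced above counts a prediction as correct when its IoU with an unmatched ground-truth box is at least 0.5. A minimal sketch of that matching step (per-image precision/recall only; full mAP additionally averages precision over score thresholds and classes):

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall_at_50(preds, gts):
    """Greedily match score-sorted predictions to ground truth at IoU >= 0.5.
    preds: list of (score, box); gts: list of boxes."""
    matched = set()
    tp = 0
    for score, box in sorted(preds, key=lambda p: -p[0]):
        best, best_j = 0.0, -1
        for j, g in enumerate(gts):
            if j not in matched and iou(box, g) > best:
                best, best_j = iou(box, g), j
        if best >= 0.5:
            tp += 1
            matched.add(best_j)
    return tp / len(preds), tp / len(gts)
```

mAP50-95 repeats this evaluation at IoU thresholds from 0.5 to 0.95 in steps of 0.05 and averages the results, which rewards tighter localization.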
arXiv Detail & Related papers (2024-11-01T12:58:27Z) - Super-resolving Real-world Image Illumination Enhancement: A New Dataset and A Conditional Diffusion Model [43.93772529301279]
We propose a SRRIIE dataset with an efficient conditional diffusion probabilistic models-based method.
We capture images using an ILDC camera and an optical zoom lens with exposure levels ranging from -6 EV to 0 EV and ISO levels ranging from 50 to 12800.
We show that most existing methods are less effective in preserving the structures and sharpness of restored images from complicated noises.
arXiv Detail & Related papers (2024-10-16T18:47:04Z) - SynFog: A Photo-realistic Synthetic Fog Dataset based on End-to-end Imaging Simulation for Advancing Real-World Defogging in Autonomous Driving [48.27575423606407]
We introduce an end-to-end simulation pipeline designed to generate photo-realistic foggy images.
We present a new synthetic fog dataset named SynFog, which features both sky light and active lighting conditions.
Experimental results demonstrate that models trained on SynFog exhibit superior performance in visual perception and detection accuracy.
arXiv Detail & Related papers (2024-03-25T18:32:41Z) - DiffYOLO: Object Detection for Anti-Noise via YOLO and Diffusion Models [4.7846759259287985]
We propose DiffYOLO, a framework that applies diffusion models to YOLO detectors.
Specifically, we extract feature maps from the denoising diffusion probabilistic models to enhance the well-trained models.
Results show that this framework not only improves performance on noisy datasets but also maintains detection quality on high-quality test datasets.
arXiv Detail & Related papers (2024-01-03T10:35:35Z) - UAV-Sim: NeRF-based Synthetic Data Generation for UAV-based Perception [62.71374902455154]
We leverage recent advancements in neural rendering to improve static and dynamic novel-view UAV-based image rendering.
We demonstrate a considerable performance boost when a state-of-the-art detection model is optimized primarily on hybrid sets of real and synthetic data.
arXiv Detail & Related papers (2023-10-25T00:20:37Z) - ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework can work with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z) - DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
arXiv Detail & Related papers (2022-11-09T14:57:27Z) - ART-SS: An Adaptive Rejection Technique for Semi-Supervised Restoration for Adverse Weather-affected Images [24.03416814412226]
We study the effect of unlabeled data on the performance of an SSR method.
We develop a technique that rejects the unlabeled images that degrade the performance.
arXiv Detail & Related papers (2022-03-17T12:00:31Z) - Image-Adaptive YOLO for Object Detection in Adverse Weather Conditions [34.993786158059436]
We propose a novel Image-Adaptive YOLO (IA-YOLO) framework, where each image can be adaptively enhanced for better detection performance.
Specifically, a differentiable image processing (DIP) module is presented to take into account the adverse weather conditions for YOLO detector.
We learn CNN-PP and YOLOv3 jointly in an end-to-end fashion, which ensures CNN-PP can learn an appropriate DIP to enhance the image for detection in a weakly supervised manner.
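The joint training described above works because each DIP filter is differentiable in its parameters, so gradients from the detection loss can flow back into the parameter-predicting CNN (CNN-PP). As an illustration, the sketch below implements one such filter, gamma correction, together with its parameter gradient; the paper's actual filter bank is larger and this choice is ours, not the authors' exact formulation.

```python
import numpy as np

def gamma_filter(img, gamma):
    """Differentiable gamma correction: out = img ** gamma.
    gamma < 1 brightens a dark image; gamma > 1 darkens it."""
    return np.power(img, gamma)

def gamma_grad(img, gamma, upstream):
    """Gradient of the scalar loss w.r.t. gamma, given upstream gradients
    d loss / d out. Uses d(img**gamma)/d gamma = img**gamma * ln(img)."""
    eps = 1e-6  # guard against log(0) for black pixels
    return float(np.sum(upstream * np.power(img, gamma) * np.log(img + eps)))

# Toy usage: a gamma < 1 predicted by CNN-PP would brighten a dark input
# before it reaches the detector.
dark = np.array([[0.1, 0.2], [0.3, 0.4]])
bright = gamma_filter(dark, 0.5)
# For pixels in (0, 1), increasing gamma darkens the output, so with
# all-ones upstream gradients the parameter gradient is negative.
g = gamma_grad(dark, 1.0, np.ones_like(dark))
```

In the full pipeline, `gamma` would be an output of CNN-PP rather than a constant, and the backward pass through the detector supplies the upstream gradients, which is what makes the weakly supervised end-to-end learning possible.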
arXiv Detail & Related papers (2021-12-15T12:54:17Z) - Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z) - FD-GAN: Generative Adversarial Networks with Fusion-discriminator for Single Image Dehazing [48.65974971543703]
We propose a fully end-to-end Generative Adversarial Networks with Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
arXiv Detail & Related papers (2020-01-20T04:36:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information (including all listed content) and is not responsible for any consequences of its use.