Image-Adaptive YOLO for Object Detection in Adverse Weather Conditions
- URL: http://arxiv.org/abs/2112.08088v1
- Date: Wed, 15 Dec 2021 12:54:17 GMT
- Title: Image-Adaptive YOLO for Object Detection in Adverse Weather Conditions
- Authors: Wenyu Liu, Gaofeng Ren, Runsheng Yu, Shi Guo, Jianke Zhu, Lei Zhang
- Abstract summary: We propose a novel Image-Adaptive YOLO (IA-YOLO) framework, where each image can be adaptively enhanced for better detection performance.
Specifically, a differentiable image processing (DIP) module is presented to account for adverse weather conditions for the YOLO detector.
We learn CNN-PP and YOLOv3 jointly in an end-to-end fashion, which ensures CNN-PP can learn an appropriate DIP to enhance the image for detection in a weakly supervised manner.
- Score: 34.993786158059436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Though deep learning-based object detection methods have achieved promising
results on the conventional datasets, it is still challenging to locate objects
from the low-quality images captured in adverse weather conditions. The
existing methods either have difficulties in balancing the tasks of image
enhancement and object detection, or often ignore the latent information
beneficial for detection. To alleviate this problem, we propose a novel
Image-Adaptive YOLO (IA-YOLO) framework, where each image can be adaptively
enhanced for better detection performance. Specifically, a differentiable image
processing (DIP) module is presented to take into account the adverse weather
conditions for the YOLO detector, whose parameters are predicted by a small
convolutional neural network (CNN-PP). We learn CNN-PP and YOLOv3 jointly in
an end-to-end fashion, which ensures that CNN-PP can learn an appropriate DIP
to enhance the image for detection in a weakly supervised manner. Our proposed
IA-YOLO approach can adaptively process images in both normal and adverse
weather conditions. The experimental results are very encouraging,
demonstrating the effectiveness of our proposed IA-YOLO method in both foggy
and low-light scenarios.
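The CNN-PP-plus-DIP pipeline described in the abstract can be sketched as follows. This is a minimal NumPy illustration only: the hand-crafted parameter predictor, the gamma/white-balance filter choices, and all function names are assumptions standing in for the paper's learned CNN-PP and DIP module.

```python
import numpy as np

def predict_filter_params(image):
    """Stand-in for CNN-PP: predicts DIP parameters from global image
    statistics. In IA-YOLO this is a small learned CNN; the heuristic
    below is an illustrative assumption, not the paper's network."""
    mean = float(image.mean())
    # Gamma chosen to pull the mean brightness toward 0.5.
    gamma = float(np.clip(np.log(0.5) / np.log(max(mean, 1e-6)), 0.3, 3.0))
    # Gray-world white-balance gains, one per channel.
    channel_means = image.reshape(-1, 3).mean(axis=0)
    wb_gains = channel_means.mean() / np.maximum(channel_means, 1e-6)
    return gamma, wb_gains

def apply_dip(image, gamma, wb_gains):
    """Differentiable image-processing module: white balance, then gamma."""
    balanced = np.clip(image * wb_gains, 0.0, 1.0)
    return balanced ** gamma

# Usage: enhance a synthetic dark, blue-tinted image before detection.
rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.2, size=(64, 64, 3))
dark[..., 2] = np.clip(dark[..., 2] * 1.5, 0.0, 1.0)  # blue cast
gamma, gains = predict_filter_params(dark)
enhanced = apply_dip(dark, gamma, gains)
```

In the full framework the enhanced image would be fed to YOLOv3, and the detection loss would be backpropagated through the DIP filters into CNN-PP.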
Related papers
- SDNIA-YOLO: A Robust Object Detection Model for Extreme Weather Conditions [1.4579344926652846]
This study proposes a stylization data-driven neural-image-adaptive YOLO (SDNIA-YOLO) model.
The developed SDNIA-YOLO achieves significant mAP@.5 improvements of at least 15% on the real-world foggy (RTTS) and low-light (ExDark) test sets.
The experiments also highlight the outstanding potential of stylization data in simulating extreme weather conditions.
arXiv Detail & Related papers (2024-06-18T08:36:44Z)
- D-YOLO: a robust framework for object detection in adverse weather conditions [0.0]
Adverse weather conditions, including haze, snow and rain, degrade image quality, which often causes a performance decline for deep-learning-based detection networks.
To better integrate image restoration and object detection tasks, we designed a double-route network with an attention feature fusion module.
We also proposed a subnetwork to provide haze-free features to the detection network. Specifically, our D-YOLO improves the performance of the detection network by minimizing the distance between the clear feature extraction subnetwork and detection network.
arXiv Detail & Related papers (2024-03-14T09:57:15Z)
- FogGuard: guarding YOLO against fog using perceptual loss [5.868532677577194]
FogGuard is a fog-aware object detection network designed to address the challenges posed by foggy weather conditions.
FogGuard compensates for foggy conditions in the scene by incorporating YOLOv3 as the baseline algorithm.
Our network significantly improves performance, achieving a 69.43% mAP compared to YOLOv3's 57.78% on the RTTS dataset.
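A perceptual loss of the kind named in the FogGuard title compares feature representations of foggy and clear images rather than raw pixels. The sketch below is an illustrative assumption: it uses a fixed random projection as a stand-in feature extractor, whereas FogGuard's loss operates on the detector's feature maps.

```python
import numpy as np

def features(x, weights):
    """Stand-in feature extractor: one random linear projection with ReLU.
    A real perceptual loss would use a pretrained network's feature maps."""
    return np.maximum(x.reshape(-1) @ weights, 0.0)

def perceptual_loss(foggy, clear, weights):
    """Mean squared distance between the two feature representations."""
    return float(np.mean((features(foggy, weights) - features(clear, weights)) ** 2))

# Usage: a simple homogeneous fog model (attenuation plus airlight).
rng = np.random.default_rng(1)
clear = rng.uniform(size=(16, 16))
foggy = 0.6 * clear + 0.4
W = rng.normal(size=(16 * 16, 32))
loss = perceptual_loss(foggy, clear, W)
```

Minimizing such a loss pushes the features extracted from the foggy image toward those of its clear counterpart.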
arXiv Detail & Related papers (2024-03-13T20:13:25Z)
- YOLO-World: Real-Time Open-Vocabulary Object Detection [87.08732047660058]
We introduce YOLO-World, an innovative approach that enhances YOLO with open-vocabulary detection capabilities.
Our method excels in detecting a wide range of objects in a zero-shot manner with high efficiency.
YOLO-World achieves 35.4 AP with 52.0 FPS on V100, which outperforms many state-of-the-art methods in terms of both accuracy and speed.
arXiv Detail & Related papers (2024-01-30T18:59:38Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
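The dynamic pseudo-label update mentioned above can be illustrated with an exponential-moving-average refresh. DGNet's exact update rule is not given in this summary; the momentum form and its value below are illustrative assumptions.

```python
import numpy as np

def update_pseudo_label(pseudo, prediction, momentum=0.9):
    """EMA-style refresh of a pseudo-label with the current prediction.
    The momentum form and the 0.9 value are assumptions for illustration."""
    return momentum * pseudo + (1.0 - momentum) * prediction

# Usage: the pseudo-label drifts toward a stable prediction over iterations.
pseudo = np.zeros((4, 4))
prediction = np.ones((4, 4))
for _ in range(20):
    pseudo = update_pseudo_label(pseudo, prediction)
```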
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- PE-YOLO: Pyramid Enhancement Network for Dark Object Detection [9.949687351946038]
We propose a pyramid enhancement network (PENet) and join it with YOLOv3 to build a dark object detection framework named PE-YOLO.
PE-YOLO adopts an end-to-end joint training approach and only uses normal detection loss to simplify the training process.
Results: PE-YOLO achieves 78.0% mAP at 53.6 FPS, and can adapt to object detection under different low-light conditions.
arXiv Detail & Related papers (2023-07-20T15:25:55Z)
- GDIP: Gated Differentiable Image Processing for Object Detection in Adverse Conditions [15.327704761260131]
We present a Gated Differentiable Image Processing (GDIP) block, a domain-agnostic network architecture.
Our proposed GDIP block learns to enhance images directly through the downstream object detection loss.
We demonstrate significant improvement in detection performance over several state-of-the-art methods.
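The gating idea behind GDIP can be sketched as a weighted blend of parallel image-processing operations. This is a minimal illustration under stated assumptions: in GDIP the gate weights are learned through the downstream detection loss, while here the three ops and the free gate logits are chosen by hand.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gdip_block(image, gate_logits):
    """Gated blend of parallel image-processing ops (sketch). The ops
    and the hand-set gate logits are illustrative assumptions."""
    ops = [
        image,                           # identity
        np.clip(image * 1.5, 0.0, 1.0),  # brightness boost
        np.clip(image, 0.0, 1.0) ** 0.5, # gamma correction
    ]
    gates = softmax(np.asarray(gate_logits, dtype=float))
    return sum(g * op for g, op in zip(gates, ops))

# Usage: gates favor the brightness-boost op for a flat gray image.
out = gdip_block(np.full((8, 8), 0.25), [0.0, 2.0, -1.0])
```

Because the softmax gates form a convex combination, the output stays within the range of the individual ops.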
arXiv Detail & Related papers (2022-09-29T16:43:13Z)
- Exploring Resolution and Degradation Clues as Self-supervised Signal for Low Quality Object Detection [77.3530907443279]
We propose a novel self-supervised framework to detect objects in degraded low resolution images.
Our method achieves superior performance compared with existing methods under various degradation conditions.
arXiv Detail & Related papers (2022-08-05T09:36:13Z)
- A lightweight and accurate YOLO-like network for small target detection in Aerial Imagery [94.78943497436492]
We present YOLO-S, a simple, fast and efficient network for small target detection.
YOLO-S exploits a small feature extractor based on Darknet20, as well as skip connection, via both bypass and concatenation.
YOLO-S reduces parameter size by 87% and requires nearly half the FLOPs of YOLOv3, making deployment practical for low-power industrial applications.
arXiv Detail & Related papers (2022-04-05T16:29:49Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework at KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.