FriendNet: Detection-Friendly Dehazing Network
- URL: http://arxiv.org/abs/2403.04443v1
- Date: Thu, 7 Mar 2024 12:19:04 GMT
- Title: FriendNet: Detection-Friendly Dehazing Network
- Authors: Yihua Fan, Yongzhen Wang, Mingqiang Wei, Fu Lee Wang, and Haoran Xie
- Abstract summary: We propose an effective architecture that bridges image dehazing and object detection together via guidance information and task-driven learning.
FriendNet aims to deliver both high-quality perception and high detection capacity.
- Score: 24.372610892854283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adverse weather conditions often impair the quality of captured images,
inevitably degrading the performance of cutting-edge object detection models for advanced driver
assistance systems (ADAS) and autonomous driving. In this paper, we raise an
intriguing question: can the combination of image restoration and object
detection enhance detection performance in adverse weather conditions? To
answer it, we propose an effective architecture that bridges image dehazing and
object detection together via guidance information and task-driven learning to
achieve detection-friendly dehazing, termed FriendNet. FriendNet aims to
deliver both high-quality perception and high detection capacity. Different
from existing efforts that intuitively treat image dehazing as pre-processing,
FriendNet establishes a positive correlation between these two tasks. Clean
features generated by the dehazing network potentially contribute to
improvements in object detection performance. Conversely, object detection
crucially guides the learning process of the image dehazing network under the
task-driven learning scheme. We shed light on how downstream tasks can guide
upstream dehazing processes, considering both network architecture and learning
objectives. We design Guidance Fusion Block (GFB) and Guidance Attention Block
(GAB) to facilitate the integration of detection information into the network.
Furthermore, the incorporation of the detection task loss aids in refining the
optimization process. Additionally, we introduce a new Physics-aware Feature
Enhancement Block (PFEB), which integrates physics-based priors to enhance the
feature extraction and representation capabilities. Extensive experiments on
synthetic and real-world datasets demonstrate the superiority of our method
over state-of-the-art methods on both image quality and detection precision.
Our source code is available at https://github.com/fanyihua0309/FriendNet.
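Two ideas in the abstract lend themselves to a short sketch: the physics-based prior behind PFEB, which is commonly built on the atmospheric scattering model I = J·t + A·(1 − t), and the task-driven learning scheme, in which a detection loss is added to the dehazing objective. The snippet below is a minimal illustration under those assumptions; the function names, loss forms, and the weight `lam` are illustrative and not taken from the paper's code.

```python
# Illustrative sketch (not the paper's implementation) of two ideas in FriendNet:
# 1) inverting the atmospheric scattering model I = J*t + A*(1 - t), the kind of
#    physics-based prior PFEB builds on, and
# 2) a task-driven joint objective: a dehazing loss plus a weighted detection loss.
# The names and the weight `lam` are assumptions for illustration only.

def dehaze_pixel(I, A, t, t_min=0.1):
    """Recover scene radiance J from the scattering model I = J*t + A*(1 - t).
    I: observed hazy intensity, A: atmospheric light, t: transmission."""
    t = max(t, t_min)  # clamp transmission to avoid amplifying noise in dense haze
    return (I - A) / t + A

def mse(pred, target):
    """Mean squared error between two equal-length intensity sequences."""
    return sum((p - q) ** 2 for p, q in zip(pred, target)) / len(pred)

def joint_loss(dehazed, clean, det_loss, lam=0.5):
    """Task-driven objective: restoration loss plus a detection loss supplied
    by the downstream detector; lam balances the two terms."""
    return mse(dehazed, clean) + lam * det_loss

# Toy usage: invert the model for one pixel, then combine the two losses.
J = dehaze_pixel(I=0.7, A=0.9, t=0.5)                      # (0.7-0.9)/0.5 + 0.9 = 0.5
loss = joint_loss([J], [0.5], det_loss=0.2, lam=0.5)       # 0.0 + 0.5*0.2 = 0.1
```

In the paper's actual training loop the detection loss would come from a frozen or jointly trained detector evaluated on the dehazed image, so its gradient flows back into the dehazing network; the scalar `det_loss` above simply stands in for that term.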
Related papers
- D-YOLO a robust framework for object detection in adverse weather conditions [0.0]
Adverse weather conditions, including haze, snow, and rain, degrade image quality, which often reduces the performance of deep-learning-based detection networks.
To better integrate image restoration and object detection tasks, we designed a double-route network with an attention feature fusion module.
We also proposed a subnetwork to provide haze-free features to the detection network. Specifically, our D-YOLO improves the performance of the detection network by minimizing the distance between the clear feature extraction subnetwork and detection network.
arXiv Detail & Related papers (2024-03-14T09:57:15Z) - Joint Perceptual Learning for Enhancement and Object Detection in
Underwater Scenarios [41.34564703212461]
We propose a bilevel optimization formulation for jointly learning underwater object detection and image enhancement.
Our method produces visually pleasing images and achieves higher detection accuracy.
arXiv Detail & Related papers (2023-07-07T11:54:06Z) - An Interactively Reinforced Paradigm for Joint Infrared-Visible Image
Fusion and Saliency Object Detection [59.02821429555375]
This research focuses on the discovery and localization of hidden objects in the wild and serves unmanned systems.
Empirical analysis shows that infrared and visible image fusion (IVIF) makes hard-to-find objects apparent, while multimodal salient object detection (SOD) accurately delineates the spatial location of objects within the image.
arXiv Detail & Related papers (2023-05-17T06:48:35Z) - GDIP: Gated Differentiable Image Processing for Object-Detection in
Adverse Conditions [15.327704761260131]
We present a Gated Differentiable Image Processing (GDIP) block, a domain-agnostic network architecture.
Our proposed GDIP block learns to enhance images directly through the downstream object detection loss.
We demonstrate significant improvement in detection performance over several state-of-the-art methods.
arXiv Detail & Related papers (2022-09-29T16:43:13Z) - TogetherNet: Bridging Image Restoration and Object Detection Together
via Dynamic Enhancement Learning [20.312198020027957]
Adverse weather conditions such as haze, rain, and snow often impair the quality of captured images.
We propose an effective yet unified detection paradigm that bridges image restoration and object detection.
We show that our TogetherNet outperforms the state-of-the-art detection approaches by a large margin both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-09-03T09:06:13Z) - Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z) - Paint and Distill: Boosting 3D Object Detection with Semantic Passing
Network [70.53093934205057]
3D object detection from lidar or camera sensors is essential for autonomous driving.
We propose a novel semantic passing framework, named SPNet, to boost the performance of existing lidar-based 3D detection models.
arXiv Detail & Related papers (2022-07-12T12:35:34Z) - Correlation-Aware Deep Tracking [83.51092789908677]
We propose a novel target-dependent feature network inspired by the self-/cross-attention scheme.
Our network deeply embeds cross-image feature correlation in multiple layers of the feature network.
Our model can be flexibly pre-trained on abundant unpaired images, leading to notably faster convergence than the existing methods.
arXiv Detail & Related papers (2022-03-03T11:53:54Z) - Self-Supervised Object Detection via Generative Image Synthesis [106.65384648377349]
We present the first end-to-end analysis-by-synthesis framework with controllable GANs for the task of self-supervised object detection.
We use collections of real world images without bounding box annotations to learn to synthesize and detect objects.
Our work advances the field of self-supervised object detection by introducing a successful new paradigm of using controllable GAN-based image synthesis for it.
arXiv Detail & Related papers (2021-10-19T11:04:05Z) - One-Shot Object Affordance Detection in the Wild [76.46484684007706]
Affordance detection refers to identifying the potential action possibilities of objects in an image.
We devise a One-Shot Affordance Detection Network (OSAD-Net) that estimates the human action purpose and then transfers it to help detect the common affordance from all candidate images.
With complex scenes and rich annotations, our PADv2 dataset can be used as a test bed to benchmark affordance detection methods.
arXiv Detail & Related papers (2021-08-08T14:53:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.