Task-Driven Deep Image Enhancement Network for Autonomous Driving in Bad
Weather
- URL: http://arxiv.org/abs/2110.07206v1
- Date: Thu, 14 Oct 2021 08:03:33 GMT
- Title: Task-Driven Deep Image Enhancement Network for Autonomous Driving in Bad
Weather
- Authors: Younkwan Lee, Jihyo Jeon, Yeongmin Ko, Byunggwan Jeon, Moongu Jeon
- Abstract summary: In bad weather, visual perception is greatly affected by several degrading effects.
We introduce a new task-driven training strategy to guide the high-level task model so that it is suitable for both high-quality image restoration and highly accurate perception.
Experimental results demonstrate that the proposed method substantially improves lane detection, 2D object detection, and depth estimation under adverse weather.
- Score: 5.416049433853457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual perception is a crucial part of enabling an autonomous vehicle to
navigate safely and sustainably in different traffic conditions. However, in
bad weather such as heavy rain and haze, the performance of visual perception
is greatly affected by several degrading effects. Recently, deep learning-based
perception methods have addressed multiple degrading effects to reflect
real-world bad weather cases but have shown limited success due to 1) high
computational costs that hinder deployment on mobile devices and 2) poor alignment
between image enhancement and visual perception in terms of model capability.
To solve these issues, we propose a task-driven image enhancement network
connected to the high-level vision task, which takes in an image corrupted by
bad weather as input. Specifically, we introduce a novel low-memory network that
removes most of the layer connections in dense blocks, reducing memory and
computational cost while maintaining high performance. We also introduce a new
task-driven training strategy to robustly guide the high-level task model so that
it is suitable for both high-quality image restoration and highly accurate
perception. Experimental results demonstrate that the proposed method substantially
improves lane detection, 2D object detection, and depth estimation under adverse
weather, in terms of both memory footprint and accuracy.
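The central idea of the abstract is a joint training objective that ties the image enhancement network to a downstream perception task. The sketch below illustrates one plausible way to set this up in PyTorch; the module structure, the L1 restoration loss, and the loss weights alpha and beta are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a task-driven enhancement training step (assumed details,
# not the authors' implementation): an enhancement network is optimized with
# a restoration loss against the clean image plus a task loss computed by a
# frozen downstream perception model on the enhanced output.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyEnhancer(nn.Module):
    """Stand-in for the paper's low-memory enhancement network."""

    def __init__(self, channels: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual correction on top of the degraded input.
        return torch.clamp(x + self.body(x), 0.0, 1.0)


def task_driven_step(enhancer, task_model, optimizer, degraded, clean,
                     targets, task_loss_fn, alpha=1.0, beta=0.1):
    """One training step combining restoration and downstream task losses."""
    enhancer.train()
    task_model.eval()  # the perception model is kept fixed here
    for p in task_model.parameters():
        p.requires_grad_(False)

    optimizer.zero_grad()
    restored = enhancer(degraded)

    # Restoration term: pixel-wise fidelity to the clean reference image.
    loss_restore = F.l1_loss(restored, clean)

    # Task term: gradients flow through the restored image into the enhancer.
    preds = task_model(restored)
    loss_task = task_loss_fn(preds, targets)

    loss = alpha * loss_restore + beta * loss_task
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the task model would be a lane detector, 2D object detector, or depth estimator and the loss weights would be tuned per task; those specifics are not given in the abstract.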
Related papers
- Real-Time Multi-Scene Visibility Enhancement for Promoting Navigational Safety of Vessels Under Complex Weather Conditions [48.529493393948435]
The visible-light camera has emerged as an essential imaging sensor for marine surface vessels in intelligent waterborne transportation systems.
The visual imaging quality inevitably suffers from several kinds of degradations under complex weather conditions.
We develop a general-purpose multi-scene visibility enhancement method to restore degraded images captured under different weather conditions.
arXiv Detail & Related papers (2024-09-02T23:46:27Z) - How to deal with glare for improved perception of Autonomous Vehicles [0.0]
Vision sensors are versatile and can capture a wide range of visual cues, such as color, texture, shape, and depth.
However, vision-based environment perception systems can be easily affected by glare in the presence of a bright light source.
arXiv Detail & Related papers (2024-04-17T02:05:05Z) - NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scene in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z) - Dual Adversarial Resilience for Collaborating Robust Underwater Image
Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method produces visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z) - ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural
Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z) - DADFNet: Dual Attention and Dual Frequency-Guided Dehazing Network for
Video-Empowered Intelligent Transportation [79.18450119567315]
Adverse weather conditions pose severe challenges for video-based transportation surveillance.
We propose a dual attention and dual frequency-guided dehazing network (termed DADFNet) for real-time visibility enhancement.
arXiv Detail & Related papers (2023-04-19T11:55:30Z) - Generating Clear Images From Images With Distortions Caused by Adverse
Weather Using Generative Adversarial Networks [0.0]
We present a method for improving computer vision tasks on images affected by adverse weather conditions, including distortions caused by adherent raindrops.
We trained an appropriate generative adversarial network and showed that it was effective at removing the effect of the distortions.
arXiv Detail & Related papers (2022-11-01T05:02:44Z) - RestoreX-AI: A Contrastive Approach towards Guiding Image Restoration
via Explainable AI Systems [8.430502131775722]
Weather corruptions can hinder object detectability and pose a serious threat to navigation and reliability.
We propose a contrastive approach towards mitigating this problem, by evaluating images generated by restoration models during and post training.
Our approach achieves an average 178 percent increase in mAP between the input and restored images under adverse weather conditions.
arXiv Detail & Related papers (2022-04-03T12:45:00Z) - An End-to-End Cascaded Image Deraining and Object Detection Neural
Network [13.314467453715517]
In this paper, we explore the combination of the low-level vision task with the high-level vision task.
We propose an end-to-end object detection network for reducing the impact of rainfall.
Our network surpasses the state-of-the-art with a significant improvement in metrics.
arXiv Detail & Related papers (2022-02-23T02:48:34Z) - Enhanced Spatio-Temporal Interaction Learning for Video Deraining: A
Faster and Better Framework [93.37833982180538]
Video deraining is an important task in computer vision as the unwanted rain hampers the visibility of videos and deteriorates the robustness of most outdoor vision systems.
We present a new end-to-end deraining framework, named Enhanced Spatio-Temporal Interaction Network (ESTINet)
ESTINet considerably boosts current state-of-the-art video deraining quality and speed.
arXiv Detail & Related papers (2021-03-23T05:19:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.