LVRNet: Lightweight Image Restoration for Aerial Images under Low Visibility
- URL: http://arxiv.org/abs/2301.05434v1
- Date: Fri, 13 Jan 2023 08:43:11 GMT
- Title: LVRNet: Lightweight Image Restoration for Aerial Images under Low Visibility
- Authors: Esha Pahwa, Achleshwar Luthra, Pratik Narang
- Abstract summary: Autonomous surveillance in low visibility conditions caused by high pollution/smoke, poor air quality index, low light, atmospheric scattering, and haze during a blizzard becomes even more important to prevent accidents.
It is crucial to develop a solution that produces high-quality images and is efficient enough to be deployed for everyday use.
We introduce a lightweight deep learning model called Low-Visibility Restoration Network (LVRNet).
It outperforms previous image restoration methods with low latency, achieving a PSNR value of 25.744 and an SSIM of 0.905, making our approach scalable and ready for practical use.
- Score: 6.785107765806355
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Learning to recover clear images from images having a combination of
degrading factors is a challenging task. At the same time, autonomous
surveillance in low visibility conditions caused by high pollution/smoke, poor
air quality index, low light, atmospheric scattering, and haze during a
blizzard becomes even more important to prevent accidents. It is thus crucial
to develop a solution that produces high-quality images and is efficient
enough to be deployed for everyday use. However, the lack of proper datasets
for this task has limited the performance of previously proposed methods. To
this end, we generate the LowVis-AFO dataset, containing 3647
paired dark-hazy and clear images. We also introduce a lightweight deep
learning model called Low-Visibility Restoration Network (LVRNet). It
outperforms previous image restoration methods with low latency, achieving a
PSNR value of 25.744 and an SSIM of 0.905, making our approach scalable and
ready for practical use. The code and data can be found at
https://github.com/Achleshwar/LVRNet.
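A minimal evaluation sketch for the PSNR/SSIM figures quoted above is given below; it is not the authors' released code, and the scikit-image dependency, file layout, and directory names are assumptions made for illustration.

```python
# Hypothetical evaluation sketch (not the authors' released code).
# Assumptions: filename-matched, same-sized PNG pairs of restored outputs and
# ground-truth clear images, and scikit-image >= 0.19 (channel_axis argument).
import numpy as np
from pathlib import Path
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pairs(restored_dir, clear_dir):
    """Average PSNR (dB) and SSIM over filename-matched image pairs."""
    psnrs, ssims = [], []
    for restored_path in sorted(Path(restored_dir).glob("*.png")):
        clear_path = Path(clear_dir) / restored_path.name  # ground-truth counterpart
        restored = imread(restored_path).astype(np.float32) / 255.0
        clear = imread(clear_path).astype(np.float32) / 255.0
        psnrs.append(peak_signal_noise_ratio(clear, restored, data_range=1.0))
        ssims.append(structural_similarity(clear, restored, data_range=1.0,
                                           channel_axis=-1))
    return float(np.mean(psnrs)), float(np.mean(ssims))


if __name__ == "__main__":
    # Placeholder paths; point these at your own restored outputs and ground truth.
    psnr, ssim = evaluate_pairs("results/restored", "LowVis-AFO/clear")
    print(f"PSNR: {psnr:.3f}  SSIM: {ssim:.3f}")
```

PSNR here is 10*log10 of the squared peak signal value over the mean squared error, so higher is better; SSIM ranges up to 1 for a perfect match.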
Related papers
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore details and visual information from corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- DarkShot: Lighting Dark Images with Low-Compute and High-Quality [11.256790804961563]
This paper proposes a lightweight network that outperforms existing state-of-the-art (SOTA) methods in low-light enhancement tasks.
Our model can restore a UHD 4K resolution image with minimal computation while keeping SOTA restoration quality.
arXiv Detail & Related papers (2023-12-28T03:26:50Z)
- Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427]
Previous works mainly focus on low-light images captured in the visible spectrum, using a pixel-wise loss.
We propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter.
arXiv Detail & Related papers (2023-07-09T08:29:19Z)
- NoiSER: Noise is All You Need for Enhancing Low-Light Images Without Task-Related Data [103.04999391668753]
We show that it is possible to enhance a low-light image without any task-related training data.
Technically, we propose a new, effective, and efficient method, termed Noise SElf-Regression (NoiSER).
NoiSER is highly competitive with current LLIE models trained on task-related data, in terms of both quantitative and visual results.
arXiv Detail & Related papers (2022-11-09T06:18:18Z)
- Learning to restore images degraded by atmospheric turbulence using uncertainty [93.72048616001064]
Atmospheric turbulence can significantly degrade the quality of images acquired by long-range imaging systems.
We propose a deep learning-based approach for restoring a single image degraded by atmospheric turbulence.
arXiv Detail & Related papers (2022-07-07T17:24:52Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performance on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- BLNet: A Fast Deep Learning Framework for Low-Light Image Enhancement with Noise Removal and Color Restoration [14.75902042351609]
We propose a very fast deep learning framework called Bringing the Lightness (denoted as BLNet).
Based on Retinex Theory, the decomposition net in our model can decompose low-light images into reflectance and illumination.
We conduct extensive experiments to demonstrate that our approach achieves promising results with good robustness and generalization.
arXiv Detail & Related papers (2021-06-30T10:06:16Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Low-light Image Restoration with Short- and Long-exposure Raw Pairs [14.643663950015334]
We propose a new low-light image restoration method by using the complementary information of short- and long-exposure images.
We first propose a novel data generation method to synthesize realistic short- and long-exposure raw images.
Then, we design a new long-short-exposure fusion network (LSFNet) to deal with the problems of low-light image fusion.
arXiv Detail & Related papers (2020-07-01T03:22:26Z)
- Contextual Residual Aggregation for Ultra High-Resolution Image Inpainting [12.839962012888199]
We propose a Contextual Residual Aggregation (CRA) mechanism that produces high-frequency residuals for missing contents by weighted aggregation of residuals from contextual patches.
We train the proposed model on small images at 512x512 resolution and perform inference on high-resolution images, achieving compelling inpainting quality.
arXiv Detail & Related papers (2020-05-19T18:55:32Z)