Lightweight HDR Camera ISP for Robust Perception in Dynamic Illumination
Conditions via Fourier Adversarial Networks
- URL: http://arxiv.org/abs/2204.01795v1
- Date: Mon, 4 Apr 2022 18:48:51 GMT
- Title: Lightweight HDR Camera ISP for Robust Perception in Dynamic Illumination
Conditions via Fourier Adversarial Networks
- Authors: Pranjay Shyam, Sandeep Singh Sengar, Kuk-Jin Yoon and Kyung-Soo Kim
- Abstract summary: We propose a lightweight two-stage image enhancement algorithm that sequentially performs illumination balancing and noise removal.
We also propose a Fourier spectrum-based adversarial framework (AFNet) for consistent image enhancement under varying illumination conditions.
Based on quantitative and qualitative evaluations, we also examine the practicality and effects of image enhancement techniques on the performance of common perception tasks.
- Score: 35.532434169432776
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The limited dynamic range of commercial compact camera sensors results in an
inaccurate representation of scenes with varying illumination conditions,
adversely affecting image quality and subsequently limiting the performance of
underlying image processing algorithms. Current state-of-the-art (SoTA)
convolutional neural networks (CNNs) are developed as post-processing techniques
to independently recover under- or over-exposed images. However, when applied to
images containing real-world degradations such as glare, high-beam illumination,
and color bleeding with varying noise intensity, these algorithms amplify the
degradations and further reduce image quality. To overcome these limitations, we
propose a lightweight two-stage image enhancement algorithm that sequentially
performs illumination balancing and noise removal, using frequency priors for
structural guidance. Furthermore, to ensure realistic image quality, we leverage the
relationship between frequency and spatial domain properties of an image and
propose a Fourier spectrum-based adversarial framework (AFNet) for consistent
image enhancement under varying illumination conditions. While current
formulations of image enhancement are envisioned as post-processing techniques,
we examine whether such an algorithm can be extended to integrate the
functionality of the Image Signal Processing (ISP) pipeline within the camera
sensor, benefiting from RAW sensor data and a lightweight CNN architecture. Based
on quantitative and qualitative evaluations, we also examine the practicality
and effects of image enhancement techniques on the performance of common
perception tasks such as object detection and semantic segmentation in varying
illumination conditions.
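The two-stage design and the Fourier spectrum-based adversarial framework (AFNet) both rely on frequency-domain quantities. As a rough illustration only (an assumption made for this summary, not the authors' implementation; function names and parameters are hypothetical), the Python sketch below computes the centred log-amplitude spectrum that a frequency-domain discriminator could consume, plus a simple Fourier high-pass map of the kind a frequency prior for structural guidance might use.

    # Illustrative sketch (assumed, not the paper's code): frequency-domain
    # quantities related to a Fourier-spectrum discriminator and a
    # high-frequency structural prior.
    import torch

    def log_amplitude_spectrum(img: torch.Tensor) -> torch.Tensor:
        """img: (B, C, H, W). Returns the centred log-amplitude spectrum."""
        spec = torch.fft.fft2(img, dim=(-2, -1))         # per-channel 2D FFT
        spec = torch.fft.fftshift(spec, dim=(-2, -1))    # move DC to the centre
        return torch.log1p(torch.abs(spec))              # log-compressed amplitude

    def high_frequency_prior(img: torch.Tensor, radius: int = 8) -> torch.Tensor:
        """Suppress low frequencies to expose edge and texture structure."""
        b, c, h, w = img.shape
        spec = torch.fft.fftshift(torch.fft.fft2(img, dim=(-2, -1)), dim=(-2, -1))
        yy, xx = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                                torch.arange(w, dtype=torch.float32),
                                indexing="ij")
        dist = ((yy - h // 2) ** 2 + (xx - w // 2) ** 2).sqrt()
        mask = (dist > radius).float()                   # zero out a low-frequency disc
        spec = torch.fft.ifftshift(spec * mask, dim=(-2, -1))
        return torch.fft.ifft2(spec, dim=(-2, -1)).real  # high-pass structural map

    # A Fourier-domain discriminator would then compare, for example,
    # log_amplitude_spectrum(enhanced) against log_amplitude_spectrum(reference).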
Related papers
- Retinex-RAWMamba: Bridging Demosaicing and Denoising for Low-Light RAW Image Enhancement [71.13353154514418]
Low-light image enhancement, particularly in cross-domain tasks such as mapping from the raw domain to the sRGB domain, remains a significant challenge.
We present a novel Mamba scanning mechanism, called RAWMamba, to effectively handle raw images with different color filter arrays (CFAs).
We also present a Retinex Decomposition Module (RDM) grounded in the Retinex prior, which decouples illumination from reflectance to facilitate more effective denoising and automatic non-linear exposure correction (a minimal Retinex sketch is given after this list).
arXiv Detail & Related papers (2024-09-11T06:12:03Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Hybrid Training of Denoising Networks to Improve the Texture Acutance of Digital Cameras [3.400056739248712]
We propose a mixed training procedure for image restoration neural networks, relying on both natural and synthetic images, that yields a strong improvement in the texture acutance metric without impairing fidelity terms.
The feasibility of the approach is demonstrated both on the denoising of RGB images and the full development of RAW images, opening the path to a systematic improvement of the texture acutance of real imaging devices.
arXiv Detail & Related papers (2024-02-20T10:47:06Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- GDIP: Gated Differentiable Image Processing for Object-Detection in Adverse Conditions [15.327704761260131]
We present a Gated Differentiable Image Processing (GDIP) block, a domain-agnostic network architecture.
Our proposed GDIP block learns to enhance images directly through the downstream object detection loss.
We demonstrate significant improvement in detection performance over several state-of-the-art methods.
arXiv Detail & Related papers (2022-09-29T16:43:13Z)
- Burst Imaging for Light-Constrained Structure-From-Motion [4.125187280299246]
We develop an image processing technique for aiding 3D reconstruction from images acquired in low light conditions.
Our technique, based on burst photography, uses direct methods for image registration within bursts of short exposure time images.
Our method is a significant step towards allowing robots to operate in low-light conditions, with potential applications in environments such as underground mines and in night-time operation.
arXiv Detail & Related papers (2021-08-23T02:12:40Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- Thermal Image Processing via Physics-Inspired Deep Networks [21.094006629684376]
DeepIR combines physically accurate sensor modeling with deep network-based image representation.
DeepIR requires neither training data nor periodic ground-truth calibration with a known black body target.
Simulated and real data experiments demonstrate that DeepIR can perform high-quality non-uniformity correction with as few as three images.
arXiv Detail & Related papers (2021-08-18T04:57:48Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is highly competitive with state-of-the-art methods and has a significant advantage over others when processing images captured under extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
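Two of the related papers above (Retinex-RAWMamba and Deep Bilateral Retinex) build on the Retinex prior, which models an image I as the element-wise product of an illumination map L and a reflectance map R, i.e. I = L * R. The following minimal Python sketch (an illustrative assumption, not code from either paper; all names are hypothetical) decomposes an image with a crude smoothed-maximum illumination estimate and applies a simple non-linear exposure correction by adjusting only the illumination.

    # Minimal Retinex sketch (illustrative assumption): I = L * R, where L is a
    # smooth illumination map and R is the reflectance; brightening only L gives
    # a simple exposure correction while preserving scene structure in R.
    import torch
    import torch.nn.functional as F

    def retinex_decompose(img: torch.Tensor, ksize: int = 15):
        """img: (B, 3, H, W) in (0, 1]. Returns (illumination, reflectance)."""
        lum = img.max(dim=1, keepdim=True).values                  # (B, 1, H, W)
        kernel = torch.ones(1, 1, ksize, ksize) / (ksize * ksize)  # box-blur kernel
        illum = F.conv2d(lum, kernel, padding=ksize // 2)          # smooth L
        reflectance = img / illum.clamp(min=1e-4)                  # R = I / L
        return illum, reflectance

    def correct_exposure(img: torch.Tensor, gamma: float = 0.5) -> torch.Tensor:
        """Non-linear exposure correction via the illumination component only."""
        illum, reflectance = retinex_decompose(img)
        return (illum.clamp(min=1e-4) ** gamma * reflectance).clamp(0.0, 1.0)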
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.