Burst Photography for Learning to Enhance Extremely Dark Images
- URL: http://arxiv.org/abs/2006.09845v2
- Date: Fri, 19 Nov 2021 20:09:40 GMT
- Title: Burst Photography for Learning to Enhance Extremely Dark Images
- Authors: Ahmet Serdar Karadeniz and Erkut Erdem and Aykut Erdem
- Abstract summary: In this paper, we aim to leverage burst photography to boost the performance and obtain much sharper and more accurate RGB images from extremely dark raw images.
The backbone of our proposed framework is a novel coarse-to-fine network architecture that generates high-quality outputs progressively.
Our experiments demonstrate that our approach leads to perceptually more pleasing results than the state-of-the-art methods by producing more detailed and considerably higher quality images.
- Score: 19.85860245798819
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Capturing images under extremely low-light conditions poses significant
challenges for the standard camera pipeline. Images become too dark and too
noisy, which makes traditional enhancement techniques almost impossible to
apply. Recently, learning-based approaches have shown very promising results
for this task since they have substantially more expressive capabilities to
allow for improved quality. Motivated by these studies, in this paper, we aim
to leverage burst photography to boost the performance and obtain much sharper
and more accurate RGB images from extremely dark raw images. The backbone of
our proposed framework is a novel coarse-to-fine network architecture that
generates high-quality outputs progressively. The coarse network predicts a
low-resolution, denoised raw image, which is then fed to the fine network to
recover fine-scale details and realistic textures. To further reduce the noise
level and improve the color accuracy, we extend this network to a permutation
invariant structure so that it takes a burst of low-light images as input and
merges information from multiple images at the feature-level. Our experiments
demonstrate that our approach leads to perceptually more pleasing results than
the state-of-the-art methods by producing more detailed and considerably higher
quality images.
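The permutation-invariant merging described above can be illustrated with a minimal sketch: apply the same feature extractor to every frame of the burst, then pool across the burst axis with an order-independent operation such as a max. The extractor below is a toy stand-in for the paper's shared encoder; all names are illustrative.

```python
import numpy as np

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Toy per-frame feature extractor (stand-in for a shared CNN encoder)."""
    # Applying identical weights to every frame keeps the merge permutation invariant.
    return np.stack([frame, frame ** 2], axis=0)  # (C, H, W) with C = 2

def merge_burst(frames: list) -> np.ndarray:
    """Merge a burst at the feature level with an order-independent max pool."""
    feats = np.stack([extract_features(f) for f in frames], axis=0)  # (N, C, H, W)
    return feats.max(axis=0)  # pooling over the burst axis discards frame order

rng = np.random.default_rng(0)
burst = [rng.random((4, 4)) for _ in range(3)]
merged = merge_burst(burst)
shuffled = merge_burst(burst[::-1])
assert np.allclose(merged, shuffled)  # same output for any frame ordering
```

Because the pooling is symmetric in its inputs, the network produces the same merged features regardless of how the burst frames are ordered, which is what makes it applicable to bursts of arbitrary length.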
Related papers
- DARK: Denoising, Amplification, Restoration Kit [0.7670170505111058]
This paper introduces a novel lightweight computational framework for enhancing images under low-light conditions.
Our model is designed to be lightweight, ensuring low computational demand and suitability for real-time applications on standard consumer hardware.
arXiv Detail & Related papers (2024-05-21T16:01:13Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- CDAN: Convolutional Dense Attention-guided Network for Low-Light Image Enhancement [2.2530496464901106]
Low-light images pose challenges of diminished clarity, muted colors, and reduced details.
This paper introduces the Convolutional Dense Attention-guided Network (CDAN), a novel solution for enhancing low-light images.
CDAN integrates an autoencoder-based architecture with convolutional and dense blocks, complemented by an attention mechanism and skip connections.
arXiv Detail & Related papers (2023-08-24T16:22:05Z)
- Simplifying Low-Light Image Enhancement Networks with Relative Loss Functions [14.63586364951471]
We introduce FLW-Net (Fast and LightWeight Network) and two relative loss functions to make learning easier in low-light image enhancement.
We first identify the challenge that obtaining global contrast requires a large receptive field.
Then, we propose an efficient global feature information extraction component and two loss functions based on relative information to overcome these challenges.
arXiv Detail & Related papers (2023-04-06T10:05:54Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with handheld cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Zoom-to-Inpaint: Image Inpainting with High-Frequency Details [39.582275854002994]
We propose applying super-resolution to coarsely reconstructed outputs, refining them at high resolution, and then downscaling the output to the original resolution.
By introducing high-resolution images to the refinement network, our framework is able to reconstruct finer details that are usually smoothed out due to spectral bias.
Our zoom-in, refine and zoom-out strategy, combined with high-resolution supervision and progressive learning, constitutes a framework-agnostic approach for enhancing high-frequency details.
arXiv Detail & Related papers (2020-12-17T05:39:37Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is highly competitive with state-of-the-art methods and has a significant advantage over them when processing images captured in extremely low-light conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
- Burst Denoising of Dark Images [19.85860245798819]
We propose a deep learning framework for obtaining clean and colorful RGB images from extremely dark raw images.
The backbone of our framework is a novel coarse-to-fine network architecture that generates high-quality outputs in a progressive manner.
Our experiments demonstrate that the proposed approach leads to perceptually more pleasing results than state-of-the-art methods.
arXiv Detail & Related papers (2020-03-17T17:17:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.