Long Scale Error Control in Low Light Image and Video Enhancement Using
Equivariance
- URL: http://arxiv.org/abs/2206.01334v1
- Date: Thu, 2 Jun 2022 23:13:32 GMT
- Title: Long Scale Error Control in Low Light Image and Video Enhancement Using
Equivariance
- Authors: Sara Aghajanzadeh and David Forsyth
- Abstract summary: Current methods learn a mapping using real dark-bright image pairs.
A recent paper has shown that simulated data pairs produce real improvements in restoration.
We show that our approach produces improvements on video restoration as well.
- Score: 6.85316573653194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image frames obtained in darkness are special. Just multiplying by a constant
doesn't restore the image. Shot noise, quantization effects and camera
non-linearities mean that colors and relative light levels are estimated
poorly. Current methods learn a mapping using real dark-bright image pairs.
These are very hard to capture. A recent paper has shown that simulated data
pairs produce real improvements in restoration, likely because huge volumes of
simulated data are easy to obtain. In this paper, we show that respecting
equivariance -- the color of a restored pixel should be the same, however the
image is cropped -- produces real improvements over the state of the art for
restoration. We show that a scale selection mechanism can be used to improve
reconstructions. Finally, we show that our approach produces improvements on
video restoration as well. Our methods are evaluated both quantitatively and
qualitatively.
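The crop-equivariance constraint described in the abstract can be illustrated with a small check. This is a hypothetical sketch: `restore` here is a trivial stand-in (per-pixel gain and clip), not the paper's learned restoration network.

```python
import numpy as np

def restore(img):
    # Stand-in for a learned restoration network: a simple
    # per-pixel gain, chosen only to make the sketch runnable.
    return np.clip(img * 4.0, 0.0, 1.0)

def crop_equivariance_error(img, top, left, h, w):
    """Compare restoring the full image then cropping, versus
    cropping first and restoring the crop. An equivariant
    restorer should give (nearly) identical pixels."""
    restored_then_cropped = restore(img)[top:top+h, left:left+w]
    cropped_then_restored = restore(img[top:top+h, left:left+w])
    return np.abs(restored_then_cropped - cropped_then_restored).max()

dark = np.random.default_rng(0).uniform(0.0, 0.2, size=(64, 64, 3))
print(crop_equivariance_error(dark, 10, 10, 32, 32))  # 0.0 for this pointwise restorer
```

A pointwise restorer is trivially equivariant; a convolutional network with padding and a fixed receptive field generally is not, which is why enforcing the constraint is informative.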
Related papers
- Toward Efficient Deep Blind RAW Image Restoration [56.41827271721955]
We design a new realistic degradation pipeline for training deep blind RAW restoration models.
Our pipeline considers realistic sensor noise, motion blur, camera shake, and other common degradations.
The models trained with our pipeline and data from multiple sensors can successfully reduce noise and blur, and recover details in RAW images captured from different cameras.
arXiv Detail & Related papers (2024-09-26T18:34:37Z)
- Denoising Monte Carlo Renders with Diffusion Models [5.228564799458042]
Physically-based renderings contain Monte-Carlo noise, with variance that increases as the number of rays per pixel decreases.
This noise, while zero-mean for good modern renderers, can have heavy tails.
We demonstrate that a diffusion model can denoise low fidelity renders successfully.
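The variance scaling mentioned above (noise variance growing as rays per pixel shrink) can be sketched numerically. This is a toy estimator only, not the renderer from the paper; the exponential radiance distribution is a made-up stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_pixel_estimate(n_rays):
    """Toy Monte Carlo pixel estimator: average n_rays i.i.d.
    radiance samples. The estimator's variance falls as 1/n_rays."""
    samples = rng.exponential(scale=1.0, size=n_rays)  # skewed radiance samples
    return samples.mean()

for n in (4, 64, 1024):
    estimates = np.array([mc_pixel_estimate(n) for _ in range(2000)])
    print(n, estimates.var())  # variance shrinks roughly as 1/n
```

This is why low-fidelity (few rays per pixel) renders are so noisy, and why a denoiser that tolerates heavy-tailed residuals is attractive.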
arXiv Detail & Related papers (2024-03-30T23:19:40Z)
- Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration [22.709205282657617]
Inversion by Direct Iteration (InDI) is a new formulation for supervised image restoration.
It produces more realistic and detailed images than existing regression-based methods.
arXiv Detail & Related papers (2023-03-20T20:28:17Z)
- Towards Robust Low Light Image Enhancement [6.85316573653194]
We study the problem of making brighter images from dark images found in the wild.
The images are dark because they are taken in dim environments. They suffer from color shifts caused by quantization and from sensor noise.
We use a supervised learning method, relying on a straightforward simulation of an imaging pipeline to generate a usable dataset for training and testing.
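A straightforward simulation of that kind of imaging pipeline might look like the following. This is an illustrative sketch only; the noise model, gain, and quantization parameters here are assumptions, not the ones used in the paper.

```python
import numpy as np

def simulate_dark(bright, gain=0.05, full_well=1000.0, bits=8, rng=None):
    """Turn a clean bright image (floats in [0, 1]) into a simulated
    dark capture: dim it, add Poisson shot noise in photon counts,
    then quantize the way an integer sensor readout would."""
    if rng is None:
        rng = np.random.default_rng()
    photons = bright * gain * full_well          # expected photon counts
    noisy = rng.poisson(photons).astype(float)   # shot noise
    levels = 2 ** bits - 1
    quantized = np.round(noisy / full_well * levels) / levels  # quantization
    return np.clip(quantized, 0.0, 1.0)

bright = np.random.default_rng(2).uniform(0.3, 1.0, size=(32, 32, 3))
dark = simulate_dark(bright, rng=np.random.default_rng(3))
print(bright.mean(), dark.mean())  # the simulated capture is far dimmer
```

Because the bright source image is known exactly, such a pipeline yields unlimited paired training data, which is the advantage over capturing real dark-bright pairs.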
arXiv Detail & Related papers (2022-05-17T20:14:18Z)
- Neural Global Shutter: Learn to Restore Video from a Rolling Shutter Camera with Global Reset Feature [89.57742172078454]
Rolling shutter (RS) image sensors suffer from geometric distortion when the camera and object undergo motion during capture.
In this paper, we investigate using rolling shutter with a global reset feature (RSGR) to restore clean global shutter (GS) videos.
This feature enables us to turn the rectification problem into a deblur-like one, getting rid of inaccurate and costly explicit motion estimation.
arXiv Detail & Related papers (2022-04-03T02:49:28Z)
- Image Reconstruction from Events. Why learn it? [11.773972029187433]
We show how tackling the joint problem of motion estimation leads us to model event-based image reconstruction as a linear inverse problem.
We propose that classical and learning-based image priors can be used to solve the problem and remove artifacts from the reconstructed images.
arXiv Detail & Related papers (2021-12-12T14:01:09Z)
- Low-light Image Enhancement via Breaking Down the Darkness [8.707025631892202]
This paper presents a novel framework inspired by the divide-and-rule principle.
We propose to convert an image from the RGB space into a luminance-chrominance one.
An adjustable noise suppression network is designed to eliminate noise in the brightened luminance.
The enhanced luminance further serves as guidance for the chrominance mapper to generate realistic colors.
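The RGB-to-luminance-chrominance conversion that the framework above relies on can be sketched with the standard BT.601 YCbCr transform. This is a common choice for such a decomposition; the paper's exact color space is not specified in the summary.

```python
import numpy as np

# Full-range BT.601 YCbCr matrix; Y is luminance, Cb/Cr are chrominance.
RGB_TO_YCBCR = np.array([
    [ 0.299,     0.587,     0.114   ],
    [-0.168736, -0.331264,  0.5     ],
    [ 0.5,      -0.418688, -0.081312],
])

def rgb_to_ycbcr(rgb):
    """rgb: (..., 3) floats in [0, 1] -> YCbCr with Cb/Cr centered at 0."""
    return rgb @ RGB_TO_YCBCR.T

def ycbcr_to_rgb(ycc):
    return ycc @ np.linalg.inv(RGB_TO_YCBCR).T

img = np.random.default_rng(4).uniform(0.0, 1.0, size=(8, 8, 3))
ycc = rgb_to_ycbcr(img)
# The luminance channel ycc[..., 0] can be brightened and denoised on
# its own, then used to guide a chrominance mapper, as described above.
roundtrip = ycbcr_to_rgb(ycc)
print(np.abs(roundtrip - img).max())  # round-trip error is negligible
```

Separating the channels lets the noise suppressor operate where low-light noise is most visible (luminance) without distorting color.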
arXiv Detail & Related papers (2021-11-30T16:50:59Z)
- Contrastive Feature Loss for Image Prediction [55.373404869092866]
Training supervised image synthesis models requires a critic to compare two images: the ground truth and the result.
We introduce an information theory based approach to measuring similarity between two images.
We show that our formulation boosts the perceptual realism of output images when used as a drop-in replacement for the L1 loss.
arXiv Detail & Related papers (2021-11-12T20:39:52Z)
- Low-Light Image Enhancement with Normalizing Flow [92.52290821418778]
In this paper, we investigate to model this one-to-many relationship via a proposed normalizing flow model.
An invertible network takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution.
The experimental results on the existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise and fewer artifacts, and richer colors.
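A minimal sketch of the conditional invertible mapping described above is one affine coupling step. The paper's architecture is far richer, and the conditioning functions here are made-up stand-ins; the sketch only shows why the map is exactly invertible given the condition.

```python
import numpy as np

def coupling_forward(x, cond):
    """One conditional affine coupling step: the first half of x
    passes through unchanged; the second half is scaled and shifted
    by functions of the first half and the conditioning signal."""
    x1, x2 = np.split(x, 2)
    scale = np.tanh(x1 + cond)   # stand-in conditioner; tanh keeps scales bounded
    shift = x1 * cond            # stand-in conditioner
    y2 = x2 * np.exp(scale) + shift
    return np.concatenate([x1, y2])

def coupling_inverse(y, cond):
    """Exact inverse: recompute scale/shift from the untouched half."""
    y1, y2 = np.split(y, 2)
    scale = np.tanh(y1 + cond)
    shift = y1 * cond
    x2 = (y2 - shift) * np.exp(-scale)
    return np.concatenate([y1, x2])

x = np.random.default_rng(5).normal(size=8)
cond = np.random.default_rng(6).normal(size=4)  # e.g. low-light features
y = coupling_forward(x, cond)
print(np.abs(coupling_inverse(y, cond) - x).max())  # inverse recovers x
```

Because each step inverts exactly, stacking many of them yields an invertible network that can map normally exposed images to a Gaussian, conditioned on the low-light input.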
arXiv Detail & Related papers (2021-09-13T12:45:08Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method has surpassed the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Nighttime Dehazing with a Synthetic Benchmark [147.21955799938115]
We propose a novel synthetic method called 3R to simulate nighttime hazy images from daytime clear images.
We generate realistic nighttime hazy images by sampling real-world light colors from a prior empirical distribution.
Experiment results demonstrate their superiority over state-of-the-art methods in terms of both image quality and runtime.
arXiv Detail & Related papers (2020-08-10T02:16:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.