Robust Scene Inference under Noise-Blur Dual Corruptions
- URL: http://arxiv.org/abs/2207.11643v1
- Date: Sun, 24 Jul 2022 02:52:00 GMT
- Title: Robust Scene Inference under Noise-Blur Dual Corruptions
- Authors: Bhavya Goyal, Jean-François Lalonde, Yin Li, Mohit Gupta
- Abstract summary: Scene inference under low-light is a challenging problem due to severe noise in the captured images.
With the rise of cameras capable of capturing multiple exposures of the same scene simultaneously, it is possible to overcome this trade-off.
We propose a method to leverage these multi-exposure captures for robust inference under low-light and motion.
- Score: 20.0721386176278
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Scene inference under low-light is a challenging problem due to severe noise
in the captured images. One way to reduce noise is to use longer exposure
during the capture. However, in the presence of motion (scene or camera
motion), longer exposures lead to motion blur, resulting in loss of image
information. This creates a trade-off between these two kinds of image
degradations: motion blur (due to long exposure) vs. noise (due to short
exposure), also referred to as a dual image corruption pair in this paper. With
the rise of cameras capable of capturing multiple exposures of the same scene
simultaneously, it is possible to overcome this trade-off. Our key observation
is that although the amount and nature of degradation varies for these
different image captures, the semantic content remains the same across all
images. To this end, we propose a method to leverage these multi-exposure
captures for robust inference under low-light and motion. Our method builds on
a feature consistency loss to encourage similar results from these individual
captures, and uses the ensemble of their final predictions for robust visual
recognition. We demonstrate the effectiveness of our approach on simulated
images as well as real captures with multiple exposures, and across the tasks
of object detection and image classification.
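The two ingredients of the method described above — a feature consistency loss across exposures and an ensemble of per-exposure predictions — can be illustrated with a minimal NumPy sketch. The `features` and `logits` arrays below are hypothetical stand-ins for the outputs of a recognition backbone; the paper's actual loss operates on deep network features during training.

```python
import numpy as np

def feature_consistency_loss(features):
    """Mean squared deviation of each capture's features from the mean
    feature vector, encouraging similar representations across exposures.
    features: array of shape (n_exposures, feat_dim)."""
    mean_feat = features.mean(axis=0, keepdims=True)
    return float(((features - mean_feat) ** 2).mean())

def ensemble_predict(logits):
    """Softmax-normalize per-exposure class logits, average the
    probabilities, and return the highest-scoring class.
    logits: array of shape (n_exposures, n_classes)."""
    shifted = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = shifted / shifted.sum(axis=1, keepdims=True)
    return int(probs.mean(axis=0).argmax())

# Toy example: three exposures of the same scene.
features = np.array([[1.0, 0.0], [1.1, 0.1], [0.9, -0.1]])
logits = np.array([[2.0, 0.5], [1.5, 1.0], [2.5, 0.2]])
print(feature_consistency_loss(features))  # small value: features agree
print(ensemble_predict(logits))            # all exposures favor class 0
```

Averaging probabilities (rather than hard votes) lets confident exposures outweigh heavily corrupted ones, which is one plausible reading of the "ensemble of their final predictions" described in the abstract.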
Related papers
- Motion Blur Decomposition with Cross-shutter Guidance [33.72961622720793]
Motion blur is an artifact that arises under insufficient illumination, where exposure time must be prolonged to collect enough photons for a sufficiently bright image.
Recent researches have aimed at decomposing a blurry image into multiple sharp images with spatial and temporal coherence.
We propose to utilize the ordered scanline-wise delay in a rolling shutter image to robustify motion decomposition of a single blurry image.
arXiv Detail & Related papers (2024-04-01T13:55:40Z)
- Dual-Camera Joint Deblurring-Denoising [24.129908866882346]
We propose a novel dual-camera method for obtaining a high-quality image.
Our method uses a synchronized burst of short exposure images captured by one camera and a long exposure image simultaneously captured by another.
Our method achieves state-of-the-art results on synthetic dual-camera images from the GoPro dataset with five times fewer training parameters than the next best method.
arXiv Detail & Related papers (2023-09-16T00:58:40Z)
- Recovering Continuous Scene Dynamics from A Single Blurry Image with Events [58.7185835546638]
An Implicit Video Function (IVF) is learned to represent a single motion blurred image with concurrent events.
A dual attention transformer is proposed to efficiently leverage merits from both modalities.
The proposed network is trained only with the supervision of ground-truth images at a limited number of reference timestamps.
arXiv Detail & Related papers (2023-04-05T18:44:17Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task - joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- Self-Supervised Image Restoration with Blurry and Noisy Pairs [66.33313180767428]
Images with high ISO usually have inescapable noise, while the long-exposure ones may be blurry due to camera shake or object motion.
Existing solutions generally suggest seeking a balance between noise and blur, and learning denoising or deblurring models under either full or self-supervision.
We propose jointly leveraging the short-exposure noisy image and the long-exposure blurry image for better image restoration.
arXiv Detail & Related papers (2022-11-14T12:57:41Z)
- Learning Spatially Varying Pixel Exposures for Motion Deblurring [49.07867902677453]
We present a novel approach of leveraging spatially varying pixel exposures for motion deblurring.
Our work illustrates the promising role that focal-plane sensor-processors can play in the future of computational imaging.
arXiv Detail & Related papers (2022-04-14T23:41:49Z)
- Clean Images are Hard to Reblur: A New Clue for Deblurring [56.28655168605079]
We propose a novel low-level perceptual loss to make images sharper.
To better focus on image blurriness, we train a reblurring module amplifying the unremoved motion blur.
The supervised reblurring loss at training stage compares the amplified blur between the deblurred image and the reference sharp image.
The self-blurring loss at inference stage inspects if the deblurred image still contains noticeable blur to be amplified.
arXiv Detail & Related papers (2021-04-26T15:49:21Z)
- Exposure Trajectory Recovery from Motion Blur [90.75092808213371]
Motion blur in dynamic scenes is an important yet challenging research topic.
In this paper, we define exposure trajectories, which represent the motion information contained in a blurry image.
A novel motion offset estimation framework is proposed to model pixel-wise displacements of the latent sharp image.
arXiv Detail & Related papers (2020-10-06T05:23:33Z)
- Deep Exposure Fusion with Deghosting via Homography Estimation and Attention Learning [29.036754445277314]
We propose a deep network for exposure fusion to deal with ghosting artifacts and detail loss caused by camera motion or moving objects.
Experiments on real-world photos taken using handheld mobile phones show that the proposed method can generate high-quality images with faithful detail and vivid color rendition in both dark and bright areas.
arXiv Detail & Related papers (2020-04-20T07:00:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.