Dual-Camera Joint Deblurring-Denoising
- URL: http://arxiv.org/abs/2309.08826v1
- Date: Sat, 16 Sep 2023 00:58:40 GMT
- Title: Dual-Camera Joint Deblurring-Denoising
- Authors: Shayan Shekarforoush, Amanpreet Walia, Marcus A. Brubaker,
Konstantinos G. Derpanis, Alex Levinshtein
- Abstract summary: We propose a novel dual-camera method for obtaining a high-quality image.
Our method uses a synchronized burst of short exposure images captured by one camera and a long exposure image simultaneously captured by another.
Our method is able to achieve state-of-the-art results on synthetic dual-camera images from the GoPro dataset with five times fewer training parameters compared to the next best method.
- Score: 24.129908866882346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent image enhancement methods have shown the advantages of using a pair of
long and short-exposure images for low-light photography. These image
modalities offer complementary strengths and weaknesses. The former yields an
image that is clean but blurry due to camera or object motion, whereas the
latter is sharp but noisy due to low photon count. Motivated by the fact that
modern smartphones come equipped with multiple rear-facing camera sensors, we
propose a novel dual-camera method for obtaining a high-quality image. Our
method uses a synchronized burst of short exposure images captured by one
camera and a long exposure image simultaneously captured by another. Having a
synchronized short exposure burst alongside the long exposure image enables us
to (i) obtain better denoising by using a burst instead of a single image, (ii)
recover motion from the burst and use it for motion-aware deblurring of the
long exposure image, and (iii) fuse the two results to further enhance quality.
Our method is able to achieve state-of-the-art results on synthetic dual-camera
images from the GoPro dataset with five times fewer training parameters
compared to the next best method. We also show that our method qualitatively
outperforms competing approaches on real synchronized dual-camera captures.
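The abstract above describes a three-stage pipeline: (i) burst denoising, (ii) motion estimation from the burst used for motion-aware deblurring of the long exposure, and (iii) fusion of the two intermediate results. The sketch below is a minimal, purely classical NumPy illustration of that data flow under stated assumptions; it is not the authors' learned architecture. Phase-correlation alignment stands in for the burst denoiser, an accumulated shift trajectory for the learned motion estimate, Wiener deconvolution for the deblurring network, and a fixed weighted average for the learned fusion. All function names and parameters are illustrative assumptions, not the paper's API.

```python
# Classical sketch of the dual-camera pipeline structure described in the abstract:
# (i) denoise a short-exposure burst, (ii) estimate motion from the burst and use it
# to deblur the long exposure, (iii) fuse the two results. Illustrative only; the
# actual method replaces each stage with a trained network.
import numpy as np


def phase_correlation_shift(ref, tgt):
    """Estimate the integer (dy, dx) shift that aligns `tgt` onto `ref` via phase correlation."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(tgt))
    cross /= np.abs(cross) + 1e-8
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)


def denoise_burst(burst):
    """(i) Align each short-exposure frame to the first and average to suppress noise."""
    ref = burst[0]
    aligned = [ref]
    for frame in burst[1:]:
        dy, dx = phase_correlation_shift(ref, frame)
        aligned.append(np.roll(frame, shift=(dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)


def blur_kernel_from_motion(burst, ksize=31):
    """(ii) Accumulate frame-to-frame shifts from the burst into a motion blur kernel."""
    kernel = np.zeros((ksize, ksize))
    y = x = ksize // 2
    kernel[y, x] = 1.0
    for prev, cur in zip(burst[:-1], burst[1:]):
        dy, dx = phase_correlation_shift(cur, prev)  # motion of `cur` relative to `prev`
        y = int(np.clip(y + dy, 0, ksize - 1))
        x = int(np.clip(x + dx, 0, ksize - 1))
        kernel[y, x] += 1.0
    return kernel / kernel.sum()


def wiener_deblur(blurry, kernel, snr=100.0):
    """(ii) Motion-aware deblurring of the long exposure with a Wiener filter."""
    kpad = np.zeros_like(blurry)
    kh, kw = kernel.shape
    kpad[:kh, :kw] = kernel
    kpad = np.roll(kpad, shift=(-(kh // 2), -(kw // 2)), axis=(0, 1))  # kernel center -> origin
    K, B = np.fft.fft2(kpad), np.fft.fft2(blurry)
    wiener = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.fft.ifft2(wiener * B).real


def fuse(denoised, deblurred, w=0.5):
    """(iii) Fuse the two intermediate results (the paper learns this step instead)."""
    return w * denoised + (1.0 - w) * deblurred


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((64, 64))
    # Simulate hand motion: the burst sees shifted noisy copies, the long exposure integrates them.
    shifts = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 3), (3, 3), (3, 4)]
    burst = np.stack([np.roll(clean, s, axis=(0, 1)) + 0.2 * rng.standard_normal(clean.shape)
                      for s in shifts])
    long_exposure = np.mean([np.roll(clean, s, axis=(0, 1)) for s in shifts], axis=0)

    denoised = denoise_burst(burst)
    kernel = blur_kernel_from_motion(burst)
    deblurred = wiener_deblur(long_exposure, kernel)
    result = fuse(denoised, deblurred)
    print(result.shape)  # (64, 64)
```

Note how the same frame-to-frame shifts serve both the alignment for denoising and the motion trajectory behind the blur kernel; this coupling between the burst and the long exposure is what the synchronized dual-camera capture enables.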
Related papers
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- Recovering Continuous Scene Dynamics from A Single Blurry Image with Events [58.7185835546638]
An Implicit Video Function (IVF) is learned to represent a single motion-blurred image with concurrent events.
A dual attention transformer is proposed to efficiently leverage merits from both modalities.
The proposed network is trained only with the supervision of ground-truth images at a limited set of reference timestamps.
arXiv Detail & Related papers (2023-04-05T18:44:17Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography [54.36608424943729]
We show that in a "long-burst", forty-two 12-megapixel RAW frames captured in a two-second sequence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth.
We devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion.
arXiv Detail & Related papers (2022-12-22T18:54:34Z)
- Learn to See Faster: Pushing the Limits of High-Speed Camera with Deep Underexposed Image Denoising [12.507566152678857]
The ability to record high-fidelity videos at high acquisition rates is central to the study of fast moving phenomena.
The difficulty of imaging fast moving scenes lies in a trade-off between motion blur and underexposure noise.
We propose to address this trade-off by treating the problem of high-speed imaging as an underexposed image denoising problem.
arXiv Detail & Related papers (2022-11-29T09:10:50Z)
- Self-Supervised Image Restoration with Blurry and Noisy Pairs [66.33313180767428]
Images with high ISO usually have inescapable noise, while the long-exposure ones may be blurry due to camera shake or object motion.
Existing solutions generally seek a balance between noise and blur, learning denoising or deblurring models under either full or self-supervision.
We propose jointly leveraging the short-exposure noisy image and the long-exposure blurry image for better image restoration.
arXiv Detail & Related papers (2022-11-14T12:57:41Z)
- Perceptual Image Enhancement for Smartphone Real-Time Applications [60.45737626529091]
We propose LPIENet, a lightweight network for perceptual image enhancement.
Our model can deal with noise artifacts, diffraction artifacts, blur, and HDR overexposure.
Our model can process 2K-resolution images in under 1 second on mid-level commercial smartphones.
arXiv Detail & Related papers (2022-10-24T19:16:33Z)
- Robust Scene Inference under Noise-Blur Dual Corruptions [20.0721386176278]
Scene inference under low light is a challenging problem due to severe noise in the captured images.
With the rise of cameras capable of capturing multiple exposures of the same scene simultaneously, it is possible to overcome the noise-blur trade-off.
We propose a method to leverage these multi-exposure captures for robust inference under low light and motion.
arXiv Detail & Related papers (2022-07-24T02:52:00Z)
- Face Deblurring using Dual Camera Fusion on Mobile Phones [23.494813096697815]
Motion blur of fast-moving subjects is a longstanding problem in photography.
We develop a novel face deblurring system based on the dual camera fusion technique for mobile phones.
Our algorithm runs efficiently on the Google Pixel 6, adding 463 ms of overhead per shot.
arXiv Detail & Related papers (2022-07-23T22:50:46Z)
- Digital Gimbal: End-to-end Deep Image Stabilization with Learnable Exposure Times [2.6396287656676733]
We digitally emulate a mechanically stabilized system from the input of a fast unstabilized camera.
To exploit the trade-off between motion blur at long exposures and low SNR at short exposures, we train a CNN that estimates a sharp high-SNR image.
arXiv Detail & Related papers (2020-12-08T16:04:20Z)