Learn to See Faster: Pushing the Limits of High-Speed Camera with Deep
Underexposed Image Denoising
- URL: http://arxiv.org/abs/2211.16034v1
- Date: Tue, 29 Nov 2022 09:10:50 GMT
- Title: Learn to See Faster: Pushing the Limits of High-Speed Camera with Deep
Underexposed Image Denoising
- Authors: Weihao Zhuang, Tristan Hascoet, Ryoichi Takashima, Tetsuya Takiguchi
- Abstract summary: The ability to record high-fidelity videos at high acquisition rates is central to the study of fast moving phenomena.
The difficulty of imaging fast moving scenes lies in a trade-off between motion blur and underexposure noise.
We propose to address this trade-off by treating the problem of high-speed imaging as an underexposed image denoising problem.
- Score: 12.507566152678857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to record high-fidelity videos at high acquisition rates is
central to the study of fast moving phenomena. The difficulty of imaging fast
moving scenes lies in a trade-off between motion blur and underexposure noise:
On the one hand, recordings with long exposure times suffer from motion blur
effects caused by movements in the recorded scene. On the other hand, the
amount of light reaching camera photosensors decreases with exposure times so
that short-exposure recordings suffer from underexposure noise. In this paper,
we propose to address this trade-off by treating the problem of high-speed
imaging as an underexposed image denoising problem. We combine recent advances
on underexposed image denoising using deep learning and adapt these methods to
the specificity of the high-speed imaging problem. Leveraging large external
datasets with a sensor-specific noise model, our method is able to speed up the
acquisition rate of a high-speed camera by over one order of magnitude while
maintaining similar image quality.
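The abstract mentions a sensor-specific noise model but does not detail it. A common choice for underexposed raw data is a Poisson-Gaussian model (signal-dependent shot noise plus signal-independent read noise). A minimal sketch of synthesizing underexposed training inputs under that assumption — all parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

def synthesize_underexposed(clean, exposure_ratio, gain=0.01, read_std=0.002, seed=None):
    """Simulate an underexposed capture of `clean` (values in [0, 1]).

    Poisson-Gaussian model: shot noise scales with the signal,
    read noise is signal-independent. `gain` and `read_std` are
    hypothetical values; real ones are calibrated per sensor.
    """
    rng = np.random.default_rng(seed)
    dark = clean / exposure_ratio                 # shorter exposure -> less light
    shot = rng.poisson(dark / gain) * gain        # photon (shot) noise
    noisy = shot + rng.normal(0.0, read_std, clean.shape)  # read noise
    return np.clip(noisy, 0.0, 1.0)

img = np.full((64, 64), 0.8)                      # synthetic "clean" frame
noisy = synthesize_underexposed(img, exposure_ratio=16, seed=0)
```

Training pairs are then (noisy, clean); in practice the gain and read-noise parameters would be calibrated for the specific high-speed sensor and ISO setting.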
Related papers
- Dual-Camera Joint Deblurring-Denoising [24.129908866882346]
We propose a novel dual-camera method for obtaining a high-quality image.
Our method uses a synchronized burst of short exposure images captured by one camera and a long exposure image simultaneously captured by another.
Our method is able to achieve state-of-the-art results on synthetic dual-camera images from the GoPro dataset with five times fewer training parameters compared to the next best method.
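The dual-camera summary above relies on a synchronized burst of short exposures. As a minimal illustration of why bursts help — not the paper's actual alignment-and-fusion pipeline — averaging N registered frames with zero-mean noise shrinks the noise by roughly a factor of sqrt(N):

```python
import numpy as np

def burst_average(frames):
    """Fuse a burst of registered frames by pixel-wise averaging.

    Assumes frames are already aligned (a static scene here); real
    burst pipelines align frames before fusing.
    """
    return np.mean(np.stack(frames), axis=0)

rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)                    # synthetic static scene
burst = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(16)]
fused = burst_average(burst)

single_err = float(np.abs(burst[0] - clean).mean())  # error of one noisy frame
fused_err = float(np.abs(fused - clean).mean())      # error after fusing 16 frames
```

With 16 frames the residual noise drops to about a quarter of a single frame's, which is the statistical headroom the learned fusion methods build on.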
arXiv Detail & Related papers (2023-09-16T00:58:40Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- Self-Supervised Image Restoration with Blurry and Noisy Pairs [66.33313180767428]
Images with high ISO usually have inescapable noise, while the long-exposure ones may be blurry due to camera shake or object motion.
Existing solutions generally suggest seeking a balance between noise and blur, and learning denoising or deblurring models under either full or self-supervision.
We propose jointly leveraging the short-exposure noisy image and the long-exposure blurry image for better image restoration.
arXiv Detail & Related papers (2022-11-14T12:57:41Z)
- Robust Scene Inference under Noise-Blur Dual Corruptions [20.0721386176278]
Scene inference under low-light is a challenging problem due to severe noise in the captured images.
With the rise of cameras capable of capturing multiple exposures of the same scene simultaneously, it is possible to overcome the noise-blur trade-off.
We propose a method to leverage these multi-exposure captures for robust inference under low light and motion.
arXiv Detail & Related papers (2022-07-24T02:52:00Z)
- Learning Spatially Varying Pixel Exposures for Motion Deblurring [49.07867902677453]
We present a novel approach of leveraging spatially varying pixel exposures for motion deblurring.
Our work illustrates the promising role that focal-plane sensor-processors can play in the future of computational imaging.
arXiv Detail & Related papers (2022-04-14T23:41:49Z)
- ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z)
- Digital Gimbal: End-to-end Deep Image Stabilization with Learnable Exposure Times [2.6396287656676733]
We digitally emulate a mechanically stabilized system from the input of a fast unstabilized camera.
To exploit the trade-off between motion blur at long exposures and low SNR at short exposures, we train a CNN that estimates a sharp high-SNR image.
arXiv Detail & Related papers (2020-12-08T16:04:20Z)
- Exposure Trajectory Recovery from Motion Blur [90.75092808213371]
Motion blur in dynamic scenes is an important yet challenging research topic.
In this paper, we define exposure trajectories, which represent the motion information contained in a blurry image.
A novel motion offset estimation framework is proposed to model pixel-wise displacements of the latent sharp image.
arXiv Detail & Related papers (2020-10-06T05:23:33Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is very competitive to the state-of-the-art methods, and has significant advantage over others when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.