Face Deblurring using Dual Camera Fusion on Mobile Phones
- URL: http://arxiv.org/abs/2207.11617v1
- Date: Sat, 23 Jul 2022 22:50:46 GMT
- Title: Face Deblurring using Dual Camera Fusion on Mobile Phones
- Authors: Wei-Sheng Lai, YiChang Shih, Lun-Cheng Chu, Xiaotong Wu, Sung-Fang
Tsai, Michael Krainin, Deqing Sun, Chia-Kai Liang
- Abstract summary: Motion blur of fast-moving subjects is a longstanding problem in photography.
We develop a novel face deblurring system based on the dual camera fusion technique for mobile phones.
Our algorithm runs efficiently on the Google Pixel 6, adding 463 ms of overhead per shot.
- Score: 23.494813096697815
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Motion blur of fast-moving subjects is a longstanding problem in photography
and very common on mobile phones due to limited light collection efficiency,
particularly in low-light conditions. While we have witnessed great progress in
image deblurring in recent years, most methods require significant
computational power and have limitations in processing high-resolution photos
with severe local motions. To this end, we develop a novel face deblurring
system based on the dual camera fusion technique for mobile phones. The system
detects subject motion to dynamically enable a reference camera, e.g., the
ultrawide camera commonly available on recent premium phones, and
captures an auxiliary photo with a faster shutter speed. While the main shot
is low noise but blurry, the reference shot is sharp but noisy. We learn ML
models to align and fuse these two shots and output a clear photo without
motion blur. Our algorithm runs efficiently on the Google Pixel 6, adding
463 ms of overhead per shot. Our experiments demonstrate the advantage and robustness
of our system against alternative single-image, multi-frame, face-specific, and
video deblurring algorithms as well as commercial products. To the best of our
knowledge, our work is the first mobile solution for face motion deblurring
that works reliably and robustly over thousands of images in diverse motion and
lighting conditions.
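The core idea of the abstract — fuse a low-noise but blurry main shot with a sharp but noisy reference shot — can be illustrated with a toy sketch. Note this is not the paper's learned alignment-and-fusion model; it is a minimal hand-crafted stand-in that blends the two shots with a gradient-driven weight, and all function names and parameters here are invented for illustration:

```python
import numpy as np

def box_blur(img, k=5):
    """Simulate motion blur on the main shot with a simple box filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(main_shot, ref_shot, sigma=0.1):
    """Blend blurry-but-clean main shot with sharp-but-noisy reference.

    Weight toward the reference where its local gradient (detail) is
    strong, and toward the low-noise main shot in flat regions. The real
    system learns this fusion (and the alignment) with ML models.
    """
    gy, gx = np.gradient(ref_shot)
    detail = np.sqrt(gx ** 2 + gy ** 2)
    w = detail / (detail + sigma)  # ~0 in flat areas, -> 1 on edges
    return w * ref_shot + (1 - w) * main_shot

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy scene: 32x32 checkerboard with 8-pixel squares.
    sharp = np.kron((np.indices((4, 4)).sum(0) % 2).astype(float),
                    np.ones((8, 8)))
    main_shot = box_blur(sharp, k=7)                     # blurry, noise-free
    ref_shot = sharp + rng.normal(0, 0.05, sharp.shape)  # sharp, noisy
    fused = fuse(main_shot, ref_shot)
    err_main = ((main_shot - sharp) ** 2).mean()
    err_fused = ((fused - sharp) ** 2).mean()
    print(f"MSE main={err_main:.4f}  fused={err_fused:.4f}")
```

On this synthetic scene the fused result recovers the sharp edges from the reference while suppressing its noise in flat regions, so its error lands below that of the blurry main shot alone.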
Related papers
- Towards Real-world Event-guided Low-light Video Enhancement and Deblurring [39.942568142125126]
Event cameras have emerged as a promising solution for improving image quality in low-light environments.
We introduce an end-to-end framework to effectively handle these tasks.
Our framework incorporates a module to efficiently leverage temporal information from events and frames.
arXiv Detail & Related papers (2024-08-27T09:44:54Z)
- MobileMEF: Fast and Efficient Method for Multi-Exposure Fusion [0.6261722394141346]
We propose a new method for multi-exposure fusion based on an encoder-decoder deep learning architecture.
Our model is capable of processing 4K resolution images in less than 2 seconds on mid-range smartphones.
arXiv Detail & Related papers (2024-08-15T05:03:14Z)
- ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with existing works, our approach restores much sharper 3D scenes with an order of magnitude less training time and GPU memory consumption.
arXiv Detail & Related papers (2023-09-16T11:17:25Z)
- Dual-Camera Joint Deblurring-Denoising [24.129908866882346]
We propose a novel dual-camera method for obtaining a high-quality image.
Our method uses a synchronized burst of short exposure images captured by one camera and a long exposure image simultaneously captured by another.
Our method is able to achieve state-of-the-art results on synthetic dual-camera images from the GoPro dataset with five times fewer training parameters compared to the next best method.
arXiv Detail & Related papers (2023-09-16T00:58:40Z)
- Panoramas from Photons [22.437940699523082]
We present a method capable of estimating extreme scene motion under challenging conditions, such as low light or high dynamic range.
Our method relies on grouping and aggregating frames after-the-fact, in a stratified manner.
We demonstrate the creation of high-quality panoramas under fast motion and extremely low light, and super-resolution results using a custom single-photon camera prototype.
arXiv Detail & Related papers (2023-09-07T16:07:31Z)
- Computational Long Exposure Mobile Photography [1.5553309483771411]
We describe a computational burst photography system that operates in a hand-held smartphone camera app.
Our approach first detects and segments the salient subject.
We capture an under-exposed burst and select the subset of input frames that will produce blur trails of controlled length, regardless of scene or camera motion velocity.
arXiv Detail & Related papers (2023-08-02T18:36:54Z)
- Real-Time Under-Display Cameras Image Restoration and HDR on Mobile Devices [81.61356052916855]
The images captured by under-display cameras (UDCs) are degraded by the screen in front of them.
Deep learning methods for image restoration can significantly reduce the degradation of captured images.
We propose a lightweight model for blind UDC Image Restoration and HDR, and we also provide a benchmark comparing the performance and runtime of different methods on smartphones.
arXiv Detail & Related papers (2022-11-25T11:46:57Z)
- Perceptual Image Enhancement for Smartphone Real-Time Applications [60.45737626529091]
We propose LPIENet, a lightweight network for perceptual image enhancement.
Our model can deal with noise artifacts, diffraction artifacts, blur, and HDR overexposure.
Our model can process 2K resolution images in under 1 second on mid-level commercial smartphones.
arXiv Detail & Related papers (2022-10-24T19:16:33Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- Learning Spatially Varying Pixel Exposures for Motion Deblurring [49.07867902677453]
We present a novel approach of leveraging spatially varying pixel exposures for motion deblurring.
Our work illustrates the promising role that focal-plane sensor--processors can play in the future of computational imaging.
arXiv Detail & Related papers (2022-04-14T23:41:49Z)
- From two rolling shutters to one global shutter [57.431998188805665]
We explore a surprisingly simple camera configuration that makes it possible to undo the rolling shutter distortion.
Such a setup is easy and cheap to build and it possesses the geometric constraints needed to correct rolling shutter distortion.
We derive equations that describe the underlying geometry for general and special motions and present an efficient method for finding their solutions.
arXiv Detail & Related papers (2020-06-02T22:18:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.