Point-and-Shoot All-in-Focus Photo Synthesis from Smartphone Camera Pair
- URL: http://arxiv.org/abs/2304.04917v1
- Date: Tue, 11 Apr 2023 01:09:54 GMT
- Title: Point-and-Shoot All-in-Focus Photo Synthesis from Smartphone Camera Pair
- Authors: Xianrui Luo, Juewen Peng, Weiyue Zhao, Ke Xian, Hao Lu, and Zhiguo Cao
- Abstract summary: We introduce a new task of AIF synthesis from main (wide) and ultra-wide cameras.
The goal is to recover sharp details from defocused regions in the main-camera photo with the help of the ultra-wide-camera one.
For the first time, we demonstrate point-and-shoot AIF photo synthesis successfully from main and ultra-wide cameras.
- Score: 25.863069406779125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: All-in-Focus (AIF) photography is expected to be a commercial selling point
for modern smartphones. Standard AIF synthesis requires manual, time-consuming
operations such as focal stack compositing, which is unfriendly to ordinary
people. To achieve point-and-shoot AIF photography with a smartphone, we expect
that an AIF photo can be generated from one shot of the scene, instead of from
multiple photos captured by the same camera. Benefiting from the multi-camera
module in modern smartphones, we introduce a new task of AIF synthesis from
main (wide) and ultra-wide cameras. The goal is to recover sharp details from
defocused regions in the main-camera photo with the help of the
ultra-wide-camera one. The camera setting poses new challenges such as
parallax-induced occlusions and inconsistent color between cameras. To overcome
the challenges, we introduce a predict-and-refine network to mitigate
occlusions and propose dynamic frequency-domain alignment for color correction.
To enable effective training and evaluation, we also build an AIF dataset with
2686 unique scenes. Each scene includes two photos captured by the main camera,
one photo captured by the ultra-wide camera, and a synthesized AIF photo.
Results show that our solution, termed EasyAIF, can produce high-quality AIF
photos and outperforms strong baselines quantitatively and qualitatively. For
the first time, we demonstrate point-and-shoot AIF photo synthesis successfully
from main and ultra-wide cameras.
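The abstract does not spell out how the dynamic frequency-domain alignment works, but the general recipe behind frequency-domain color correction is well established: low-frequency Fourier amplitude carries color and global illumination, while phase carries edges and texture. A minimal sketch of that idea in Python follows; the function name `freq_color_align` and the `beta` cutoff are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def freq_color_align(src, ref, beta=0.05):
    """Align the color/illumination of `src` to `ref` by transplanting
    low-frequency Fourier amplitude, channel by channel.

    src, ref: float arrays of shape (H, W, 3) in [0, 1].
    beta: fraction of the spectrum around DC treated as low frequency.
    """
    h, w, _ = src.shape
    bh, bw = max(1, int(h * beta)), max(1, int(w * beta))
    cy, cx = h // 2, w // 2
    aligned = np.empty_like(src)
    for c in range(3):
        # 2-D FFT per channel, shifted so the DC term sits at the center.
        fs = np.fft.fftshift(np.fft.fft2(src[..., c]))
        fr = np.fft.fftshift(np.fft.fft2(ref[..., c]))
        amp, pha = np.abs(fs), np.angle(fs)
        # Swap in the reference's low-frequency amplitude: color and
        # global illumination live there; structure stays in the phase.
        amp[cy - bh:cy + bh, cx - bw:cx + bw] = (
            np.abs(fr)[cy - bh:cy + bh, cx - bw:cx + bw])
        rec = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * pha)))
        aligned[..., c] = np.real(rec)
    return np.clip(aligned, 0.0, 1.0)
```

Applied to the ultra-wide photo with the main-camera photo as `ref`, this would pull the ultra-wide colors toward the main camera's while leaving its texture intact; the paper's "dynamic" variant presumably adapts this alignment rather than using a fixed cutoff.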
Related papers
- Towards Smart Point-and-Shoot Photography [16.192062592740154]
We present a first-of-its-kind smart point-and-shoot (SPAS) system that helps users take good photos.
SPAS helps users compose a good shot of a scene by automatically guiding them to adjust the camera pose live on the scene.
We present extensive results demonstrating the performance of our SPAS system on publicly available image composition datasets.
arXiv Detail & Related papers (2025-05-06T15:40:14Z)
- Dual-Camera All-in-Focus Neural Radiance Fields [54.19848043744996]
We present the first framework capable of synthesizing the all-in-focus neural radiance field (NeRF) from inputs without manual refocusing.
We introduce the dual-camera setup of smartphones, where the ultra-wide camera has a wider depth-of-field (DoF) and the main camera possesses a higher resolution.
The camera pair preserves the high-fidelity details of the main camera and uses the ultra-wide camera's deep DoF as a reference for all-in-focus restoration.
arXiv Detail & Related papers (2025-04-23T11:55:02Z)
- EvMAPPER: High Altitude Orthomapping with Event Cameras [58.86453514045072]
This work introduces the first orthomosaic approach using event cameras.
In contrast to existing methods relying only on CMOS cameras, our approach enables map generation even in challenging light conditions.
arXiv Detail & Related papers (2024-09-26T17:57:15Z)
- UC-NeRF: Neural Radiance Field for Under-Calibrated Multi-view Cameras in Autonomous Driving [32.03466915786333]
UC-NeRF is a novel method tailored for novel view synthesis in under-calibrated multi-view camera systems.
First, we propose a layer-based color correction to rectify color inconsistency across different image regions.
Second, we propose virtual warping to generate more viewpoint-diverse yet consistent views for color correction and 3D recovery.
arXiv Detail & Related papers (2023-11-28T16:47:59Z)
- Dual-Camera Joint Deblurring-Denoising [24.129908866882346]
We propose a novel dual-camera method for obtaining a high-quality image.
Our method uses a synchronized burst of short exposure images captured by one camera and a long exposure image simultaneously captured by another.
Our method is able to achieve state-of-the-art results on synthetic dual-camera images from the GoPro dataset with five times fewer training parameters compared to the next best method.
arXiv Detail & Related papers (2023-09-16T00:58:40Z)
- $\text{DC}^2$: Dual-Camera Defocus Control by Learning to Refocus [38.24734623691387]
We propose a system for defocus control that synthetically varies camera aperture and focus distance and produces arbitrary defocus effects.
Our key insight is to leverage a real-world smartphone camera dataset, using image refocus as a proxy task for learning to control defocus.
We demonstrate creative post-capture defocus control enabled by our method, including tilt-shift and content-based defocus effects.
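The tilt-shift and content-based effects mentioned above boil down to rendering a spatially varying blur from a per-pixel defocus map. Below is a minimal sketch of that rendering step, assuming a precomputed `defocus` map in [0, 1]; the paper's learned refocusing is not reproduced here, and `apply_defocus_map` is an illustrative name.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_defocus_map(img, defocus, n_levels=6, max_sigma=8.0):
    """Render a spatially varying defocus effect by blending a small
    stack of uniformly blurred copies of `img` according to a
    per-pixel `defocus` map.

    img: (H, W, 3) float array; defocus: (H, W) float array in [0, 1].
    """
    sigmas = np.linspace(0.0, max_sigma, n_levels)
    # Blur once per level, over the spatial axes only.
    stack = [gaussian_filter(img, sigma=(s, s, 0)) for s in sigmas]
    level = defocus * (n_levels - 1)               # fractional blur level
    lo = np.clip(np.floor(level).astype(int), 0, n_levels - 2)
    w = (level - lo)[..., None]                    # per-pixel blend weight
    out = np.zeros_like(img)
    for i in range(n_levels - 1):
        mask = (lo == i)[..., None]
        # Linearly interpolate between the two nearest blur levels.
        out += mask * ((1.0 - w) * stack[i] + w * stack[i + 1])
    return out
```

A tilt-shift look, for instance, corresponds to a defocus map that is zero inside a horizontal band and grows toward the top and bottom of the frame.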
arXiv Detail & Related papers (2023-04-06T17:59:58Z)
- Perceptual Image Enhancement for Smartphone Real-Time Applications [60.45737626529091]
We propose LPIENet, a lightweight network for perceptual image enhancement.
Our model can deal with noise artifacts, diffraction artifacts, blur, and HDR overexposure.
Our model can process 2K-resolution images in under one second on mid-range commercial smartphones.
arXiv Detail & Related papers (2022-10-24T19:16:33Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- Face Deblurring using Dual Camera Fusion on Mobile Phones [23.494813096697815]
Motion blur of fast-moving subjects is a longstanding problem in photography.
We develop a novel face deblurring system based on the dual camera fusion technique for mobile phones.
Our algorithm runs efficiently on the Google Pixel 6, adding 463 ms of overhead per shot.
arXiv Detail & Related papers (2022-07-23T22:50:46Z)
- LenslessPiCam: A Hardware and Software Platform for Lensless Computational Imaging with a Raspberry Pi [14.690546891460235]
LenslessPiCam provides a framework to enable researchers, hobbyists, and students to implement and explore lensless imaging.
We provide detailed guides and exercises so that LenslessPiCam can be used as an educational resource, and point to results from our graduate-level signal processing course.
arXiv Detail & Related papers (2022-06-03T07:39:21Z)
- Dual Adversarial Adaptation for Cross-Device Real-World Image Super-Resolution [114.26933742226115]
Super-resolution (SR) models trained on images from different devices could exhibit distinct imaging patterns.
We propose an unsupervised domain adaptation mechanism for real-world SR, named Dual ADversarial Adaptation (DADA).
We empirically conduct experiments under six Real to Real adaptation settings among three different cameras, and achieve superior performance compared with existing state-of-the-art approaches.
arXiv Detail & Related papers (2022-05-07T02:55:39Z)
- Aliasing is your Ally: End-to-End Super-Resolution from Raw Image Bursts [70.80220990106467]
This work addresses the problem of reconstructing a high-resolution image from multiple lower-resolution snapshots captured from slightly different viewpoints in space and time.
Key challenges for solving this problem include (i) aligning the input pictures with sub-pixel accuracy, (ii) handling raw (noisy) images for maximal faithfulness to native camera data, and (iii) designing/learning an image prior (regularizer) well suited to the task.
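Challenge (i), alignment, is classically bootstrapped with phase correlation. A minimal sketch of the integer-pixel step follows (a generic technique, not necessarily this paper's method); sub-pixel precision is then typically obtained by interpolating around the correlation peak or upsampling the spectrum.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation between two same-sized
    grayscale frames `a` and `b` via phase correlation."""
    r = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r /= np.abs(r) + 1e-12                  # whiten: keep phase only
    corr = np.real(np.fft.ifft2(r))         # impulse at the displacement
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # FFT indices wrap around; map them to signed shifts.
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return int(dy), int(dx)
```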
arXiv Detail & Related papers (2021-04-13T13:39:43Z)
- DSEC: A Stereo Event Camera Dataset for Driving Scenarios [55.79329250951028]
This work presents the first high-resolution, large-scale stereo dataset with event cameras.
The dataset contains 53 sequences collected by driving in a variety of illumination conditions.
It provides ground truth disparity for the development and evaluation of event-based stereo algorithms.
arXiv Detail & Related papers (2021-03-10T12:10:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.